Ben Nadel
On User Experience (UX) Design, JavaScript, ColdFusion, Node.js, Life, and Love.
I am the chief technical officer at InVision App, Inc - a prototyping and collaboration platform for designers, built by designers. I also rock out in JavaScript and ColdFusion 24x7.

Converting A ColdFusion Query To CSV Using QueryToCSV()

Posted by Ben Nadel
Tags: ColdFusion

UPDATE: I have posted an updated, faster, more accurate version of this UDF here.

The other day, someone asked me about converting a ColdFusion query to a "comma separated values" file, or CSV. I was about to point them to my QueryToCSV() user defined function when I realized something crazy - I didn't have one. I have done a lot of work with converting CSV values into queries, but I guess I just never did as much going the other way, as the task is significantly less complicated. However, in an effort to make future explanations easier, I have created one below:

<cffunction
    name="QueryToCSV"
    access="public"
    returntype="string"
    output="false"
    hint="I take a query and convert it to a comma separated value string.">

    <!--- Define arguments. --->
    <cfargument
        name="Query"
        type="query"
        required="true"
        hint="I am the query being converted to CSV."
        />

    <cfargument
        name="Fields"
        type="string"
        required="true"
        hint="I am the list of query fields to be used when creating the CSV value."
        />

    <cfargument
        name="CreateHeaderRow"
        type="boolean"
        required="false"
        default="true"
        hint="I flag whether or not to create a row of header values."
        />

    <cfargument
        name="Delimiter"
        type="string"
        required="false"
        default=","
        hint="I am the field delimiter in the CSV value."
        />

    <!--- Define the local scope. --->
    <cfset var LOCAL = {} />

    <!---
        First, we want to set up a column index so that we can
        iterate over the column names faster than if we used a
        standard list loop on the passed-in list.
    --->
    <cfset LOCAL.ColumnNames = {} />

    <!---
        Loop over column names and index them numerically. We
        are going to be treating this struct almost as if it
        were an array. The reason we are doing this is that
        look-up times on a struct are a bit faster than look-up
        times on an array (or so I have been told).
    --->
    <cfloop
        index="LOCAL.ColumnName"
        list="#ARGUMENTS.Fields#"
        delimiters=",">

        <!--- Store the current column name. --->
        <cfset LOCAL.ColumnNames[ StructCount( LOCAL.ColumnNames ) + 1 ] = Trim( LOCAL.ColumnName ) />

    </cfloop>

    <!--- Store the column count. --->
    <cfset LOCAL.ColumnCount = StructCount( LOCAL.ColumnNames ) />


    <!---
        Now that we have our index in place, let's create
        a string buffer to help us build the CSV value more
        efficiently.
    --->
    <cfset LOCAL.Buffer = CreateObject( "java", "java.lang.StringBuffer" ).Init() />

    <!--- Create a short-hand for the new line characters. --->
    <cfset LOCAL.NewLine = (Chr( 13 ) & Chr( 10 )) />


    <!--- Check to see if we need to add a header row. --->
    <cfif ARGUMENTS.CreateHeaderRow>

        <!--- Loop over the column names. --->
        <cfloop
            index="LOCAL.ColumnIndex"
            from="1"
            to="#LOCAL.ColumnCount#"
            step="1">

            <!--- Append the field name. --->
            <cfset LOCAL.Buffer.Append(
                JavaCast(
                    "string",
                    """#LOCAL.ColumnNames[ LOCAL.ColumnIndex ]#"""
                    )
                ) />

            <!---
                Check to see which delimiter we need to add:
                field or line.
            --->
            <cfif (LOCAL.ColumnIndex LT LOCAL.ColumnCount)>

                <!--- Field delimiter. --->
                <cfset LOCAL.Buffer.Append(
                    JavaCast( "string", ARGUMENTS.Delimiter )
                    ) />

            <cfelse>

                <!--- Line delimiter. --->
                <cfset LOCAL.Buffer.Append(
                    JavaCast( "string", LOCAL.NewLine )
                    ) />

            </cfif>

        </cfloop>

    </cfif>


    <!---
        Now that we have dealt with any header value, let's
        convert the query body to CSV. When doing this, we are
        going to qualify each field value. This is done by
        default since it will be much faster than actually
        checking to see if a field needs to be qualified.
    --->

    <!--- Loop over the query. --->
    <cfloop query="ARGUMENTS.Query">

        <!--- Loop over the columns. --->
        <cfloop
            index="LOCAL.ColumnIndex"
            from="1"
            to="#LOCAL.ColumnCount#"
            step="1">

            <!--- Append the field value. --->
            <cfset LOCAL.Buffer.Append(
                JavaCast(
                    "string",
                    """#ARGUMENTS.Query[ LOCAL.ColumnNames[ LOCAL.ColumnIndex ] ][ ARGUMENTS.Query.CurrentRow ]#"""
                    )
                ) />

            <!---
                Check to see which delimiter we need to add:
                field or line.
            --->
            <cfif (LOCAL.ColumnIndex LT LOCAL.ColumnCount)>

                <!--- Field delimiter. --->
                <cfset LOCAL.Buffer.Append(
                    JavaCast( "string", ARGUMENTS.Delimiter )
                    ) />

            <cfelse>

                <!--- Line delimiter. --->
                <cfset LOCAL.Buffer.Append(
                    JavaCast( "string", LOCAL.NewLine )
                    ) />

            </cfif>

        </cfloop>

    </cfloop>


    <!--- Return the CSV value. --->
    <cfreturn LOCAL.Buffer.ToString() />
</cffunction>

As you can see, you have to pass in the query object and the list of field names that you want to output (in a given order). By default, the QueryToCSV() UDF assumes you want to create a header row and use the comma as your field delimiter. When I create the CSV values, I qualify every field value with double quotes. While this is not always necessary, it makes the process faster; if I had to evaluate each field to see if it required a field qualifier, the process would be much slower.
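One caveat with blanket qualification: if a field value itself contains a double quote, the CSV convention is to escape it by doubling it, and the UDF above does not do that. A hedged sketch of the extra step, as a hypothetical variation on the value-append line inside the query loop (not part of the original function):

```cfml
<!---
    Hypothetical variation: grab the field value first, then
    double any embedded quotes before qualifying the field.
--->
<cfset LOCAL.FieldValue = ARGUMENTS.Query[ LOCAL.ColumnNames[ LOCAL.ColumnIndex ] ][ ARGUMENTS.Query.CurrentRow ] />

<cfset LOCAL.Buffer.Append(
    JavaCast(
        "string",
        """#Replace( LOCAL.FieldValue, """", """""", "ALL" )#"""
        )
    ) />
```

The Replace() call turns each embedded quote into two quotes, which is how a quoted CSV field carries a literal double quote.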

And, just a little sample code to see how easily this can be applied:

<!--- Create a new part query. --->
<cfset qPart = QueryNew(
    "id, serial_number, price",
    "cf_sql_integer, cf_sql_varchar, cf_sql_decimal"
    ) />


<!--- Create some records. --->
<cfloop
    index="intI"
    from="1"
    to="1000"
    step="1">

    <!--- Add a row to the query. --->
    <cfset QueryAddRow( qPart ) />

    <!--- Populate row with random data. --->
    <cfset qPart[ "id" ][ qPart.RecordCount ] = JavaCast( "int", qPart.RecordCount ) />
    <cfset qPart[ "serial_number" ][ qPart.RecordCount ] = JavaCast( "string", CreateUUID() ) />
    <cfset qPart[ "price" ][ qPart.RecordCount ] = JavaCast( "float", RandRange( 1, 100 ) ) />

</cfloop>


<!--- Get the CSV value. --->
<cfset strOutput = QueryToCSV(
    qPart,
    "id, serial_number, price"
    ) />

Not much to see here; this is mostly for future reference on how to take a ColdFusion query and quickly and easily convert it to a CSV value.




Reader Comments

I thought it might be helpful to add the following to the end of your "little sample code" (assuming someone wants to make the .csv file available for download):

<cfheader name="content-disposition" value="attachment; filename=download.csv" />
<cfcontent type="text/plain" /><cfoutput>#strOutput#</cfoutput>

(Or, you could substitute "application/msexcel" for the content type and achieve the same thing.)


I can't believe you went about it this way. I guess knowing about underlying Java techniques makes you overthink things which just aren't needed in ColdFusion.

I think just using cfcontent and cfoutput is enough. The column names can be done with as little as:
#listChangeDelims(query.columnnames,';',',')#

I realize this sometimes too, that I'm just overthinking.
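For reference, the quick-and-dirty approach hinted at above might look something like this sketch. It assumes a hypothetical qData query with id / serial_number / price columns, and that no field value contains a comma or quote that would need qualifying:

```cfml
<!--- Build the CSV with plain string concatenation. --->
<cfset strNewLine = Chr( 13 ) & Chr( 10 ) />

<!--- Header row straight from the query's own column list. --->
<cfset strCSV = qData.ColumnList & strNewLine />

<!--- One line per record. --->
<cfloop query="qData">
    <cfset strCSV = strCSV & "#qData.id#,#qData.serial_number#,#qData.price#" & strNewLine />
</cfloop>

<!--- Stream it back as a download. --->
<cfheader name="content-disposition" value="attachment; filename=download.csv" />
<cfcontent type="text/csv" /><cfoutput>#strCSV#</cfoutput>
```

Simple and readable, but naive concatenation creates a new string on every append, which is why the post reaches for a StringBuffer on larger queries.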


Hi Ben,

I noticed that you store column values as a struct, as opposed to looping over the list. You stated that you do this for speed reasons. I found this not to be true.

I took your example code and re-wrote it to use lists instead of structs.

Running the example, there was no difference between the lists and structs. However, I noticed a slight speed increase (2%) using the list method when I increased the list size to 20.

--j


I have been working on a similar issue lately and I have found that ArrayAppend()/ArrayToList() is SIGNIFICANTLY faster than the string buffer. This is especially noticeable when working on particularly large datasets, 100k records or so.
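For anyone curious, the array-based approach being described might be sketched like this (qData and the column names are illustrative, not from the original post):

```cfml
<!--- Collect one qualified line per record in an array. --->
<cfset arrLines = [] />

<cfloop query="qData">
    <cfset ArrayAppend(
        arrLines,
        """#qData.id#"",""#qData.serial_number#"",""#qData.price#"""
        ) />
</cfloop>

<!--- Join all the rows with new lines in a single pass. --->
<cfset strCSV = ArrayToList( arrLines, Chr( 13 ) & Chr( 10 ) ) />
```

The appeal is that ArrayToList() does one bulk join at the end instead of many incremental appends.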


I have to agree with David. I also try to use arrays whenever possible, as I have found them faster than looping through lists or structures. I haven't tested the whole array/list/struct looping in ColdFusion 8 to see the speed differences.


@Steve
Don't forget to include ";charset=utf-8" (ColdFusion's default output encoding) to support non-ASCII characters! Or else whatever charset you specified in <cfprocessingdirective>, if you did so.

<cfcontent type="text/plain; charset=utf-8">


@All,

I am a bit shocked that looping over a list is faster than looping over a collection. I didn't actually test the speed, but maybe you guys are right. I will run some tests on this.

As far as the use of Arrays, Jim did demonstrate to me that using an Array for each row, then appending that to the StringBuffer was very performant.

@David,

Again, I would be shocked if using just arrays would be faster on such huge files.

I think I need to dig into this a bit more. Thanks for the great feedback.


Hi guys, thanks for sharing your experience. I'll share the way I do CSV: I'm using a SQL query to create each CSV line, like:

SELECT (
'"' + Data1 + '",'
+ '"' + Data2 + '",'
+ '"' + Data3 + '",'
+ '"' + Data4 + '"' ) as Line
FROM ......

and then just put result into the file:

<cfset filecontent = ValueList(q_csv.line,"#CHR(13)#")>
<cffile action="write" file="#fName#" output="#filecontent#" addnewline="yes">


@Misha

This is a great solution, assuming the data is coming from a DB table AND you know ahead of time which fields you'll be pulling.

In my case, I'm actually using a Text Driver DSN to read in a CSV with commas, then create a new version of the file that is double-pipe delimited (||). Each file could have a different set of columns...
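Worth noting that the UDF in the post already covers the double-pipe case via its Delimiter argument. A sketch of the call, assuming qImport is a hypothetical query read through the Text Driver DSN:

```cfml
<!---
    Write out whatever columns the file happened to have,
    but double-pipe delimited. Note: ColumnList order may
    not match the original file order in older CF versions.
--->
<cfset strOutput = QueryToCSV(
    Query = qImport,
    Fields = qImport.ColumnList,
    Delimiter = "||"
    ) />
```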


David, yes, but you can always convert all the data into strings, like:

convert(nvarchar(50),qty)

and add some NULL validation as well, like:

convert(nvarchar(50),ISNULL(SUM(QtyOrdered),0))

ISNULL(data1,'')


@Misha

I should have added before that my process of replicating and then modifying a CSV file would probably be best handled OUTSIDE of CF altogether, but I'm working on an existing process.

I'm not sure whether that last bit on converting the data and setting nulls would work as my goal is to avoid, as much as possible, changing the data that my user has placed in the original--but I will look into it. I'm also not sure that, in my case, this will relieve CF at all, since I'm assuming that the driver for reading the original file isn't relying on the SQL server in any way.


Eric, from that point on you would need to take care of that regardless of which method you use to create the CSV.


Hello,

I was wondering if you had some more examples of how to use your QueryToCSV() with a stored procedure or some actual queries.

Thanks in advance. I like what you have created.


@SDaniel,

As this is a ColdFusion function, it cannot be called from a stored procedure. Also, there is no difference between a manually constructed query, as I have here, and a query returned from the database in terms of how the QueryToCSV() method would be called.


@Anthony,

You could set the Fields argument to be optional and then give it a default value of #arguments.query.columnList#. Of course, then you'd have to be sure to use named arguments rather than ordered ones.
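Concretely, that might look like the following sketch. Since a cfargument default cannot reference another argument, the fallback is applied inside the function body (this is an assumed variation, not part of the posted UDF):

```cfml
<cfargument
    name="Fields"
    type="string"
    required="false"
    default=""
    hint="I am the list of query fields; defaults to all columns."
    />

<!--- Fall back to the query's own column list when no fields were passed. --->
<cfif NOT Len( ARGUMENTS.Fields )>
    <cfset ARGUMENTS.Fields = ARGUMENTS.Query.ColumnList />
</cfif>
```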


For several years now, I have been using a solution similar to Misha's for creating a CSV file from a dynamic list of fields (concatenating all the columns in a sql query).

Unfortunately, some of the list of fields have become very large (300+ columns) and I am getting a SQL server error "The query processor ran out of stack space during query optimization." In one case, eliminating a single concatenation allows the query to run.

I have searched but have not found any suggestions on how I might increase the stack space to eliminate the issue. Most of the solutions involve rewriting the query.

I am therefore going to have to implement a new solution, and hope that Ben's excellent approach will work.

Has anyone tried it with very large data sets (columns and rows)?


@Ben

You could make an arg for the field delim, and set your preferred delim as the default. Then if someone wanted no delims they could pass delim="". Should be just as fast to do """ as #delim#, right?


@Grant,

My good man, there is already an argument for a field delimiter that defaults to a comma; this can be overridden when the method is invoked. Unless you're referring to the line delimiter (new line, etc.).


@Ben,

Actually, I meant the column escape character (triple quotes). Thought there was a comment about qualifier being faster to "just do". But I may have misunderstood the comment.

Apologies about the wrong terminology. Loooong couple of days and hit enter too soon.


@Misha

I tried your:

SELECT (
'"' + Data1 + '",'
+ '"' + Data2 + '",'
+ '"' + Data3 + '",'
+ '"' + Data4 + '"' ) as Line
FROM ......

Unfortunately I'm using MySQL, which uses a different concatenation operator.


@vlad

I'm using MySQL 5.5 and this works fine:

SELECT (CONCAT(
tbltollfree.tfNumber, ",1,",
tbltollfree.tfRingTo)) as Line
FROM tbltollfree
ORDER BY tbltollfree.tfNumber

Question for Misha, or all:
- How do I "Append" some header & footer text to this query?
- Header1 = ;NVFILE 2.0
- header2 = DN,W1,DN1
- Footer = ;NVFILE END

Example of output would be:

;NVFILE 2.0
DN,W1,DN1
8002081111,1,1234567890
8002251112,1,1234567891
8002211113,1,1234567892
8002311114,1,1234567893
;NVFILE END
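One way to get that output, sticking with Misha's ValueList()/cffile approach from earlier in the thread (a sketch; variable names follow that earlier snippet and are otherwise illustrative):

```cfml
<!--- Body lines from the query. --->
<cfset strBody = ValueList( q_csv.line, "#Chr( 13 )#" ) />

<!--- Wrap the body with the fixed header and footer lines. --->
<cfset strFile = ";NVFILE 2.0" & Chr( 13 )
    & "DN,W1,DN1" & Chr( 13 )
    & strBody & Chr( 13 )
    & ";NVFILE END" />

<cffile action="write" file="#fName#" output="#strFile#" addnewline="yes" />
```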

