Ben Nadel
On User Experience (UX) Design, JavaScript, ColdFusion, Node.js, Life, and Love.

ColdFusion CFHttp To Query Much Faster Than Java Buffered Reader

By Ben Nadel
Tags: ColdFusion

In response to some posts on CF-Talk about parsing large files, I thought I would do some investigation of my own. I had never parsed a large file before, so this was new and exciting. I was going to try two different approaches: using CFHttp to read a file into a query, and using a Java buffered file reader to read the file one line at a time and build a query from the records. From my other experiences, I had assumed that the Java method was going to wipe the floor with the CFHttp tag in terms of speed.

I was absolutely floored by the speed of CFHttp! Not only did it parse data files like nobody's business, it did so much faster than the Java method. When the test cases got too big, the Java method would often time out.

NOTE: I am not a great Java programmer, so my methods could have been horrible.

I started building up a data file by repeating these data lines:

Cox Christina Blonde 5'5" 135.0 Muscular Yes
Rodruigez Michelle Black 5'3" 125.0 Muscular Dog Yes
Otting Franics Blonde 5'4" 140.0 Muscular Cat Yes
Turgot Ayse Blonde 5'2" 125.0 Curvey Yes
Clarke Molly Burnette 5'4" 122.0 Lean Cat Yes
Skerret Laura Brunette 5'4" 130.0 Lean Rabbit Yes
Parker Sara Blonde 5'2" 105.0 Skinny No
Vivenzio Sarah Brunette 5'3" 130.0 Curvey Cat Yes
Matzukata Yuu Black 5'4" 120.0 Curvey Yes

... where each field was separated by a tab character (not visible in the text above). I then used the CFHttp tag to read in the file:

<cfhttp
	url="http://...../read_large_files/test.txt"
	method="GET"
	name="qGirl"
	columns="last_name,first_name,hair_color,height,weight,body_type,pet,has_nice_smile"
	delimiter="#Chr( 9 )#"
	textqualifier=""
	firstrowasheaders="no"
	/>

<!--- Display the number of records read in. --->
Read in #NumberFormat( qGirl.RecordCount )# records

As you can see, the data file was made up of 8 distinct columns: last_name, first_name, hair_color, height, weight, body_type, pet, and has_nice_smile. The delimiter was a tab character. If you don't realize that the textqualifier attribute defaults to a double quote, you are going to pull your hair out trying to figure out why the tag can't find all the columns (and throws errors) when your field values contain quotes!
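For anyone curious what that parse boils down to, here is a rough Java sketch of the same tab-splitting idea. The column names come from the tag above; the TabParse class name and the hard-coded sample data are just made up for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TabParse {

    // Column names matching the CFHttp "columns" attribute.
    static final String[] COLUMNS = {
        "last_name", "first_name", "hair_color", "height",
        "weight", "body_type", "pet", "has_nice_smile"
    };

    // Splits tab-delimited lines into one map per record.
    static List<Map<String, String>> parse(String data) {
        List<Map<String, String>> rows = new ArrayList<>();
        for (String line : data.split("\n")) {
            // The -1 limit keeps trailing empty fields.
            String[] fields = line.split("\t", -1);
            Map<String, String> row = new LinkedHashMap<>();
            for (int i = 0; i < COLUMNS.length; i++) {
                row.put(COLUMNS[i], i < fields.length ? fields[i] : "");
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        // Two sample records from the test file; the first one has
        // an empty pet field (two adjacent tabs).
        String data =
            "Cox\tChristina\tBlonde\t5'5\"\t135.0\tMuscular\t\tYes\n"
            + "Skerret\tLaura\tBrunette\t5'4\"\t130.0\tLean\tRabbit\tYes";

        for (Map<String, String> row : parse(data)) {
            System.out.println(row.get("last_name") + " pet=[" + row.get("pet") + "]");
        }
        // Prints:
        //   Cox pet=[]
        //   Skerret pet=[Rabbit]
    }
}
```

Notice that an empty field still has to occupy a slot between two tabs; that is why a stray text qualifier throwing off the field count causes the kind of errors mentioned above.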

To test my Java theory, I set up this test that read in a single line at a time:

<!--- Create the query. --->
<cfset qGirl = QueryNew( "last_name, first_name, hair_color, height, weight, body_type, pet, has_nice_smile" ) />

<!--- Create the buffered file reader. --->
<cfset jobjReader =
	CreateObject( "java", "java.io.BufferedReader" ).Init(
		CreateObject( "java", "java.io.FileReader" ).Init(
			CreateObject( "java", "java.io.File" ).Init(
				ExpandPath( "./test.txt" )
				)
			)
		) />

<!--- Create a variable for the line. --->
<cfset REQUEST.Line = jobjReader.ReadLine() />

<!--- Set up counter (for shorthand). --->
<cfset intCounter = 1 />

<!--- Loop while we have a line. --->
<cfloop condition="StructKeyExists( REQUEST, 'Line' )">

	<!--- Add a record to the query. --->
	<cfset QueryAddRow( qGirl ) />

	<!--- Split the line up into an array (tab-delimited). --->
	<cfset arrFields = REQUEST.Line.Split( "\t" ) />

	<!--- Set field values. --->
	<cfset qGirl[ "last_name" ][ intCounter ] = arrFields[ 1 ] />
	<cfset qGirl[ "first_name" ][ intCounter ] = arrFields[ 2 ] />
	<cfset qGirl[ "hair_color" ][ intCounter ] = arrFields[ 3 ] />
	<cfset qGirl[ "height" ][ intCounter ] = arrFields[ 4 ] />
	<cfset qGirl[ "weight" ][ intCounter ] = arrFields[ 5 ] />
	<cfset qGirl[ "body_type" ][ intCounter ] = arrFields[ 6 ] />
	<cfset qGirl[ "pet" ][ intCounter ] = arrFields[ 7 ] />
	<cfset qGirl[ "has_nice_smile" ][ intCounter ] = arrFields[ 8 ] />

	<!--- Read in the next line. --->
	<cfset REQUEST.Line = jobjReader.ReadLine() />

	<!--- Update the counter. --->
	<cfset intCounter = (intCounter + 1) />

</cfloop>

<!--- Close the file reader. --->
<cfset jobjReader.Close() />

<!--- Display the number of records read in. --->
Read in #NumberFormat( qGirl.RecordCount )# records

In this test, I am creating a buffered file reader that reads in one line at a time. For each line, the code breaks it up into an array and adds those values to the query. (When ReadLine() hits the end of the file it returns null, which deletes the REQUEST.Line variable; that is what the StructKeyExists() loop condition is checking for.)
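For reference, the plain-Java version of that same loop (without ColdFusion in the middle) might look something like the sketch below; the file name and the LineReader class name are assumptions:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LineReader {

    // Reads a tab-delimited file one line at a time, returning
    // each record as a String[] of fields.
    static List<String[]> readRecords(String path) throws IOException {
        List<String[]> records = new ArrayList<>();
        // BufferedReader wraps FileReader, just like the nested
        // CreateObject() calls in the CFML above.
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            // readLine() returns null at end-of-file -- the same
            // condition the CFML loop checks with StructKeyExists().
            while ((line = reader.readLine()) != null) {
                // The -1 limit keeps trailing empty fields.
                records.add(line.split("\t", -1));
            }
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        List<String[]> records = readRecords("test.txt");
        System.out.println("Read in " + records.size() + " records");
    }
}
```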

When I ran the test on 490,000 rows of data, CFHttp executed in about 25 seconds! I can't give you Java buffered-reader speeds because the page kept timing out. I must be doing something horribly wrong in my methodology, as Java is what is really going on underneath anyway. I chalk that up to my inexperience.

But, here are some stats for CFHttp:

490,000 rows in 25 seconds. 20 megabyte file.
790,000 rows in 44 seconds. 37 megabyte file.
950,000 rows in 97 seconds. 45 megabyte file.
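Worth noting: the throughput is not linear. A quick back-of-the-envelope calculation shows the first two runs parsing roughly 18,000 to 19,600 rows per second, while the 950,000-row run drops to under 10,000 rows per second:

```java
public class Throughput {
    public static void main(String[] args) {
        // Row counts and wall-clock times from the tests above.
        int[] rows = { 490_000, 790_000, 950_000 };
        int[] seconds = { 25, 44, 97 };
        for (int i = 0; i < rows.length; i++) {
            // Integer division is fine for a ballpark rate.
            System.out.println(rows[i] / seconds[i] + " rows/sec");
        }
        // Prints:
        //   19600 rows/sec
        //   17954 rows/sec
        //   9793 rows/sec
    }
}
```

That drop on the largest file suggests memory pressure (the whole HTTP response and the query both live in RAM) rather than a constant per-row cost, though I have not profiled it.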

This is crazy! I am floored at how fast that is. So the question is: why are people still having problems reading in files? For starters, this only works with structured, query-esque data; you can't read in something like an XML file this way. Secondly, it requires a URL to work, so if you need to read a file that lives outside the web root, you are out of luck. That would be the benefit of the Java methodology (though clearly a better implementation than the one I put together): you can read in files from anywhere in the file system.

Reader Comments

Do you think it was the reading of the file, or QueryAddRow()?

What is the difference between the Java buffered file reader and the plain file IO object?

To be honest, I am not sure what is making it faster. I assume that the parsing in the ColdFusion engine is just much faster and more efficient than anything that I was doing.

As far as the difference between the file IO and the buffered file reader goes, the buffered file reader is actually a decorator that wraps around the file IO object and adds additional functionality, like reading in large chunks at a time rather than one huge read or a ton of smaller reads. It just allows for more efficient file reading when you want to examine parts of the file content at a time.
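That decorator relationship is easy to see in the constructor signatures: BufferedReader accepts any Reader, so the same buffering can wrap a file, a network stream, or even an in-memory string. A minimal sketch (the DecoratorDemo class name is made up):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class DecoratorDemo {

    // BufferedReader decorates any Reader: it pulls big chunks from
    // the underlying source into an internal buffer and serves
    // readLine() / read() calls out of that buffer.
    static String firstLine(Reader source) throws IOException {
        try (BufferedReader buffered = new BufferedReader(source)) {
            return buffered.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // The same decorator works over an in-memory string...
        System.out.println(firstLine(new StringReader("line one\nline two")));
        // ...or over a file: firstLine(new FileReader("test.txt"))
        // Prints:
        //   line one
    }
}
```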