Ben Nadel at InVision In Real Life (IRL) 2019 (Phoenix, AZ) with: Keeley Hammond ( @keeleyhammond )

Experimenting With Flat-File ColdFusion CFML Caching


After reading Isaac Dealey's blog post on the caching performance of ColdFusion struct access vs. ColdFusion query object access, I wanted to do some experimentation with flat-file caching of ColdFusion code. By this, I mean I wanted to see how ColdFusion would perform if data was cached using dynamically generated CFML files that would be written to disk and then CFInclude'd to "access" the cached data. Of course, I know that file access is expensive; but, I'm not talking about thousands of file reads per page request, I'm talking about a handful. We do, however, have to keep in mind that this needs to hold up under the load of simultaneous users.

When dealing with flat files, we need to keep in mind that ColdFusion has to compile its CFML templates the first time it accesses them. Therefore, CFML code caching is gonna have a large up-front cost, but a much smaller residual cost after that. For this reason, my experiment had to be run several times after the CFML templates were "published" in order to get realistic results.

The following code compares a large in-memory (struct-based) cache to a CFInclude-powered cache:

<!---
	Prepare the cache. One cache will be using a struct-based
	cache and the other will be using a file-based cache that
	writes out ColdFusion code.
--->
<cfset objCache = {} />

<!--- Get the cache directory. --->
<cfset strCacheDirectory = ExpandPath( "./cache/" ) />


<!--- Set the cache test size. --->
<cfset intCacheSize = 10000 />


<!--- Loop over a large set to create a large cache. --->
<cfloop
	index="intIndex"
	from="1"
	to="#intCacheSize#"
	step="1">

	<!--- Create an in-memory cached item. --->
	<cfset objCacheItem = {
		Index = intIndex,
		Message = "This is cached item: #intIndex#",
		DateCreated = Now()
		} />

	<!--- Add the item to the cache struct. --->
	<cfset objCache[ intIndex ] = objCacheItem />

	<!---
		Check to see if we are building the flat file cache. We
		do NOT want to do this every run of the page because that
		will cause ColdFusion to re-compile the templates each
		time, which has a LARGE up-front cost. In reality, these
		would be written to file and then called many times,
		allowing ColdFusion to compile them efficiently.
	--->
	<cfif StructKeyExists( URL, "publish" )>

		<!---
			Create the ColdFusion code for this item. When doing
			this, we have to escape the opening CF tags so they
			don't get evaluated. We also have to evaluate all the
			variables so they can be written out.
		--->
		<cfsavecontent variable="strCFCode">

			<[cfset $cache = {
				Index = #intIndex#,
				Message = "This is cached item: #intIndex#",
				DateCreated = "#Now()#"
				} />

		</cfsavecontent>

		<!--- Write ColdFusion code to flat file. --->
		<cffile
			action="write"
			file="#strCacheDirectory##intIndex#.cfm"
			output="#Trim( Replace( strCFCode, '<[', '<', 'all' ) )#"
			/>

	</cfif>

</cfloop>


<!---
	If we just published the flat-file templates, don't bother
	running the test (the combination of CFFile writing and then
	compiling the templates will cause the page to time out). Let's
	just publish once and then run separately.
--->
<cfif StructKeyExists( URL, "publish" )>

	<cfabort />

</cfif>


<!---
	Now that we have our cache in place, let's loop over each one
	to see how fast it responds. The flat-file system will be
	slower, but by how much?
--->
<cftimer type="outline" label="In-Memory Cache">

	<cfloop
		index="intIndex"
		from="1"
		to="#intCacheSize#"
		step="1">

		<!--- Get cached item. --->
		<cfset $cache = objCache[ intIndex ] />

	</cfloop>

	Done.

</cftimer>

<br />

<!---
	Now, let's do this again, but with the flat file. We
	have to do a CFInclude tag to get the cached data back
	into memory.
--->
<cftimer type="outline" label="Flat-File Cache">

	<cfloop
		index="intIndex"
		from="1"
		to="#intCacheSize#"
		step="1">

		<!--- Get cached item. --->
		<cfinclude template="./cache/#intIndex#.cfm" />

	</cfloop>

	Done.

</cftimer>

As expected, there was an ENORMOUS up-front cost to the publishing; the first run of this page took 250 seconds as ColdFusion compiled the 10,000 CFML files that were being included for the first time. This was to be expected; and, since this compilation cost would be distributed across page requests over a long period of time, I am not factoring it into the comparison. Subsequent page requests demonstrated the following times:

In-Memory Cache: 15 ms

Flat-File Cache: 6,266 ms

These numbers were quite consistent as I refreshed the page a bunch of times.

So, what does this all mean? Six seconds might seem like a really long time to include files; however, we have to remember that we are never going to be including 10,000 files one after another; we are gonna be doing a few per page. And, if you take into account that ColdFusion is already including things like header and footer files, we can see that ColdFusion can handle simultaneous page requests that have (sometimes) many CFInclude tags.

My belief is that for just a few includes, the difference between flat-file caching and in-memory caching is gonna be negligible. So, which way should we lean? Well, if we go with in-memory caching, we are gonna be taking up RAM, and RAM is a relatively scarce asset. File space, on the other hand, is not. The 10,000 files I cached above took about 1.3 megabytes. On a drive that has 60+ gigs, that is merely a drop in the pond. By using flat-file caching, we can store a vastly larger amount of data.

Of course, we have to move back up the thought-chain and ask ourselves why we are even caching data to begin with. Really, the only reason we want to cache data is so that we don't have to go to the database to get it. So the ultimate question is: even if flat-file caching is slightly slower than in-memory caching, is flat-file caching still faster than database access? If it's slower than database access, then it loses all value. If, however, it is faster than calling the database, then we are in a good place.

Unfortunately, I don't have a database hooked up locally in my test environment (which is why so many of my demos build ColdFusion queries manually), so I cannot test this comparison. But, I have to assume (making an ass out of me) that the larger a database gets, the more performant flat-file caching becomes in comparison.

Anyone have more experience with file-based caching?


Reader Comments

36 Comments

I once had a project with caching such as this and about 350,000 files to cache/store. There was no way to put it all in memory efficiently. One thing to note about file caching to the HD is that you can't simply put all those files in a single /cache folder. I am guessing this is OS dependent, but after about 5k files in one folder on my Windows server, the time to read a file took WAY TOO LONG. What was fast as lightning with a hundred test files became a dog when I had many thousands. So I had to create a directory structure based on categories and IDs so that no one folder had more than a few thousand items: /cache/maincat/subcat/subsubcat/file001.cfm, for example.

So if you're building something from scratch that may only have a few files to start (such as in dev) but grow to many thousands, make sure to build the storage structure to match.

15,640 Comments

@Josh,

That is good to know. I think I would definitely break it up by category. Ideally, I'd like all the file names to be #ID#.cfm, so they would need to be broken up into the appropriate database-style categories. Of course, I am just theorizing right now, so I don't know. But, even so, if my tables are large, this also runs into the same problem.

I guess I would have to go more intention-based than table-based. Still thinking it out. Of course, my directory just had 10,000 files and seemed fairly fast, so the constraint must kick in somewhere above that.

I guess you can always do some sort of arbitrary sub-directory structure like:

/table/#(ID MOD 100)#/#ID#.cfm

This would simply use math to break the IDs up into buckets.
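
To make that concrete, here is a minimal sketch of how such a bucketed cache path might be built before writing or including a cached template (the record ID and cache root are hypothetical):

<!--- Hypothetical record ID to be cached. --->
<cfset intID = 54321 />

<!--- Bucket the ID into one of 100 sub-directories using MOD. --->
<cfset intBucket = (intID MOD 100) />

<!--- Absolute path used when writing the cache file. --->
<cfset strCachePath = ExpandPath( "./cache/table/#intBucket#/#intID#.cfm" ) />

<!--- Make sure the bucket directory exists before writing. --->
<cfif NOT DirectoryExists( GetDirectoryFromPath( strCachePath ) )>
	<cfdirectory
		action="create"
		directory="#GetDirectoryFromPath( strCachePath )#"
		/>
</cfif>

<!--- Later, the cached item is pulled back in with a relative include. --->
<cfinclude template="./cache/table/#intBucket#/#intID#.cfm" />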

3 Comments

I'm using file-based partial page caching to store generated static content on a large/high-load site and it performs very well.
When I was working on the project, I had quite a few very large, slow queries and chose security over performance... because I was using cfqueryparam, I needed another way to cache the results without the benefit of cached queries (pre-CF8). I also have some complex display logic which benefits from being cached and not regenerated on every request.

I recently added an (unfortunately named) version to RIAForge which is based on this approach: http://cfcache.riaforge.com

The main problem I initially ran into once in production was that CFFILE did not perform well under load, so I had to replace the writes with Java, which is a thousand times faster!
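
For anyone curious what that change might look like, here is a rough sketch of writing a cache file through the underlying java.io classes from CFML (the file path and content are hypothetical placeholders, not Nicholas's actual code):

<!--- Hypothetical content and target path for the cache file. --->
<cfset strFilePath = ExpandPath( "./cache/1.cfm" ) />
<cfset strContent = "This is cached item 1" />

<!--- Write the file with java.io.FileWriter instead of CFFile. --->
<cfset objWriter = CreateObject( "java", "java.io.FileWriter" ).init(
	JavaCast( "string", strFilePath )
	) />

<cfset objWriter.write( strContent ) />
<cfset objWriter.close() />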

16 Comments

@Ben,

I'm gonna disagree with you a little bit about memory being rare - there will always be situations where people don't have enough RAM, or situations where people don't have enough storage - but I think you'll find most web servers run with extra RAM available.

This is why a lot of sites are using things like memcached: they can use up a lot of their spare RAM - even if it's just an extra gig - and it gives way faster access than a file ever will.

I guess I'm just having trouble finding an example where I would ever consider caching to disk, but nonetheless a nice experiment, thanks for sharing it!

15 Comments

If a web app is always asking for the same data from the db, then it's crazy not to cache it in memory or on disk. One of my apps requires unique data for every user for every page, so there's no benefit to caching results or pages at the application level.

The only solution, it seems, is to increase the memory of the database server to 32GB or 64GB so it can cache the whole db in memory, or to go down the solid state disk (SSD) route, which provides an incredible performance boost to dbs (at least 10x faster). Unfortunately, silicon costs money whereas new code doesn't!

19 Comments

Your results are interesting, if a bit rough. As some other folks have pointed out, reading from disk, especially a FAT32 or NTFS file system, is going to slow you down when you start working with big numbers of files on disk. I'm not as familiar with the Mac OS file system, but ext2/3 (the standard Linux filesystem) is also bad with large numbers of files, though JFS (a common alternative) is actually pretty good with huge file listings.

Also, one thing you might be able to do to speed up your file access is to use cffile and cfoutput instead of cfinclude. Since all you're getting is HTML, not CF for execution, running it through the ColdFusion compile/execute loop is probably overkill.
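
As a rough sketch of that suggestion - assuming the cached file contains pre-rendered HTML rather than CFML, and using a hypothetical file path - the read-and-output version might look like this:

<!--- Read the pre-rendered HTML fragment from disk. --->
<cffile
	action="read"
	file="#ExpandPath( './cache/1.htm' )#"
	variable="strCachedHtml"
	/>

<!--- Output it directly; no CFML compile / execute step is needed. --->
<cfoutput>#strCachedHtml#</cfoutput>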

I'd also be interested in seeing your results if you were to cache these items to a database table, just for comparison purposes.

14 Comments

We use disk caching a lot at UGAL, a CMS that powers a few hundred websites now. Each response is created by assembling up to 12 processes, and each process can be cached on file. The cache file is CFIncluded when it exists and created when it does not. We experience response times of less than 50ms, so I would say, too, that it works very well for us. We have implemented a cache directory architecture similar to the one described by the first commenter, in order to avoid having thousands of files in the same directory.
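
A minimal sketch of that "include it if it exists, create it if it does not" pattern might look something like this (the file name and the content-generation step are hypothetical):

<!--- Hypothetical cache file for one of the assembled processes. --->
<cfset strCacheFile = ExpandPath( "./cache/process-navigation.cfm" ) />

<cfif NOT FileExists( strCacheFile )>

	<!--- Build the process output once and write it to disk. --->
	<cfsavecontent variable="strProcessOutput">
		<!--- ... expensive content generation would go here ... --->
	</cfsavecontent>

	<cffile
		action="write"
		file="#strCacheFile#"
		output="#Trim( strProcessOutput )#"
		/>

</cfif>

<!--- Either way, pull the cached template into the response. --->
<cfinclude template="./cache/process-navigation.cfm" />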

Thanks
Jean

15,640 Comments

@Nicholas,

It's very interesting that CFFile acted poorly and that Java calls to the file did not. I wonder how this compares to CFInclude and ColdFusion's template caching.

15,640 Comments

@Mat,

Clearly, there are times when this simply is not viable. If you look at Hal's comment re: Toys R Us (as referenced in Isaac's blog post), they didn't have enough RAM to store all of their cached content. So, it happens. Of course, I have had servers that have actually run out of physical storage space as well (damn you LOG files!)... so, agreed, it happens both ways.

I guess the mentality that I'm working off of is that I would assume most people have more HD space than they do RAM. But, maybe I am wrong; or maybe there is enough RAM to not have an issue.

Plus, I think it is important to consider that you may have sites that do have aspects of RAM-intensive processing (such as generating reports). I think it just seems better to err on the side of flat-file caching for large amounts of mostly static data (stuff that does not need to be re-published very often)... just theorizing.

15,640 Comments

@Adam,

That is an interesting question re: CFInclude vs. CFFile; but, realize that in my example, I DO need to run it through ColdFusion because it is actually storing CFML to the flat file, not just rendered output.

I have issues with the idea of caching "output" rather than "data" because I don't like the idea of having to do mass re-publishing if formatting changes. I feel like I want that to remain separate from the data that is being cached.

Of course, if you still have to render the output, you do add processing overhead. I guess it depends on what your priorities are.

3 Comments

I've done a LOT with caching, and was caching so much stuff that I had to put some of it on the HD to free up some RAM.

Like the first comment said, you need to store the cache files in smaller subdirectories or the OS will have a hard time dealing with it.

Also important is cleaning up those cache files when they're no longer useful. Unlike your scenario, where you built all the cache files up front, I made them as the pages were requested. I ran a scheduled task in tandem with that which cleared out all the cache files older than four hours or so. Otherwise, the cache directory gets crazy big.
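
A scheduled-task template for that kind of age-based cleanup might look roughly like this (the cache directory and the four-hour window are placeholders, not Erikv's actual code):

<!--- List all cached templates in the cache directory. --->
<cfdirectory
	action="list"
	directory="#ExpandPath( './cache/' )#"
	filter="*.cfm"
	name="qCacheFiles"
	/>

<!--- Delete any cache file that is more than four hours old. --->
<cfloop query="qCacheFiles">

	<cfif DateDiff( "h", qCacheFiles.dateLastModified, Now() ) GTE 4>

		<cffile
			action="delete"
			file="#qCacheFiles.directory#/#qCacheFiles.name#"
			/>

	</cfif>

</cfloop>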

15,640 Comments

@Erikv,

Good advice. I guess the cache size would dictate that. I could easily see, in a system that I am working on, that the cache wouldn't get too big if it was cleared out like once a month. But, it's a low-traffic site that doesn't even require caching - I'm just starting to experiment.

But true, I hadn't thought about clearing out old data. Good tip.

14 Comments

Another approach is to delete the cache files when an update is made to their content, and have the application re-create them the next time the page is requested.
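
In code, that invalidation step could be as simple as deleting the cached template whenever the underlying record is saved (the record ID here is hypothetical):

<!--- After updating record 1 in the database, invalidate its cache file. --->
<cfset strCacheFile = ExpandPath( "./cache/1.cfm" ) />

<cfif FileExists( strCacheFile )>
	<cffile
		action="delete"
		file="#strCacheFile#"
		/>
</cfif>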

3 Comments

Yeah I experimented with that using SQL triggers but it wasn't super reliable. So I went with age instead.

My primary application here is a big commerce site, and I cache the listing and detail pages of products.

3 Comments

@Ben
My understanding, when I was researching this some time ago, is that ColdFusion loads a lot of packaged Java libraries when you use cffile, many of which can be unnecessary depending on what you're doing. If you have 6-7 cached instances on a page load, then the difference really adds up fast.

@Erikv @Jean,
I also found it necessary to clear stale cache files that persist... I currently have a daily scheduled task to take care of that.

3 Comments

Yup, I found that, too, so I cached everything using a wrapper around all CFINCLUDEs. And there are a lot of them because this is a Fusebox 3 app.

16 Comments

We used to do a lot of disk caching but moved away from it for a couple of reasons/pains that are off the top of my head:

* Being able to use ColdFusion's Trusted Cache was probably the biggest reason. With trusted cache, the only way to use file-based CFM caching is to flush the entire cache or clear the cache for the specific file holding the data that was modified (we could never get this to work consistently). We experienced HUGE gains from using trusted cache (almost more gain than the file cache was offering).

* Compile times added a considerable delay when the app starts up (as mentioned in the post). On a site with high traffic, we even experienced issues of not being able to start the application because the request queue would fill up faster than the application would start/respond. We worked around this, but it was still a headache.

We are now happy with a memory-only cache (managed), but will be looking into a shared-memory cache (memcached) soon because application memory cache pools in a cluster have been a pain. (Note to self: check out Railo's cluster scope.)

I'd also suggest scaling at the database level using master/slave dbs; or, even easier, with a relational database, store a record's assembled relational data as WDDX in a cache table so that when you need to get a record you do something like this:

SELECT wddxData
FROM ObjectCache
WHERE ID = NNN

Rather than:

SELECT t1.*, t2.*, t3.*, etc..
FROM t1, t2, t3...
WHERE t1.ID = t2.ID....

You can also save trips to the database for 1-N data using this approach, since the WDDX can store compound data and can be quickly deserialized back into a CF variable.
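
As a rough sketch of the write side of that idea - assuming a hypothetical ObjectCache table and data source - the assembled record could be serialized with CFWDDX and stored like this:

<!--- Hypothetical assembled record (normally built from several joins). --->
<cfset objRecord = {
	ID = 101,
	Name = "Widget",
	Categories = [ "Tools", "Hardware" ]
	} />

<!--- Serialize the compound data into a WDDX packet. --->
<cfwddx
	action="cfml2wddx"
	input="#objRecord#"
	output="strWddxData"
	/>

<!--- Store the packet in the cache table. --->
<cfquery datasource="myDsn">
	UPDATE
		ObjectCache
	SET
		wddxData = <cfqueryparam value="#strWddxData#" cfsqltype="cf_sql_longvarchar" />
	WHERE
		ID = <cfqueryparam value="#objRecord.ID#" cfsqltype="cf_sql_integer" />
</cfquery>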

Interesting Post!

15,640 Comments

@Brett,

The up-front load time of the file-cached system had occurred to me. But, I figured it would be gradual enough not to matter; I assumed that not all "cached" files would be accessed at the same time, but rather over a period of time as people started accessing pages. I guess on a high-traffic site, this becomes more of an issue.

The WDDX approach is a very interesting idea, though. It's like creating a de-normalized database structure cache. Cool concept.

14 Comments

Re: "The WDDX approach is a very interesting idea, though." Isn't that what Macromedia Spectra was all about?

It seems to me that it quickly becomes a nightmare, because everything has to be processed by CF instead of by the database. Think about a mass database update (update table set field='string' where otherField > 100). That's a lot of CFWDDX to do!

15,640 Comments

@Jean,

I guess it depends on how often stuff is updated. If you have, let's say, an article / press release type situation, you can probably assume that 99.99% of those things can be cached and never touched again.

16 Comments

@Jean

Yep, there are still instances that would need to be worked around. The way I would handle that is, if there was data on a row that can be updated en masse, I would join to that data and not include it in the WDDX. Approval Code would be a good example.

Another approach would be to only delete the WDDX data on a mass update (trigger), and then use conditional SQL/CF to select the relational data if the WDDX is null.
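
A hedged sketch of that read-with-fallback, using the same hypothetical ObjectCache table and data source, might look like:

<!--- Try the cache table first. --->
<cfquery name="qCache" datasource="myDsn">
	SELECT
		wddxData
	FROM
		ObjectCache
	WHERE
		ID = <cfqueryparam value="101" cfsqltype="cf_sql_integer" />
</cfquery>

<cfif qCache.RecordCount AND Len( qCache.wddxData )>

	<!--- Cache hit: deserialize the WDDX packet back into a CF variable. --->
	<cfwddx
		action="wddx2cfml"
		input="#qCache.wddxData#"
		output="objRecord"
		/>

<cfelse>

	<!--- Cache miss: fall back to the full relational query and re-populate the WDDX column. --->

</cfif>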

@Ben

I believe the up-front load time really became an issue when we would do updates and have to clear tens of thousands of cache files, with ColdFusion doing the "if missing, write file" work while generating the page. Perhaps we were caching too much to file (every object, ~20 files for a single page).

15,640 Comments

@Brett,

Hmm, maybe. I am just starting to think about this, so I don't really know what ColdFusion can handle in practice in this area.

14 Comments

This is where caching providers like Ehcache come in handy. They let you cache to memory and/or disk. It's possible to configure Ehcache such that it will cache to memory up to a size limit, then spill any overage to disk. Very handy and easy to use.
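
For readers on ColdFusion 9, which ships with Ehcache under the hood, a minimal sketch of that kind of managed caching might look like this (the cache key, item, and one-hour lifespan are hypothetical; memory limits and disk overflow are configured in Ehcache's own configuration rather than in this code):

<!--- Try the managed (Ehcache-backed) cache first. --->
<cfset objItem = CacheGet( "product-101" ) />

<cfif IsNull( objItem )>

	<!--- Cache miss: rebuild the item and cache it for one hour. --->
	<cfset objItem = { ID = 101, Name = "Widget" } />

	<cfset CachePut(
		"product-101",
		objItem,
		CreateTimeSpan( 0, 1, 0, 0 )
		) />

</cfif>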

I recently did a session on advanced caching strategies for CF at MAX. I've made the slides available on slidesix.com if anyone's interested. The preso covers CF's built-in caching, distributed caching, and various caching strategies.

http://slidesix.com/view/Advanced-ColdFusion-Caching-Strategies

78 Comments

@Ben - I was actually just getting ready to post the URL to Rob's presentation slides, but he beat me to it, the bastard! ;)

I wanted to briefly highlight a couple of points in his presentation compared to your blog entry here and the previous comments. In particular, the notion that your caching routine isn't necessarily intended to be faster than a database query. Although it's true that we often use caching for that reason, that's only one of a number of use cases; and, as Nicholas mentioned, basically any kind of complex logic has the potential to be slow enough to benefit from caching without necessarily needing database access as an ingredient. I think that as an industry it's really easy for us to fall into the habit of thinking of caching as merely a way of improving the performance of queries simply because we use it that way so often (heck, it's built into ColdFusion that way, but not in any other way), but I think that's a little oversimplified.

But as Rob's slides mention, performance isn't always the primary concern with regard to caching. I think I was sort of vaguely thinking that before I posted my article, but it was nice to read Rob's slides and see that thought more formalized. I think he described it as "scaling up vs. scaling out". If I read it correctly, "scaling up" describes adding more load on the server and "scaling out" describes adding more content to the site. So a server may have little traffic, but still benefit from caching merely because its content volume is so high (scaling out - which is the circumstance in which I think I personally would be most apt to look toward some kind of file-based caching, although he describes memcached as filling that need also).

And just to be thorough ;) I think I'll expand on this idea a little further here. I'm imagining a hypothetical situation in which fetching a particular record from the database only takes say 5ms on average, and you've got a really comprehensive cache management utility, but in order to perform the fetch operation it takes an average of 8-10ms. Given the example of "we only cache if it's faster than the query" we would automatically choose not to cache in this scenario. But the real question here is: is that the most effective view, or are there use cases in which the slower cache provides a greater benefit than the faster query?

One scenario that jumps to mind for me right away is that if we're talking about a very high-traffic site there is some potential that a very large volume of these fast queries could degrade the overall performance of the system on the db side, without necessarily making each individual query slower than the cache solution. That's just one example and the honest truth is I'm not sure what that risk potential is - it could be negligible, it could be considerable, I just don't know enough about it to comment. And I imagine there are a variety of similar scenarios in which cached content provides some value other than a direct performance gain over a particular query.

I imagine I'll have more thoughts on that as I work more with the CacheBox project and see how the centralized monitoring and management of pluggable cache strategies works there. I suspect I'll discover some new benefits on the management side of having that cache centrally managed (aside from performance). One that I've anticipated thus far is that the machine is likely to make more accurate predictions about the use patterns for content than we usually do. And so I suspect that the system will be able to better tune the cache on its own than a human can on average, because where the human programmer is usually guessing about use patterns (I know I have), the machine <bold>knows</bold> the access patterns for different collections of cache. And so it may be that even knowing that the query runs faster in the present tense, you end up preferring to allow the system to choose a caching strategy for the content because it requires less maintenance and the best caching strategy may even change throughout the course of the day. The machine can make those adjustments periodically throughout the day to get the most bang from the cache in a given hour, which you're not very likely to do as a programmer.

Okay, I think I'm done. :)

And it looks like you've got some good material for a follow up article. Thanks Ben! :)

78 Comments

Oh grr... nevermind... I used <bold> tags... D'oh! I guess all the testing at the neurologists office today wiped me out. ;)

78 Comments

Oh duh... "it's built into ColdFusion that way but not in any other way" ... I forgot about cfcache again! ;) I obviously haven't used it very much.

15 Comments

I see caching as being required mainly to reduce load on servers, more specifically, database servers. Depending on the circumstance, ColdFusion shouldn't be responsible for this at all, especially if you plan to scale out your caching solution with more boxes. Why would anybody buy a CF license for every server just to use its memory for caching? Even if it were free, CF just isn't the best tool for the job.

15,640 Comments

@Ike,

That's an interesting point on scaling up vs. scaling out. I guess in my mind I had never really separated those out. What I think is really interesting is this idea of small computations adding up to create drag; this is interesting because, in my initial thoughts, I would only cache the data structures, not the generated output. I like the idea of keeping the rendering of the data as a separate workflow. Plus, if you change one thing, you don't have to re-publish massive amounts of data. Something about it just feels cleaner.

But, cleaner or not, perhaps it's silly to think about caching in a half-way mindset. Perhaps it just needs to be all or nothing to get the real benefit of it.

78 Comments

@Ben - try the preso again later -- I've been able to get it open whenever I've tried, so it's probably just an intermittent thing between you and there.

I honestly hadn't separated them out in my mind either - at least not consciously, until I'd read Rob's slides. I had just had a sort of vague feeling that there were multiple use-cases beyond what's covered by, say, CF query caching, so his text really helped to solidify the concept in a very concrete and simple way for me.

But getting back to caching the result vs. caching the data -- it's definitely a nuanced subject. I can see advantages to what you see as a "cleaner" approach, but I can also see advantages to the alternative, and even see how someone might describe caching the output as "cleaner" because then you're only dealing with the flat end result and you don't have any of the "fiddly bits" of how that content was generated still hanging out. But then for example when we're using ColdSpring or LightWire or really any kind of IoC framework and we create a singleton object, we're also caching, we just don't typically describe it with the word "caching" - so it's more a question of the semantics of dialect at that point than it is a question of technology.

Rob's advice in the slideshow is to cache as late in the process as is feasible. But both with Rob's suggestion and with your suggestion (which is sort of the opposite), I can also see scenarios in which there's a fair amount of duplicated content. I'm not sure if you can totally eliminate duplicated cache content - it may depend a lot on what kind of application it is you're creating.

Long-story-short, there still doesn't seem to be any magic-bullet-style simplified answer for me with regard to knowing what, when or how to cache.

15,640 Comments

@Ike,

Caching as far down in the chain as is feasible makes sense. At first, I only considered the data because I am so used to the database being the (theoretical) bottleneck. However, if you consider that after the data is loaded there is still looping and conditional logic and translation, that all adds up to a lot of processing!

Of course, each situation requires its own strategy. You might have one piece of data (page) that has elements that display random data or "attached" data that is not highly predictable. In that case, you definitely have to move the caching up higher and higher.

What I like about the idea of data-caching is that you can hide it behind the "data abstraction layer" and the View / Controller wouldn't have to know anything about it. From the Controller's standpoint, we are still just requesting data from a singleton (yes, I understand that singletons are a form of "caching"). The actual cache or no-cache implementation is hidden.
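
To illustrate that separation, here is a minimal sketch of a hypothetical data-access component that hides a simple struct-based cache behind a single method, so the Controller just asks for data and never knows whether it came from the cache or the database:

<cfcomponent output="false" hint="Hypothetical gateway that hides its caching from callers.">

	<cffunction name="init" access="public" returntype="any" output="false">
		<cfset variables.cache = {} />
		<cfreturn this />
	</cffunction>

	<cffunction name="getArticle" access="public" returntype="struct" output="false">
		<cfargument name="id" type="numeric" required="true" />

		<!--- Serve from the internal cache when possible; the caller never knows. --->
		<cfif NOT StructKeyExists( variables.cache, arguments.id )>

			<!--- ... load the record from the database here ... --->
			<cfset variables.cache[ arguments.id ] = {
				ID = arguments.id,
				Title = "Cached article #arguments.id#"
				} />

		</cfif>

		<cfreturn variables.cache[ arguments.id ] />
	</cffunction>

</cfcomponent>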

78 Comments

Agreed. :)

The singleton comment was more about exploring the concept openly in the discussion than about pointing anything out to you in particular. I kinda figured the thought had already occurred to you. ;) But as evidenced by my forgetting about things like cfcache, it might not immediately jump to mind for someone else who's reading simply because the way their mind compartmentalizes the information, "singleton" lights up in a different area than "cache".

That happened to me during the neuro testing yesterday too (which may be why that's fresh on my mind in particular). I'm given a picture containing several "characters" (previously designated as grandmother, grandfather, father, mother, son, daughter and dog). I'm then asked to remember everything I can about the picture of a scene containing these characters so that I can answer questions about them later. The first question is "which characters were in this scene", to which I answer "grandfather, grandmother and father", forgetting "dog". Why did I forget "dog"? Because for whatever reason, even though he was designated as one of the "characters" at the beginning, the word "dog" doesn't light up the same mental bin that the word "character" lights up. Don't ask me why. ;)

39 Comments

I've found cf_accelerate to work very well under high load (in a lab with intense load testing, not in the real world), and it could be easily modified to write to disk or to a database. Brandon Purcell already worked out a lot of the performance issues with where to put the data in CF structures, and so it scales quite well. The code could use a little work, but not too much. ;)

code: http://www.bpurcell.org/blog/index.cfm?entry=963&mode=entry

admin: http://www.bpurcell.org/blog/index.cfm?mode=entry&entry=1045 (alpha-quality, but gets you most of the way there)
