Ben Nadel at CFUNITED 2010 (Landsdown, VA) with: Sandy Clark ( @sandraclarktw )

Ask Ben: CFImage And Dynamic Image Compression With File Size Limits

Comments (23)

Thanks so much! I still wish there was a function that would get the image's quality to see whether it needs to be reduced or not. Apparently, the way the quality attribute works is that if set at, say, .8, it reduces the existing quality by 20%. If a user has already reduced the quality before uploading, then setting quality to .8 or so reduces it even further. At a few Adobe CF gatherings, I've pointed out this need, but it doesn't seem to be coming, after hearing more about the next CF release. Again, thanks! I appreciate the good and quick help.

Your understanding of the way image compression works is correct. Once an image is compressed, it essentially becomes a completely new image with 100% quality. At that point, any further compression will be based on the new image rather than the old image. While this can be frustrating, it's the only thing that's feasible; keeping the concept of compression over time would require the image to maintain its original image data.

However, I would guess that your concern isn't really about image quality at all; more likely, your concern is over file size. If file size wasn't the true concern, then you'd always use 100% quality. This would make the file size large, but the photo would look its best. The reason that we ever really want to reduce the quality of an image is to make the resultant file smaller in size (not to be confused with dimension, which remains constant). As such, I think that what you really want to do is reduce the quality of the target image only enough to get the resultant file size to drop under some arbitrary limit.

To be honest, I've never really worried about this problem before; typically, I would just pick a decently balanced image quality and use that as the quality going forward. As such, I can't really say that this algorithm is the right approach. And, if anyone else has a better suggestion, I'd love to hear it - I think this problem is quite fascinating. That said, the approach that I came up with below is a simple, iterative approach: we start with a given photo quality and try to save the image. We then get the file size of the resultant image and if it's too high, we slightly reduce the quality and repeat the compression process, overwriting the target file.

This might seem like a very intensive process, and in fact, I can practically hear some of you blowing steam out your ears at the thought of having the server do so much file access. In reality, however, the iterative approach outlined below is quite efficient. If you've ever played around with photo quality in a graphics program, you learn quickly that small changes in quality can lead to rather large changes in file size. As such, there's a naturally small limit to the number of iterations that we would ever need to execute.
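The iterative approach described above is language-agnostic; here is the same idea sketched in Python, with a toy `encode` function standing in for a real JPEG writer (both `compress_under_limit` and `encode` are illustrative names, not part of any CF API):

```python
# Sketch of the iterative compression loop: save, check the file size,
# and lower the quality slightly until the result fits under the limit.
# The encoder here is a stand-in for any real JPEG writer.

def compress_under_limit(encode, max_bytes, start_quality=0.95, step=0.05):
    """Lower quality until encode(quality) fits under max_bytes.

    encode(quality) -> bytes serializes an image at a quality of 0..1.
    Returns (data, quality, iterations).
    """
    quality = start_quality
    iterations = 0
    while True:
        data = encode(quality)
        iterations += 1
        if len(data) <= max_bytes or quality <= step:
            return data, quality, iterations
        quality -= step  # reduce the quality by 5% and try again


# Toy encoder: pretends file size scales linearly with quality. Real
# JPEG encoders are non-linear, which is exactly why we iterate rather
# than solving for the quality directly.
fake_encode = lambda q: b"x" * int(100_000 * q)

data, quality, n = compress_under_limit(fake_encode, 60 * 1024)
print(len(data), round(quality, 2), n)
```

Because small quality changes usually produce large file size changes, the loop tends to terminate in a handful of iterations.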

On top of the natural limit, I've also opted for an initial quality of 95% rather than 100%. This is because 100% quality (zero compression) is almost always too large. In fact, if you read in an image and then re-save it with 100% quality, the new image will generally be significantly larger than the original (which was, in its own right, already at 100% quality).

With that in mind, let's take a look at the code:

<!--- Param form variables. --->
<cfparam name="form.upload" type="string" default="" />

<!--- Check to see if the form was uploaded. --->
<cfif len( form.upload )>

	<!--- Upload the photo to a temp directory. --->
	<cffile
		action="upload"
		filefield="upload"
		destination="#expandPath( './' )#"
		nameconflict="makeunique"
		result="upload"
		/>

	<!---
		Read the image from the temp directory so that we
		can resize it to fit in the given area.
	--->
	<cfimage
		action="read"
		source="#upload.serverDirectory#/#upload.serverFile#"
		name="photo"
		/>

	<!--- Resize the image to fit in the specified box. --->
	<cfset imageScaleToFit(
		photo,
		500,
		500
		) />

	<!---
		Now that we have our resized image, we need to write it
		to disk. Because we want to impose a file size limit,
		we are going to keep trying to save the image until we
		have a decent file size.

		Start out with 95% quality. The reason that we are not
		starting out with full quality is that 100% usually
		creates a file that is *surprisingly large*.
	--->
	<cfset imageQuality = 0.95 />

	<!--- Set the max file size (60KB). --->
	<cfset maxFileSize = (60 * 1024) />

	<!--- Set the name of the resized image file. --->
	<cfset photoFile = (
		upload.serverFileName &
		"-resized." &
		upload.serverFileExt
		) />

	<!--- Keep track of the number of iterations. --->
	<cfset saveCount = 0 />

	<!--- Start image save loop. --->
	<cfloop condition="true">

		<!---
			Write the image to the disk with the current level
			of compression.
		--->
		<cfimage
			action="write"
			source="#photo#"
			destination="#expandPath( './#photoFile#' )#"
			overwrite="true"
			quality="#imageQuality#"
			/>

		<!--- Increment the save count. --->
		<cfset saveCount++ />

		<!--- Get the file size of the new image file. --->
		<cfset fileSize = getFileInfo(
			expandPath( "./#photoFile#" )
			).size />

		<!---
			Check to see if the file size is greater than the
			target file size. If it is, then we need to
			decrease the photo quality and try again.
		--->
		<cfif (fileSize gt maxFileSize)>

			<!--- Reduce the quality by 5%. --->
			<cfset imageQuality -= .05 />

		<cfelse>

			<!---
				The file size is fine, so just break out of
				the loop.
			--->
			<cfbreak />

		</cfif>

	</cfloop>

</cfif>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
	<title>CFImage Dynamic Compression Demo</title>
</head>
<body>

	<h1>
		CFImage Dynamic Compression Demo
	</h1>

	<form method="post" enctype="multipart/form-data">

		<input type="file" name="upload" size="40" />
		<input type="submit" value="Upload Photo" />

	</form>

	<!--- Check to see if we have an uploaded photo. --->
	<cfif structKeyExists( variables, "photo" )>

		<cfoutput>

			<p>
				Resized Image<br />

				<!--- Image properties. --->
				Size: #numberFormat( fileSize, "," )#
				(Max: #numberFormat( maxFileSize, "," )#)<br />
				Quality: #imageQuality#<br />
				Iterations: #saveCount#
			</p>

			<img src="./#photoFile#" />

		</cfoutput>

	</cfif>

</body>
</html>

As you can see, the algorithm is really simple - for each iteration, I reduce the image quality by 5%. As seen in the video, I actually spent about 2 hours trying to come up with something much more intelligent based on the differences in file size between save iterations; but the differences in quality and efficiency were not worth the complexity of the overall code. As such, I reverted to the uniform decrement in quality.

Like I said in the beginning, I've never worried about this before, but I hope that maybe it can inspire someone to come up with an even better idea.

Want to use code from this post? Check out the license.

Reader Comments


Great stuff, Ben. Right now, I am working on a similar project myself for the back end of a shopping cart. I am also adding in a border option and possibly a text option if I have time. I think I will include that looping trick of yours... seems slick. However, I am using the CF image functions and not the cfimage tag - is there an advantage or disadvantage either way? I found with the functions I can do something like:

<cfset ImageSharpen(myImage,"#sharpen#")>
<cfset ImageResize(myImage,"#imageWidth##widthPx_prcnt#","#imageHeight##heightPx_prcnt#","#quality#")>

I can then simply add a list of functions on upload. (the variables are input from a database).

Anyway...first try around with images in CF...awesome stuff



Thanks, Ben, for your ongoing thought on this. I checked a bit to see if the cfimage data was any different on a lower quality image than on a higher one. Nothing was different, so there's nothing to read there that might give info.

Building on your file size idea, I tried to see if any trends would show up among various sizes (dimensions) of images, as well as quality, in terms of something like bytes per square pixel (thinking of something like density). I couldn't see anything that made sense, as the data was quite varied and didn't seem to translate into something usable. (I had thought this might be a little more valuable than just total file size.) Some higher quality pics had lower *density* than other, much lower quality pics.

In the end, you are absolutely correct: the issue is file size (not actual quality). So, your iterations to reduce file size to a max level seem most practical and straightforward.

I will keep on this, though, to see if anything else makes sense as a way to get the original quality so I can then take care of the reduction in one step (if necessary). I use Fireworks for most image work: I wonder what data it "reads" on a jpg to get its *quality* number?

Thanks for sharing your approach. From it, I'm also learning more about the value of some of the image tags and file functions (rather than cffile and cfdirectory).




No, there should be no difference between using the CFImage tag and using the image-based functions - other than the fact that more functionality is available via the various functions.


Yeah, I have also noticed that Fireworks does seem to get some sense of "Quality" when it opens an image. I am not sure where that comes from.

As far as trending, I couldn't come up with anything. For some images, a 5% change in quality created a large file size change, whereas in others, a 5% change hardly affected the file size at all. In fact, I had some test images that needed to drop to 20% quality before they would drop below a given file size (keep in mind, all of these images fit into 500x500).

I am not sure that a trend can even be found; I tried for 2 hours to play but came up with nada. As such, I think the incremental drop (which can be made even smaller) is the way to go.


I tried a bit more today and found nothing that would indicate any trend. There must just be too many variables at work. Your file size stuff makes the most sense - until, perhaps, someone comes up with a way to read quality directly.

Thanks for all the help! Will post again if I find anything of interest on this topic.



Thanks so much for showing me how to go about this! This is awesome! I love ColdFusion!!! I am currently building a shopping cart for my company and needed a way to dynamically resize and optimize images that they upload. Now I understand how to do that!

Thanks sooooo much!!!



Will do! I did have one question: could I make this generate a thumb, medium, and large photo all at the same time? I am fairly new to CF, and I tell you, it is the easiest, most powerful thing I have ever learned!

Any help would be greatly appreciated!





If I understand the question, it's quite easy.
(In my app, I make a thumb right after the last save of the full file.)

Since you have the image object (in Ben's code, "photo"), you just do another ImageScaleToFit (or whichever function fits your needs) on the photo object; then do a "write" to wherever the thumb is to be saved, with a name for it.

I want my full one to have max dimension of 475, so on my first ImageScaleToFit, I use that as the dimensions. Then for my thumb, I do the same but use 110 as my dimensions. All is done with the original "photo" object.
a. Make the photo obj (using the cfimage on the temp saved image)
b. Scale the obj for the full, saving that (by cfwrite). The obj is still available for further manipulation.
c. Now scale the obj for the thumb size and then write that as an image file.

For what you said you're trying to do, I'd assume you'd work from largest to smallest.

One thing I do before the scaleToFit is to use the data in the photo obj to see if resizing is necessary. Such as: if photo.width gt maxDimension or photo.height gt maxDimension - if so, then do the resize; if not, just write it as it is.
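The "only resize when needed" check and the scale-to-fit math above can be sketched like this (Python; `needs_resize` and `scale_to_fit` are illustrative names, not any CF API, and the 475/110 dimensions are the ones mentioned in this thread):

```python
# Sketch of the dimension check and proportional scale-to-fit logic
# described in the comment above.

def needs_resize(width, height, max_dim):
    """True when either dimension exceeds the target bounding box."""
    return width > max_dim or height > max_dim

def scale_to_fit(width, height, max_dim):
    """New dimensions fitting inside max_dim x max_dim, keeping aspect ratio."""
    if not needs_resize(width, height, max_dim):
        return width, height  # already small enough; write as-is
    scale = max_dim / max(width, height)
    return round(width * scale), round(height * scale)

print(scale_to_fit(2256, 1496, 475))   # full-size version
print(scale_to_fit(2256, 1496, 110))   # thumbnail version
```

Working from largest to smallest, as suggested above, means each pass starts from the best available source data.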

Hope this helps. Try a CFDump on your photo object and you can see the data available.



Hey Ben! I had a chance to work with it...but can't seem to get it working...
I tried looping over the image to resize it to the appropriate size. Then I looped through it to compress, or optimize it. I then printed the optimized image to the screen. This all worked...until I uploaded it to a webserver, it didn't care for the ./, being a n00b I don't really know why. Could it be that it needs the direct reference, like C:\Inetpub, etc.?

Thanks for any help!




The "./" just means look in "this" directory. It depends on where you upload your files to. If you use a separate upload directory or getTempDirectory(), then you have to look there.
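A quick illustration of why "./" behaves differently on different servers: a relative path only means something once it is joined to a base directory, which is exactly the ambiguity that expandPath() removes. The directories below are made-up examples:

```python
import posixpath

# The same relative path resolves to different files depending on which
# base directory the server applies it against.
web_root = "/var/www/site"
upload_dir = "/var/www/site/uploads"

print(posixpath.normpath(posixpath.join(web_root, "./photo.jpg")))
print(posixpath.normpath(posixpath.join(upload_dir, "./photo.jpg")))
```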


So your topic is more about image file size and compression.

Let's back up a step on your process of meeting a file size by looping through and checking file size and adjusting quality. Let's just simply talk about the quality / compression. Are they the same?

I have 200 - 400 images being re-sized every day on one of my sites which makes server resource utilization important. There have been some problems with memory and CPU utilization.

I have never adjusted the QUALITY settings on my resize function. So the question is ... if quality is the same as compression, doesn't compression take more processing power than no compression?

any thoughts would be appreciated.



I am by no means an image expert, so I am sorry if I am missing something; but, how do you adjust the file size without changing the compression?



Guess my question was ...

Do you know which uses more CPU? quality = 1.0 or quality = 0.5? I haven't been able to isolate this and benchmark it.



Hmm, interesting question. The lower the quality, the more the image has to be compressed. That's really what quality is - the amount of compression that is taking place. The less compression, the higher the quality (and file size), and vice-versa.

I suppose, the more it has to compress an image, the more calculations it has to make? But, that is just a total guess, not based on anything.
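One way to answer a question like this is to benchmark it directly. Note that zlib's "level" is compression *effort*, which is not the same thing as JPEG's "quality" (a quantization setting), so the sketch below is only a loose analogy; but it shows the measurement technique: compress the same payload at two settings and compare CPU time and output size.

```python
import time
import zlib

# Compress an identical payload at a low and a high effort level, then
# compare elapsed time and output size. For JPEG quality, the same
# measurement would be done around the image-write call instead.
payload = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 9):
    start = time.perf_counter()
    out = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level={level} size={len(out)} time={elapsed:.4f}s")
```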


Ben, been using this code (modified to create unique random image names) and love it! Thank you.

I've run into a strange issue however. A user recently tried to upload a 900KB image w/ the dimensions: 2256x1496. Cfimage can't seem to get the image under 60KB in time for the loop to reach negative numbers (20% to 10% to 0% to -10% quality). Of course, when it gets to this point it craps out. I think the issue is stemming somewhat from the file size + the non-standard dimensions of the image. Also weird because my server can upload 2MB - 4MB files with no issue. AND... when I reduce the size (but keep dimensions in perspective) to 1128x748 it uploads fine.

Any suggestions on how to handle these kinds of situations? Thank you



Unless I'm missing something, maybe you just need to increase the timeout variable for the template running the code. (cfsetting requesttimeout)

Another thought might be to get the dimensions and if much more than your default size, do a major resize first, then start looping for desired quality. That *might* reduce the number of required loops, hence faster and less processing. ??

I had trouble with certain types of jpgs not working with the procedure. Never could figure out how to identify the problem type, though. The cfimage would get corrupted. Seems like it may have been jpgs that came directly from camera without any editor tweaking before the upload process. As a result, I haven't worked with this much any more. I should get back and experiment. Glad you're having a positive experience. Maybe the above 2 suggestions can help.

Good luck!


@Zachary, @Keith,

I suppose it's possible that some images simply can't drop below a given file size under any compression settings. At some point, all of the pixels need to be represented somehow.

From what you said, however, it sounds like once you scale the picture down, it can compress properly. As @Keith mentioned, can you perform a scale before you start compressing?

Also, you probably want to add some logic to prevent the compression from dropping down too low (and raising an exception).
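That guard can be sketched as a simple quality floor (Python; `next_quality`, `compress_with_floor`, and `encode` are illustrative names, not any CF API): keep lowering the quality, but never below a fixed minimum, and accept an oversized file rather than letting the quality go negative.

```python
# Sketch of the safety guard: clamp the quality decrement to a floor so
# a "stubborn" image can never drive the quality below zero and raise
# an exception.

MIN_QUALITY = 0.10

def next_quality(quality, step=0.05, floor=MIN_QUALITY):
    """Lower quality by one step, clamped to the floor."""
    return max(round(quality - step, 2), floor)

def compress_with_floor(encode, max_bytes):
    quality = 0.95
    while True:
        data = encode(quality)
        # Stop when the file fits OR we have hit the quality floor; in
        # the second case we accept the oversized file.
        if len(data) <= max_bytes or quality <= MIN_QUALITY:
            return data, quality
        quality = next_quality(quality)

# Worst case: an image that never fits under the limit, no matter what.
stubborn = lambda q: b"x" * 200000
data, quality = compress_with_floor(stubborn, 60 * 1024)
print(quality)
```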


Hi, Ben. I'm working on a cfsharepoint integration project that deals with lots of image libraries hosted on our SharePoint server. Because SharePoint is behind our firewall, I am using cfhttp to retrieve the image, then writing it to the browser with cfimage for display to the public.

The problem I'm having, which is related to this thread topic: how do I ensure the image size being reported by SharePoint winds up being the same image size cfimage creates when it retrieves the image?

I am using this code to force JPG since that is the source format of all our images:

<cfimage action="writeTobrowser" source="#objGet.FileContent#" name="test" format="jpg" quality="1">

I set quality to 1 because I want to retrieve the "original" image from sharepoint. However, in a lot of cases the file size being generated in temp and pushed to the browser is MUCH, MUCH larger than the source. So what gives?

The practical problem I have with this is that I will never be able to display the file size or tell the user how big of a file they are about to download if cfimage can't output the same file size as what's on sharepoint.

For example, the source file on sharepoint is 2.6M (it's 7735 x 5119 @ 600ppi so it's kinda large dimensionally). When I run it through cfimage using quality=1 I get a 9.5M file returned. When I don't set the quality param the default CF uses is 0.75, which yields a more appropriate 2.4M. But this means I'm still compressing the source, which I really don't want to do, right? Again, the issue is that I want to be able to report the file size that the user is about to download.

Any ideas how to get around this (and what exactly is CF doing to the image that blows it up so much higher than the original when using quality=1)?

Btw, I'm getting the image as binary from the cfhttp call to sharepoint.

Oh, and the sharepoint images are stored in document libraries NOT picture libraries otherwise I would just use the "download" action of cfsharepoint. And I can't use cffile because there are no physical file paths in sharepoint.


I solved the problem by forgetting about cfimage altogether and using cfheader/cfcontent instead. I was hoping to be able to view the full-res image in the browser (hence all the cfimage business) but since the file size issue was critical and cfimage kept changing it, now it just sends a pop-up prompt asking to save or open (with correct file size same as on sharepoint). Good enough. I have a thumbnail preview before they click the download anyway.

Although, I would still be interested to know why cfimage can't duplicate exactly the file size from the SharePoint source. ;)


These tips and pieces of advice are really effective. Keep sharing such useful information with us. Thanks for the post.
