Thanks so much! Still wish there was a function that would get the image's current quality to see whether it needs to be reduced or not. Apparently, the way the quality attribute works is that if set at, say, .8, it reduces the existing quality by 20%. If the user has already reduced the quality before uploading, then setting quality to .8 or so reduces it even further. At a few Adobe CF gatherings, I've pointed out this need; but, after hearing more about the next CF release, it doesn't seem to be coming. Again, thanks! I appreciate the good and quick help.
Your understanding of the way image compression works is correct. Once an image is compressed, it essentially becomes a completely new image at 100% quality. At that point, any further compression will be based on the new image rather than the old one. While this can be frustrating, it's really the only feasible behavior; keeping track of cumulative compression over time would require the image to carry its original image data around with it.
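To see that compounding in action, imagine saving the same source image at 80% quality twice in a row. Here's a minimal sketch (the file names are hypothetical); notice that the second write compresses the already-compressed pixels, not the original ones:

<!--- Read the original, full-quality source image (hypothetical file name). --->
<cfimage name="original" action="read" source="./source.jpg" />

<!--- First save: this compresses the ORIGINAL pixel data at 80% quality. --->
<cfimage action="write" source="#original#" destination="./pass-one.jpg" quality="0.8" overwrite="true" />

<!--- Read the compressed file back in. As far as ColdFusion is concerned, this is now a brand-new, "100% quality" image. --->
<cfimage name="compressed" action="read" source="./pass-one.jpg" />

<!--- Second save: this compresses the ALREADY-COMPRESSED pixel data, degrading it even further. --->
<cfimage action="write" source="#compressed#" destination="./pass-two.jpg" quality="0.8" overwrite="true" />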
However, I would guess that your concern isn't really about image quality at all; more likely, your concern is over file size. If file size weren't the true concern, then you'd always use 100% quality. This would make the file large, but the photo would look its best. The only reason we ever really want to reduce the quality of an image is to make the resultant file smaller in size (not to be confused with its dimensions, which remain constant). As such, I think what you really want to do is reduce the quality of the target image only enough to get the resultant file size to drop under some arbitrary limit.
To be honest, I've never really worried about this problem before; typically, I would just pick a decently balanced image quality and use that going forward. As such, I can't really say that this algorithm is the right approach. And, if anyone else has a better suggestion, I'd love to hear it - I think this problem is quite fascinating. That said, what I came up with below is a simple, iterative approach: we start with a given photo quality and try to save the image. We then get the file size of the resultant image; and, if it's too high, we slightly reduce the quality and repeat the compression process, overwriting the target file.
This might seem like a very intensive process; and, in fact, I can practically hear some of you blowing steam out your ears at the thought of having the server do so much file access. In reality, however, the iterative approach outlined below is quite efficient. If you've ever played around with photo quality in a graphics program, you quickly learn that small changes in quality can lead to rather large changes in file size. As such, there's a naturally small limit on the number of iterations that we would ever need to execute.
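If you've never watched that relationship first-hand, here's a quick sketch (using a hypothetical sample file) that writes the same image out at a handful of quality levels and reports the resulting file sizes. On most photos, even modest drops in quality shave off a disproportionate number of bytes:

<!--- Read a sample image (hypothetical file name). --->
<cfimage name="sample" action="read" source="./sample.jpg" />

<cfoutput>

	<!--- Save the image at each quality level and report the resulting file size. --->
	<cfloop index="quality" list="1,0.95,0.9,0.8,0.7,0.6">

		<cfimage action="write" source="#sample#" destination="./size-test.jpg" quality="#quality#" overwrite="true" />

		Quality #quality#: #numberFormat( getFileInfo( expandPath( "./size-test.jpg" ) ).size, "," )# bytes<br />

	</cfloop>

</cfoutput>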
On top of that natural limit, I've also opted for an initial quality of 95% rather than 100%. This is because 100% quality (zero compression) is almost always too large. In fact, if you read in an image and then re-save it at 100% quality, the new file will generally be significantly larger than the original (which was, in its own right, already at 100% quality).
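This is easy to verify for yourself; the following sketch (again, with hypothetical file names) reads in a JPG, re-saves it with zero compression, and then compares the two file sizes:

<!--- Read in an existing JPG (hypothetical file name). --->
<cfimage name="photo" action="read" source="./original.jpg" />

<!--- Re-save it at 100% quality (zero compression). --->
<cfimage action="write" source="#photo#" destination="./resaved.jpg" quality="1" overwrite="true" />

<!--- Compare the two file sizes - the re-saved file is usually much larger. --->
<cfoutput>
	Original: #numberFormat( getFileInfo( expandPath( "./original.jpg" ) ).size, "," )# bytes<br />
	Re-saved: #numberFormat( getFileInfo( expandPath( "./resaved.jpg" ) ).size, "," )# bytes
</cfoutput>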
With that in mind, let's take a look at the code:
<!--- Param form variables. --->
<cfparam name="form.upload" type="string" default="" />

<!--- Check to see if the form was uploaded. --->
<cfif len( form.upload )>

	<!--- Upload the photo to a temp directory. --->
	<cffile
		result="upload"
		action="upload"
		filefield="upload"
		destination="#expandPath( './' )#"
		nameconflict="makeunique"
		/>

	<!---
		Read the image from the temp directory so that we can
		resize it to fit in the given area.
	--->
	<cfimage
		name="photo"
		action="read"
		source="./#upload.serverFile#"
		/>

	<!--- Resize the image to fit in the specified box. --->
	<cfset imageScaleToFit( photo, 500, 500 ) />

	<!---
		Now that we have our resized image, we need to write it
		to disk. Because we want to impose a file size limit, we
		are going to keep trying to save the image until we have
		a decent file size.
	--->

	<!---
		Start out with 95% quality. The reason that we are not
		starting out with full quality is that 100% usually
		creates a file that is *surprisingly large*.
	--->
	<cfset imageQuality = 0.95 />

	<!--- Set the max file size. --->
	<cfset maxFileSize = (60 * 1024) />

	<!--- Set the name of the resized image file. --->
	<cfset photoFile = (
		upload.serverFileName &
		"-resized." &
		upload.serverFileExt
		) />

	<!--- Keep track of the number of iterations. --->
	<cfset saveCount = 0 />

	<!--- Start image save loop. --->
	<cfloop condition="true">

		<!---
			Write the image to the disk with the current level
			of compression.
		--->
		<cfimage
			action="write"
			source="#photo#"
			destination="./#photoFile#"
			quality="#imageQuality#"
			overwrite="true"
			/>

		<!--- Increment the save count. --->
		<cfset saveCount++ />

		<!--- Get the file size of the new image file. --->
		<cfset fileSize = getFileInfo( expandPath( "./#photoFile#" ) ).size />

		<!---
			Check to see if the file size is greater than the
			target file size. If it is, then we need to decrease
			the photo quality and try again.
		--->
		<cfif (fileSize gt maxFileSize)>

			<!--- Reduce the quality by 5%. --->
			<cfset imageQuality -= .05 />

		<cfelse>

			<!--- The file size is fine, so just break out of the loop. --->
			<cfbreak />

		</cfif>

	</cfloop>

</cfif>

<cfoutput>

	<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
	<html>
	<head>
		<title>CFImage Dynamic Compression Demo</title>
	</head>
	<body>

		<h1>
			CFImage Dynamic Compression Demo
		</h1>

		<form action="#cgi.script_name#" method="post" enctype="multipart/form-data">

			<p>
				<input type="file" name="upload" size="40" />
			</p>

			<p>
				<input type="submit" value="Upload Photo" />
			</p>

		</form>

		<!--- Check to see if we have an uploaded photo. --->
		<cfif structKeyExists( variables, "photo" )>

			<h2>
				Resized Image<br />

				<!--- Image properties. --->
				Size: #numberFormat( fileSize, "," )# (Max: #numberFormat( maxFileSize, "," )#)<br />
				Quality: #imageQuality#<br />
				Iterations: #saveCount#
			</h2>

			<p>
				<img src="./#photoFile#" />
			</p>

		</cfif>

	</body>
	</html>

</cfoutput>
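One edge case worth calling out: if even the lowest quality can't get the file under the limit, the loop above will keep decrementing the quality past zero, at which point the write will fail. I've left the demo as-is for clarity; but, a minimal guard (the 0.1 floor here is just my own arbitrary pick) would look something like this inside the loop:

<!--- Only keep reducing the quality while we're above our quality floor. --->
<cfif (fileSize gt maxFileSize) and (imageQuality gt 0.1)>

	<!--- Reduce the quality by 5%. --->
	<cfset imageQuality -= .05 />

<cfelse>

	<!--- Either the file size is fine or we've hit the quality floor; break out of the loop. --->
	<cfbreak />

</cfif>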
As you can see, the algorithm is really simple - for each iteration, I reduce the image quality by 5%. As seen in the video, I actually spent about two hours trying to come up with something much more intelligent based on the differences in file size between save iterations; but, the gains in quality and efficiency were not worth the complexity of the overall code. As such, I reverted to the uniform decrement in quality.
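For anyone curious, a more adaptive decrement might scale the next quality guess by how far the current file overshoots the target. To be clear, this is not the version I experimented with in the video - it's just a sketch of the general idea, and the 0.95 fudge factor and 0.1 floor are my own arbitrary picks:

<cfif (fileSize gt maxFileSize) and (imageQuality gt 0.1)>

	<!---
		Scale the quality by the ratio of the target size to the
		actual size. Since file size doesn't shrink linearly with
		quality, the 0.95 fudge factor nudges the guess a bit lower.
	--->
	<cfset imageQuality = max( (imageQuality * (maxFileSize / fileSize) * 0.95), 0.1 ) />

<cfelse>

	<!--- The file size is fine (or we've hit the floor); break out of the loop. --->
	<cfbreak />

</cfif>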
Like I said in the beginning, I've never worried about this problem before; but, I hope that maybe it can inspire someone to come up with an even better idea.
Want to use code from this post? Check out the license.