Ben Nadel at NCDevCon 2016 (Raleigh, NC) with: George Garrett Neisler and Chris Bestall

Showing Client-Side Image Previews Using Plupload Before Uploading Images To Amazon S3


Last week, I took a look at generating per-file Amazon S3 upload policies using Plupload. As a follow-up experiment, I wanted to see if I could augment that demo by adding a client-side image preview before the images were actually uploaded to the S3 bucket. This would (or could) give the user a perceived performance increase since it would give them on-page data to look at while the normal, HTTP-intensive actions are taking place behind the scenes.

View this project on my GitHub account.

If you'll recall from last week, we were using the BeforeUpload event as a hook into the Plupload-queue processing. As each file was about to be uploaded, the BeforeUpload event was triggered. At that point, we could pause the queue and make a request to the server to save the image object and get back a short-lived, file-specific Amazon S3 upload policy.

The fact that the BeforeUpload workflow was a just-in-time, per-file action was what made the Amazon S3 policy so much more secure. Since we didn't have to generate the policy in advance, it meant that we didn't have to make it flexible or leave it active for very long. I didn't want to lose that advantage when it came to generating client-side image previews.
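To make that concrete, here's a hedged sketch (plain JavaScript, not code from the demo or from my server) of what a short-lived, file-specific POST policy document might look like; the key, ACL, size limit, and expiration window are all illustrative values:

```javascript
// Hypothetical sketch of a per-file Amazon S3 POST policy document. Because
// it is generated just-in-time for one file, the key can be exact and the
// expiration window tiny - no wildcards, no long-lived credentials.
function buildUploadPolicy( bucket, key ) {

	// Give the client only a five-minute window to complete the upload.
	var expiresAt = new Date( Date.now() + ( 5 * 60 * 1000 ) );

	return({
		expiration: expiresAt.toISOString(),
		conditions: [
			{ bucket: bucket },
			{ key: key },
			{ acl: "private" },
			// Cap the upload at 10MB (value is illustrative).
			[ "content-length-range", 0, 10485760 ]
		]
	});

}
```

The server would then Base64-encode and sign this document before handing it back to the client along with the form URL.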

For this reason, I really wanted to add the client-side preview without actually changing much of the logic that was already in place. To do so, it meant that the client-side image preview would have to be constructed entirely with non-persisted data. Luckily, Plupload gives each selected file a UUID (Universally Unique Identifier). We can use this ID to match the non-persisted client-side preview with the persisted image object once it comes back from the server.
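The matching logic itself is simple. As a standalone sketch (plain JavaScript - the array and property names here are hypothetical stand-ins for the demo's structures), reconciling the persisted record with an on-page preview by Plupload file ID might look like this:

```javascript
// When the saved image record comes back from the server, look for a
// client-side preview that was rendered under the same Plupload file ID.
// If one exists, swap it in-place (keeping its position in the list);
// otherwise, just append the saved record.
function reconcileImage( images, savedImage, previewImageID ) {

	savedImage.previewImageID = previewImageID;

	for ( var i = 0 ; i < images.length ; i++ ) {

		if ( images[ i ].previewImageID === previewImageID ) {

			images[ i ] = savedImage;
			return( images );

		}

	}

	images.push( savedImage );
	return( images );

}
```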

Now, generating the client-side preview is not exactly a lightweight task. In the following demo, I'm reading-in the image and resizing it. For small images, this is inconsequential; however, on larger images, there is a noticeable lag between file-selection and the "resize" event that is triggered after the image has been downsized. To help prevent this process from blocking the browser or delaying the actual uploads to Amazon S3, I try to cascade the "preview" events through a recursive $timeout() call that triggers one image proxy per tick of the event loop.
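Stripped of the Angular specifics, the cascade pattern is just a recursive deferral. A minimal sketch in plain JavaScript (the demo uses AngularJS' $timeout; setTimeout is used here for the same effect):

```javascript
// Process exactly one item per tick of the event loop so that expensive
// per-item work (like image resizing) never blocks the browser in one
// long, synchronous burst.
function cascade( items, processItem, done ) {

	( function next() {

		if ( ! items.length ) {

			return( done && done() );

		}

		// Handle one item, then yield control back to the event loop.
		processItem( items.shift() );
		setTimeout( next, 0 );

	})();

}
```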

Take a look at the following AngularJS Plupload directive below. You'll see the "imageAvailable" event triggered in the FilesAdded event; this is where the client-side image proxy object is created and broadcast to the rest of the app.

	function( $window, $rootScope, $q, $timeout, plupload, naturalSort, imagesService ) {

		// I bind the JavaScript events to the scope.
		function link( $scope, element, attributes ) {

			// The uploader has to reference the various elements using IDs. Rather than
			// crudding up the HTML, just insert the values dynamically here.
			element
				.attr( "id", "primaryUploaderContainer" )
				.find( "div.dropzone" )
					.attr( "id", "primaryUploaderDropzone" )
			;

			// Instantiate the Plupload uploader.
			var uploader = new plupload.Uploader({

				// For this demo, we're only going to use the html5 runtime. I don't
				// want to have to deal with people who require flash - not this time,
				// I'm tired of it; plus, much of the point of this demo is to work with
				// the drag-n-drop, which isn't available in Flash.
				runtimes: "html5",

				// The actual POST URL will be provided in the BeforeUpload event.
				url: "about:blank",

				// Set the name of file field (that contains the upload).
				file_data_name: "file",

				// The container, into which to inject the Input shim.
				container: "primaryUploaderContainer",

				// The ID of the drop-zone element.
				drop_element: "primaryUploaderDropzone",

				// To enable click-to-select-files, you can provide a browse button.
				// We can use the same one as the drop zone.
				browse_button: "primaryUploaderDropzone",

				// We don't have any parameters yet; but, let's create the object now
				// so that we can simply consume it later in the BeforeUpload event.
				multipart_params: {}

			});
			// Initialize the plupload runtime.
			uploader.bind( "Error", handleError );
			uploader.bind( "PostInit", handleInit );
			uploader.bind( "FilesAdded", handleFilesAdded );
			uploader.bind( "QueueChanged", handleQueueChanged );
			uploader.bind( "BeforeUpload", handleBeforeUpload );
			uploader.bind( "UploadProgress", handleUploadProgress );
			uploader.bind( "FileUploaded", handleFileUploaded );
			uploader.bind( "StateChanged", handleStateChanged );
			uploader.init();

			// I provide access to the file list inside of the directive. This can be
			// used to render the items being uploaded.
			$scope.queue = new PublicQueue();

			// Wrap the window instance so we can get easy event binding.
			var win = $( $window );

			// When the window is resized, we'll have to update the dimensions of the
			// input shim.
			win.on( "resize", handleWindowResize );

			// When the scope is destroyed, clean up bindings.
			$scope.$on(
				"$destroy",
				function() {

					win.off( "resize", handleWindowResize );

				}
			);


			// ---
			// PRIVATE METHODS.
			// ---

			// I create a proxy for the given file that can load a client-side preview of
			// the selected image file as a base64-encoded data URL.
			function getImageProxy( file ) {

				// I perform the actual loading and resizing of the client-side image.
				// This is an asynchronous event and returns a Promise. This allows the
				// calling code to defer the processing overhead until it is needed.
				function loadImage( resizeWidth, resizeHeight ) {

					var deferred = $q.defer();

					// Create an instance of the mOxie Image object. This utility object
					// provides several means of reading in and loading image data from
					// various sources.
					// --
					// Wiki:
					var preloader = new mOxie.Image();

					// Define the onload handler BEFORE calling the load() method since
					// load() can fire synchronously in some runtimes, meaning our event
					// binding wouldn't be bound in time otherwise.
					preloader.onload = function() {

						// Now that the image has been loaded, resize it so that the
						// data-uri is not so long for the browser to render. This is
						// an asynchronous event and will raise a "resize" event when it
						// has completed.
						preloader.downsize( resizeWidth, resizeHeight );

					};

					// Listen for the resize event - once the image is resized, we can
					// make it available to the application at large.
					preloader.onresize = function() {

						deferred.resolve({
							dataUrl: preloader.getAsDataURL()
						});

						// Clean up object references for garbage collection.
						file = loadImage = deferred = preloader = preloader.onload = preloader.onresize = null;

					};

					// Calling the .getSource() on the file will return an instance of
					// mOxie.File, which is a unified file wrapper that can be used
					// across the various runtimes. The .load() method can only accept a
					// few different types of inputs (one of which is File).
					// --
					// Wiki:
					preloader.load( file.getSource() );

					return( deferred.promise );

				}

				// Return the proxy object. The ID lets the application match this
				// non-persisted preview with the persisted image record later on.
				return({
					id: file.id,
					load: loadImage
				});

			}


			// I handle the before upload event where the meta data can be edited right
			// before the upload of a specific file, allowing for per-file settings. If
			// you return FALSE from this event, the upload process will be halted until
			// you trigger it manually.
			// trigger it manually.
			function handleBeforeUpload( uploader, file ) {

				// Get references to the runtime settings and multipart form parameters.
				var settings = uploader.settings;
				var params = settings.multipart_params;

				// Save the image to the application server. This will give us access to
				// subsequent information that we need in order to post the image binary
				// up to Amazon S3.
				var promise = imagesService.saveImage( ).then(
					function handleSaveImageResolve( response ) {

						// Set the actual URL that we're going to POST to (in this case,
						// it's going to be our Amazon S3 bucket.)
						settings.url = response.formUrl;

						// In order to upload directly from the client to Amazon S3, we
						// need to post form data that lines-up with the generated S3
						// policy. All the appropriate values were already defined on the
						// server during the Save action - now, we just need to inject
						// them into the form post.
						for ( var key in response.formData ) {

							if ( response.formData.hasOwnProperty( key ) ) {

								params[ key ] = response.formData[ key ];

							}

						}

						// Store the image data in the file object - this will make it
						// available in the FileUploaded event where we'll have both
						// the image object and the valid S3 pre-signed URL.
						file.imageResponse = response.image;

						// Manually change the file status and trigger the upload. At
						// this point, Plupload will post the actual image binary up to
						// Amazon S3.
						file.status = plupload.UPLOADING;
						uploader.trigger( "UploadFile", file );

					},
					function handleSaveImageReject( error ) {

						// CAUTION: Since we explicitly told Plupload NOT to upload this,
						// we've kind of put Plupload into a weird state. It will not
						// handle this error since it doesn't really "know" about this
						// workflow; as such, we have to clean up after this error in
						// order for Plupload to start working again.

						console.error( "Oops! ", error );
						console.warn( "File being removed from queue:", file.name );

						// We failed to save the record (before we even tried to upload
						// the image binary to S3). Something is wrong with this file's
						// data, but we don't want to halt the entire process. In order
						// to get back into queue-processing mode, we have to stop the
						// current upload.
						uploader.stop();

						// Then, we have to remove the file from the queue (assuming that
						// a subsequent try won't fix the problem). Due to our event
						// bindings in the "QueueChanged" event, this will trigger a
						// restart of the uploading if there are any more files to process.
						uploader.removeFile( file );

						// Since we announced an "imageAvailable" event when the file was
						// added to the queue (before we tried to save it to the server),
						// we have to make sure to tell the application that something
						// went wrong and the image is not actually available anymore.
						$rootScope.$broadcast( "imageUnavailable", file.id );

					}
				);

				// Clean up object references for garbage collection.
				promise.finally(
					function handleSaveImageFinally() {

						uploader = file = promise = settings = params = null;

					}
				);

				// By returning False, we prevent the queue from proceeding with the
				// upload of this file until we manually trigger the "UploadFile" event.
				return( false );

			}


			// I handle errors that occur during initialization or general operation of
			// the Plupload instance.
			function handleError( uploader, error ) {

				console.warn( "Plupload error" );
				console.error( error );

				// TODO: If this error occurred during the upload to Amazon S3, then it
				// means we have a server-side image record saved and no binary. This
				// can be "OK"; or, we can remove the record. Not sure what the good move
				// is, since I'm just learning this stuff.

			}


			// I handle the files-added event. At this point, the files have already been
			// added to the queue; however, we can see which files are the new files.
			function handleFilesAdded( uploader, files ) {

				// ------------------------------------------------------------------- //
				// BEGIN: JANKY SORTING HACK ----------------------------------------- //

				// This is a real hack; but, the files have actually ALREADY been added
				// to the internal Plupload queue; as such, we need to actually overwrite
				// the files that were just added.

				// If the user selected or dropped multiple files, try to order the files
				// using a natural sort that treats embedded numbers like actual numbers.
				naturalSort( files, "name" );

				var length = files.length;
				var totalLength = uploader.files.length;

				// Rewrite the sort of the newly added files.
				for ( var i = 0 ; i < length ; i++ ) {

					// Swap the original insert with the sorted insert.
					uploader.files[ totalLength - length + i ] = files[ i ];

				}


				// END: JANKY SORTING HACK ------------------------------------------- //
				// ------------------------------------------------------------------- //

				// We want to make the local file selection preview available to the
				// application; however, reading the file in as a data-url is a
				// synchronous process, which means it can lock up the user-experience.
				// As such, we're going to cascade the local image loading, handling one
				// image proxy per tick of the event loop.
				$timeout(
					function deferImageProxy() {

						if ( ! files.length ) {

							// Clean up object references for garbage collection.
							return( uploader = files = length = totalLength = null );

						}

						$rootScope.$broadcast( "imageAvailable", getImageProxy( files.shift() ) );

						$timeout( deferImageProxy );

					}
				);


				// After this event, the QueueChanged event will fire. We don't want to
				// trigger a digest until after the internal queue is rebuilt. Using
				// $evalAsync will give the next event a chance to execute before the
				// $digest is triggered.
				$scope.$evalAsync();

			}


			// I handle the file-uploaded event. At this point, the image has been
			// uploaded to Amazon S3.
			function handleFileUploaded( uploader, file, response ) {

				$scope.$apply(
					function() {

						// Broadcast the response from the server that we received during
						// our previous request to saveImage(). Remember, the FileUploaded
						// event is only for the successful push of the image up to
						// Amazon S3 - the actual image record was already saved during
						// the BeforeUpload event. At that point, the image response was
						// associated with the file, which is what we're broadcasting.
						$rootScope.$broadcast( "imageUploaded", file.imageResponse, file.id );

						// Remove the file from the internal queue.
						uploader.removeFile( file );

					}
				);

			}


			// I handle the init event. At this point, we will know which runtime has
			// loaded, and whether or not drag-drop functionality is supported.
			function handleInit( uploader, params ) {

				console.log( "Initialization complete." );
				console.log( "Drag-drop supported:", !! uploader.features.dragdrop );

			}


			// I handle the queue changed event. When the queue changes, it gives us an
			// opportunity to programmatically start the upload process. This will be
			// triggered when files are both added to or removed from the list.
			function handleQueueChanged( uploader ) {

				if ( uploader.files.length && isNotUploading() ) {

					uploader.start();

				}

				$scope.queue.rebuild( uploader.files );

			}


			// I handle the change in state of the uploader.
			function handleStateChanged( uploader ) {

				if ( isUploading() ) {

					element.addClass( "uploading" );

				} else {

					element.removeClass( "uploading" );

				}

			}


			// I get called when upload progress is made on the given file.
			// --
			// CAUTION: This may get called one more time after the file has actually
			// been fully uploaded AND the uploaded event has already been called.
			function handleUploadProgress( uploader, file ) {

				$scope.$apply(
					function() {

						$scope.queue.updateFile( file );

					}
				);

			}


			// I handle the resizing of the browser window, which causes a resizing of
			// the input-shim used by the uploader.
			function handleWindowResize( event ) {

				uploader.refresh();

			}


			// I determine if the upload is currently inactive.
			function isNotUploading() {

				return( uploader.state === plupload.STOPPED );

			}


			// I determine if the uploader is currently active.
			function isUploading() {

				return( uploader.state === plupload.STARTED );

			}

		}

		// I model the queue of files exposed by the uploader to the child DOM.
		function PublicQueue() {

			// I contain the actual data structure that is exposed to the user.
			var queue = [];

			// I index the currently queued files by ID for easy reference.
			var fileIndex = {};

			// I add the given file to the public queue.
			queue.addFile = function( file ) {

				var item = {
					size: file.size,
					loaded: file.loaded,
					percent: file.percent.toFixed( 0 ),
					status: file.status,
					isUploading: ( file.status === plupload.UPLOADING )
				};

				this.push( fileIndex[ file.id ] = item );

			};


			// I rebuild the queue.
			// --
			// NOTE: Currently, the implementation of this doesn't take into account any
			// optimizations for rendering. If you use "track by" in your ng-repeat,
			// though, you should be ok.
			queue.rebuild = function( files ) {

				// Empty the queue.
				this.splice( 0, this.length );

				// Clear the internal index.
				fileIndex = {};

				// Add each file to the queue.
				for ( var i = 0, length = files.length ; i < length ; i++ ) {

					this.addFile( files[ i ] );

				}

			};


			// I update the percent loaded and state for the given file.
			queue.updateFile = function( file ) {

				// If we can't find this file, then ignore -- this can happen if the
				// progress event is fired AFTER the upload event (which it does
				// sometimes).
				if ( ! fileIndex.hasOwnProperty( file.id ) ) {

					return;

				}

				var item = fileIndex[ file.id ];

				item.loaded = file.loaded;
				item.percent = file.percent.toFixed( 0 );
				item.status = file.status;
				item.isUploading = ( file.status === plupload.UPLOADING );

			};

			return( queue );

		}


		// Return the directive configuration.
		return({
			link: link,
			restrict: "A",
			scope: true
		});

	}

Generating the image preview is only half the battle. Once it's generated, the Controller has to figure out where to put it, whether or not the preview is still valid, and when to get rid of it. The asynchronous nature of the client-side image resize means that there is always a possibility that the real image object will be saved before the client-side preview becomes available.

And, to make matters a bit more complicated, Plupload won't resize images that exceed certain dimensions, since such an action is known to crash the browser. In my experiment, I am not even taking that into account. For this attempt, if the image preview isn't available, then the promise is never resolved and I never add the client-side object to the local data collection.
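One defensive option (not implemented in this demo) would be to race the preview promise against a timeout, so that an image the resizer declines to process simply falls back to "no preview" instead of hanging forever. A sketch using plain ES promises rather than $q:

```javascript
// Reject the preview promise if it does not settle within the given window,
// so that an oversized image never leaves the UI waiting indefinitely.
function withTimeout( promise, ms ) {

	var timer = new Promise( function( resolve, reject ) {

		setTimeout(
			function() {

				reject( new Error( "Preview timed out." ) );

			},
			ms
		);

	});

	// Whichever promise settles first wins the race.
	return( Promise.race( [ promise, timer ] ) );

}
```

The calling code could then catch the rejection and simply skip adding the client-side preview to the local collection.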

To see this in action, take a look at the Controller that manages the list of images:

	function( $scope, imagesService ) {

		// I hold the uploaded images.
		$scope.images = [];

		// I handle the event when the selected image file is available locally for
		// preview - this is before it is actually saved to our server (or to S3).
		$scope.$on( "imageAvailable", handleImageAvailable );

		// I handle the event in which the selected file failed to save to our server.
		// This gives us an opportunity to remove any rendered preview.
		$scope.$on( "imageUnavailable", handleImageUnavailable );

		// I handle upload events for the images (ie, the response from the server after
		// the image has been uploaded to S3).
		$scope.$on( "imageUploaded", handleImageUploaded );

		// Load the remote data from the server.
		loadRemoteData();

		// ---
		// PUBLIC METHODS.
		// ---

		// I delete the given image.
		$scope.deleteImage = function( image ) {

			// Immediately remove the image locally - we'll assume best case scenario
			// with server-side communication; there's no reason that this should throw
			// an error on a normal usage basis.
			removeImage( image.id );

			// Delete from remote data store.
			imagesService.deleteImage( image.id ).then(
				function deleteImageResolve( response ) {

					console.log( "Image deleted successfully." );

					// Clean up object references for garbage collection.
					image = null;

				},
				function deleteImageReject( error ) {

					alert( "Oops! " + error.message );

				}
			);

		};


		// ---
		// PRIVATE METHODS.
		// ---

		// I apply the remote data to the local scope.
		function applyRemoteData( images ) {

			$scope.images = augmentImages( images );

		}


		// I prepare an image for use in the local scope.
		function augmentImage( image ) {

			// Add the properties that we will need when showing a client-side preview
			// of the selected file.
			image.isPreview = false;
			image.previewImageID = 0;

			return( image );

		}


		// I prepare the images for use in the local scope.
		function augmentImages( images ) {

			for ( var i = 0, length = images.length ; i < length ; i++ ) {

				augmentImage( images[ i ] );

			}

			return( images );

		}


		// I handle the event in which the locally-selected file can be previewed as a
		// data-url. At this point, we neither have a server-side record nor an S3
		// upload; but, we should have enough data to fake it 'til we make it.
		function handleImageAvailable( event, imageProxy ) {

			imageProxy.load( 150, 150 ).then(
				function loadResolve( preview ) {

					// Since the load of the data-uri and the client-size resizing take
					// place asynchronously, there is a small chance that the real image
					// has actually loaded before the local preview has become available.
					// In such a case, we obviously want to ignore this and just let the
					// true image stay on the page.
					if ( imagePreviewNoLongerRelevant( imageProxy.id ) ) {

						return;

					}


					// Build out our image preview scaffolding. This is our "fake"
					// image record that we are rendering locally - here, we can translate
					// our Plupload data points to mimic image data points.
					var image = augmentImage({
						id: imageProxy.id,
						imageUrl: preview.dataUrl
					});

					// Make sure we can identify this image as a "preview" later, once
					// the true image has been loaded.
					image.isPreview = true;
					image.previewImageID = imageProxy.id;

					$scope.images.push( image );

					// Clean up object references for garbage collection.
					event = imageProxy = preview = image = null;

				}
			);

		}


		// I handle the event in which a previewed-image record failed to save to our
		// server. In such a case, we need to remove it from the local collection.
		function handleImageUnavailable( event, previewImageID ) {

			removeImage( previewImageID );

		}


		// I handle the image upload response from the server. This happens when the
		// image record has been saved to our server and the image binary has been
		// uploaded to the Amazon S3 bucket.
		// --
		// NOTE: The previewImageID is the plupload ID that was associated with the
		// file selection. This is what we used as the image ID when we generated the
		// image preview object.
		function handleImageUploaded( event, image, previewImageID ) {

			image = augmentImage( image );

			// Copy over the ID of the image proxy. We need to do this in case the
			// asynchronous nature of the loading / thumbnailing / cropping has made a
			// not-yet-loaded proxy image no longer relevant.
			image.previewImageID = previewImageID;

			// In the loop below, we're going to maintain use of the local image preview.
			// However, we want to load the true image in the background so that the
			// browser cache will be populated when the view is refreshed.
			preloadBrowserCache( image, image.imageUrl );

			// Look to see if we have a local preview of the image already being rendered
			// in our list. If we do, then we want to swap the proxy image out with the
			// true image (keeping it in the same place in the list).
			for ( var i = 0, length = $scope.images.length ; i < length ; i++ ) {

				if ( $scope.images[ i ].id === previewImageID ) {

					// Copy over the "preview" image URL into the true image. We're doing
					// this so we don't create a flickering effect as the remote image is
					// rendered. We also don't incur an HTTP request during the rest of
					// the queue processing (less the browser pre-caching above).
					image.imageUrl = $scope.images[ i ].imageUrl;

					// Swap images.
					return( $scope.images[ i ] = image );

				}

			}


			// If we made it this far, we don't have a local preview (image proxy). As
			// such, we can just add this saved image to the local collection.
			$scope.images.push( image );

		}


		// I determine if the "real" image associated with the given preview ID has
		// already been saved to the server and loaded locally.
		function imagePreviewNoLongerRelevant( previewImageID ) {

			// If any of the rendered images have a matching preview ID, then it means
			// we have a saved-image in the list; as such, we don't need the preview.
			for ( var i = 0, length = $scope.images.length ; i < length ; i++ ) {

				if ( $scope.images[ i ].previewImageID === previewImageID ) {

					return( true );

				}

			}

			return( false );

		}


		// I get the remote data from the server.
		function loadRemoteData() {

			imagesService.getImages().then(
				function getAllImagesSuccess( response ) {

					applyRemoteData( response );

				},
				function getAllImagesError( error ) {

					console.warn( "Could not load remote data." );
					console.error( "Oops! " + error.message );

				}
			);

		}


		// I preload the given image URL so that it is populated in the browser cache
		// and available when the view is refreshed.
		function preloadBrowserCache( image, imageUrl ) {

			// NOTE: Using a slight delay to not hog the current HTTP request pool.
			setTimeout(
				function preloadBrowserCacheTimeout() {

					( new Image() ).src = imageUrl;

					// Clean up object references for garbage collection.
					image = imageUrl = null;

				}
			);

		}


		// I delete the image with the given ID from the local collection.
		function removeImage( id ) {

			for ( var i = 0, length = $scope.images.length ; i < length ; i++ ) {

				if ( $scope.images[ i ].id === id ) {

					return( $scope.images.splice( i, 1 ) );

				}

			}

		}

	}


I feel like a third of this code does nothing but handle the integration of the client-side preview. And, here's the kicker - once I had this all in place, I didn't actually perceive much of a performance boost. In fact, for very large images, I felt like it took longer to generate the client-side previews than it did to upload the images to the server.

Now, granted, I have a very fast internet connection (Verizon FiOS). If your users have slower connections, generating the client-side preview may seem faster since their uploads take longer. But, I also have to consider that I have a new(ish) computer, which means that my processing of the client-side images is probably faster than it would be for most users. So your average user may have both a slower connection and a longer wait for the client-side previews to be generated.

Overall, this was an awesome learning experience. But considering the added complexity and the processing overhead that is incurred by the client-side preview rendering, I am not sure that I would want to use this in production. If I want my users to perceive a performance increase, I would probably try to do that more in the queue rendering than in the preview. This way, they see progress, but absolutely nothing blocks the uploading of the files, which is the largest bottleneck. But, I'll stay open-minded and continue to think through possible use-cases.
