
Managing Connection Pool Resources Using Closures In ColdFusion


Lately, I've been using a lot of Redis at InVision App. Which means, I've been using a lot of Jedis (a Java driver for Redis) and its connection pool implementation. Dealing with a connection pool (as I'm learning) requires a lot of cruft because you need to acquire a connection from the pool and then make very sure that you return the connection back to the pool when done. Very quickly, in a situation like this, the amount of cruft tends to overshadow the actual business logic. So, I wanted to see if I could factor out the cruft using ColdFusion closures.

Jedis connection pool resources can actually be in two different states: working and broken. And, the connection pool API has different methods for returning each kind of resource. As such, when dealing with Jedis connections, we need to implement both a Catch and a Finally control-flow block in order to return broken and working resources, respectively.

In more recent releases of Jedis, the resource itself can actually manage this duality with a single .close() method. However, I'm going to use the more verbose workflow in my experiment as a means to illustrate the degree of cruft that can be required.
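
For comparison, that newer workflow boils the cleanup down to something like this (a rough sketch, assuming a Jedis version in which the pooled resource knows how to return itself - broken or not - to its own pool via .close()):

<cfscript>

	/**
	* I increment the current (hard-coded) key using the newer .close() workflow.
	*
	* @output false
	*/
	public numeric function increment() {

		var redis = application.jedisPool.getResource();

		try {

			return( redis.incr( javaCast( "string", "counter" ) ) );

		} finally {

			// The resource decides internally whether to return itself to the pool
			// as a working connection or as a broken one.
			redis.close();

		}

	}

</cfscript>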

That said, let's take a look at the amount of code that is required to increment a simple counter stored in Redis:

<cfscript>

	writeOutput( "New value: " & increment() );


	// ------------------------------------------------------------------------------- //
	// ------------------------------------------------------------------------------- //


	/**
	* I increment the current (hard-coded) key and return the new value.
	*
	* @output false
	*/
	public numeric function increment() {

		var redis = application.jedisPool.getResource();

		// Wrap in a try-catch in order to clean up the resource when done.
		try {

			return( redis.incr( javaCast( "string", "counter" ) ) );

		// Catch any errors that occurred during the operation so that we can
		// specifically handle broken resources, which need to be returned to the pool
		// using a special method.
		// --
		// NOTE: In recent versions of Jedis, you can actually call .close() on the
		// resource and it will be handled properly internally; however, I'm leaving
		// to illustrate the workflow.
		} catch ( any error ) {

			// If the error is due to a broken resource, we have to return it specially
			// so that the connection pool will know to destroy it.
			if ( structKeyExists( local, "redis" ) && isConnectionError( error ) ) {

				application.jedisPool.returnBrokenResource( redis );

				// Delete the local variable to protect our Finally-block.
				structDelete( local, "redis" );

			}

			// Now that we are done cleaning up, rethrow the error so it can bubble up.
			rethrow;

		// No matter what, if the resource still exists, return it to the pool.
		} finally {

			// NOTE: Variable may have been nullified in the Catch-block.
			if ( structKeyExists( local, "redis" ) ) {

				application.jedisPool.returnResource( redis );

			}

		}

	}


	/**
	* I determine if the given ColdFusion error is due to a Jedis connection exception,
	* indicating a broken resource.
	*
	* @error I am the ColdFusion error being checked.
	* @output false
	*/
	public boolean function isConnectionError( required any error ) {

		return( error.type == "redis.clients.jedis.exceptions.JedisConnectionException" );

	}

</cfscript>

The actual increment operation is a single line of code. But, there's about ten times as much code that does nothing but acquire and manage the connection to Redis. And, when you have a ColdFusion component that performs multiple Redis-based operations, your code becomes incredibly noisy.

To experiment with reducing the noise, I wanted to see if I could move the resource management into a function that would provide workflow hooks in the form of ColdFusion closures. Meaning, it would use a closure for consuming the resource; a closure for handling any errors; and, a closure for handling any "finally" operations.

Let's take a look at the refactored code:

<cfscript>

	writeOutput( "New value: " & increment() );


	// ------------------------------------------------------------------------------- //
	// ------------------------------------------------------------------------------- //


	/**
	* I increment the current (hard-coded) key and return the new value.
	*
	* @output false
	*/
	public numeric function increment() {

		// This time, rather than managing the try / catch / finally block ourselves,
		// we're going to delegate that responsibility to the getResource() method,
		// which will manage that workflow using the provided closures.
		var newValue = getResource(
			function( required any redis ) {

				return( redis.incr( javaCast( "string", "counter" ) ) );

			}
		);

		return( newValue );

	}


	/**
	* I acquire a resource from the Jedis connection pool. The resource is then passed
	* off to the success handler and returned to the connection pool when the success
	* handler finishes (either in success or in failure). The results of the success
	* handler are returned as the result of the getResource() invocation.
	*
	* NOTE: If you provide a catch-handler and return False, it will cancel the rethrow
	* of the error.
	*
	* @handleSuccess I handle a successful acquisition of the resource.
	* @handleCatch (Optional) I handle any errors during processing.
	* @handleFinally (Optional) I handle the finally block after processing is complete.
	* @output false
	*/
	public any function getResource(
		required function handleSuccess,
		function handleCatch,
		function handleFinally
		) {

		// NOTE: If this throws an error, it will not be handled by the handleCatch
		// closure. This is an unexpected error, unrelated to the Redis operation.
		var redis = application.jedisPool.getResource();

		// Wrap in a try-catch in order to clean up the resource when done.
		try {

			return( handleSuccess( redis ) );

		// Catch any errors that occurred during the operation so that we can
		// specifically handle broken resources, which need to be returned to the pool
		// using a special method.
		// --
		// NOTE: In recent versions of Jedis, you can actually call .close() on the
		// resource and it will be handled properly internally; however, I'm leaving
		// to illustrate the workflow.
		} catch ( any error ) {

			// If the error is due to a broken resource, we have to return it specially
			// so that the connection pool will know to destroy it.
			if ( structKeyExists( local, "redis" ) && isConnectionError( error ) ) {

				application.jedisPool.returnBrokenResource( redis );

				// Delete the local variable to protect our Finally-block.
				structDelete( local, "redis" );

			}

			// If a catch-handler was provided, invoke it.
			// --
			// CAUTION: May cancel the rethrow (like a boss).
			if ( structKeyExists( arguments, "handleCatch" ) ) {

				var catchResult = handleCatch( error );

				// If the catch-handler returned a Falsey, skip the rethrow. For whatever
				// reason, the calling context wants to swallow the error.
				if ( structKeyExists( local, "catchResult" ) && ( catchResult == false ) ) {

					return;

				}

			}

			// Now that we are done cleaning up, rethrow the error so it can bubble up.
			rethrow;

		// No matter what, if the resource still exists, return it to the pool.
		} finally {

			// NOTE: Variable may have been nullified in the Catch-block.
			if ( structKeyExists( local, "redis" ) ) {

				application.jedisPool.returnResource( redis );

			}

			// If a finally-handler was provided, invoke it.
			if ( structKeyExists( arguments, "handleFinally" ) ) {

				handleFinally();

			}

		}

	}


	/**
	* I determine if the given ColdFusion error is due to a Jedis connection exception,
	* indicating a broken resource.
	*
	* @error I am the ColdFusion error being checked.
	* @output false
	*/
	public boolean function isConnectionError( required any error ) {

		return( error.type == "redis.clients.jedis.exceptions.JedisConnectionException" );

	}

</cfscript>

Here, you can see that the actual increment code is much smaller, limited to the getResource() call and the increment Redis operation. To me, this is so much easier to read. And, since the cruft is encapsulated, it will necessarily be kept consistent across all operations. You'll also notice that while the getResource() function can take a handleCatch() and handleFinally() closure, I've opted not to use them in this demo since I had no need.
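
Just to sketch how those optional closures would plug in, here's a hypothetical incrementQuietly() variation (the name and the fallback behavior are made up for illustration) that swallows connection errors by returning False from the catch-handler and logs every attempt from the finally-handler. It leans on the getResource() and isConnectionError() functions defined above:

<cfscript>

	/**
	* I increment the current (hard-coded) key, swallowing connection errors (and
	* falling back to zero) while letting all other errors bubble up.
	*
	* @output false
	*/
	public numeric function incrementQuietly() {

		var newValue = getResource(
			function( required any redis ) {

				return( redis.incr( javaCast( "string", "counter" ) ) );

			},
			function( required any error ) {

				// Returning False cancels the rethrow, but only for connection
				// errors - anything else will bubble up as usual.
				if ( isConnectionError( error ) ) {

					return( false );

				}

			},
			function() {

				// This runs no matter what, after the resource has been returned.
				writeLog( text = "Redis increment attempted." );

			}
		);

		// If the catch-handler swallowed the error, getResource() returned void and
		// the local variable will not exist - fall back to zero.
		if ( ! structKeyExists( local, "newValue" ) ) {

			return( 0 );

		}

		return( newValue );

	}

</cfscript>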

And, if you're curious, here's the Application.cfc for this demo. I am using ColdFusion 10's per-application Java settings to load the Jedis JAR file into the available class paths.

component
	output = "false"
	hint = "I define the applications settings and event handlers."
	{

	// Define the application settings.
	this.name = hash( getCurrentTemplatePath() );
	this.applicationTimeout = createTimeSpan( 0, 0, 10, 0 );

	// Get the current directory and the root directory.
	this.appDirectory = getDirectoryFromPath( getCurrentTemplatePath() );

	// Map the jars directory so we can load our external dependencies.
	this.mappings[ "/jars" ] = ( this.appDirectory & "jars/" );

	// Set up the custom JAR files for this ColdFusion application. In order to use
	// Redis, we need the Jedis JAR and the Apache Commons Pool2 JAR file. Using
	// ColdFusion 10's per-application Java integration, we can make these available
	// in the class paths.
	this.javaSettings = {
		loadPaths: [
			this.mappings[ "/jars" ]
		],
		loadColdFusionClassPath: false,
		reloadOnChange: false
	};


	/**
	* I initialize the application.
	*
	* @output false
	*/
	public boolean function onApplicationStart() {

		// Create a JedisPool instance that talks to the locally-hosted Redis server.
		application.jedisPool = createObject( "java", "redis.clients.jedis.JedisPool" ).init(
			createObject( "java", "redis.clients.jedis.JedisPoolConfig" ).init(),
			javaCast( "string", "127.0.0.1" )
		);

		return( true );

	}


	/**
	* I initialize the request.
	*
	* @scriptName I am the script being requested.
	* @output false
	*/
	public boolean function onRequestStart( required string scriptName ) {

		// Check to see if we need to reset the application.
		if ( structKeyExists( url, "init" ) ) {

			applicationStop();
			writeOutput( "Application stopped." );
			abort;

		}

		return( true );

	}

}

To be honest, I've never really used ColdFusion closures in production before (mostly because I was on ColdFusion 9 until very recently). But, this kind of usage pattern seems really inviting, especially when dealing with the relatively large amount of cruft code required to manage connection pool resources. I think it's time to take the plunge and refactor some code.

Want to use code from this post? Check out the license.

Reader Comments

3 Comments

Nice solution! We may need to borrow it for `cfredis`. :-) Our logs are riddled with "Could not get resource from pool" errors related to Jedis connection pools.

3 Comments

@Ben - We experience a high number of CF request timeouts in our application. When they occur while a Redis connection is open, we lose the pooled connection. We've experimented with several of the connection pool options in Jedis but can't seem to get the combination right.

We're also running CF8 and an older version of Jedis, which doesn't help. :-)

15,640 Comments

@Matt,

To be honest, all of the Java stuff is a bit of a mystery to me. "Commons Pool" ... ok sure, why not :D That said, Redis is insanely fast, so I would guess that the timeout is coming from something tangential.

It took me a couple of weeks, but I finally solved a Redis problem we were having - lots of timeouts, like you. It started slow... then the logs showed increase after increase after increase, and we were getting tens of thousands of timeouts an hour. When I looked at the log graphs, it looked like just a ton of timeouts. But, when I zoomed into the graphs, it showed a cycle - about every 120 seconds, there was a massive spike in timeouts... but then, between the spikes, it was all good.

More digging, more digging, I discovered that there was a "health check" that was keeping track of the key-space size in Redis (to look for sudden spikes). The problem was, it was implemented with:

KEYS *

... which blocks while it gathers all the keys.

When the Redis store was small, it was fast enough. But, once the key-space grew enough, the time it took to gather all the keys was enough to exceed the 2-second default timeout of the connection. So, every time the health probe ran, it ironically stopped Redis from working for about 4 seconds :D
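
For what it's worth, if all you need is the size of the key-space, the DBSIZE command returns the number of keys in constant time, so it won't block the server the way KEYS * does. Something like this rough sketch (using the same pool setup from the post above):

<cfscript>

	redis = application.jedisPool.getResource();

	try {

		// DBSIZE runs in constant time, unlike KEYS *, which walks the key-space.
		writeOutput( "Key count: " & redis.dbSize() );

	} finally {

		application.jedisPool.returnResource( redis );

	}

</cfscript>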

Your problem may be totally different, but I wanted to offer up my experience.

3 Comments

Thanks @Ben! I almost think the KEYS command should be turned off by default, way too easy to get into trouble with it.

Your comment made me think more deeply about how we're using (and losing) Redis connection objects. Most of our Redis usage is very simple, where we open a connection, execute a Redis command, and close the connection. We have frequent request timeouts but I don't think many are happening during Redis requests.

Further down the rabbit hole...

15,640 Comments

@Matt,

Good luck! I wish I knew more about the Java stuff and could point you in the right direction. But, alas, I'm just a wanderer in the dark :)

Maybe your Redis server has some connection limit in place?
