
On The Difficult Problem Of Logging Errors In Parallel Promises In JavaScript


Yesterday, in response to my post on gathering data in parallel inside an asynchronous generator-based workflow, Scott Rippey brought up a really interesting point: if several of the parallel requests error out or get rejected, only the first error in the group will be caught. Not only is that a great catch (no pun intended); but, as it turns out, this isn't a problem specific to generator-based workflows - it affects any situation in which you have parallel promises. And, dealing with this problem is not straightforward. At least, not to me.

To articulate the problem, let's look at a simplified example. In the following code, we are using the native Promise.all() to run several promises in parallel and await their resolution. Notice, however, that all the promises are going to be rejections:

var things = [ "this", "that", "other" ];

// Take the Things collection and initiate requests to all things in parallel. Then,
// we can wait for all of them to resolve (or ONE to reject).
var promise = Promise.all(
	things.map(
		function operator( thing ) {

			// return( Promise.resolve( thing ) );
			return( Promise.reject( new Error( "Nope('" + thing + "')!" ) ) );

		}
	)
);

// Log the results, either in resolution or rejection.
// --
// NOTE: Promise.all() will "fail fast"; as such, we should only see one error below
// despite the fact that all of the parallel requests will have failed.
promise.then(
	function handleResolve( values ) {

		console.log( "Resolve:" );
		console.log( values );

	},
	function handleReject( reason ) {

		console.log( "Reject:" );
		console.log( reason );

	}
);

Here, we're mapping 3 values onto 3 promises, rejecting them all, and then logging the outcome. And, when we run this code in the terminal, we get the following output:

Reject:
[Error: Nope('this')!]

As you can see, while all three of our promises were rejected, we only learn about the first rejection in the collection. This is because the Promise.all() algorithm is a "fail fast" algorithm. Meaning, it only waits for the first rejection in the collection before rejecting the aggregate promise. As such, the last two errors disappear off into the ether and we never learn anything about them.

At first, you might be tempted to solve this problem by attaching some rejection logger to the end of each parallel promise:

var promise = Promise.all(
	things.map(
		function operator( thing ) {

			var indexPromise = Promise
				.reject( new Error( "Nope('" + thing + "')!" ) )

				// Log [and re-throw] any errors that come back from this request.
				.catch( logAndRethrow )
			;

			return( indexPromise );

		}
	)
);
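
The logAndRethrow() reference here is just a placeholder - it isn't defined anywhere in this demo. A minimal sketch of the kind of thing I have in mind (purely illustrative, not part of any library) might look like this:

// Log the rejection for posterity; then, re-throw the error so that the promise
// remains in a rejected state and the parent workflow still sees the failure.
function logAndRethrow( error ) {

	console.error( "Parallel promise rejected:", error );

	throw( error );

}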

Unfortunately, this approach is a non-starter because you're likely to end up double-logging errors: once in your intermediary logger and once in the parent workflow's rejection handler. The core problem here is that it's not the job of the request to know how its own errors are going to be handled in the greater workflow. As such, it cannot and should not make assumptions about logging.

As a holdover from the AngularJS $q service, I tend to use the Q promise library as my go-to Promise implementation. So, I thought I would look to see if Q had any features that might help. And, as it turns out, Q tracks unhandled rejections! So, I thought I would see what happens if we take the above demo, replace the native Promise class with the Q library, and then check for unhandled rejections:

// Use the Q library instead of the native Promise class.
var Q = require( "q" );

var things = [ "this", "that", "other" ];

// Take the Things collection and initiate requests to all things in parallel. Then,
// we can wait for all of them to resolve (or ONE to reject).
var promise = Q.all(
	things.map(
		function operator( thing ) {

			// return( Q.resolve( thing ) );
			return( Q.reject( new Error( "Nope('" + thing + "')!" ) ) );

		}
	)
);

// Log the results, either in resolution or rejection.
// --
// NOTE: Q.all() will "fail fast"; as such, we should only see one error below despite
// the fact that all of the parallel requests will have failed.
promise.then(
	function handleResolve( values ) {

		console.log( "Resolve:" );
		console.log( values );

	},
	function handleReject( reason ) {

		console.log( "Reject:" );
		console.log( reason );

	}
);

// Check to see if Q has tracked any uncaught errors.
setTimeout(
	function() {

		console.log( "Unhandled Errors:" );
		console.log( Q.getUnhandledReasons() );

	},
	250
);

As you can see, we've just replaced "Promise." calls with "Q." calls. And, when we run this code in the terminal, we get the following output:

Reject:
[Error: Nope('this')!]
Unhandled Errors:
[]

Unfortunately, this doesn't work. At least not when used in a Q.all() promise aggregation. Internally, Q.all() has to bind a rejection handler to each parallel promise so that it knows when and if it has to reject the aggregate. This is great for Q.all(); but, as a side-effect, it means that subsequent errors are technically considered "handled" while not actually being reported to the parent workflow.
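
To see that handler-attachment behavior in isolation, here's a quick sketch (outside of any .all() aggregation). Attaching a rejection handler - even one that just swallows the error - should be enough to remove the rejection from Q's unhandled tracking:

// Use the Q library instead of the native Promise class.
var Q = require( "q" );

// This rejection never gets a handler attached - Q should track it as unhandled.
Q.reject( new Error( "Never observed." ) );

// This rejection gets a handler attached. Even though the handler swallows the
// error, the rejection now counts as "handled" and drops out of Q's tracking.
Q.reject( new Error( "Technically observed." ) ).catch(
	function handleReject( error ) {

		// Swallow the error.

	}
);

// Check to see which rejections Q still considers unhandled.
setTimeout(
	function() {

		// We should expect to see only the "Never observed." rejection here.
		console.log( Q.getUnhandledReasons() );

	},
	250
);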

If we move the parallel promises out of the Q.all() aggregate, however, we might have better luck. In this next demo, I'm using an approach similar to the one that I outlined in my previous blog post: instead of using Q.all(), I'm storing the parallel promises in a simple JavaScript object. Then, I'm yielding each property in that object. This allows the promises to run in parallel while the collection of yield operators acts as a functional equivalent to Promise.all():

// Use the Q library instead of the native Promise class.
var Q = require( "q" );

// Invoke the generator function as a "promise workflow". In this case, Q.spawn() will
// proxy the iteration of the resultant generator, taking yielded values and piping them
// back into the next iteration of the generator.
Q.spawn(
	function* generator() {

		try {

			// Initiate the requests
			var thread = {
				a: Q.reject( new Error( "Nope('a')!" ) ),
				b: Q.reject( new Error( "Nope('b')!" ) ),
				c: Q.reject( new Error( "Nope('c')!" ) )
			};

			var a = yield( thread.a );
			var b = yield( thread.b );
			var c = yield( thread.c );

		} catch ( error ) {

			console.log( "Handled Error:" );
			console.log( error );

		}

	}
);

// Check to see if Q has tracked any uncaught errors.
setTimeout(
	function() {

		console.log( "Unhandled Errors:" );

		Q.getUnhandledReasons().forEach(
			function( e ) {

				console.log( e.split( "\n" ).shift() );

			}
		);

		// Clear the rejection queue so that we don't re-track these errors on
		// subsequent inspections.
		Q.resetUnhandledRejections();

	},
	250
);

This time, when we run the code in the terminal, we get the following output:

Handled Error:
[Error: Nope('a')!]
Unhandled Errors:
Error: Nope('b')!
Error: Nope('c')!

As you can see, when we run the parallel promises outside the context of Promise.all() / Q.all(), the rejections that we never got to observe (because the first yield threw before we ever reached the others) are recorded in Q's unhandled rejections queue. And, if we were using Q as our Promise implementation, we could create a scheduled task that periodically checks this queue, logs the errors, and then resets the queue.

NOTE: This previous statement is theoretical - I've never actually done this in production. Until this morning, I didn't even know that Q tracked unhandled errors.
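
That said, just to make the idea concrete, here's a rough sketch of what such a periodic logger might look like (untested - it simply leans on the same Q methods used above):

// Use the Q library instead of the native Promise class.
var Q = require( "q" );

// Periodically flush Q's unhandled-rejection queue out to the error log.
setInterval(
	function flushUnhandledRejections() {

		var reasons = Q.getUnhandledReasons();

		if ( reasons.length ) {

			reasons.forEach(
				function logReason( reason ) {

					// Each reason is a string containing the message and stack trace.
					console.error( "Unhandled rejection:", reason );

				}
			);

			// Clear the queue so that we don't re-log these same rejections on the
			// next inspection.
			Q.resetUnhandledRejections();

		}

	},
	( 5 * 1000 )
);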

I love the fact that Q can help solve this problem; but, at the same time, I don't love the fact that I have to use a user-land Promise implementation in order to log these elusive errors. I wish there were a more native solution.
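
As an aside - and not something I've tried in this particular workflow - more recent versions of Node.js do emit a process-level event for native promise rejections that never get a handler attached. Something along these lines might eventually fill the same role:

// In newer versions of Node.js, the process emits an "unhandledRejection" event for
// any native promise that is rejected without ever having a rejection handler attached.
process.on(
	"unhandledRejection",
	function handleUnhandledRejection( reason, promise ) {

		console.error( "Unhandled rejection:", reason );

	}
);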

If anyone has any better suggestions, I'd love to hear them.


Reader Comments


@All,

I'd also like to throw out one big hairy statement:

** It doesn't really matter **

Think about this for a second. We're talking about multiple errors. We're not talking about a single error getting lost; we're talking about a "subsequent" error getting lost. One perspective to keep in mind is that the overall workflow has already failed (most likely, depending on the code). As such, we'll likely have *something* in the error logs to debug. So, once we fix that error, the previously-missed errors will become the "first" error and will therefore be logged.

So, to some degree, this is a difficult problem ... that may not have to be solved. Yes, you'll miss some things; but, in the long run, it may not have much of an effect on the success of the application.


@Simon,

I've only used the .allSettled() method a few times - mostly when doing data migrations where I don't want to "stop" on a failed SQL statement, but rather let it all run and then inspect the settled promises for failures.

In the context of a generator-oriented workflow, I am not sure how much this will actually help. I think you'd have to start mucking up the actual workflow with control-flow logic that deals with the allSettled stuff:

var thread = {
	a: getAsyncA(),
	b: getAsyncB(),
	c: getAsyncC()
};

testSettled( yield( Q.allSettled([ thread.a, thread.b, thread.c ]) ) );

var a = yield( thread.a );
var b = yield( thread.b );
var c = yield( thread.c );

... you'd have to have something _outside_ of the individual yield statements that tests the settled results of all of them in aggregate.

And, another point I get stuck on is *what* error it would throw. If the thread came back and 2 of the 3 had settled in rejection, what error would make the most sense to propagate? An error that aggregates two other errors? What implications would this have on the upstream portion of the code that is handling the "catch"? It would have to handle two fundamentally different kinds of errors.
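
For what it's worth, here's a rough sketch of what I mean by testSettled() - purely a hypothetical helper (both the name and the behavior are placeholders), logging every rejection and then re-throwing the first one so that the generator's try/catch still gets *an* error:

// HYPOTHETICAL helper - inspect the settled results from Q.allSettled(), log every
// rejection, and then throw the first rejection reason (if there is one).
function testSettled( results ) {

	var rejections = results.filter(
		function( result ) {

			return( result.state === "rejected" );

		}
	);

	rejections.forEach(
		function( result ) {

			console.error( "Settled in rejection:", result.reason );

		}
	);

	if ( rejections.length ) {

		throw( rejections[ 0 ].reason );

	}

}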

Mostly just thinking out loud here; but, I do think it's a hard problem.
