Ben Nadel at cf.Objective() 2009 (Minneapolis, MN) with: Ryan Vikander and Kashif Khan and John Hiner

How Javascript Loading And Blocking Works By Default


This morning, I was going to look into LABjs which Kyle Simpson recommended to me as a way to easily load Javascript files in parallel without blocking the rest of the page content. Before I did that, however, I wanted to take a look at how browsers handle Javascript loading by default. This way, I could get a good baseline to which the non-blocking approach could be compared. When I did this, however, I have to say that I was kind of shocked at what I saw; in the modern browsers, at least, it appears that Script tags are downloaded in parallel by default.


I am told that while this is the modern behavior (tested above in Chrome, IE8, Firefox), older browsers (including IE7) do not load Javascript files in parallel. And, while the Script tags in the above demo do download in parallel, they continue to block other page elements, such as images, from loading at the same time. While I have yet to look into it, I am told that the LABjs library solves both of these problems (older browsers, and the blocking of non-script resources) simultaneously.
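To make the comparison concrete, here is a minimal sketch of the kind of non-blocking technique these script loaders build on: injecting a script element from Javascript, so the download doesn't block parsing of the rest of the page. The function name and callback wiring are my own illustration, not LABjs code.

```javascript
// Hypothetical helper (not from LABjs): dynamically inject a <script>
// tag so the browser downloads it without blocking the parser.
function loadScriptAsync(url, onload) {
  var script = document.createElement("script");
  script.src = url;
  script.async = true; // injected scripts are async by default; be explicit
  if (onload) {
    script.onload = onload;
  }
  document.getElementsByTagName("head")[0].appendChild(script);
  return script;
}
```

Note that this sketch trades blocking for an ordering problem: nothing here guarantees which of several injected scripts executes first, which is exactly the gap libraries like LABjs fill.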

Reader Comments



I've seen people talk about that one - it's a dependency management library, I believe. I'll definitely put it on the list.


You might want to check out jQuery Dominoes too:

We've opted to go with LABjs because Steve Souders (of Google, and an O'Reilly author) is working on it:

Please note that inline jQuery ready() blocks (and document.write) won't work. You'll need to move them to the header and add the blocks to the wait() function. I've added a request.OnPageLoadJs array to my framework; I append js blocks to it (in the order that they are called) and simply loop through them when outputting the header.

One library issue I've encountered is Google Maps. You can't load that via LABjs (yet), as it uses its own optimized method to load sub-scripts.



Yeah, it looks like the LABjs stuff requires a .wait( fn ) in lieu of the document-ready stuff. As far as Dominoes, that sounds familiar, but I'm not sure I ever looked at what it does. I'll check that out too.


I don't think the jQuery getScript will work in every case, as some libraries are dependent upon other libraries in order to work. Running multiple getScript() calls won't let you control that ordering. I have LABjs load jQuery, wait, load a couple of functions plus jQuery UI, wait, then load the jQuery UI-dependent code; then load SWFObject2, wait, load the oEmbed library (dependent upon SWFObject being loaded first), etc.

I don't know if multiple getScript() calls will enable you to have as much control.
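For comparison, one way to get ordered execution out of plain jQuery is to serialize the loads by nesting the getScript() callbacks. This helper is my own sketch (it assumes a global jQuery `$`), and note that serializing this way gives up the parallel-download benefit that LABjs keeps.

```javascript
// Sketch: load scripts strictly one after another, so each script's
// dependencies have already executed before it runs (assumes jQuery).
function loadInSeries(urls, done) {
  if (urls.length === 0) {
    return done();
  }
  $.getScript(urls[0], function () {
    loadInSeries(urls.slice(1), done);
  });
}

// e.g. jQuery UI must come after jQuery core:
// loadInSeries(["jquery.js", "jquery-ui.js"], function () { /* all set */ });
```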

I just returned from attending the recent Bay Area jQuery Conference. LABjs, RequireJS and Dominoes were all recommended, but jQuery's built-in getScript() function wasn't even mentioned.


According to the docs, getScript is a synonym for $.ajax({url: xxx, dataType: "script", success: completionfunction}). And $.ajax defaults to async:true. So it would at least provide async loading, if that's all you want.


@James, @Steve,

Certainly, it looks like jQuery would allow us to load multiple scripts in either series or parallel. Without knowing too much about these other libraries, though, it looks like their real power is that you can load things in parallel AND build dependency logic into the workflow.

I've never really thought about this before. I've always been a load-at-top script kind of guy. I bet thinking in terms of dependencies will force me to think more about how my scripts are even organized.


I am also using jQuery's getScript() function to load scripts asynchronously. Its callback allows you to load dependent scripts easily.
However, one 'gotcha' with it is that getScript(), in its default functionality, prevents caching of loaded scripts.
A workaround for this is to modify the jQuery source code and change getScript() to use $.ajax() instead of $.get() [which it uses by default], and pass the cache:true option in the ajax call. [The $.get() function does not accept the 'cache' option.]
Using $.getScript() also allows you to load scripts dynamically based on DOM content/browser features/etc...
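Rather than editing the jQuery source, the same effect can be had by calling $.ajax() directly with cache:true. The wrapper name below is my own; only the $.ajax() options are from the discussion above.

```javascript
// Sketch: a getScript() replacement that allows caching, without
// modifying jQuery itself -- $.ajax() accepts the cache option even
// though the $.getScript() shorthand does not expose it.
function cachedGetScript(url, success) {
  return $.ajax({
    url: url,
    dataType: "script",
    cache: true, // suppress the "_=<timestamp>" cache-busting parameter
    success: success
  });
}
```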



@Azadi: The web has a built-in mechanism for caching. In the HTTP request you can have the header "If-Modified-Since: ((date))". If the file has been modified since the date given, the server responds 200 OK and gives you the file. But if the file has not been modified, the server responds 304 Not Modified with no file body, because the copy in cache (with that date) is still good. This cuts down on a lot of network traffic, because most "replaced elements" in the HTML (images, scripts, etc.) have not been modified since they were last cached.
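The decision the server makes in that exchange can be sketched as a tiny function (illustrative logic only, not tied to any particular server framework):

```javascript
// Sketch of the server side of a conditional GET: compare the file's
// last-modified time against the If-Modified-Since request header.
function conditionalGetStatus(fileModifiedMs, ifModifiedSinceHeader) {
  if (ifModifiedSinceHeader && fileModifiedMs <= Date.parse(ifModifiedSinceHeader)) {
    return 304; // Not Modified: the cached copy is still good, send no body
  }
  return 200;   // OK: send the (possibly new) file
}
```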

Thanks for picking up on this subtlety and telling us about it. It will cause our pages to load faster if they don't need to be loaded from the network at all.


... "they" being the scripts, that is.

I noticed that the default for dataType:"script" is cache:false, so I for one will forgo the getScript() shorthand method in favor of $.ajax() with an object literal that sets cache:true.


If you look at the jQuery source, or just check the fired requests in Firebug, you will see that each $.getScript() request - and each $.ajax() request with dataType 'script' or 'jsonp' - has a unique 'cache-busting' timestamp variable appended to its URL. This 'prevents' browsers from caching the requested script. Or, to put it correctly, it makes the browser re-get the script as if it were a completely new script, even if the file was not modified at all.

The web's "built-in mechanism for caching" does not work with $.getScript() or $.ajax({dataType:'script'}) by default.

more here:



FYI: LABjs with jQuery 1.4+ is perfectly safe and correct for $(document).ready() checks. I use LABjs on several sites with jQuery and ready() and it works fine.

Prior to 1.4, jQuery was incorrectly checking the dom-ready event (specifically, it wasn't properly checking in FF if the dom-ready had already passed when jQuery was loaded). So there was a chance (race condition) that jQuery loaded via LABjs could come in after the page's dom-ready, and thus any ready() waiting scripts would never get fired.


This having been said, you *do* have to think about $(document).ready() differently than you're probably used to thinking about it. The reason is, with any dynamic script loader, the loading of scripts by design no longer blocks the rest of the page (that's how the performance ramps up!).

The side effect is that $(document).ready() is ***NOT*** sufficient to test if all scripts have loaded as you are probably used to doing before.

In reality, this practice (while common) was actually a bad behavior and a misuse of what dom-ready is for. The loading of scripts and the DOM of a page being ready are actually separate concerns, and you need to be careful to defer/wait for ***BOTH*** to occur.

That is why LABjs lets you set up .wait() callbacks when scripts load. So, what I do is something like this:

$LAB.script("jquery.js").wait().script("plugin1.js", "plugin2.js").wait(function(){
// now the DOM is ready *and*
// the scripts are all loaded.
});
So, bottom line, you need to wait for both dom-ready *and* script loading to be safe.


Also, wanted to comment on $.getScript() -- yes, it will load things in parallel using "async", but the thing it *won't* do is allow you to sequence the execution of the scripts.

So, if you call getScript() on 3 scripts, you can't control which one loads and executes first. If the 3 scripts in question have execution-order dependencies, you may have race conditions where you get dependency errors.

If you instead call getScript() on one script, then wait for it to load and call another getScript(), you can avoid the race condition on dependencies, but then you lose the performance boost of the two files loading in parallel.

LABjs will let you have the best of both worlds: loading in parallel *and* controlling execution order when necessary for dependencies.
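In $LAB terms, the pattern Kyle describes looks roughly like this. I've wrapped the chain in a function so it can be driven by any $LAB-compatible object; the file names are placeholders.

```javascript
// Sketch: scripts listed between .wait() calls download in parallel,
// while .wait() enforces execution order across the groups.
function loadPlugins($LAB, done) {
  return $LAB
    .script("jquery.js")
    .wait()                             // jquery.js executes first
    .script("plugin1.js", "plugin2.js") // these two download in parallel
    .wait(done);                        // everything loaded and executed
}
```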


@Azadi: Yes, I know that technique. I use it myself to force refreshes on log files in my workplace's performance logging system. MSIE likes to prefer cache on scripts (perhaps to win benchmarks). If you want to be kind-hearted about the lack of an If-Modified-Since call, you could say that MSIE "assumes idempotence", which is what we ourselves would be doing by turning on cache:true. Anyway, I'll bet that's why jQuery defaults to cache:false for dataType:"script". In any case, thank you again for bringing cache:true to our attention. At least with cache:true, the assumption of idempotence is under our own control.

@Kyle: Script dependencies are why I said earlier "It would at least provide async loading, if that's all you want." (emphasis added). Not everyone has complex dependencies. In fact, most people don't. In most cases, the completion routine is sufficient.

Here's a roll-your-own technique that's perfectly fine for most people: Suppose that you have 3 scripts, called TopMost, MidLevel and LowLevel, and all 3 have to be loaded before you execute TopMost. Just add this to the very beginning of all 3:

if (++myScriptLoadCount < 3) { /* not all 3 scripts are loaded yet */ }

The purpose of that line is this: the jQuery documentation says that, by the time you get control in the completion routine, the loaded script has already executed. By adding that simple if, you assure that the automatic firing does nothing until all 3 scripts are in.

Then, when you load them, do this:

var myScriptLoadCount = 0;
function CompletionRoutine() {
    if (myScriptLoadCount >= 3) {
        // all 3 scripts are in; now it is safe to execute TopMost
    }
}
$.ajax({ /* ..., */ success: CompletionRoutine }); // 3x

Do the $.ajax line 3 times for all 3 scripts. The >= (as opposed to ==) is a safer coding practice instilled into me in college.

I'm not trying to undermine the popularity of your LABjs tool. It's just that it calls for the loading of a larger, general-purpose script where smaller, targeted-purpose coding may suffice with less weight. And loading fast is the whole point here.

I appreciate your comments about how well LABjs was written, and for situations that warrant it, I'll keep it in mind.



It definitely seems like very cool stuff. While I have not yet used it, it might be cool if the LABjs load chain had both a .wait() and a .ready() method. The concept with ready() would be that it acts as a short-hand for wrapping the callback in:

$(function(){ ... })

It strikes me as a very common use case (again, I haven't used it, so might be crazy).



Of course, my suggestion makes the assumption that the underlying library is jQuery-powered, which I guess LABjs does not. I've lived in a jQuery world for so long, it doesn't occur to me that jQuery is not always around :)


@Steve -- you're right, you can always roll your own approaches and usually get the code much smaller than LABjs. If you're in a position to do so, great.

However, it's VERY common for web applications to grow in complexity, and as they do, things like remote scripts (or scripts you don't control) become an extremely common occurrence (analytics, sharing buttons, etc). When that happens, the roll-your-own approach that relies on modifying scripts becomes impossible.

I personally would rather use a general loader that can handle any level of complexity, from simple to ridiculous, without changing my usage. And I've strived to make LABjs small enough (2.2k gzip'd) that it's not too much of a penalty for future-proofing.


@Ben -- yes, you're correct, LABjs is totally separate of any framework, therefore I don't bake anything into it that is framework specific.

However, it's possible to wrap the $LAB API when additional functionality is desired. For instance, fLABjs wraps the API to properly handle loading of file:// type URIs.

I just whipped up this little snippet that adds a .ready() to $LAB, as you requested. You'd simply load this after $LAB and it does the rest. Make sure, of course, that jQuery is already present (i.e., not loaded with LABjs), as this would be a chicken-and-egg problem otherwise.

My suggestion would be to inline this snippet in the same file as $LAB core, just at the bottom of the file.



Ah, looks cool my man. Let me run an idea by you - what do you think of just making the LAB object the prototype of a new object, jLAB? Something like (pseudo code):

function jLAB(){ }
jLAB.prototype = $LAB;
jLAB.prototype.ready = function(){ .... };

I am not too familiar with how the LAB library works underneath; do you think this would be a sufficient approach, or would there be some internal conflict with the core LAB library?

I guess this would kind of be like a Decorator pattern that uses extension rather than a wrapped object.


@Ben -- It's a good idea, the problem is it won't work. :)

What's happening underneath the covers is that every time an API function executes, like .script(...), what's returned is *not* just another instance of $LAB, but actually a new object (new closure!) with what happens to be an identical looking API. It "fools" you into believing you're chaining off the original object, but you're not.

So, your solution would work if all you wanted to do was jLAB.ready()... but jLAB.script(...).ready() would be undefined, since .script() would return a new object that wasn't decorated by your .ready() extension.

This is why the solution I proposed actually wraps the API not only the first time, but also wraps the return value every time as well.
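A toy illustration of the behavior Kyle describes - each API call hands back a brand-new chain object, so decorating the first one isn't enough. This is a deliberate simplification, not actual LABjs internals:

```javascript
// Each .script() call returns a *fresh* chain object; a method added
// to the original object does not exist on the returned one.
function makeChain(log) {
  return {
    script: function (url) {
      log.push(url);
      return makeChain(log); // new object: decorations don't carry over
    }
  };
}
```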



Ah, that makes sense. From looking at your code, I definitely got the feeling that more was going on than meets the eye. I guess this is like that Promise stuff in Dojo (I was just reading about it the other day). You can method-chain, but the chain doesn't return the original object - it returns a new "Promise" object that you can chain off of.

Thanks for the clarification!


Hi all,

I have just converted my application to use labjs and it seems to be working really well. However, once a script is loaded I do not subsequently see 304 requests for that script (using fiddler). It also seems to be the case that if I then edit a script, that change is not picked up i.e. it is still using the cached copy in the browser. The only way I can pick up the changed script is to clear the cache which is a major pain for development.

I really hope one of you can tell me I'm doing something stupid otherwise I think it is a problem.

By the way I'm seeing this behaviour in IE7 if it is significant.

Thanks in advance ...


I haven't had a chance to look at the no-304 issue yet, but my initial guess is that maybe the use of XHR is what's exposing this bug. Perhaps set the "UseLocalXHR" setting to false and try again, see if that "fixes" it.


OK thanks, I will try that. I was also seeing some other strange behaviour which was a bit more serious for me in that it seemed to be arbitrarily missing one or two scripts out when loading with an empty cache. This would manifest itself in various null symbol script errors as I tried to reference things that weren't loaded.

Now I double checked that I have the correct code to load the scripts and if I hit F5 I can see in fiddler that it will subsequently load the one or two files that it had missed out and all was well, but I feel like I would need to understand what was going on there before I can proceed with it.

I'll keep you posted on what I learn....

Thanks again.


I personally don't want to see 304 requests as they negatively impact the page loading time.

I want all of my scripts to be permanently cached unless they have been updated. I wrote a script to recurse a shared script sub-directory and create a server-scoped struct using the hashes of all js/css files as the key and a YYYYMMDDHHMMSS timestamp as the value. I wrote a UDF to automatically determine if the key exists and rewrite js/css resource links to add the timestamp to the end of the URL. ie,

I used the following rule with Iirf (mod-rewrite for IIS):
RewriteRule ^(.*)(_[0-9]{14}\.)(css|js)$ $1.$3 [I,QSA,L]

This allows me to have permanently cached scripts that are automatically updated whenever the file is changed. I can easily add additional rules and triggers to the server struct generator or UDF for development purposes.
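The URL-stamping half of that scheme can be sketched as a small function (the helper name and example paths are illustrative; the IIRF rule then maps the stamped name back to the real file):

```javascript
// Sketch: embed a YYYYMMDDHHMMSS timestamp in a js/css filename so the
// file can be cached "forever" yet gets a brand-new URL whenever its
// last-modified time changes.
function stampedUrl(path, timestamp) {
  // "/js/app.js" -> "/js/app_20100705120000.js"
  return path.replace(/\.(js|css)$/, "_" + timestamp + ".$1");
}
```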

When using Firefox, I generally use Web Developer's "Clear Private Data | Cache" or Fiddler's "Clear Firefox Cache" to forcefully clear the cache. I've also written a bookmarklet to append special URL parameters to the end of a URL and force a refresh of server-side caching.


I don't doubt that a mechanism can be developed as you suggest. In fact in production, my application uses distant expiry headers so this really isn't an issue.

What troubles me is that it appears not to be doing what it should. i.e. if a resource does not supply a cache control header and that resource is requested again, then the browser should do a conditional get to see whether it has changed and it appears not to be doing that.

This causes an irritation for development, but more importantly I am raising it since it may be a bug.
