About two years ago, the mobile Gmail team posted an article focused on reducing the startup latency of their HTML5 application. It described a technique for bypassing the parsing and evaluation of JavaScript until it was needed: placing the code inside comments. Charles Jolley of SproutCore fame was quick to jump on the idea. He experimented with it and found that similar performance gains could be achieved by putting the code inside a string rather than commenting it out. Then, despite promises of building it into SproutCore, the technique pretty much fell into oblivion. That’s a shame, because it’s an interesting alternative to lazy loading that suits CommonJS modules really well.

Close encounters of the text/javascript type

To understand how this technique works, let’s look at what happens when the browser’s parser encounters a script element with a valid src attribute. First, a request is sent to the server. Hopefully the server responds, and the browser proceeds to download (and cache) the requested file. Once these steps are completed, the file still needs to be parsed and evaluated.

Fig. 1

For comparison, here’s the same request hitting a warm HTTP cache:

Fig. 2

What’s worth noticing here—other than the obvious benefits of caching—is that parsing and evaluation of the JavaScript file still happen on every page load, regardless of caching. While these steps are blazing fast on modern desktop computers, they aren’t on mobile, even on recent, high-end devices. Consider the following graph, which compares the cost of parsing and evaluating jQuery on the iPhone 3, 4 and 4S, the iPad and iPad 2, a Nexus S and a MacBook Pro. (Note that these results are indicative only. They were gathered using the test hosted at lazyeval.org, which at this point is still very much alpha.)

Graph 1
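
The harness behind lazyeval.org isn’t reproduced here, but the core of such a measurement is tiny. A sketch, with a generated payload standing in for a real library’s source:

```javascript
// Minimal parse + eval timing sketch. The generated payload is a
// stand-in for a real library's source (e.g. the contents of jquery.js).
function timeParseAndEval(source) {
  var start = Date.now();
  new Function(source)(); // parsing and evaluation both happen here
  return Date.now() - start;
}

// Build a throwaway payload so there is something non-trivial to parse.
var statements = [];
for (var i = 0; i < 5000; i++) {
  statements.push('var v' + i + ' = ' + i + ' * 2;');
}
var elapsedMs = timeParseAndEval(statements.join('\n'));
```

On a desktop this completes in a handful of milliseconds; the article’s point is that the same work can take an order of magnitude longer on older phones.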

Remember that these times come on top of whatever networking costs you’re already facing. Furthermore, they’ll be incurred on every single page load, regardless of whether or not the file was cached. Yes, you’re reading this right: on an iPhone 4, parsing and evaluating jQuery takes over 0.3 seconds, every single time the page is loaded. Admittedly, those results have improved substantially on more recent devices, but you can’t count on your whole user base owning last-generation smartphones, can you?

Lazy loading

A commonly suggested solution to the problem of startup latency is to load scripts on demand (for example, following a user interaction). The main advantage of this technique is that it delays the cost of downloading, parsing and evaluating until the script is needed. Note that in practice—and unless you can delay all your JavaScript files—you’ll end up paying round-trip costs twice.

Fig. 3
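
In browser terms, lazy loading typically amounts to injecting a script element when the code is first needed. A minimal sketch (function and file names are illustrative, not from any particular library):

```javascript
// On-demand script loading, sketched. The element is appended only when
// the code is actually needed; both callbacks matter, since the network
// or the server may be unavailable by then.
function loadScript(src, onLoad, onError) {
  var script = document.createElement('script');
  script.src = src;
  script.onload = onLoad;
  script.onerror = onError; // delivery is not guaranteed
  document.head.appendChild(script);
}

// Usage (illustrative): defer a widget's code until first interaction.
// button.addEventListener('click', function () {
//   loadScript('/js/widget.js', initWidget, showOfflineMessage);
// });
```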

There are a number of downsides to this approach, however. First, the code isn’t guaranteed to be delivered: the network or the server can become unavailable in the meantime. Second, the speed at which the code is transferred depends on the network’s quality and can thus vary widely. Lastly, the code is delivered asynchronously. These downsides force the developer to code both defensively and with asynchronicity in mind, irremediably tying the implementation to its delivery mechanism in the process. Unless the whole codebase is built on these premises—which is probably something you want to avoid—deferring the loading of a chunk of code becomes a non-trivial endeavor.

Lazy evaluation to the rescue

Lazy evaluation avoids these issues altogether by focusing on delaying the parsing and evaluation stages only. The script can be either bundled with the initial payload or inlined. It is prevented from being evaluated during initial page load by being either commented-out or escaped and turned into a string (“stringified”?). In both cases, the content is simply evaluated when required.
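
Both flavors fit in a few lines. The comment-extraction logic below is deliberately naive, a sketch rather than production build output:

```javascript
// Flavor 1: the module body ships inside a comment. Function.prototype.toString
// returns the function's source text, from which the body is sliced out.
function shipped() {
  /*
  return 6 * 7;
  */
}
function evaluateCommented(fn) {
  var src = fn.toString();
  var body = src.slice(src.indexOf('/*') + 2, src.lastIndexOf('*/'));
  return new Function(body); // parsing + evaluation deferred until here
}

// Flavor 2: the module body ships as an escaped string.
var shippedAsString = 'return 6 * 7;';
function evaluateStringified(src) {
  return new Function(src);
}
```

In both cases nothing is parsed until the evaluate function is called; invoking the function it returns runs the deferred code.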

Fig. 4

And again, for comparison, hitting a warm HTTP cache:

Fig. 5

As the following graph of an iPad 2 parsing and evaluating jQuery shows, both options consistently outperform regular evaluation by at least a factor of ten. Similar tenfold performance improvements were observed on all tested devices.

Graph 2

Commented-out code has slightly better performance indices than “stringified” code does. However, it can be quite complicated to extract when not inlined, and it is also more brittle: some phone operators are known to strip out JavaScript comments. “Stringified” code, on the other hand, is both more robust and a lot easier to access, which is why it’s preferred.

Building lazy evaluation into CommonJS modules

It turns out that the CommonJS module’s extra level of indirection (the require call) makes it an ideal candidate for lazy evaluation. Since lazy evaluation is synchronous, the whole process can be made completely transparent to the developer. Enabling lazy evaluation becomes a one-liner in a config file, not a large architectural change. Even better, the dependency graph built through static analysis can be leveraged to automatically lazy evaluate all of the selected module’s dependencies.

Implementation-wise, enabling lazy evaluation of CommonJS modules requires modifying the runtime so that it correctly evaluates and wraps modules which are transported in their “stringified” form. In modulr, my CommonJS module dependency resolver, this is done like so:

if (typeof fn === 'string') {
  fn = new Function('require', 'exports', 'module', fn);
}

This implies that lazy-evaluated modules are escaped and surrounded by quotes at build time, server-side, before transport.
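
The build-time half can be sketched as follows; `define` here is a hypothetical stand-in for however the bundle registers modules, and JSON.stringify conveniently produces a correctly escaped, quoted JavaScript string literal:

```javascript
// Turn a module's source into a registration call carrying a quoted,
// escaped string literal instead of a function expression.
// JSON.stringify handles quotes, backslashes and newlines in the source.
function stringifyModule(id, source) {
  return 'define(' + JSON.stringify(id) + ', ' +
         JSON.stringify(source) + ');';
}

var bundled = stringifyModule('greet',
  "exports.hello = function (name) { return 'hi ' + name; };");
```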

The initial results are promising, but at this point, this is merely a work in progress. Future plans for modulr include enabling full minification of its output (just minifying the output won’t do, as it would miss modules transported as strings), instrumenting the runtime to gather perf data, and experimenting with a Souders-inspired per-module localStorage cache. If there’s interest, I’d also like to automate lazyeval.org so it can measure the performance gains of applying this technique to other JavaScript libraries and report those results to browserscope.org.
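
As a thought experiment only (nothing like this ships in modulr), a per-module cache over a localStorage-style store could look like this:

```javascript
// Sketch of a per-module source cache over a localStorage-like store
// (getItem/setItem). Sources are kept as strings and compiled only on
// first require, so cached modules still benefit from lazy evaluation.
function createModuleCache(store) {
  function cacheModule(id, source) {
    store.setItem('module:' + id, source);
  }
  function requireCached(id) {
    var source = store.getItem('module:' + id);
    if (source === null) throw new Error('module not cached: ' + id);
    var module = { exports: {} };
    new Function('require', 'exports', 'module', source)(
      requireCached, module.exports, module);
    return module.exports;
  }
  return { cacheModule: cacheModule, requireCached: requireCached };
}

// In a browser this would be backed by the real thing:
// var cache = createModuleCache(localStorage);
```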


Tobie Langel (@tobie) is a software engineer at Facebook. He’s also Facebook’s W3C AC Rep. An avid open-source contributor, he’s mostly known for having co-maintained the Prototype JavaScript Framework. Tobie recently picked up blogging again and rants at blog.tobie.me. In a previous life, he was a professional jazz drummer.

24 Responses to “Lazy evaluation of CommonJS modules”

  1. mirazasx

    Can’t wait to give it a try. I’m sure it’s gonna be a big improvement for big pages where a lot of DOM work is executed onload :).

  2. Alexsandro

    Great trick!

    I will test it. ASAP :)

  3. Alexsandro

    Thinking more about it: there should be a demo.

  4. James Burke

    It would be great to contrast this technique with ApplicationCache:


    While appcache can be a bit tricky to set up, for mobile sites it is often a better choice for the concern about network unavailability and it can store other assets like images.

  5. Kurt Milam

    @JamesBurke : Lazy evaluation and ApplicationCache are techniques for dealing with two separate issues.

    Take a look at figure 1 again. ApplicationCache is a way to minimize the time spent on the blue side of the figure – Latency and Download. Lazy evaluation is focused on the orange side – Parsing and Evaluation.

    I don’t think there’s much to contrast between the two techniques. If anything, they’re complementary.

  6. Tobie Langel

    James, AppCache is a great complementary solution to lazy evaluation. By itself, however, it won’t help you reduce parsing and evaluation costs unless you’re ready to send each module separately down the wire. That would currently be prohibitive, but I have high hopes SPDY will help alleviate it, if it proves not to be a battery hog on mobile.

  7. Broofa

    ‘Found myself wondering how devs might map this to their own codebase/devices. Based on the lazyeval.org data shown in your graphs, which tests a 90KByte wad of minified script (jquery), would it be fair to say developers could generally expect the following parse+eval costs:

    - iPhone 3: 9.5 ms/KB (minified JS)
    - iPhone 4 & iPad 1: 3.5 ms/KB (minified JS)
    - iPad 2: 2 ms/KB (minified JS)
    - iPhone 4s & Nexus S: 1 ms/KB (minified JS)
    - MBP: .4 ms/KB (minified JS)

  8. Tobie Langel

    Broofa, I don’t have data to back this up as of yet, but my intuition is that there’s a strong correlation between size and parsing costs. On the other hand, evaluation costs are more dependent on content than on length. Typically, libraries running a lot of feature tests upfront will take longer to evaluate. Given that evaluation seems to make up the bulk of the perf costs, I don’t think we’ll end up with a simple formula.

    That said, I’m hoping to find time to make lazyeval.org run the same tests on arbitrary JS code, and I’m also hoping to instrument modulr sufficiently so that such data is available during development.

  9. Stefano Bortolotti

    Tobie, it would be nice to know which OS versions were installed on the devices used for the “parsing and evaluating jQuery” test.

  10. Mikhail Davydov

    Sencha Ext JS 4 full version, packed (1MB of JS, localhosted):
    iOS Safari 4 – load time 300ms, evaluation time 3000ms!!!
    Firefox 8 – load time 32ms, evaluation time 500-600ms

    Similar to the “Lazy CommonJS module” solution: https://github.com/azproduction/lmd

  11. James Burke

    Tobie: I would expect that a build is being done for this solution, and an appcache approach would put the stuff that would be stringified just as a separate build layer of JS and have it dynamically load when needed.

    What is not clear to me is how appcache prioritizes its caching requests. Hopefully the stuff that is actually loaded by the page would take priority over other files just listed in the manifest, but I have not done any tests, so that may not be true.

  12. JavaScript Weekly #58 | island205

    [...] Lazy evaluation of CommonJS modules: Tobie Langel points out that deferring the execution of JavaScript code (caching it in a string or a comment until it is needed) can significantly improve performance. It builds on an idea the Gmail team had two years ago. [...]

  13. Tobie Langel

    Stefano Bortolotti: good point. I’ll make sure to have that data available once I clean up lazyeval.org.

    James: I’m not sure I understand your suggestion. What’s the point of stringifying if you’re lazy loading? Lazy loading doesn’t maintain synchronicity, so you’d be losing one of the key benefits of this technique. Not sure how AppCache comes into play here, but to answer your question, the AppCache update/download process only kicks in after the onload event has fired.

  14. AMD is Not the Answer : Tom Dale

    [...] the source code for each file and wraps it as a string, as described by Tobie Langel in his post Lazy evaluation of CommonJS modules. All of the source code is downloaded in one HTTP request (great for high-latency 3G connections) [...]

  15. James Burke

    Tobie: the appcache suggestion is an alternative to this approach, one that I think gives better performance overall for mobile apps, since it allows caching of images and other assets.

    There can still be builds, but the build layers that are for second order features can just be regular JS files stored in the appcache instead of being stringified into one JS file. Those files could be async-loaded (although not required), but since they are already in the appcache, it would complete very quickly. The initial code footprint loaded in the browser is smaller since a set of stringified modules were not loaded into the page.

    It might be interesting to run some tests, tweaking the tooling for this article to use secondary file loading from an appcache instead of stringifying all the modules into one JS file. I can see that setting up a proper appcache manifest might not be so much fun, but hopefully these tools know at least what JS files are in play and can help build up a manifest.

    Perhaps one of the other difficulties is working out what module to load up front in the JS build layer that is default loaded on page load vs ones that are loaded later. I suppose it depends on the app. Maybe the stringify approach does not have that concern.

  16. Tobie Langel

    James: The problem with the AppCache approach you describe is that a) it forces the developer to decide upfront which modules will be called synchronously and which ones won’t, b) there’s no guarantee that the files will have already been pre-loaded by AppCache when they are required. The “stringified” approach alleviates both of these concerns.

  17. Não use jQuery no seu site mobile: conheça o Zepto.JS | blog.caelum.com.br

    [...] And all of this has to be read and parsed before your code can even think about executing. The statistics for loading jQuery on mobile devices are frightening. On a top-of-the-line iPad 2, just parsing and evaluating jQuery takes 5x longer than on the desktop. An iPhone [...]

  22. C3PO: Common 3rd-party objects / Stoyan's phpied.com

    [...] is more than the whole of jQuery, which previous experiments show can take the noticeable 200ms just to parse and evaluate (assuming it's cached) on [...]

  23. Code Mana`o | Evaluating Strings

    […] The article I’m reading […]

  24. Não use jQuery no seu site mobile: conheça o Zepto.JS | Saldanha

    […] And all of this has to be read and parsed before your code can even think about executing. The statistics for loading jQuery on mobile devices are frightening. On a top-of-the-line iPad 2, just parsing and evaluating jQuery takes 5x longer than on the desktop. An iPhone […]
