
Web Performance Calendar

The speed geek's favorite time of year
2011 Edition
ABOUT THE AUTHOR

Tobie Langel (@tobie) is a software engineer at Facebook. He’s also Facebook’s W3C AC Rep. An avid open-source contributor, he’s mostly known for having co-maintained the Prototype JavaScript Framework. Tobie recently picked up blogging again and rants at blog.tobie.me. In a previous life, he was a professional jazz drummer.

About two years ago, the mobile Gmail team posted an article focused on reducing the startup latency of their HTML5 application. It described a technique for bypassing the parsing and evaluation of JavaScript until it is needed: placing the code inside comments. Charles Jolley of SproutCore fame was quick to jump on the idea. He experimented with it and found that similar performance gains could be achieved by putting the code inside a string rather than commenting it out. Then, despite promises of building it into SproutCore, this technique pretty much fell into oblivion. That’s a shame, because it’s an interesting alternative to lazy loading that suits CommonJS modules really well.
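The two variants can be sketched in a few lines of JavaScript. This is an illustration of the general idea, not the Gmail or SproutCore implementation; all names are made up, and the comment variant relies on `Function.prototype.toString` returning the function’s source text:

```javascript
// 1. Comment technique: ship the code inside a comment, extract it
//    with a regex and evaluate it on demand.
var commentedModule = function () {/*
  6 * 7;
*/};

function evalCommented(fn) {
  // Recover the source text hidden in the comment, then evaluate it.
  var src = fn.toString().match(/\/\*([\s\S]*?)\*\//)[1];
  return eval(src);
}

// 2. String technique: ship the same code as an escaped string literal.
var stringifiedModule = "6 * 7;";

function evalStringified(src) {
  return eval(src);
}

console.log(evalCommented(commentedModule));     // 42
console.log(evalStringified(stringifiedModule)); // 42
```

In both cases the code travels with the page but costs nothing to parse or evaluate until one of the `eval` calls runs.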

Close encounters of the text/javascript type

To understand how this technique works, let’s look at what happens when the browser’s parser encounters a script element with a valid src attribute. First, a request is sent to the server. Hopefully the server responds and the browser proceeds to download (and cache) the requested file. Once these steps are completed the file still needs to be parsed and evaluated.

Fig. 1

For comparison, here’s the same request hitting a warm HTTP cache:

Fig. 2

What’s worth noticing here (other than the obvious benefits of caching) is that parsing and evaluation of the JavaScript file still happen on every page load, regardless of caching. While these steps are blazing fast on modern desktop computers, they aren’t on mobile, even on recent, high-end devices. Consider the following graph, which compares the cost of parsing and evaluating jQuery on the iPhone 3, 4 and 4S, the iPad and iPad 2, a Nexus S and a MacBook Pro. (Note that these results are indicative only. They were gathered using the test hosted at lazyeval.org, which at this point is still very much alpha.)

Graph 1

Remember that these times come on top of whatever networking costs you’re already facing. Furthermore, they’ll be incurred on every single page load, regardless of whether or not the file was cached. Yes, you’re reading this right: on an iPhone 4, parsing and evaluating jQuery takes over 0.3 seconds, every single time the page is loaded. Arguably, those results have improved substantially with more recent devices, but you can’t count on your whole user base owning the latest generation of smartphones, can you?

Lazy loading

A commonly suggested solution to the problem of startup latency is to load scripts on demand (for example, following a user interaction). The main advantage of this technique is that it delays the cost of downloading, parsing and evaluating until the script is needed. Note that in practice—and unless you can delay all your JavaScript files—you’ll end up having to pay round trip costs twice.
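On-demand loading of this kind usually boils down to injecting a script element when the user first needs the feature. A minimal, browser-only sketch (the URL and callback names are placeholders, not from the article):

```javascript
// Inject a script tag so the browser fetches and evaluates the file
// on demand rather than during initial page load.
function loadScript(url, onLoad, onError) {
  var script = document.createElement('script');
  script.src = url;
  script.onload = onLoad;   // the code arrives asynchronously...
  script.onerror = onError; // ...and delivery is not guaranteed
  document.head.appendChild(script);
}

// Usage: fetch a feature's code only once the user asks for it.
// button.addEventListener('click', function () {
//   loadScript('/js/feature.js',
//              function () { startFeature(); },
//              function () { showOfflineMessage(); });
// });
```

Note how the caller is forced to handle both a success and a failure path, which is exactly the downside discussed next.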

Fig. 3

There are a number of downsides to this approach, however. First of all, the code isn’t guaranteed to be delivered: the network or the server can become unavailable in the meantime. Secondly, the speed at which the code is transferred depends on the quality of the network and can thus vary widely. Lastly, the code is delivered asynchronously. These downsides force the developer to build both defensively and with asynchronicity in mind, irremediably tying the implementation to its delivery mechanism in the process. Unless the whole codebase is built on these premises (which is probably something you want to avoid), deferring the loading of a chunk of code becomes a non-trivial endeavor.

Lazy evaluation to the rescue

Lazy evaluation avoids these issues altogether by focusing on delaying the parsing and evaluation stages only. The script can be either bundled with the initial payload or inlined. It is prevented from being evaluated during initial page load by being either commented-out or escaped and turned into a string (“stringified”?). In both cases, the content is simply evaluated when required.
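In its stringified form, “evaluated when required” can be as simple as an `eval` guarded by a cache, so the parse-and-evaluate cost is paid once, at first use. A sketch with made-up names:

```javascript
// The module ships with the page as a string: no parse/eval cost yet.
var lazySource =
  "({ greet: function (name) { return 'Hello, ' + name; } })";
var evaluated = null;

function demand() {
  if (evaluated === null) {
    // The parse/eval cost is paid here, once, on first demand.
    evaluated = eval(lazySource);
  }
  return evaluated;
}

console.log(demand().greet('web')); // "Hello, web"
```

Because `demand()` is synchronous, the calling code looks exactly as it would if the module had been evaluated at page load.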

Fig. 4

And again, for comparison, hitting a warm HTTP cache:

Fig. 5

As the following graph of an iPad 2 parsing and evaluating jQuery shows, both options consistently out-perform regular evaluation by at least a factor of ten. Similar tenfold performance improvements were observed on all tested devices.

Graph 2

Commented-out code performs slightly better than “stringified” code. It can however be quite complicated to extract when not inlined, and it is also more brittle: some phone operators are known to strip out JavaScript comments. “Stringified” code, on the other hand, is both more robust and a lot easier to access, which is why it’s preferred.

Building lazy evaluation into CommonJS modules

It turns out that the CommonJS module’s extra level of indirection (the require call) makes it an ideal candidate for lazy evaluation. Since lazy evaluation is synchronous, the whole process can be made completely transparent to the developer. Enabling lazy evaluation becomes a one-liner in a config file, not a large architectural change. Even better, the dependency graph built through static analysis can be leveraged to automatically lazy evaluate all of the selected module’s dependencies.

Implementation-wise, enabling lazy evaluation of CommonJS modules requires modifying the runtime so that it correctly evaluates and wraps modules which are transported in their “stringified” form. In modulr, my CommonJS module dependency resolver, this is done like so:

if (typeof fn === 'string') {
  // The module arrived as a string: compile it into a module
  // factory taking the standard CommonJS arguments.
  fn = new Function('require', 'exports', 'module', fn);
}

This implies that lazy-evaluated modules are escaped and wrapped in quotes at build time, on the server side, before transport.
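Putting the pieces together, here is a toy runtime showing where that snippet fits: a registry of module factories, where factories transported as strings are compiled on first require. This is a simplified sketch, not modulr’s actual API; `demandModule` stands in for the runtime’s require so it doesn’t clash with a host environment’s own `require`:

```javascript
var registry = {}; // id -> factory (a function, or a string to compile)
var cache = {};    // id -> module object, so evaluation happens once

function define(id, fn) {
  registry[id] = fn;
}

function demandModule(id) {
  if (cache[id]) return cache[id].exports;
  var fn = registry[id];
  if (typeof fn === 'string') {
    // Lazily compile the stringified module body, as in the
    // runtime snippet above.
    fn = new Function('require', 'exports', 'module', fn);
  }
  var module = cache[id] = { exports: {} };
  fn(demandModule, module.exports, module);
  return module.exports;
}

// A module transported in its stringified form:
define('math', "exports.square = function (x) { return x * x; };");

console.log(demandModule('math').square(7)); // 49
```

The first `demandModule('math')` call pays the compilation cost; subsequent calls hit the cache, which is what makes the whole scheme transparent to calling code.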

The initial results are promising, but at this point it is merely a work in progress. Future plans for modulr include enabling full minification of its output (just minifying the output won’t do, as that would miss the modules transported as strings), instrumenting the runtime to gather perf data, and experimenting with a Souders-inspired per-module localStorage cache. If there’s interest, I’d also like to automate lazyeval.org so it can measure the performance gains of applying this technique to other JavaScript libraries and report those results to browserscope.org.