- WebP and JPEG XR are two useful new image formats supported by only some browsers.
- Both provide good compression and add valuable support for lossy compression with transparency.
- Today it takes some effort to serve the right images to the right browsers.
- Tools for creating and working with WebP images are much better than tools for JPEG XR images.
- Lossy quality settings don’t map directly between formats so care must be taken when creating images for multiple formats.
- Websites usually perform better with these new formats, both visually and in bytes saved, often saving 25% to 50% compared to a JPEG of similar quality.
Images have been a core part of the web for as long as most of us can remember. Initially they were used sparingly but, over the decades, the web has evolved and we now expect a very rich — and graphical — experience. Image data now dominates the bytes used on most web sites and contributes to the majority of what we see… after the images are loaded that is.
Loading images quickly is imperative to loading a site quickly and, for the most part, the strategies people use to optimize images for web sites haven't changed dramatically in quite a while. Choosing between JPEG, GIF and, more recently, PNG, along with choosing lossy compression settings, is the usual approach taken by most people today. The more ambitious will use various forms of dynamic loading and spriting.
I have recently spent time implementing image optimizations as part of Akamai’s FEO solution and have learned a great deal on the topic in the process.
Today I'm going to focus on image format selection and how the options available to web developers have changed recently. Besides JPEG, GIF and PNG, web developers now have two new choices: WebP and JPEG XR… sometimes. These two new formats both offer appealing features that can lead to real savings in byte size and improved performance. The problem, today at least, is that these formats aren't nearly as universally supported as JPEG, GIF and PNG. Beyond non-universal browser support, there are a number of other obstacles that make taking advantage of these new formats difficult, if not outright painful.
From a feature perspective, WebP and JPEG XR are very similar formats. WebP is developed by Google whereas JPEG XR is developed by Microsoft. Both formats support both lossless and lossy compression along with full transparency. Notably, JPEG XR supports progressive rendering whereas, today, WebP does not. These features suggest that these two formats may be effective replacements for all major web image formats. With lossless compression and transparency support, they can be used as a possible replacement for PNG. With lossy compression they can be used as a possible replacement for JPEG. With lossy compression and transparency support we have new capabilities that weren’t available with other formats; we can now use transparency without needing to pay the cost of the larger byte size of lossless compression with PNG. On top of all this, these formats promise better quality lossy compression with smaller byte sizes!
All of these new features sound great! Well… that is when they’re available… and when they work.
If these new formats were as good as their features suggest, why would we use any other image format? There are lots of reasons using these formats universally is not the right answer and lots of obstacles that will get in your way when using these formats is actually the right answer. I’ve had my fair share of frustrations dealing with these formats; hopefully sharing them will help ease the pain of anyone else looking to explore these image formats.
So, what browsers support WebP and what browsers support JPEG XR? While investigating these two formats I discovered that getting a straight answer to this question is harder than it would seem. I’ve seen lots of tables showing varying levels of support but I don’t think I’ve found one that is actually correct. Here is what I have discovered anecdotally for major desktop browsers:
| Browser | WebP | JPEG XR |
| --- | --- | --- |
| Chrome | >= 23 | NO |
| Firefox | NO | NO |
| Internet Explorer | NO | >= 10 |
| Opera | >= 12.1 | NO |
| Safari | NO | NO |
For JPEG XR support, the answer is mostly clear; it’s supported only in Internet Explorer. If you look around, though, most people are saying that Internet Explorer 9 has JPEG XR support; this is only partially true and not at all true for actually useful scenarios. The problem with Internet Explorer 9 is that, while it can show JPEG XR images, it doesn’t show them correctly. Every JPEG XR image I’ve seen in Internet Explorer 9 displays with a dark halo around its edges; this anomaly isn’t present viewing the same images in Internet Explorer 10. This makes using JPEG XR images effectively useless for most scenarios in Internet Explorer 9.
WebP support is fuzzier. For the desktop, I’ve found that to reliably display WebP images you can use Chrome 23 or later or you can use Opera 12.1 or later. The other major browsers — Firefox, Internet Explorer and Safari — don’t support WebP at all today. I know that some form of WebP support has existed in Chrome since at least version 9 but if you are looking for full and reliable support, version 23 is what you will be interested in (see Chromium issue #141897 for details). I’ve also seen various sources saying that WebP support is available in Opera as early as version 11.10 but, when I tried this out myself on Ubuntu using Opera 12.02, support seemed to be absent.
So now that we know where we can use these new formats, let’s find some tools that will allow us to make some images. I’m going to focus on headless tools that can be used for automation but I will quickly mention Photoshop. Photoshop doesn’t support WebP or JPEG XR out of the box. What is available though, for Photoshop users, are third-party plugins that will add support for saving images in these formats. I will present these plugins with the advisory that I have never used them and have simply come across them while exploring these formats. These Photoshop plugins are AdobeWebM for WebP and a Microsoft Plugin for JPEG XR.
My go-to tool for image manipulation and conversion for automation is ImageMagick. It has support for converting to and from a plenitude of image formats, both common and obscure, and is usually pretty easy to use for common cases. Also, it seems we're in luck! ImageMagick says it supports both WebP and JPEG XR! Unfortunately, reality is not so kind…
WebP support in ImageMagick is real and it’s actually very pleasant to work with. Fortunately for us, WebP was designed to be a relatively self-optimizing format. Using ImageMagick to make WebP images works pretty much just like any other format in ImageMagick; it does have a few WebP specific options that are worth looking at though.
- `webp:lossless` — turns on lossless compression for when you don't want to lose any image fidelity; pretty self-explanatory.
- `webp:method` — sets the compression method to use. Valid values range from 0 to 6. The compression method really just controls how hard the encoder tries to compress the image: a low value means encoding time will be low but compression will also be low; a high value means encoding time will be higher with the benefit of better compression (fewer bytes).
- `webp:auto-filter` — tells the encoder to spend some extra time and effort automatically tuning the deblocking filter. When enabled, encoding time will be higher but your image should look better.
Here are some command lines I use when creating WebP images:
```shell
# Lossless compression
convert input.img -quality 100 -define webp:lossless=true -define webp:method=6 output.webp

# Lossy compression
convert input.img -quality 80 -define webp:method=6 -define webp:auto-filter=true output.webp
```
In the lossy compression case, the `-quality` option can vary from 1 to 100; lower values give higher compression at the expense of image fidelity. For my uses, encoding time doesn't play a big factor so I always opt for better compression and maximized fidelity.
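To get a feel for the size/fidelity trade-off, it's handy to sweep the quality range and compare the resulting file sizes. A minimal sketch, where the helper function and the `photo.png` filename are my own hypothetical examples; the helper prints the ImageMagick command rather than running it, so remove the `echo` to actually encode:

```shell
# Hypothetical helper: print the ImageMagick command that produces a lossy
# WebP at a given quality (remove 'echo' to execute instead of print).
webp_lossy() {
  in=$1; out=$2; q=$3
  echo convert "$in" -quality "$q" \
    -define webp:method=6 -define webp:auto-filter=true "$out"
}

# Sweep a range of quality settings for one image.
for q in 50 60 70 80 90; do
  webp_lossy photo.png "photo-q${q}.webp" "$q"
done
```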
On the JPEG XR front, ImageMagick says there is support (labeled as JXR). Unfortunately this support is so broken that it would actually be better off non-existent. When trying to work with JPEG XR in ImageMagick, most of the time it simply flat-out doesn't work; no image is produced. In the rare case where it does work, only lossless compression is supported. There are a number of reasons JPEG XR support is so broken in ImageMagick, but it primarily comes down to the support being implemented using delegates instead of as a first-class ImageMagick citizen (like WebP). The first problem is that the JPEG XR delegate definition in ImageMagick itself is broken in a few ways; I won't get into the details, but here's my patch to help improve the situation. The second problem is that the delegation system isn't particularly robust. There's no (documented) way to pass things like `-quality` values, and don't try to do things like pipe images between processes via `STDOUT`, because that will always fail in ways that are not immediately obvious.
Don’t use ImageMagick for creating JPEG XR images. Today it simply doesn’t work.
I didn't want to give up on JPEG XR just yet; there had to be a better way to proceed. I decided to investigate Microsoft's jxrlib (which is what ImageMagick delegates to anyway). Using jxrlib directly would let me avoid some of the issues with ImageMagick's delegation system and should give better access to all of JPEG XR's features. Grabbing and building jxrlib is pretty straightforward for anyone who's used to building code from source. If you're not comfortable building code yourself or are just looking for a ready-to-go binary, it looks like you're out of luck. I would imagine this makes JPEG XR adoption a little difficult for many people.
Once jxrlib is built, you will have two programs: an encoder named JxrEncApp and a decoder named JxrDecApp. These programs are a little fickle and unusually difficult to tame. The encoder takes three image formats as possible input: BMP, TIFF and HDR. Inconveniently, none of these are typical formats used on the web. To make things a little more difficult, the encoder only accepts subsets of these three formats and is unable to figure out on its own what flavour of any particular format you are giving it. For my uses, I chose BMP as my input format to JxrEncApp because it's a lossless format and I didn't need colour definition beyond 8 bits per channel. I used ImageMagick to prepare my input images as BMP, making sure to use `-type TrueColor` to get a supported format. If your input image has no transparency you tell JxrEncApp that the image is `-c 0`, but if it does have transparency you tell JxrEncApp that the image is `-c 22`. Needing to tell the encoder the pixel format of the input image is beyond frustrating and nonsensical, especially considering that this information is stored in the input image itself and isn't particularly difficult for a program to read out.
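Putting the pieces together, the BMP round trip looks roughly like this. The `-i`, `-o` and `-q` flag names are how I recall invoking my build of JxrEncApp; treat them, and the helper itself, as assumptions to verify against JxrEncApp's own usage text:

```shell
# Sketch of the PNG -> BMP -> JPEG XR pipeline described above; it prints
# the commands rather than running them, since jxrlib may not be installed.
jxr_encode() {
  in=$1; out=$2; q=$3; alpha=$4   # alpha=1 if the source has transparency
  if [ "$alpha" -eq 1 ]; then fmt=22; else fmt=0; fi
  echo convert "$in" -type TrueColor temp.bmp
  echo JxrEncApp -i temp.bmp -o "$out" -q "$q" -c "$fmt"
}

jxr_encode logo.png logo.jxr 0.8 1   # image with transparency -> -c 22
```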
Troubleshooting the proper use of JxrEncApp is also pretty tedious. On a few occasions I had to dig into the code itself to understand why things weren't working as expected. Every time you use the encoder incorrectly it spits out a ninety-two line help and usage message telling you how to use the program. What this help message hid from me was that sometimes the encoder will actually print out a cryptic error message trying to say what went wrong; who's going to notice that, though, if they've just been spammed with a wall of help they didn't ask for?
Sometimes JxrEncApp tries to be helpful with option selection. Unfortunately, sometimes it will choose options that are completely invalid. One specific case I ran into was when I was saving an image at a variety of quality levels. JxrEncApp accepts quality levels from 0.0 (lowest quality) to 1.0 (highest quality). When I tried to save the image at high quality levels I was able to produce a valid JPEG XR image but, when I tried to save the image at lower quality levels (0.4 or less), the encoder would fail and print its wall of help text. You would think that if you can save an image at one quality level, you should be able to save it at any quality level. What happened in my case was that the image I was saving was pretty narrow (fewer than seventeen pixels wide, to be precise) and certain option combinations aren't supported on narrow images. The problem is that these option combinations (sub-sampling and two-level overlapping) were being set automatically by the quality setting and didn't adapt to the input image. I solved this problem by disabling sub-sampling for images that are fewer than seventeen pixels wide. When dealing with many images in an automated way, circumstances like this become pretty unsettling and a pain to diagnose.
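The workaround can be captured in a tiny guard. The `-d` flag for selecting chroma sub-sampling, and the value `3` for 4:4:4, are how I remember jxrlib's encoder exposing this; treat both as assumptions to check against your build's help text:

```shell
# Sketch: for images fewer than 17 pixels wide, explicitly disable chroma
# sub-sampling so low -q values don't select unsupported option combos.
jxr_quality_opts() {
  width=$1; q=$2
  if [ "$width" -lt 17 ]; then
    echo "-q $q -d 3"   # assumed: -d 3 forces 4:4:4 (no sub-sampling)
  else
    echo "-q $q"
  fi
}

jxr_quality_opts 12 0.3   # narrow image: sub-sampling disabled
```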
Lastly, I’ve discovered that JxrDecApp is lazy and makes assumptions about pixel formats when writing BMP files (maybe other formats too). I have never been able to produce a correct BMP file from a JPEG XR image with transparency — the resulting BMP image always comes out opaque. I did some digging and tried loading the BMP image using a raw data loader (I used the GIMP) and it turns out that all the transparency information is written to the file but is ignored. It seems the output BMP file says that it is 24bit BGR data padded to 32 bits per pixel instead of saying that it is 32bit BGRA data. This behaviour seems to mirror the need to tell JxrEncApp what pixel format its input image is and is really just very lazy programming without good excuse. This means that for many cases, the JxrDecApp decoder is effectively useless.
If you're dealing with many images in an automated system and want to create lossy JPEG, WebP and JPEG XR versions of the same or similar quality, you'll have to take a few factors into consideration. Naïvely, you would save each version with the same quality setting; for example, a JPEG image at quality 80, a WebP image at quality 80 and a JPEG XR image at quality 0.8. In actuality, this will usually give you very different results. The problem is that these quality settings don't necessarily scale linearly and, further, don't share a similar curve between formats. Choosing quality settings that produce similar perceived quality degradation can be accomplished using an algorithm like SSIM.
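One way to do that matching is a simple bisection over the quality range, re-measuring DSSIM at each step. In this sketch, `measure_dssim` is a hypothetical stand-in for whatever encodes the image at a given quality and compares it against the original (recent ImageMagick 7 releases, for instance, offer a DSSIM metric for `magick compare`, though that's an assumption to verify for your version); the stub included here just pretends DSSIM is 1/quality so the sketch is self-contained:

```shell
# Bisect an integer quality setting (1-100) until the measured DSSIM drops
# to the target; assumes higher quality always yields a lower DSSIM.
find_quality() {
  target=$1; lo=1; hi=100
  while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    d=$(measure_dssim "$mid")   # stand-in: encode at $mid, compare to original
    if awk "BEGIN { exit !($d > $target) }"; then
      lo=$mid                   # still too degraded; need higher quality
    else
      hi=$mid                   # good enough; try lower quality
    fi
  done
  echo "$hi"
}

# Purely illustrative stub: pretend the DSSIM at quality q is 1/q.
measure_dssim() { awk "BEGIN { print 1 / $1 }"; }

find_quality 0.02   # with this stub, prints 50
```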
To illustrate, here are two images from Wikipedia’s featured pictures saved in various formats and quality settings. This compares what images look like using the naïve approach of using the same quality settings and compares what images look like using quality settings that produce similar SSIM values. Accompanying the comparisons is a graph showing the perceived quality degradation as determined by SSIM as the image is saved over the range of quality settings for each format. DSSIM is a transformation of SSIM where a value of 0 means an image is identical to the original image and higher values mean the image is less similar to the original image. DSSIM was chosen out of convenience and as a quality comparison algorithm that is “good enough” for my uses; I am, however, interested in exploring other quality comparison algorithms such as MS-SSIM and MS-SSIM* in the future. This isn’t a comparison of how good one image format is compared to another, just a comparison of each format’s quality settings. To make images of different formats that will have the same perceived quality, a quality setting must be chosen for each format that corresponds to a similar SSIM value.
[Image 1: comparison at a common quality setting, comparison at a common DSSIM value, and a Quality Setting vs. DSSIM Value graph]

[Image 2: comparison at a common quality setting, comparison at a common DSSIM value, and a Quality Setting vs. DSSIM Value graph]
One concern worth mentioning when dealing with browser-specific image formats is how they might be used outside of a web page. Facebook's experience with WebP was discussed recently on CNET. Their problem was that people were having trouble viewing images after downloading and storing them locally on their own computers. When people try to open the images locally, the computer doesn't know how the image should be opened and viewed. For many people, this will be incredibly frustrating. Thankfully, Chrome now registers as a handler for opening WebP images, making this less of an issue. Similarly, when people using browsers that support these new formats share direct links to images, recipients not using a supporting browser won't be able to view the image unless you carefully set up your server to handle this case using Accept headers or User-Agents. Depending on the nature of your website and your images, these are two cases you may need to give some thought to.
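As a sketch of the server-side handling, here is a hypothetical Apache configuration (using mod_rewrite and mod_headers) that serves a pre-generated `.webp` sibling file to browsers whose Accept header advertises WebP support. The file layout and rule details are assumptions to adapt, not a drop-in config:

```apache
RewriteEngine On
# Browser advertises WebP support...
RewriteCond %{HTTP_ACCEPT} image/webp
# ...and a pre-generated .webp sibling exists on disk.
RewriteCond %{DOCUMENT_ROOT}/$1.webp -f
RewriteRule ^(.+)\.jpe?g$ $1.webp [T=image/webp,E=accept:1]
# Tell caches that the response varies by Accept header.
Header append Vary Accept env=REDIRECT_accept
```

The `Vary: Accept` header matters here: without it, a shared cache could serve the WebP variant to a browser that only asked for JPEG.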
After having spent time developing support for these two formats for Akamai’s FEO solution I’ve found that there can be some good advantages to using JPEG XR and WebP.
For similar perceived quality, WebP and JPEG XR can often produce files that are notably smaller than you can achieve with JPEG. Smaller files mean faster download and load times! Using the same images as before, here are graphs comparing file size to DSSIM values. Lower curves are better.
We see that with Image 1, both WebP and JPEG XR show a general improvement over JPEG. With Image 2, JPEG XR has a big lead, while WebP is at least comparable with JPEG for most of the curve until we reach higher quality levels.
WebP and JPEG XR file sizes can often be smaller than JPEG but this isn’t universal. Be sure to pick the right format for each image.
[Graphs: File Size vs. DSSIM for Image 1 and Image 2]
To further compare the file sizes of WebP, JPEG XR and JPEG, I scraped all the images from the front pages of 100 of the top sites on the internet. From these images, I trimmed out all images that had transparency (because JPEG does not support it), leaving 2308 images. These images have a bias towards graphics and UI elements with fewer photographic images; this bias is representative of typical images on most web sites. I recompressed the images using all three formats at DSSIM values of 0.15, 0.10 and 0.05; these DSSIM values produce images with a perceptual quality suitable for the web. From these files I observed the following results:
| Average improvement over JPEG file size | DSSIM 0.15 | DSSIM 0.10 | DSSIM 0.05 |
| --- | --- | --- | --- |
| WebP | 56.57% | 51.58% | 40.11% |
These are great averages. I will mention that there was a decent amount of variance; WebP and JPEG XR were almost always smaller than JPEG but there were cases where JPEG images were smaller. WebP’s stronger compression over JPEG XR can likely be attributed to the fact that it always uses 4:2:0 chroma subsampling for lossy compression whereas JPEG XR may, in addition, use 4:2:2 or 4:4:4 chroma subsampling instead depending on the quality parameter given to JxrEncApp. Colour (or chroma) only has a small impact on DSSIM values in the implementation I used.
Web Site Performance
Byte savings and improved features are great but what I really care about is web site performance. How much is using WebP or JPEG XR going to speed up my site? I used www.gilt.com as a test case:
Here we see that WebP and JPEG XR show very noticeable improvement over JPEG in their respective browsers. The main image as WebP showed up 600 milliseconds earlier than a comparable JPEG version in Chrome. Similarly, the same image as JPEG XR showed up 400 milliseconds earlier than a comparable JPEG version in Internet Explorer 10.
With JPEG compression, this site had 838KB of image data. Compressed with JPEG XR, it had 619KB of image data, and with WebP, 606KB. That's roughly a quarter of the image bytes saved.
While byte sizes and loading times are great, there’s one downfall to using WebP today. WebP does not support progressive loading. For some, this can be a big deal for the perceived performance of a page. This means WebP will always load “top to bottom” instead of “coming in to focus” like some JPEG files. The total loading time of a WebP image will usually be shorter but the perceived performance may sometimes be inferior compared to a progressive JPEG. In most cases though, the byte savings and decode time of WebP more than make up for the lack of progressive loading.
WebP and JPEG XR are two great new image formats that can show both byte savings and improved web page performance under the right circumstances, but they should only be served to supporting browsers. Being able to use lossy compression on images with transparency is really cool and provides substantial byte savings that were previously not possible. Differences in quality settings should be considered when using multiple lossy formats. It's easy to create WebP images, but JPEG XR images take more effort with the tooling that exists today. There are great performance gains to be had using these new formats as long as care is taken to use each format to the best of its ability. If more popular browsers (notably Firefox and Safari) start supporting these new formats, we could have a much easier path forward to optimized images.
Nicholas Doyle (@njdoyle) is a Front End Optimization developer at Akamai Technologies focusing on optimization development and web performance. Previously he worked with IBM on their Java Virtual Machine. When he isn't optimizing the internet, Nick is a resident Ottawa DJ.