Web Performance Calendar

The speed geek's favorite time of year
2011 Edition
ABOUT THE AUTHOR

Sergey Chernyshev (@sergeyche) organizes New York Web Performance Meetup and helps other performance enthusiasts around the world start meetups in their cities. Sergey volunteers his time to run @perfplanet twitter companion to PerfPlanet site. He is also an open source developer and author of a few web performance-related tools including ShowSlow, SVN Assets, drop-in .htaccess and more.

Images are one of the oldest items on the web (right after HTML), and still so little has changed since we started using them. Yes, we now have JPEG and PNG in addition to the original GIF, but other than that there have not been many improvements to make them better.

That is, if you don't count all the creative talent that went into creating them; so much, in fact, that it created the web as we know it now: rich, shiny and full of marketing potential! Without images we wouldn't have the job of building the web, and without images we wouldn't worry about web performance, because there would be no users to care about the experience and no business people to pay for improvements.

That being said, images on our websites are the largest payload sent back and forth across the wires of the net, playing a big part in slowing down the user experience.

According to HTTPArchive, JPEGs, GIFs and PNGs account for 63% of overall page size, and overall image size has a 0.64 correlation with overall page load time.

Still, we can safely assume that we are only going to have more images and that they will only grow bigger, along with the screen resolutions of desktop computers.

Lossy compression

There are a few different ways to optimize images, including compression, spriting, picking the appropriate format, resizing and so on. There are many other aspects of handling images as well, such as post-loading, caching, URL versioning and CDNs.

In this article I want to concentrate on lossy compression, where quality characteristics of the image are changed without significant visual differences for the user, but with significant changes to performance.

By now most of us are familiar with lossless compression, thanks to Stoyan and Nicole, who first introduced us to image optimization for web performance with an awesome online tool called Smush.it (now run by Yahoo!). There are now a few other tools with similar functionality, for PNG for example.

With Smush.it and similar tools, image quality is preserved as-is and only unnecessary metadata is removed, which often saves up to 30-40% of file size. It is a safe choice and images stay intact when you do it. This seems like the only way to go, especially for your design department, who believe that once an image comes out of their computers it is sacred and must be preserved exactly as it is.
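If you want to script this lossless step yourself, it boils down to calling an optimizer from your build or upload process. Here's a rough Python sketch of my own (it assumes the jpegtran command-line tool is installed; the file names are made up):

    import subprocess

    def lossless_optimize_jpeg(src, dst):
        """Strip metadata and re-optimize Huffman tables without touching pixels."""
        subprocess.check_call([
            "jpegtran",
            "-copy", "none",     # drop EXIF, comments and other metadata
            "-optimize",         # re-optimize Huffman tables (still lossless)
            "-outfile", dst,
            src,
        ])

    # Savings depend entirely on how much metadata the original carries.
    lossless_optimize_jpeg("avatar.jpg", "avatar.min.jpg")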

In reality, the quality of an image is not set in stone – JPEG was invented as a format that allows for size reduction at the price of quality. The web got popular because of images; it wouldn't be here if they had stayed in the BMP, TIFF or PCX formats that dominated before JPEG.

This is why we need to actually start using this feature of JPEG, where quality is adjustable. You have probably even seen it in the settings if you have used the export functionality of photo editors – here's a screenshot of the quality-adjustment section of the "export for web and devices" screen in Adobe Photoshop.

The quality setting ranges from 1 to 100, with 75 usually being enough for most photos and some of them looking good enough even at a value of 30. In Photoshop and other tools you can usually see the difference with your own eyes and adjust accordingly, making sure quality never degrades below a certain point, which mainly depends on the image.
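The same quality knob is exposed by most image libraries, so you don't need Photoshop to experiment with it. As a quick illustration, here's a tiny sketch using Python's PIL/Pillow library (any library with a JPEG quality parameter will do):

    from PIL import Image

    # Re-encode a photo at fixed quality settings. 75 is a common default;
    # some photos survive 30 just fine, others visibly degrade, which is
    # exactly why a single fixed number is hard to pick automatically.
    img = Image.open("photo.jpg")
    img.save("photo_q75.jpg", "JPEG", quality=75)
    img.save("photo_q30.jpg", "JPEG", quality=30)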

The resulting image size heavily depends on the original source of the image and the visual features of the picture, sometimes saving up to 80% of the size without significant degradation.

I know these numbers sound pretty vague, but that is exactly the problem all of us have faced when we needed to automate image optimization. All images are different, and without a person looking at them it's impossible to predict whether a fixed quality setting will damage the images or simply not save enough. Unfortunately, having a human editor in the middle of the process is costly, time consuming and sometimes simply impossible, for example when UGC (user-generated content) is used on the site.

This problem has bothered me ever since I saw Smush.it doing a great job with lossless compression. Luckily, this year two tools emerged that allow lossy image compression to be automated: ImgMin, an open source tool developed specifically for WPO purposes by my former co-worker Ryan Flynn, and JPEGmini, a commercial tool which came out of consumer photo size reduction.

I can't speak for JPEGmini, since their technology is proprietary with patents pending, but ImgMin uses a simple approach: try different quality settings and pick the result whose picture difference from the original stays within a certain threshold. There are a few other simple heuristics; for more details you can read ImgMin's documentation on GitHub.
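To make that approach concrete, here is a toy Python version of the idea: binary-search the quality setting and keep the smallest file whose average per-pixel difference from the original stays under a threshold. This is my own simplified sketch, not ImgMin's actual code; the metric, the default threshold and the assumption that the difference shrinks as quality grows are all mine.

    import io
    from PIL import Image, ImageChops, ImageStat

    def mean_pixel_diff(a, b):
        """Average absolute per-pixel difference between two images (0-255 scale)."""
        diff = ImageChops.difference(a.convert("RGB"), b.convert("RGB"))
        return sum(ImageStat.Stat(diff).mean) / 3.0

    def compress_within_threshold(src_path, dst_path, max_diff=1.0, lo=30, hi=95):
        """Binary-search JPEG quality for the smallest file whose difference
        from the original stays under max_diff (assumes the difference
        shrinks as quality grows)."""
        original = Image.open(src_path).convert("RGB")
        best = None
        while lo <= hi:
            q = (lo + hi) // 2
            buf = io.BytesIO()
            original.save(buf, "JPEG", quality=q)
            buf.seek(0)
            if mean_pixel_diff(original, Image.open(buf)) <= max_diff:
                best = (q, buf.getvalue())   # acceptable; try an even lower quality
                hi = q - 1
            else:
                lo = q + 1                   # too degraded; raise the quality floor
        if best is None:
            raise ValueError("no quality setting met the threshold")
        with open(dst_path, "wb") as f:
            f.write(best[1])
        return best[0]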

Both tools work pretty well, with different results; ImgMin, in its simplicity, is less precise. JPEGmini offers a dedicated server solution, with a cloud service coming soon.

In this table you can see my Twitter userpic and how it was automatically optimized using lossless (Smush.it) and lossy (JPEGmini) compression:

Original:   10028 bytes
Lossless:    9834 bytes   (2% savings)
Lossy:       4238 bytes   (58% savings)

Notice that there is no perceivable quality degradation between the original and the optimized images. The results are just as impressive on larger photos as well.

This is great news, as it finally allows us to automate lossy compression, which has always been a manual process – now you can rely on a tool and reliably build it into your image processing pipeline!
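For example, once the compression step is a plain function like the sketch above, dropping it into a batch pipeline is just a matter of walking your upload directory. The directory names and threshold below are made up for illustration, and compress_within_threshold() refers to my earlier sketch rather than to any real tool's API:

    import os

    # Hypothetical batch step: optimize every JPEG dropped into an uploads
    # directory, reusing compress_within_threshold() from the earlier sketch.
    os.makedirs("optimized", exist_ok=True)
    for name in os.listdir("uploads"):
        if name.lower().endswith((".jpg", ".jpeg")):
            src = os.path.join("uploads", name)
            dst = os.path.join("optimized", name)
            q = compress_within_threshold(src, dst, max_diff=1.0)
            print("%s: kept quality %d" % (name, q))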