Web Performance Calendar

The speed geek's favorite time of year
2019 Edition
ABOUT THE AUTHOR

Sergey Chernyshev

Sergey Chernyshev (@sergeyche) is a web performance enthusiast, open source hacker and web addict. He works as a principal engineer at Meetup, organizes the New York Web Performance Meetup Group, a local community of web performance geeks in New York, and helps kick-start local Web Performance groups around the world. Sergey teaches a live online course about performance with O'Reilly and often speaks on performance-related topics at various local New York events and global conferences, including Velocity and QCon.

Many of us face the challenge of promoting the speed of user experience within our organizations and crave any supporting data to help “sell” it to business stakeholders, product managers and engineers.

Real User Measurement (RUM) is a great tool for capturing speed from actual users in the wild, measuring their engagement and, with one swift gesture, pointing at an A/B test comparing the speeds of two versions of the app, or even using an awesome “what-if” diagram to predict the impact of a planned improvement.

Unfortunately, sometimes we just haven’t funded the purchase of a RUM tool, haven’t integrated it, or simply don’t have an A/B test set up and haven’t rolled out new code to any of our users yet. This leaves us without data and therefore without a powerful convincer and visualization tool.

I struggled with some of these problems on multiple occasions, and at one point I realized that many of these conversations do not require actual user data.

  • some of these conversations were just about training team members who needed a better grasp of the statistics of web speed: distributions, percentiles and the effect variability plays
  • some of them needed to show the reasoning behind prioritizing speed and to visualize known industry trends like conversion rate decay or the increase in bounce rate as experiences slow down
  • and some other conversations were about supporting the purchase of a RUM solution in the first place so business decisions could be made with real data

One day in early spring, frustration over the lack of data overpowered this lazy opensourcerer’s brain and I spent a couple of weekends coding up a visualization tool that lets you pretend you have the data: simulate it and visualize it to drive your point across.

The tool is called “UX Speed Calculator” and you can use it at https://ux-speed-calculator.netlify.com/ and contribute to it on GitHub – it is open source, so you can help add bells and whistles to satisfy your inner data geek.

UX Speed Calculator does a few useful things:

  • it calculates a log-normal distribution based on base speed and variability so you can simulate your page’s speed distribution and show how it differs from the normal “bell curve” distribution many people assume by default
  • shows the median and the 90th and 95th percentiles so you can try to match your existing stats or simply educate engineers on what those numbers mean (the first sketch after this list shows how both the distribution and the percentiles are computed)
  • calculates the bounce rate distribution and lets you adjust the slope of degradation to your heart’s desire to help explain to marketing teams the impact of speed on their campaigns
  • calculates an exponentially decaying conversion rate distribution and helps you explain the “conversion poverty line” and why an experiment that slows pages down is not necessarily predictive of how much you’ll gain by speeding them up by the same amount (the second sketch below illustrates both the bounce and conversion curves)
  • it also tries to shine a light on the phenomenon of low conversion / high bounce rates on the very left of the chart (near zero), where despite the logic of “faster means more business” we often see unexplainable data. Many RUM experts believe it is due to low sample sizes that do not produce statistically significant results, but my work with UX metrics and a general lack of understanding of statistics always made it hard for me to accept that as the whole story. I postulate that it is caused by a high percentage of errors in that part of the chart: experiences that are measured as “fast” were actually incomplete, functionally broken experiences or outright error pages. So in the tool you can adjust a decaying failure rate distribution (the third sketch below shows one way to model it). The charts also show theoretical bounce and conversion rate distributions as dotted lines and the distributions that reflect error rates as solid lines
  • based on a combination of failure rates, bounce rates and conversion rates, the charts also show color-coded bars for each of the populations to make it easier to relate to each pixel as a user who had a different experience and to help consumers of this visualization empathize with the data.
  • oh yeah, I almost forgot – it also shows you the $$$ all these conversions amount to, so those of your peers who don’t like math or can’t bear the cognitive load of charts can still translate the value of speed into something tangible. Feel free to substitute €, £, ¥ or even ₽ for the $ if that works better for you.
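
By the way, if you want to poke at the same math outside of the tool, here is a minimal TypeScript sketch of how a log-normal speed distribution and its percentiles fall out of just a median load time and a variability parameter. It is not the calculator’s actual source, and the 3-second median and 0.6 sigma are made-up example values:

```typescript
// Sketch: a log-normal speed distribution built from a median ("base speed")
// and a variability parameter sigma. Both values are made-up examples.
const medianSeconds = 3;            // typical (median) load time
const sigma = 0.6;                  // variability of the experience
const mu = Math.log(medianSeconds); // for a log-normal, median = exp(mu)

// Probability density at load time x (seconds): the shape of the histogram
function logNormalPdf(x: number): number {
  if (x <= 0) return 0;
  const z = (Math.log(x) - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (x * sigma * Math.sqrt(2 * Math.PI));
}

// Percentiles: quantile(p) = exp(mu + sigma * z_p), where z_p is the standard
// normal quantile (0 for the median, ~1.2816 for p90, ~1.6449 for p95)
const quantile = (z: number) => Math.exp(mu + sigma * z);

console.log(`median: ${quantile(0).toFixed(2)}s`);
console.log(`p90:    ${quantile(1.2816).toFixed(2)}s`);
console.log(`p95:    ${quantile(1.6449).toFixed(2)}s`);
console.log(`density at 5s: ${logNormalPdf(5).toFixed(4)}`);
```

With these example numbers half of the users finish loading in 3 seconds or less, yet one in twenty still waits more than 8 seconds; that stretched right tail is exactly what the symmetric bell-curve intuition misses.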
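
The bounce and conversion curves can be sketched in the same spirit. The formulas below are my own assumptions picked for illustration, not the tool’s exact parametrization, but they capture the behaviors described above: conversion decays exponentially towards the poverty line, bounce rate climbs towards a ceiling with an adjustable slope, and the convexity of the conversion curve is why a slow-down experiment understates the upside of speeding up:

```typescript
// Assumed curve shapes (illustration only, not the calculator's formulas).

// Conversion rate decays exponentially with load time t (seconds) and
// flattens out near zero: the "conversion poverty line".
const conversionRate = (t: number, peak = 0.05, decay = 0.4) =>
  peak * Math.exp(-decay * t);

// Bounce rate climbs from a floor towards a ceiling as pages get slower;
// "slope" is the knob the tool lets you turn.
const bounceRate = (t: number, floor = 0.2, ceiling = 0.9, slope = 0.3) =>
  ceiling - (ceiling - floor) * Math.exp(-slope * t);

// Convexity in action: starting from a 4-second baseline, a 2-second slowdown
// costs less conversion than a 2-second speed-up would gain.
const base = conversionRate(4);
console.log(`4s -> 6s slowdown loses ${(base - conversionRate(6)).toFixed(4)}`);
console.log(`4s -> 2s speed-up gains ${(conversionRate(2) - base).toFixed(4)}`);
console.log(`bounce rate at 4s: ${(bounceRate(4) * 100).toFixed(1)}%`);
```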
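
Finally, a hedged sketch of the error rate adjustment and the money math, again under my own assumptions rather than the calculator’s exact code: failures are assumed to be concentrated at the suspiciously fast end of the chart, a user who hits an error cannot convert, and revenue is simply the traffic in each speed bucket multiplied by the observed conversion rate and an average value per conversion (the visitor count and the $25 per conversion are, of course, made up):

```typescript
// Hedged sketch: combining an assumed failure rate with the speed and
// conversion curves from the previous sketches (redefined compactly here).
const mu = Math.log(3); // 3-second median load time (example value)
const sigma = 0.6;      // variability (example value)

const logNormalPdf = (x: number): number => {
  if (x <= 0) return 0;
  const z = (Math.log(x) - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (x * sigma * Math.sqrt(2 * Math.PI));
};

const conversionRate = (t: number) => 0.05 * Math.exp(-0.4 * t); // dotted line

// Failure rate is assumed to be highest near zero (broken or error pages that
// "load" fast) and to decay as load time grows.
const errorRate = (t: number) => 0.5 * Math.exp(-1.5 * t);

// Solid-line ("observed") conversion: a user who hit an error cannot convert.
const observedConversion = (t: number) => conversionRate(t) * (1 - errorRate(t));

// Walk the speed histogram in 0.1-second buckets and add up the money.
const visitors = 100_000;
const valuePerConversion = 25; // $, or €, £, ¥, ₽ if that works better for you
const bucket = 0.1;

let conversions = 0;
for (let t = bucket; t <= 30; t += bucket) {
  const share = logNormalPdf(t) * bucket; // fraction of users in this bucket
  conversions += visitors * share * observedConversion(t);
}
console.log(`conversions: ${conversions.toFixed(0)}`);
console.log(`revenue:     $${(conversions * valuePerConversion).toFixed(0)}`);
```

Swap in your own traffic numbers, value per conversion and curve parameters and you get the same kind of tangible number the tool shows, ready for a meeting with people who skip the charts.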

If so many lines and labels confuse you and your stakeholders, feel free to click on the legend entries on the right to hide those that do not matter for your story.

And if you need to share a chart, just copy the URL and send it over; it updates as you tweak the knobs and dials, so you can always jump back to the right chart and don’t need to re-create perfection from scratch.

Hope this tool will help you with your creative storytelling and you’ll be able to push the limits even further.

Happy #webperf-orming!

And remember, there are lies, damn lies and statistics! So don’t overuse this or other scientific data tools in your battles for the better within the fuzzy world of user experience!