Boris Schapira (@boostmarks) is a French web professional who helps companies understand how to integrate web performance into their quality management and culture. He is one of Contentsquare's Customer Success Managers.
Dear fellow web professional, you should be interested in Web Performance. It’s a topic at the heart of the web, one that needs specialists and is booming. I know you may not believe me, or you may think it’s not easy, and you’re right. So let me tell you where we stand and why we need you to join us.
I have been doing Web Performance, or webperf, for more than ten years. I spoke publicly about it for the first time in 2012 [1], suggesting that it was necessary to approach the subject in four steps: first become aware of the stakes, then learn to optimize, then monitor performance, and finally start internal discussions about it in order to prevent future damage.
Eight years later, I still think it’s the best way to approach the subject. But around me, people don’t necessarily talk much about web performance. For those who haven’t followed where we stand, let’s take stock of the subject.
The field’s body of knowledge is dense, but my goal is not to drown you in information. If you want to go further, you’ll find plenty of footnotes leading to additional reading.
A status update
Let’s start with some definitions.
Webperf is everything you can do to make your site feel more efficient to the user. You start from the user experience of your site or web app (including animations), then you explore the underlying layers: the HTML, CSS and JS code, the server configuration and, finally, the infrastructure and third-party services you allow yourself to use, and their configuration.
Since a high-performance site consumes less energy, web performance shares 90% of its DNA with eco-design. But some good web performance practices, such as the use of CDNs, are not compatible with ecological considerations [2].
It is usually implied that a faster application is more satisfying, but with webperf, we have figures that confirm it. Many brands have commissioned optimization projects and testify to the gains obtained, especially on conversion. WPOStats.com provides case studies and experiments demonstrating the impact of web performance optimization on user experience and commercial success. This is an essential point, because it means that webperf is an area of web quality whose return on investment is provable. As a result, investment is easier to obtain.
Optimizing web performance is a skill, and by recognizing that skill with appropriate compensation, there is a better chance of attracting talent to this area. There is a good dynamic in the community, nourished by articles on the different areas of expertise within web performance. It helps the whole community grow.
Vitaly Friedman’s “Front-End Performance Checklist” is one of the most comprehensive resources. It’s an in-depth review that is updated annually in Smashing Magazine. The problem is that you can’t simply point anyone to this kind of document: 53 pages, resulting in a 13-page operational checklist [3], are not within everyone’s reach; it’s a high barrier to entry.
To get to the heart of the matter faster, you can use one of the many analysis tools available in browsers or as online services. These tools, called “synthetic” or “lab” tools, are very useful: they analyze a page load and instantly produce a contextualized audit report that helps you target the essential, potential issues [4]. They can often be used to run tests under custom constraints. You can make an assumption like “I think the execution of this JavaScript file slows down the page rendering”, then confirm it by blocking the file and seeing whether the page really displays faster. You can also easily measure the performance of a page against the same page on competitors’ websites.
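To make this concrete, here is a minimal sketch of such a hypothesis test, scripted with Puppeteer rather than with one of the hosted services (which usually offer script blocking as a built-in option): it compares First Contentful Paint with and without a suspect script. The URL and the heavy-widget.js file name are hypothetical placeholders.

```ts
// Minimal sketch: compare First Contentful Paint with and without a suspect script.
// Assumes Puppeteer is installed; "heavy-widget.js" and the URL are hypothetical.
import puppeteer from 'puppeteer';

async function measureFCP(blockScript: boolean): Promise<number | undefined> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  if (blockScript) {
    await page.setRequestInterception(true);
    page.on('request', (request) => {
      // Abort the request we suspect of slowing down rendering.
      if (request.url().includes('heavy-widget.js')) request.abort();
      else request.continue();
    });
  }

  await page.goto('https://www.example.com/', { waitUntil: 'networkidle0' });

  // Read the First Contentful Paint entry exposed by the browser.
  const fcp = await page.evaluate(
    () => performance.getEntriesByName('first-contentful-paint')[0]?.startTime
  );

  await browser.close();
  return fcp;
}

(async () => {
  console.log('FCP with the script   :', await measureFCP(false));
  console.log('FCP without the script:', await measureFCP(true));
})();
```

If the second number is clearly lower, your hunch was probably right; if not, the bottleneck is elsewhere.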
But the strength of these services is also their weakness: they only measure what they are asked to measure. They can’t tell you whether the site is fast for real people. If you need that information, you will have to turn to analytics or Real User Metrics (RUM). The analytics services you already use, like Google Analytics, may already track enough of these [5], but to go further in assessing user frustration, you will have to turn to more elaborate solutions [6]. However, you’ll often need to get your hands into your code to implement this.
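To give you an idea of what “getting your hands into your code” can look like, here is a minimal RUM sketch built only on standard browser APIs (PerformanceObserver and sendBeacon); the /rum-collect endpoint is a hypothetical placeholder for whatever your analytics backend expects.

```ts
// Minimal RUM sketch: observe First Contentful Paint in the user's browser
// and send it to an analytics endpoint. "/rum-collect" is a hypothetical URL.
const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      const payload = JSON.stringify({
        metric: 'FCP',
        value: Math.round(entry.startTime),
        page: location.pathname,
      });
      // sendBeacon survives page unloads better than a regular fetch/XHR.
      navigator.sendBeacon('/rum-collect', payload);
    }
  }
});

// "buffered: true" also delivers paint entries recorded before observe() was called.
observer.observe({ type: 'paint', buffered: true });
```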
However, if your website is public and receives a lot of visits, you can use field data collected by others. These datasets are handy because they are accessible without installing anything on the sites, so, as with synthetic tools, you can make comparisons between sites. The data is not necessarily difficult to retrieve, but it may be incomplete (for example, only reporting the performance experienced by Chrome users, not those of other browsers [7]).
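For example, here is a sketch of how such a dataset can be queried. It assumes the Chrome UX Report (CrUX) API mentioned in the footnote; the endpoint and response shape are written from memory, so check the official documentation and use your own API key.

```ts
// Minimal sketch: query the Chrome UX Report (CrUX) API for a site's field data.
// Endpoint and response shape are assumptions based on the public CrUX API docs.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fetchFieldData(origin: string, apiKey: string) {
  const response = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin }),
  });
  const data = await response.json();

  // The 75th percentile of First Contentful Paint, as experienced by real Chrome users.
  return data.record?.metrics?.first_contentful_paint?.percentiles?.p75;
}

fetchFieldData('https://www.example.com', 'YOUR_API_KEY').then((p75) =>
  console.log('FCP p75 (ms):', p75)
);
```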
It can often take time to understand field data. Interpreting the results is very complicated because many factors can affect the measured performance [8]. If you have a hunch about something, see if you can validate it by cross-checking with other data, or through synthetic tests.
In a nutshell
Today, the technical knowledge on web performance is largely consensual and testable via automated analysis tools. Generic field data is fairly easy to access if you know where to look, and provides feedback on the optimizations you implement, whose cost is easy to finance and whose gains are easy to measure.
And yet… if you look at the field data, you can see that the performance of most sites isn’t really improving. For example, looking at First Contentful Paint, one of the well-known metrics for measuring when a page starts to become useful, we see no real change over time:
So the situation is not improving, contrary to what you might expect. Let’s take a little tour of the things that are still difficult.
The road ahead of us
Like many web quality topics, web performance is everyone’s business. To improve the performance of a site, you must not only understand where problems come from, but also be able to determine the right people to contact to solve them. Since webperf concerns every aspect of the web, you will soon find yourself having to talk to very different people, even on seemingly simple issues.
Let’s take the example of image optimization. When you say it like that, it sounds simple and a lot of people will feel like they understand. But in order to approach the topic in its entirety, you have to:
- Train people to choose the best format based on the image
- Control the addition of images and trigger the optimization and generation of several image variants depending on the context and artistic direction, to avoid oversized images
- Modify the HTML markup and/or possibly set up an image proxy with content negotiation (internal or external), so that every user receives an image adapted to their own context.
And I haven’t even talked about lazy loading yet.
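To show how quickly even this “simple” topic becomes technical, here is a minimal lazy-loading sketch using the standard IntersectionObserver API. The img.lazy selector and the data-src attribute are assumptions about the markup, and modern browsers can often achieve the same result natively with the loading="lazy" attribute.

```ts
// Minimal lazy-loading sketch: only load images when they approach the viewport.
// Assumes markup like <img class="lazy" data-src="photo.jpg" alt="…">.
const lazyImages = document.querySelectorAll<HTMLImageElement>('img.lazy');

const io = new IntersectionObserver(
  (entries, observer) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      // Swap the placeholder for the real source only when the image is needed.
      img.src = img.dataset.src ?? '';
      observer.unobserve(img);
    }
  },
  { rootMargin: '200px' } // start loading a little before the image becomes visible
);

lazyImages.forEach((img) => io.observe(img));
```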
When you start talking about web performance in the enterprise, you’ll quickly find that the hardest part is not necessarily the technical side: it’s also about getting different professions to talk to each other, to get an overview of the situation and of the actions to be taken.
This is especially true since it is sometimes difficult to be sure we are talking about the same thing. There are a lot of different metrics and, as if that wasn’t enough, new metrics appear regularly. Some have been in use for a long time, such as the Speed Index [9], a reference indicator for almost ten years. Others are misnamed, do not allow for decision-making, or inform about something that is not necessarily paramount. It is therefore necessary to constantly question not only the metrics themselves, but also the way in which they are measured.
Often, the best way to address your own specific needs is to define your own metrics, using custom time milestones. On the server side, these are called Server Timings [10]. On the browser side, these are called User Timings [11]. Unlike other metrics, they correspond to what your team chooses to monitor. You can find them when you analyze the page in your browser’s developer tools, and they can be collected by all the analysis tools. Why not use them?
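As an illustration, here is a minimal User Timing sketch. The milestone names and the renderProductList function are hypothetical stand-ins for whatever is meaningful to your team; Server Timings work in the same spirit, with the server adding a Server-Timing response header instead.

```ts
// Hypothetical piece of work we want to time.
function renderProductList(): void {
  // …build and insert the product list into the DOM…
}

// Mark the moment a business-meaningful step starts…
performance.mark('product-list:start-render');

renderProductList();

// …mark when it ends, then turn the two marks into a named measure.
performance.mark('product-list:end-render');
performance.measure(
  'product-list:render',
  'product-list:start-render',
  'product-list:end-render'
);

// The measure shows up in your browser's developer tools and can be
// collected like any other metric.
const [measure] = performance.getEntriesByName('product-list:render');
console.log(`Product list rendered in ${Math.round(measure.duration)} ms`);
```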
When you want to optimize the performance of a site, you will sometimes come up against external libraries used by others to meet business needs. These libraries often address the business need in terms of functionality, but rarely consider their impact on web performance. As a result, the people who use them have sometimes never heard of performance, and extra work is required to ensure the libraries do not hurt it.
Concretely, this often means that optimization is going to be difficult, either technically or because the people you’re going to talk to won’t understand the need. When optimization is really impossible, don’t forget to write it down in a technical debt repository so that it can be considered in the next sprint or redesign.
Why we need you
The subject of Web Performance, like many subjects of web quality, is a technical subject. But the knowledge is available and the skills can be learned with time and investment.
I’m not going to lie to you: it takes a long time to become proficient. But the acquired skills often do not depend on any framework, any solution on the market, any trend. It’s all about understanding how web standards work: HTTP, HTML, JS and CSS. It’s hard to find anything more reusable to learn on the web, and believe me: it’s worth it.
You will see that whatever your role in the web production chain, you can acquire these skills and put them at the service of better web performance.
We need people ready to optimize websites – there is real demand for this. But it’s far from being the only way to participate; in fact, it’s often at the end of the chain.
To begin to understand the challenge, we need Data Analysts capable of assessing usage data and refining metrics, to better balance investments and gains. Knowing that a topic matters because others say so is not the same as being able to adjust your investment according to your own needs and expected gains.
Web Performance must be an integral part of the development of libraries of all types. Whether it is back-end or front-end, these projects must integrate performance as a fundamental need, in the same way as security or accessibility should be.
We need designers who think about the way their interfaces are progressively displayed, and who prioritize the elements users expect most, in order to improve their experience [12].
Are your skills more related to the back-end? Great – we also need people like you to optimize code execution, set up server caching systems, or help develop edge proxies with a real-time optimization engine, which can speed up a site while waiting for the underlying fixes to be made [13].
Finally, we also need people to “mediate” all these professions, to nurture and supervise dialogue, to provide the conditions for success and in particular training in a “common base of competence” (using something like Opquast [14], for example).
For all these reasons, you should look to improve web performance.
And, maybe, you already do it.
The illustrations in this article are from the Undraw project, created by Katerina Limpitsouni. You can use them in any project, commercial or personal, without attribution or fees.
Thanks to Barry Pollard who took the time to rewrite a good third of the article in an English that is a little more understandable than what I produce myself!
Footnotes
- “Mettre en place une stratégie de performance web” (“Setting up a web performance strategy”), during Sud Web 2012.↩︎
- Romuald Priol talks about it very well in his post “The digital impact: best practices for a greener web”.↩︎
- “Front-End Performance Checklist 2020: the PDF”, 166 Kb.↩︎
- Dareboost, which I work on, is one of those tools and I’m very proud of it, but it’s far from being the only one. Calibre, Speedcurve, GTMetrix, Pingdom, among others, also offer instant reports and tracking. Development teams can also use Lighthouse or WebHint on their computers.↩︎
- For example, Google Analytics. Personally, I use Matomo.↩︎
- About this, see “User Experience & Performance: Metrics that Matter”, by Philip Tellis.↩︎
- I’m talking about the Chrome User Experience Report, also known as CrUX.↩︎
- “Comment interpréter les mesures de performance réelles (RUM metrics)” (“How to interpret real performance measurements (RUM metrics)”), by Gilles Dubuc.↩︎
- “Speed Index: what is it and how to properly use this performance metric?”, by Damien Jubeau.↩︎
- “MDN Server-Timing Documentation”.↩︎
- “Custom Timing : attendez la prochaine frame quand vous utilisez la User Timing API” (“Custom Timing: wait for the next frame when you use the User Timing API”).↩︎
- “Mind over Matter: Optimize Performance Without Code”, by Stéphanie Walter.↩︎
- The French reference solution is Fasterize.↩︎
- Opquast, the benchmark certification for all web professionals.↩︎