
Clark Gunn (@clarkgunn, @clark_gunn) is a Senior Frontend Engineer at Formidable, working on performance, accessibility, and sustainability.
Background
Next.js is one of the most popular React frameworks and is seeing heavy adoption.
I am working on performance remediation for a large e-commerce site built with Next. While the site has numerous third-party analytics, observability, and client-side A/B testing scripts, the performance bottlenecks I am facing are primarily due to large app, vendor, and framework JavaScript bundles.
Mobile Web Performance
While modern phones often sport impressive specs similar to desktop ones, thermal constraints, network volatility, and necessary background work mean only a portion of that device power translates to web performance. Progressive Performance (Chrome Dev Summit 2016) is a great talk that goes into the constraints of mobile devices and their impact on performance (YouTube link).
For my project, mobile performance is my most significant concern; desktop scores are mostly acceptable. 75% of site traffic comes from mobile, and the site has failing Core Web Vital (CWV) scores for many page types. I wanted to see where my project sits in the performance landscape of mobile sites built with Next.js, so I sought some data to compare.
Methodology
Next.js includes a Showcase page for sites built with the framework. The list contains many enterprise and Fortune 500 companies. Submissions are proposed via a GitHub discussion thread.
I scraped each link and verified that Next.js is used on the page. I threw out giveindia.org, which redirected to a site not built with Next. Jet.com appears to have been acquired by Walmart, since it now redirects there; fortunately, walmart.com uses Next, so I swapped Jet for Walmart.
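The verification step boils down to fetching each showcase URL and looking for the usual Next.js fingerprints in the markup. The article doesn't include the script, but a minimal TypeScript sketch (assuming Node 18+ for the global fetch; not the script used here) might look like this:

```ts
// Hypothetical sketch of the verification step, not the script used for this article.
// Next.js pages typically embed a __NEXT_DATA__ script tag and load assets from /_next/.
async function usesNext(url: string): Promise<boolean> {
  const res = await fetch(url, { redirect: "follow" });
  const html = await res.text();
  return html.includes("__NEXT_DATA__") || html.includes("/_next/static/");
}

usesNext("https://www.walmart.com/").then((isNext) => console.log(isNext));
```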
With this list of URLs, I generated a list of links pointing to the PageSpeed Insights scores for each Next URL. I manually copied the Web Vitals and Lighthouse scores into a spreadsheet. (I could have automated this but was leery of introducing bugs that might impact the scores.)
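Generating the PageSpeed Insights links is only string construction; a sketch of that step (the URLs shown are placeholders for the full Showcase list):

```ts
// Sketch: build a PageSpeed Insights report link for each showcase URL.
const showcaseUrls = [
  "https://www.walmart.com/",
  "https://www.netflix.com/",
  // ...the rest of the Showcase list
];

const psiLinks = showcaseUrls.map(
  (url) => `https://pagespeed.web.dev/report?url=${encodeURIComponent(url)}`
);

console.log(psiLinks.join("\n"));
```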
PageSpeed Insights includes Web Vitals scores from the Chrome User Experience Report (CrUX) for the last 28 days. It also provides a Lighthouse audit run with preconfigured specs and a Pass/Fail assessment. A Passing assessment is given if the three Core Web Vitals, Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS), are all green for the 28-day period.
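For reference, the assessment reduces to checking the 75th-percentile field value of each Core Web Vital against Google's published "good" thresholds; a simplified sketch (not PSI's actual code):

```ts
// Simplified sketch of the CWV "Passing" assessment, using the published
// "good" thresholds: LCP ≤ 2.5 s, FID ≤ 100 ms, CLS ≤ 0.1 at the 75th percentile.
interface CruxP75 {
  lcpMs: number; // Largest Contentful Paint, in milliseconds
  fidMs: number; // First Input Delay, in milliseconds
  cls: number;   // Cumulative Layout Shift, unitless
}

function passesCoreWebVitals({ lcpMs, fidMs, cls }: CruxP75): boolean {
  return lcpMs <= 2500 && fidMs <= 100 && cls <= 0.1;
}

console.log(passesCoreWebVitals({ lcpMs: 2300, fidMs: 40, cls: 0.05 })); // true
```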
In my spreadsheet, I included the Pass/Fail assessment, the three Core Web Vital scores, the other web vitals (First Contentful Paint (FCP), Interaction to Next Paint (INP), and Time To First Byte (TTFB)), and the main Lighthouse performance score.
Results
The spreadsheet is available as a Numbers file, an Excel spreadsheet, a PDF, and in this repo. The scores in the spreadsheet were fetched on December 5, 2022. The links in the URL column should open the relevant PageSpeed Insights page and may show different scores from those recorded here.
Interpretation
Out of 110 sites:
✅ 27 are Passing, with all CWVs green.
❌ 80 are failing one or more CWVs.
⏸️ 3 had insufficient data and were not included in CrUX.
These results are not a glowing endorsement of Next performance-wise. Given the work I’ve been doing for the past eight months, ~73% of Next sites failing CWV on mobile isn’t surprising to me now. Two years ago, still in the honeymoon phase with Next.js, this would have been a kick in the gut. I believed that Next would provide a “pit of success” and that most of my performance concerns would involve React.memo. I was not alone in this belief.
There are many instances of the Lighthouse score being at odds with the CrUX data, highlighting that we should not rely on any one type of data.
One explanation for why the Lighthouse score for Staples.com is so at odds with the CrUX data is that they have made some recent performance improvements that the CrUX data does not yet reflect.
PageSpeed Insights uses a Moto G4, considered a good low-end device for performance testing, for the Lighthouse audit. Next.js sites perform abysmally under this device emulation, with 87 of 110 getting “poor” results.
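If you want to reproduce the lab side of these numbers locally, Lighthouse can also be run programmatically with its default mobile emulation and simulated throttling, which approximates the preconfigured setup PSI uses. This is a generic sketch, not how the scores in this article were gathered:

```ts
// Sketch: run a Lighthouse performance audit with the default mobile emulation.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
const result = await lighthouse("https://www.example.com/", {
  port: chrome.port,
  onlyCategories: ["performance"],
  output: "json",
});

// The Lighthouse result exposes the performance category score on a 0–1 scale.
console.log("Performance score:", (result?.lhr.categories.performance.score ?? 0) * 100);
await chrome.kill();
```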
As device access increases worldwide, median specs will decrease rather than increase, so performance on devices with specs like the Moto G4 should not be thrown out as an outlier.
Overview Table
The spreadsheet includes Median and Average rows, but to get a more intuitive picture, I grouped the Good, Needs Improvement, and Poor scores for each metric and graphed them as a stacked bar chart (a sketch of the bucketing follows the table below).
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP | 41 | 42 | 24 |
| FID | 89 | 9 | 6 |
| CLS | 77 | 18 | 12 |
| FCP | 43 | 48 | 16 |
| INP | 13 | 43 | 48 |
| TTFB | 38 | 55 | 14 |
| Lighthouse | 2 | 18 | 87 |
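The grouping itself is mechanical: each metric's 75th-percentile value is bucketed against the published Good / Needs Improvement boundaries and the buckets are counted per metric. A sketch of that bucketing (threshold values are the ones documented on web.dev, not something specific to this dataset):

```ts
type Rating = "good" | "needs-improvement" | "poor";

// Published "good" / "needs improvement" boundaries; anything above is "poor".
const thresholds: Record<string, [number, number]> = {
  LCP: [2500, 4000], // ms
  FID: [100, 300],   // ms
  CLS: [0.1, 0.25],  // unitless
  FCP: [1800, 3000], // ms
  INP: [200, 500],   // ms
  TTFB: [800, 1800], // ms
};

function rate(metric: string, value: number): Rating {
  const [good, needsImprovement] = thresholds[metric];
  return value <= good ? "good" : value <= needsImprovement ? "needs-improvement" : "poor";
}

// Tally one metric across every site in the spreadsheet.
function tally(metric: string, values: number[]): Record<Rating, number> {
  const counts: Record<Rating, number> = { good: 0, "needs-improvement": 0, poor: 0 };
  for (const value of values) counts[rate(metric, value)]++;
  return counts;
}
```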
FID and INP scores are the opposite of what I would have intuited. FID is mostly a load-time score, and I expect sites with large JavaScript bundles and long hydration tasks to have poor FID. INP is still an experimental metric but is meant to cover all interactions, not just the first. I would expect SPA-like client-side interactivity to do better with INP after hydration, but the opposite is true for Next: good FID and poor INP.
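The difference comes down to how the two metrics are sampled: FID measures only the delay before the very first interaction's handlers can run, while INP tracks interaction latency across the whole page lifetime. A sketch of collecting both with the open-source web-vitals library (the /analytics endpoint is hypothetical, and this is not necessarily what the surveyed sites use):

```ts
// Sketch: field collection of FID and INP with the web-vitals library.
import { onFID, onINP } from "web-vitals";

onFID((metric) => {
  // Reports once, after the first interaction: the delay before its handlers could start.
  navigator.sendBeacon("/analytics", JSON.stringify({ name: metric.name, value: metric.value }));
});

onINP((metric) => {
  // Reports a high-percentile latency across all interactions observed,
  // by default when the page is backgrounded or unloaded.
  navigator.sendBeacon("/analytics", JSON.stringify({ name: metric.name, value: metric.value }));
});
```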
LCP ideally is independent of any accompanying JavaScript framework, so long as LCP images are shipped in the initial HTML and/or preloaded. However, out of all the CWV scores, this was the one Next.js sites struggled with the most.
An interesting follow-up inquiry might be what percent of Next sites serve LCP image sources in the server-rendered HTML and what percent use the next/image package.
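For context, next/image exposes a `priority` prop that opts the image out of lazy loading and preloads it, which is the framework's intended way to handle an LCP image. A minimal sketch (the hero component and asset path are hypothetical):

```tsx
// Sketch: marking a hero image as the likely LCP element with next/image.
// `priority` disables lazy loading and adds a preload hint for the image.
import Image from "next/image";

export function Hero() {
  return (
    <Image
      src="/hero.jpg" // hypothetical asset path
      alt="Seasonal sale hero banner"
      width={1200}
      height={600}
      priority
    />
  );
}
```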
In my case, where constraints prevent server rendering and preloading of LCP images, the performance bottleneck is the framework and app bundles. It is a bit unfair of me to say, “I need the framework to show my images,” and also, “The framework is preventing my images from loading quickly,” but finding ways to break up Next app bundle sizes has led to the most LCP gains in my work.
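Most of that bundle-splitting work amounts to moving non-critical, below-the-fold components behind next/dynamic so they drop out of the initial page bundle; a sketch of the pattern (component and file names are hypothetical):

```tsx
// Sketch: deferring a heavy component with next/dynamic so it isn't part of
// the initial JavaScript bundle. Component and file names are hypothetical.
import dynamic from "next/dynamic";

const ReviewsCarousel = dynamic(() => import("../components/ReviewsCarousel"), {
  ssr: false, // skip server rendering; load and hydrate only in the browser
  loading: () => <p>Loading reviews…</p>,
});

export default function ProductPage() {
  return (
    <main>
      {/* Critical, server-rendered content stays in the main bundle */}
      <h1>Product</h1>
      <ReviewsCarousel />
    </main>
  );
}
```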
These sites have different requirements in terms of third-party scripts and interactivity and are likely hosted on a variety of platforms. Any given performance issue for any individual site can’t be attributed to Next, but the one thing these sites have in common is Next.js, and the overall picture is negative.
HTTP Archive
We can compare Next to other technologies using a report available from HTTP Archive.
According to this report, only about 40% of sites are getting a good score for all three Core Web Vitals.
That’s not great for mobile users of the web, but if we add Next to this report, we get an even starker picture.
Next performs worse on mobile than the average site. Only 25.9% of Next sites had good CWV scores in October. This percentage is close to the 24.5% of passing sites from the Showcase data.
Going Forward
Next 13 is introducing a new architecture with React Server Components, meant to decrease the amount of JavaScript sent to the client. However, server components require client-side logic to parse the transfer protocol, and my limited testing with unstable versions has yet to reveal substantial performance gains.
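For a sense of what that architecture looks like in practice: in the Next 13 app directory, components are server components by default, and only subtrees marked with the "use client" directive ship JavaScript to the browser. A sketch of the split, based on the beta documentation at the time (file, endpoint, and helper names are hypothetical):

```tsx
// app/product/[id]/page.tsx — a server component by default: this code and its
// data fetching stay on the server and are not shipped in the client bundle.
import AddToCart from "./AddToCart";

// Hypothetical data helper; in a real app this might hit a database or API.
async function getProduct(id: string) {
  const res = await fetch(`https://api.example.com/products/${id}`);
  return res.json() as Promise<{ id: string; name: string }>;
}

export default async function ProductPage({ params }: { params: { id: string } }) {
  const product = await getProduct(params.id);
  return (
    <main>
      <h1>{product.name}</h1>
      {/* Only this interactive island's code is sent to the browser */}
      <AddToCart id={product.id} />
    </main>
  );
}
```

```tsx
// app/product/[id]/AddToCart.tsx — the "use client" directive marks the only
// part of the tree that ships JavaScript and hydrates in the browser.
"use client";

import { useState } from "react";

export default function AddToCart({ id }: { id: string }) {
  const [added, setAdded] = useState(false);
  return (
    <button onClick={() => setAdded(true)}>
      {added ? "Added to cart" : "Add to cart"}
    </button>
  );
}
```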
Additionally, the changes required to achieve these benefits would nearly constitute a rewrite of my current project. Overall, I am not bullish on Next.js for projects where performance is vital.
Great article and analysis. As with anything, though, I imagine the performance comes down to “it depends”. NextJS attempts to take a lot of the complexity of site building away from the average developer. Because of this, I wouldn’t be shocked if a lot of sites are built with it by the average dev because it’s an easier system to get up and running, especially with Vercel-hosted sites. Performance is likely not going to be a concern for the average dev, and so they don’t take time to optimize.
Also, in your analysis you showed that TTFB lands in “needs improvement” for 55 sites. Both FCP and LCP greatly depend on that piece, so if TTFB is slow, the rest will be slow as well. However, it’s hard to determine what the issue is with TTFB in the aggregate without understanding how each measured site responds. If the average dev haphazardly used NextJS and did not turn on static generation for pages, it would be slower. If they make each page render on the server without any caching, that will be slower. Perhaps the way they construct pages doesn’t follow a good architectural pattern that takes advantage of the benefits of NextJS, and therefore, through naivety about the framework, they end up treating everything the same way.
I wouldn’t be shocked if devs that use NextJS end up thinking it’s a hammer to solve every problem, so they try to hammer in everything, whether it’s a nail, a screw, or, as devs often do, their own thumb.
Hi Steven, thanks for your feedback. Several people have expressed similar sentiments to me after sharing this data.
This data alone is insufficient to lay the blame for any given site solely at Next.js’ doorstep. Each site has different requirements and considerations which impact performance. My goal was to look at sites built with Next.js, in aggregate, with all their varying infrastructure and architectural considerations, and see how well they perform on average. My interpretation of this data was that these sites are not performing well.
There are numerous ways to improve or footgun performance with Next.js, and you are correct that a lot of this comes down to how well the developers utilize the framework. I encourage you to look again at the sites in the list: netflix, tiktok, twitch, hulu, notion, target, nike, hbomax, realtor.com, att, etc. These are not sites deployed by inexperienced developers with little mastery of their chosen tech stack.
I agree that Next abstracts away a lot of the complexity of building a site. Still, this data demonstrates that relinquishing that responsibility to Next does not produce a good result.
Excellent post! Thank you for sharing
Thank you for sharing, nice job!
Thanks for the response, Clark. Overall, I agree; one trend I see is loading heavier and heavier sites via frameworks. The paradigm of server-side rendering with hydration is, I think, a bit problematic. You want to load sites quickly, so the HTML is generated and sent out, but then you also have to load lots of extra JavaScript to get the site working, including the hydration data embedded in the page again alongside markup that is duplicated inside the React code. So for every page, you load the data twice and the HTML twice. That can’t be a good formula for performance success.
I’m intrigued by frameworks like Qwik that use the data that is sent down once, and then start downloading JavaScript as it’s used. This creates a potential lag on input delay if you are not careful.
I see the future though being a combo of compiled code sent down (WASM) to avoid the burden of parsing and compiling in the browser, and also having a more robust system around web components that allow you to easily ditch a framework like React or NextJs all together and use native capabilities.
On Mastodon, Annie Sullivan provided me with some great insight into what might be going on with some of these FID and INP scores: mastodon.social/@anniesullie@indieweb.social
I’ve been the tech lead on two big corporate projects using Next.js where mobile performance is critical (e-commerce sites). Looking at the PageSpeed Insights report for SimplyBe (a British retail site that was relaunched in September 2022), the CrUX and Lighthouse scores are reasonably decent for mobile: https://pagespeed.web.dev/report?url=https%3A%2F%2Fwww.simplybe.co.uk%2F
Next.js out-of-the-box performance is a good starting point, but you’ll need to do a lot of work beyond just the defaults. Performance considerations need to be baked into the architecture and the devs’ approach to mobile-first development.
Of note, we spent a lot of time optimizing our use of Akamai, without which we wouldn’t have achieved those performance scores.