This article was originally published on theheadless.dev
The need for fast and responsive applications has never been greater, driven by the shift from desktop to mobile. At the same time, web applications have grown in complexity and size, with load times rising accordingly. It is therefore clear why webpage performance is a more popular topic today than it has ever been.
This article aims to give a practical introduction to the whys and hows of web performance, without getting lost in the depth or breadth of this massive topic.
The time it takes for a service to become usable, as well as its general responsiveness, weigh heavily on the user's perception of that service. Helpful features, great design and other prominent characteristics all become irrelevant when an online service is so slow that users navigate away.
You can build the best web application in the world, but be mindful that each user will have a specific amount of time they are willing to invest in your service to solve their problems. Exceed that amount, and you risk losing them to a different, more performant solution. This is even truer for new users, who haven't yet been given proof of the quality of your service, and are essentially investing their time up-front, hoping for a return.
There is a brighter side to the topic: if low performance can sink an online platform, high performance can very well help it rise to the top. Speed and responsiveness can be a differentiating characteristic for a service, prompting users to choose it over the competition. An investment in this area will therefore almost always pay off. Some notable real-world examples from well-known businesses include:
- Pinterest decreasing wait time for their users, increasing both traffic and conversions.
- Zalando applying small improvements in load time and finding a direct correlation with increased revenue per session.
- The BBC discovering that every extra second that a page took to load led to 10% of users leaving the page.
Given the importance of page performance, it is no coincidence that browsers expose a wealth of performance metrics. Knowing how your application scores against these over time gives you the feedback you need to keep it performant for your users. Several approaches can be combined to achieve the best results:
- Real user monitoring to understand what performance actual end-users of your service are experiencing.
- Synthetic monitoring to proactively gather intel on service performance, as well as to find issues before users stumble into them.
- Performance testing to avoid releasing performance regressions to production in the first place.
- Regular audits to get an overview of your page's performance and suggestions on how to improve it, e.g. with tools such as Google Lighthouse.
As much as we should strive to build performant applications, we should also commit to monitoring and testing performance, enabling continuous feedback and rapid intervention in case of degradation. Puppeteer and Playwright give us a great toolkit to power both synthetic monitoring and performance testing. Among other things, they offer:
- Access to the Web Performance APIs, especially PerformanceNavigationTiming and PerformanceResourceTiming.
- Whenever testing against Chromium, access to the Chrome DevTools Protocol for traffic inspection, network emulation and more.
- Easy interoperability with performance libraries from the Node.js ecosystem.
Navigation timings are metrics measuring a browser's document navigation events. Resource timings are detailed network timing measurements covering the loading of an application's resources. Both expose the same read-only properties, but navigation timing measures the main document's events, whereas resource timing covers all the assets requested by that document (and, in turn, any resources those assets request).
We can use the Navigation Timing API to retrieve timestamps of key events in the page load timeline.
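A minimal sketch of what this could look like with Playwright is shown below. The target URL and the helper function are our own illustrative choices, and the script assumes the `playwright` package and a Chromium build are installed:

```javascript
// Sketch: reading Navigation Timing entries from a page with Playwright.

// Pure helper: derive a few key durations (in ms) from a
// PerformanceNavigationTiming entry serialized via toJSON().
function navigationSummary(entry) {
  return {
    timeToFirstByte: entry.responseStart - entry.requestStart,
    domContentLoaded: entry.domContentLoadedEventEnd - entry.startTime,
    pageLoad: entry.loadEventEnd - entry.startTime,
  };
}

async function measureNavigation(url) {
  // Lazy require, so the helper above is usable without a browser installed.
  const { chromium } = require('playwright');
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // performance.getEntriesByType runs in the page context, not in Node.
  const [navigationEntry] = await page.evaluate(() =>
    performance.getEntriesByType('navigation').map((e) => e.toJSON())
  );
  await browser.close();
  return navigationSummary(navigationEntry);
}

// Example invocation (not run here):
// measureNavigation('https://example.com').then(console.log);
```

The full raw entry contains many more timestamps (e.g. `domInteractive`, `connectEnd`); the helper just picks out a few commonly watched durations.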
The Resource Timing API allows us to zoom in to single resources and get accurate information about how quickly they are being loaded. For example, we could specifically look at our website's logo:
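One way this could look with Playwright is sketched below; the logo filename and URL are hypothetical placeholders for your own assets:

```javascript
// Sketch: inspecting a single resource's timing with Playwright.

// Pure helper: find the first resource entry whose URL contains a substring.
function findResource(entries, namePart) {
  return entries.find((e) => e.name.includes(namePart));
}

async function inspectLogoTiming(url) {
  const { chromium } = require('playwright');
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Collect all resource entries loaded by the page.
  const resources = await page.evaluate(() =>
    performance.getEntriesByType('resource').map((e) => e.toJSON())
  );
  await browser.close();

  // 'logo.png' is a hypothetical asset name.
  const logo = findResource(resources, 'logo.png');
  if (logo) {
    // entry.duration spans the whole fetch, from startTime to responseEnd.
    console.log(`logo fetched in ${logo.duration} ms`);
  }
  return logo;
}
```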
The Chrome DevTools Protocol offers many great performance tools for us to leverage together with Puppeteer and Playwright.
One important example is network throttling, through which we can simulate the experience of users accessing our page with different network conditions.
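With Puppeteer, this can be done by opening a raw CDP session and sending `Network.emulateNetworkConditions`. The latency and throughput numbers below are illustrative assumptions, roughly in the ballpark of a slow mobile connection:

```javascript
// Sketch: throttling the network via the Chrome DevTools Protocol in Puppeteer.

// Pure helper: convert kilobits/second into the bytes/second that
// Network.emulateNetworkConditions expects.
function kbpsToBytesPerSecond(kbps) {
  return (kbps * 1024) / 8;
}

async function throttledVisit(url) {
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Open a raw CDP session for this page and apply network conditions.
  const client = await page.target().createCDPSession();
  await client.send('Network.enable');
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400, // additional round-trip latency in ms (assumed value)
    downloadThroughput: kbpsToBytesPerSecond(1600),
    uploadThroughput: kbpsToBytesPerSecond(750),
  });

  await page.goto(url);
  await browser.close();
}
```

Since this relies on the DevTools Protocol, it only works when running against Chromium-based browsers.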
The DevTools Protocol is quite extensive. We recommend exploring the documentation and getting a comprehensive overview of its capabilities.
Lighthouse can easily be used programmatically with Playwright and Puppeteer to gather values and scores for different metrics, like Time To Interactive (TTI):
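One common recipe, sketched under the assumption that the `puppeteer` and `lighthouse` packages (here with Lighthouse's CommonJS-era Node API) are installed, is to launch Chrome through Puppeteer on a known debugging port and point Lighthouse at it:

```javascript
// Sketch: running Lighthouse against a Puppeteer-controlled Chrome
// and extracting the Time To Interactive audit.

// Pure helper: pull a single audit's numeric value out of a
// Lighthouse result (lhr) object.
function auditValue(lhr, auditId) {
  const audit = lhr.audits[auditId];
  return audit ? audit.numericValue : undefined;
}

async function measureTTI(url) {
  const puppeteer = require('puppeteer');
  const lighthouse = require('lighthouse');

  // Expose a fixed debugging port so Lighthouse can attach to this browser.
  const browser = await puppeteer.launch({
    args: ['--remote-debugging-port=9222'],
  });
  const { lhr } = await lighthouse(url, {
    port: 9222,
    output: 'json',
    onlyAudits: ['interactive'], // 'interactive' is the TTI audit id
  });
  await browser.close();

  // Numeric value is in milliseconds.
  return auditValue(lhr, 'interactive');
}
```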
- The comprehensive MDN Web Performance documentation
- web.dev's performance section
- Web Performance Recipes With Puppeteer by Addy Osmani
- Getting started with Chrome DevTools Protocol by Andrey Lushnikov
- Get Started with Google Lighthouse