Bipul Sharma

Web Performance Optimization- I

Critical Rendering Path (CRP) and its Optimization, the PRPL pattern and Performance Budget.


Web performance is all about making websites fast, including making slow processes seem fast. Good or bad website performance correlates powerfully with user experience, as well as the overall effectiveness of most sites. Websites and applications need to be fast and efficient for all users, no matter what conditions the users are under. To make that happen, we use performance optimizations. The MDN web docs break down performance optimization into four major areas.

  1. Reducing overall load time

    • Compressing and minifying all files.
    • Reducing the number of file and other HTTP requests sent back and forth between the server and the user agent.
    • Employing advanced loading and caching techniques and conditionally serving the user with only what they need when they actually need it.
  2. Making the site usable as soon as possible

    • This is done by loading critical components first to give the user initial content and functionality, deferring less important features using lazy loading (requesting and displaying content only when the user gets to it or interacts with it), and preloading features the user is likely to interact with next.
  3. Smoothness and Interactivity

    • Improving the perceived performance of a site through skeleton interfaces, visual loaders and clear indication that something is happening and things are going to work soon.
  4. Performance measurements

    • Tools and metrics to monitor performance and validate optimization efforts. The thing to keep in mind here is that not every performance optimization will fit your solution and needs.
    • Browser tools for measuring performance include Lighthouse (Chrome), the Network Monitor, and the Performance Monitor. There are also hosted third-party tools like PageSpeed Insights (Google), WebPageTest, and GTmetrix (which actually uses Lighthouse) that help measure performance.
    • Key indicators that these tools use to describe the performance are:
      • First Paint- The time it takes before the user sees changes happening in the browser.
      • Largest Contentful Paint (LCP)- The time it takes before the user sees the largest piece of content (text, an image or something else) rendered in the browser.
      • First Meaningful Paint (FMP)- The time it takes before the user sees content that is actually meaningful, i.e. when the above-the-fold content and web fonts are loaded and the user can derive meaning from what they are seeing.
      • Time To Interactive (TTI)- The time it takes before the content has finished loading and the UI can be interacted with, so the user can actually click buttons, fill in forms or do whatever else is going to happen on the site.

The longer it takes for a site to hit each of these points, the higher the chance of the user either getting annoyed or abandoning the user experience altogether. So good performance is better for your visitors, better for you because you don't have to pay as much for your hosting, better for your Google rankings, and finally, better for the environment.

Critical Rendering Path (CRP)

To understand performance optimization, you first need a solid understanding of how typing something into the address bar of a browser results in the page being rendered in the viewport.

It all starts with the browser sending a request for a website to its Internet Service Provider (ISP).

The ISP then sends the request on to a DNS (Domain Name System) server, the phone book of the web, which maps the domain name you're seeking to the IP address of the website.

This DNS lookup is done for each unique hostname. So if the site you're requesting uses externally hosted fonts, JavaScript libraries, images, videos or other services, this DNS lookup happens for each of those different services. Any time there's a new domain name, a new DNS lookup has to take place. This is the first major performance bottleneck.

To do away with some of this performance overhead, the domain-name-to-IP-address association will probably be cached at numerous different steps: your ISP will cache this information, and it will also likely be cached in your router and on your computer. That way, when you send a request to the same domain you requested before, instead of having to go through the whole DNS lookup again, the answer is pulled from a cache somewhere closer to the computer. But that also means that if the DNS record has changed in the meantime, you'll get an incorrect address and things won't work as expected.
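The caching behavior described above can be sketched as a tiny lookup cache with a time-to-live (TTL). The `resolve` function here is a hypothetical stand-in for a real DNS query, and the IP and TTL values are illustrative.

```javascript
// Sketch: a DNS-style cache with a TTL. resolve() is a hypothetical
// stand-in for a real (slow) DNS query.
function createDnsCache(resolve, ttlMs) {
  const cache = new Map();
  return function lookup(host, now = Date.now()) {
    const entry = cache.get(host);
    if (entry && now - entry.time < ttlMs) {
      return entry.ip; // served from cache: fast, but possibly stale
    }
    const ip = resolve(host); // full DNS lookup: slow
    cache.set(host, { ip, time: now });
    return ip;
  };
}

// Usage: count how often the "real" lookup actually runs.
let lookups = 0;
const lookup = createDnsCache(() => { lookups++; return '93.184.216.34'; }, 60000);
lookup('example.com', 0);      // miss: real lookup
lookup('example.com', 1000);   // hit: served from cache
lookup('example.com', 61000);  // TTL expired: real lookup again
console.log(lookups); // 2
```

Note how the cache trades freshness for speed: a stale entry is exactly the "incorrect address" problem mentioned above.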

Once the IP address is established, the browser and server perform what's called a TCP handshake, where they exchange identity keys and other information to establish a temporary connection and working relationship. This is also where the type of connection is determined: is this a regular HTTP connection or an encrypted HTTPS connection? If the latter, encryption keys are exchanged, and if both the browser and the server support it, the transaction is upgraded from HTTP/1.1 to HTTP/2, which provides substantial performance enhancements.

We now have a connection and everything is ready to go. At this point, the browser sends an HTTP GET request for the resource it's looking for. This initial GET request will be for whatever the default file on the server location is, typically index.html or index.php or index.js or something similar to that.

The time it takes for the browser to finally receive the first byte of the actual page it's looking for is measured as Time To First Byte (TTFB). The first piece of data, called a packet, that the browser receives is around 14 kilobytes in size; the packet size then doubles with every new transfer. That means if you want something to happen right away, you need to cram it into those first 14 kilobytes.
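The doubling behavior comes from TCP slow start. Under the simplifying assumption that the first congestion window is about 14 KB and doubles on every round trip (ignoring packet loss, headers, and other real-world details), you can estimate how many round trips a payload needs:

```javascript
// Simplified TCP slow-start model: the first window is ~14 KB and
// doubles on every round trip (ignores packet loss, headers, etc.).
function roundTrips(bytes, firstWindow = 14 * 1024) {
  let sent = 0;
  let window = firstWindow;
  let trips = 0;
  while (sent < bytes) {
    sent += window;   // deliver the current window
    window *= 2;      // slow start: window doubles each round trip
    trips += 1;
  }
  return trips;
}

console.log(roundTrips(14 * 1024));  // 1 - fits in the first window
console.log(roundTrips(100 * 1024)); // 4 - ~100 KB needs four round trips
```

This is why shaving a page's critical content down to fit in the first window has an outsized effect on perceived speed.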

The browser now receives the HTML document and starts reading it from top to bottom, parsing the data as it goes. The HTML is turned into a DOM tree, and the CSS is turned into a CSSOM tree, an object model for the CSS of the page, which makes it possible for the browser to render the CSS and for JavaScript to interact with it. As the document is parsed, the browser also loads in any external assets as they are encountered. That means any time it encounters a new CSS file, or a reference to anything else, it sends a new request, the server responds by sending the file back, and the browser starts rendering that as well.

In the case of JavaScript, though, the browser stops everything else and waits for the file to be fully downloaded. Why? Because there's a good chance the JavaScript wants to make changes to either the DOM or the CSSOM, or both. This is what's known as render blocking: whatever rendering was happening stops, and is literally blocked, for as long as the browser is waiting for the JavaScript to be fully loaded and then fully executed. Once all of this parsing is done, the rendering can begin in earnest, and here the browser combines the DOM and CSSOM to style, lay out, paint, and composite the document in the viewport.
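A common way to avoid this render blocking (a technique not spelled out in the text above, but standard HTML) is to mark scripts as `defer` or `async`, so the HTML parser keeps going while the file downloads. File names here are illustrative.

```html
<!-- defer: download in parallel, execute after parsing finishes, in document order -->
<script src="app.js" defer></script>

<!-- async: download in parallel, execute as soon as it arrives (order not guaranteed) -->
<script src="analytics.js" async></script>
```

`defer` is usually the safer default for scripts that touch the DOM, since they run after the document is fully parsed.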

The metric First Contentful Paint (FCP) refers to how long it takes for all of this to happen. What's important for our purposes is to remember what's actually happening, so we can identify bottlenecks and add performance enhancements to get past them as quickly as possible.

Optimizing the CRP

When you interact with content on the web today, you're using one of two different versions of the HTTP protocol, either the old HTTP/1.1 or the more modern HTTP/2. Which protocol version is in use has a significant impact on the performance of the site. In HTTP/1.1, all files requested by the browser are loaded synchronously, one after the other. So a typical HTML page with two style sheets, a couple of images, and some JavaScript would require the browser to first load the HTML document, then the CSS files, then the JavaScript files, and finally the image files one after the other. This is slow, inefficient, and a recipe for terrible performance.

To work around this obvious issue, browsers cheat by opening up to six parallel connections to the server to pull down data. However, this creates what's known as head-of-line blocking, where the first file, the HTML file, holds back the rest of the files from downloading. It also puts enormous strain on the internet connection and the infrastructure, both in the browser and on the server, because you're now operating with six connections instead of one single connection.

In HTTP/2, we have what's known as multiplexing. The browser can download many separate files at the same time over one connection, and each download is independent of the others. That means with HTTP/2, the browser can start downloading a new asset as soon as it's encountered, and the whole process happens significantly faster.

Now, for HTTP/2 to work, a few key conditions need to be met. Number one, the server must support HTTP/2. Number two, the browser must also support HTTP/2. And number three, the connection must be encrypted over HTTPS. If any of these conditions is not met, the connection automatically falls back to HTTP/1.1. So, bottom line: for instant performance improvements with minimal work, get an SSL certificate for your domain and ensure your server supports HTTP/2.
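As one concrete example, on an nginx server HTTP/2 is enabled together with TLS in the server block. This is a sketch: the domain and certificate paths are illustrative, and the exact directives vary by nginx version.

```nginx
# Enable HTTP/2 alongside TLS (domain and certificate paths are illustrative).
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```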

Identifying which bottlenecks cause performance issues for you is the key to performance optimization. The server itself can contribute to poor performance.

The next bottleneck is the connection made between the browser and the servers hosting the files necessary to render the page. For each of these connections, that whole DNS and TCP handshake loop needs to take place, which slows down the whole process.


How many files are downloaded and in what order those files are downloaded has an impact on performance.

Caching (or storing of assets) is also one of the methods for performance optimization. This can be done on the server, on the CDN or in the browser.

  • Caching on the Server

If you're running a site relying on server-side rendering, meaning each page or view is generated on the fly by the server when it is requested, caching may provide a huge performance boost. By enabling caching, the server no longer has to render the page every time the page is requested.
Instead, when the page is rendered, a snapshot of that page is created and stored in the server cache. The next time a visitor comes to the site, they'll be handed this stored cached snapshot instead of a freshly rendered page. This is why static site generators have become so popular: they produce pre-rendered, cacheable static pages and bypass the entire CMS server-side rendering problem. The challenge with this type of caching lies in dynamic features: every time a new comment is added, for example, the cache needs to be cleared and the page regenerated. Even so, caching should be enabled for all sites relying on server-side rendering, because the performance benefits are so significant.
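The snapshot-and-invalidate idea can be sketched as a tiny in-memory page cache. The `renderPage` function here is a hypothetical stand-in for the expensive server-side rendering step.

```javascript
// Sketch: an in-memory page cache. renderPage() is a hypothetical
// stand-in for expensive server-side rendering.
function createPageCache(renderPage) {
  const cache = new Map();
  return {
    get(path) {
      if (!cache.has(path)) cache.set(path, renderPage(path)); // render once
      return cache.get(path); // later requests are cache hits
    },
    invalidate(path) {
      cache.delete(path); // e.g. after a new comment is posted
    },
  };
}

// Usage: the page is only rendered when the cache is empty or invalidated.
let renders = 0;
const pages = createPageCache((path) => { renders++; return `<html>${path}</html>`; });
pages.get('/blog');       // rendered
pages.get('/blog');       // cached
pages.invalidate('/blog');
pages.get('/blog');       // rendered again
console.log(renders); // 2
```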

  • Caching on the CDN

CDNs are effectively external caching services for sites. CDNs can also do edge computing: here, the CDN renders the page when requested and then caches it itself. This edge approach works well with modern static site generators like Gatsby and other JavaScript-based site generators and frameworks, because they serve up static assets by default and are built to work in this modern web architecture.

  • Caching in the browser

There are two main things we can do here. One, store existing assets, so if the visitor returns to the site it already has all the information cached in the browser. Two, push files to the browser early, so by the time the browser requests a file it is already sitting in the cache. All browsers do some level of caching automatically, and we can then instruct the browser on exactly how we want our assets cached. For assets that are unlikely to change, such as the main style sheets, JavaScript files, and images, long cache durations make sense. For assets that are likely to change over time, short cache durations, or no caching at all, may make more sense.

To ensure new and updated assets always make it to the visitor, we can use cache-busting strategies like appending automatic hashes to file names, or we can rely on the server itself to version each file based on its name and modification date and handle the caching automatically. You can also split CSS and JavaScript into smaller modules, so when you update something, instead of having to recache an entire style sheet for an entire site, you're just recaching the module that has that update.

PRPL and Performance Budget

To achieve the best possible performance for your website or application, always keep the PRPL pattern in mind. This is an acronym that stands for:

  • Push (or preload) important resources to the browser, using server push for the initial load and service workers on subsequent visits, so the application runs faster.
  • Render the initial route as soon as possible by serving the browser critical CSS and JavaScript, improving the perceived performance of the application.
  • Pre-cache remaining assets so they are available when the browser needs them.
  • Lazy load all non-critical assets so they only load when they are actually needed, reducing the time to initial load and saving the visitor from wasting bandwidth on assets they will never use.
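In markup, the preload and lazy-load steps often boil down to hints like these (file names and attributes are illustrative):

```html
<!-- Preload a critical resource so the browser fetches it early -->
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>

<!-- Lazy load a below-the-fold image: it is only fetched when it nears the viewport -->
<img src="photo.jpg" loading="lazy" alt="Photo" width="800" height="600">
```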

The number one metric that determines the performance of your site or app is its weight.
A performance budget gives you a metric to measure every new feature against, and a tool to use when hard decisions need to be made. A performance budget may include limits on the total page weight, total image weight, the number of HTTP requests, the maximum number of fonts, images or external assets, and so on.
We now have tools that we can integrate into our build processes, like Webpack's performance options, available directly inside Webpack, and Lighthouse's LightWallet, which let you test your builds against the performance budget at any time and get flagged whenever your images, your JavaScript, your CSS or anything else is too big.
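Webpack's performance options look roughly like this (the size limits are illustrative budget numbers, not defaults):

```javascript
// webpack.config.js - fail the build when assets exceed the budget.
// The size limits below are illustrative.
module.exports = {
  performance: {
    hints: 'error',                 // use 'warning' to report without failing
    maxAssetSize: 170 * 1024,       // budget for any single emitted asset
    maxEntrypointSize: 250 * 1024,  // budget for an entrypoint's total size
  },
};
```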

Some best practice metrics for Performance budget are:

  • Make sure that your site has a Speed Index under three seconds.
  • Time To Interactive is under five seconds.
  • Largest Contentful Paint is under one second.
  • The max potential First Input Delay is under 130 milliseconds.
  • The maximum size of the Gzipped JavaScript bundle is under 170 KB.
  • The total bundle size is under 250 KB, and all of this happens on a low-powered feature phone on 3G.

Now these performance budget metrics are severe and really difficult to hit. They're also the metrics being used by tools like Lighthouse to test for performance.

So the question becomes: how do you create a realistic performance budget?

  • Build separate performance budgets for slow networks and for laptop/desktop devices on fast networks.
  • Do a performance audit.
  • Set reasonable goals based on the audit.
  • Test the production version against the performance budget.
  • Do a competitor performance audit: make your performance goal better than your competitor's.
  • Test all work against the performance budget, though performance budgets are unique to each project and will change over time.

Part II
