<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bipul Sharma</title>
    <description>The latest articles on DEV Community by Bipul Sharma (@bipul).</description>
    <link>https://dev.to/bipul</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F667846%2F57225d3b-73fd-44a1-995b-818e9cf0e1c8.jpeg</url>
      <title>DEV Community: Bipul Sharma</title>
      <link>https://dev.to/bipul</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bipul"/>
    <language>en</language>
    <item>
      <title>Web Performance Optimization- II</title>
      <dc:creator>Bipul Sharma</dc:creator>
      <pubDate>Tue, 03 Aug 2021 18:18:13 +0000</pubDate>
      <link>https://dev.to/bipul/web-performance-optimization-ii-2799</link>
      <guid>https://dev.to/bipul/web-performance-optimization-ii-2799</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/bipul/web-performance-optimization-i-5d39"&gt;Part-I&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  About
&lt;/h3&gt;

&lt;p&gt;𝐈𝐦𝐚𝐠𝐞 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐬: different file formats, responsive images markup, manual and automatic optimizations, lazy loading&lt;br&gt;
𝐉𝐒 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧: modularization, async-defer, lazy loading, minifiers&lt;br&gt;
𝐂𝐒𝐒 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧: modularization, critical CSS, using onload and disabled attributes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Glossary&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shallow depth of field- very small zones of focus.&lt;/li&gt;
&lt;li&gt;Lossy and lossless images- lossy compression discards some image data, trading quality for a smaller file size, while lossless compression preserves full quality at the cost of a larger file size. &lt;/li&gt;
&lt;li&gt;Transparency/opacity- an image with transparent regions lets whatever sits behind it show through.&lt;/li&gt;
&lt;li&gt;Render blocking- JS or CSS that stops the browser from rendering the DOM until it has loaded.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Image Optimization
&lt;/h2&gt;

&lt;p&gt;Images are the leading cause of the slow web. We have two conflicting needs here: we want to post high quality images online, but also want our websites and apps to be performant, and images are the main reason they are not. So how do we solve this conundrum? The answer is with a multi-pronged approach, ranging from &lt;strong&gt;compression&lt;/strong&gt; to careful &lt;strong&gt;selection of image formats&lt;/strong&gt;, to how we &lt;strong&gt;mark up&lt;/strong&gt; and &lt;strong&gt;load&lt;/strong&gt; images in our applications.&lt;/p&gt;

&lt;p&gt;Image performance is all about how much data is contained within an image and how easy it is to compress that data. The more complex the image, the larger the data set necessary to display it and the more difficult it is to compress. &lt;strong&gt;Shallow depth of field means better performance&lt;/strong&gt;. For photography including products, headshots, documentary, and others, a shallower depth of field is preferred. &lt;/p&gt;

&lt;p&gt;If you want to squeeze as much performance as possible out of your images, &lt;strong&gt;reducing the size of each image to 87% of its original dimensions, and then upscaling it by 115%&lt;/strong&gt;, will actually improve the performance of the image as well. When you downscale a photo to 87%, Photoshop takes away pixels and simplifies the image to reduce its complexity, and upscaling the result by 115% restores the original display size while preserving quality well enough that humans can't tell the difference. So we get an image of the same display size that has significantly less complexity.&lt;/p&gt;

&lt;p&gt;The image format or file type you choose for your images will have a direct impact on performance. On the web we generally use one of five formats: JPEG, PNG, GIF, SVG, and webP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JPG/JPEG&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meant for Photos&lt;/li&gt;
&lt;li&gt;Lossy image with adjustable compression&lt;/li&gt;
&lt;li&gt;High compression means large artifacts(distortion)&lt;/li&gt;
&lt;li&gt;Use for Photos when WebP is not an Option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;PNG&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meant for Graphics&lt;/li&gt;
&lt;li&gt;Lossless image format&lt;/li&gt;
&lt;li&gt;Optional transparent alpha layer&lt;/li&gt;
&lt;li&gt;Use for computer generated graphics and transparency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GIF&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meant for simple lo-fi graphics&lt;/li&gt;
&lt;li&gt;Lossy image format&lt;/li&gt;
&lt;li&gt;Limited to 256 colors&lt;/li&gt;
&lt;li&gt;Can be animated (but don't use animated GIFs)&lt;/li&gt;
&lt;li&gt;SVG/video is always a better option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SVG&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meant for advanced scalable graphics&lt;/li&gt;
&lt;li&gt;Written in Markup, can be included in HTML, CSS&lt;/li&gt;
&lt;li&gt;Very small when optimized&lt;/li&gt;
&lt;li&gt;Use for vector-based computer generated graphics and icons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;webP&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meant for web-based photos&lt;/li&gt;
&lt;li&gt;Up to 34% smaller than JPGs&lt;/li&gt;
&lt;li&gt;Not supported in older browsers (fallback required)&lt;/li&gt;
&lt;li&gt;Use for photos and complex, detailed images (with fallback)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to choose what to use?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For photos, use webP (with JPG fallback)&lt;/li&gt;
&lt;li&gt;For complex computer graphics, use PNG or JPG (whichever is smaller)&lt;/li&gt;
&lt;li&gt;For graphics with transparency, use PNG or webP&lt;/li&gt;
&lt;li&gt;For scalable computer graphics, icons and graphs, use SVGs&lt;/li&gt;
&lt;li&gt;Avoid animated GIFs at all costs; use videos instead&lt;/li&gt;
&lt;/ul&gt;
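
&lt;p&gt;As a sketch of the webP-with-fallback approach, the standard picture element lets the browser pick the first source it supports; the file names here are placeholders:&lt;/p&gt;

```html
&lt;picture&gt;
  &lt;source srcset="photo.webp" type="image/webp"&gt;
  &lt;!-- Browsers without webP support fall back to the img element --&gt;
  &lt;img src="photo.jpg" alt="Description of the photo"&gt;
&lt;/picture&gt;
```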

&lt;p&gt;&lt;strong&gt;Manual Optimizations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decide on the maximum visible size the image will have in the layout. No image should ever be displayed wider than a full HD monitor, 1920 pixels. Make sure you also restrict the display width of that image to 1920 pixels, and then center align it. Once you've settled on a width for an image, scale your image to fit that size.&lt;/li&gt;
&lt;li&gt;Experiment with compression in webP, JPG &lt;/li&gt;
&lt;li&gt;Simplify SVGs by removing unnecessary points and lines&lt;/li&gt;
&lt;li&gt;Compare file sizes for JPG, webP and PNG for computer graphics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Automated Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.npmjs.com/package/imagemin" rel="noopener noreferrer"&gt;Imagemin&lt;/a&gt; is a good choice. You can use it to build a custom optimization function in Node.js. Or add automated image optimization into your preferred build process. Imagemin CLI provides lossless compression for JPEG, PNGs, and GIFs.&lt;/li&gt;
&lt;li&gt;You can add dedicated lossy compression for each of them using plug-ins: &lt;a href="https://www.npmjs.com/package/imagemin-mozjpeg" rel="noopener noreferrer"&gt;Imagemin-mozjpeg&lt;/a&gt; for JPEGs. &lt;a href="https://www.npmjs.com/package/imagemin-pngquant" rel="noopener noreferrer"&gt;Imagemin-pngquant&lt;/a&gt; for PNGs and &lt;a href="https://www.npmjs.com/package/imagemin-webp" rel="noopener noreferrer"&gt;Imagemin-webp&lt;/a&gt; for webPs.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://squoosh.app/" rel="noopener noreferrer"&gt;Squoosh&lt;/a&gt; uses various compression algorithms to optimize images. And it has an &lt;a href="https://www.npmjs.com/package/@squoosh/cli" rel="noopener noreferrer"&gt;experimental CLI&lt;/a&gt; you can use to automate that process.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.npmjs.com/package/sharp" rel="noopener noreferrer"&gt;Sharp&lt;/a&gt; is also available for use.&lt;/li&gt;
&lt;/ul&gt;
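
&lt;p&gt;As a rough sketch of how the Imagemin CLI fits into a workflow (the paths are placeholders; the --plugin flag enables the lossy plug-ins mentioned above):&lt;/p&gt;

```shell
# Lossless pass over a folder of images
npx imagemin images/* --out-dir=dist/images

# Lossy JPEG compression via the mozjpeg plug-in
npx imagemin images/*.jpg --plugin=mozjpeg --out-dir=dist/images
```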

&lt;p&gt;Even a fully optimized image can slow down the performance of your site if it's delivered to the wrong browser at the wrong time. This is the problem &lt;a href="https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Responsive_images" rel="noopener noreferrer"&gt;Responsive Images Markup&lt;/a&gt; is meant to solve. &lt;/p&gt;

&lt;p&gt;We have responsive images attributes: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-srcset" rel="noopener noreferrer"&gt;srcset&lt;/a&gt; and &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-sizes" rel="noopener noreferrer"&gt;sizes&lt;/a&gt;.&lt;br&gt;
srcset allows you to provide a list of image sources for the browser to choose from, and sizes defines a set of media conditions (e.g. screen widths) indicating which image size is best to choose when each condition is true. The w descriptor gives the intrinsic pixel width of each image.&lt;br&gt;
For example:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a5bo4jq42z2wu9d4vlm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a5bo4jq42z2wu9d4vlm.png" alt="Screenshot (200)"&gt;&lt;/a&gt;&lt;br&gt;
If the viewport of the browser is 800 pixels wide, the browser will pick the 1200 pixel wide image because it is the closest size upwards. If you then scale up the viewport by widening the browser window, the browser will automatically pull down larger versions of the image to fill in the space if necessary. The important point is that by carefully planning your image sizes, you can deliver appropriately sized image files to all browsers and all devices. &lt;/p&gt;
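
&lt;p&gt;As an illustrative sketch (not the exact algorithm browsers use, which may also weigh caching and network conditions), the selection logic behind srcset can be approximated like this; the file names are placeholders:&lt;/p&gt;

```javascript
// Approximate how a browser picks a candidate from srcset:
// choose the smallest image whose intrinsic width ("w" descriptor)
// covers the space it must fill, i.e. viewport width times
// the device pixel ratio.
function pickFromSrcset(candidates, viewportWidth, dpr = 1) {
  const needed = viewportWidth * dpr;
  // Sort candidates by their intrinsic width, ascending.
  const sorted = [...candidates].sort((a, b) => a.w - b.w);
  for (const c of sorted) {
    if (c.w >= needed) return c.url; // first candidate that is big enough
  }
  // Nothing is big enough: fall back to the largest image available.
  return sorted[sorted.length - 1].url;
}

const candidates = [
  { url: "photo-800.jpg", w: 800 },
  { url: "photo-1200.jpg", w: 1200 },
  { url: "photo-1920.jpg", w: 1920 },
];

console.log(pickFromSrcset(candidates, 1000));   // photo-1200.jpg
console.log(pickFromSrcset(candidates, 800, 2)); // photo-1920.jpg
```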

&lt;p&gt;But for most of your images, the actual displayed width of the image is determined using CSS and media queries, and you rarely display all your images at full width in the browser. To address this, we have the sizes attribute. sizes holds a list of media queries and the corresponding width the image will be displayed at.&lt;/p&gt;

&lt;p&gt;For this image, if the viewport is 1200 pixels or wider, the image will always be displayed at 1200 pixels. The reason for still providing the 1920 pixel image is to serve a higher resolution image to higher resolution displays. The 100vw at the end of the sizes attribute says that for all other conditions, meaning screen widths under 1200 pixels, the image is always full width because this is a responsive layout.&lt;/p&gt;

&lt;p&gt;This is especially important when you have a design where an image has a max size smaller than the viewport width. Which is almost every single image on the web.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lazy Loading Images&lt;/strong&gt;&lt;br&gt;
Loading images, videos, and iframes the user never scrolls to has always been a major performance issue on the web. We're simply wasting data that we shouldn't be wasting. To deal with this issue, developers started adding lazy loading JavaScript libraries that would wait for the user to scroll close to an element before the image was loaded by the browser so that instead of loading all the images on a page, only the images the user would actually get to see inside the viewport were loaded by the browser.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yahgqo4wwo0u3uj84rz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yahgqo4wwo0u3uj84rz.png" alt="Screenshot (204)"&gt;&lt;/a&gt; &lt;br&gt;
Native lazy loading is activated using the loading attribute on the element in question: lazy means the asset is loaded only when it's close to the viewport, and eager means the asset is loaded immediately, even if it's nowhere near the viewport. There's also a fallback value called auto, but it's not yet in the specification. The loading attribute is non-destructive, meaning older browsers that do not understand it will simply ignore it and load all the assets as they normally would. If you want lazy loading support in older browsers as well, you can use a JavaScript solution like &lt;a href="https://www.npmjs.com/package/lazysizes" rel="noopener noreferrer"&gt;lazysizes&lt;/a&gt;, which has an extension plugin called native loading that serves the JavaScript solution only to browsers that do not support the loading attribute and the new built-in lazy loading feature.&lt;/p&gt;
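
&lt;p&gt;A minimal sketch of native lazy loading; the file names and dimensions are placeholders (width and height are set so the browser can reserve space before the image loads):&lt;/p&gt;

```html
&lt;img src="photo-800.jpg" loading="lazy" width="800" height="600" alt="A lazily loaded photo"&gt;
&lt;iframe src="embed.html" loading="lazy" title="A lazily loaded embed"&gt;&lt;/iframe&gt;
```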




&lt;h2&gt;
  
  
  JavaScript Optimization
&lt;/h2&gt;

&lt;p&gt;The code we write is optimized for humans, but if we want the code to be as fast as possible and to be performant, it needs to be rewritten for size and effectiveness, and that makes it unreadable for us humans. We now have tools to do this job for us in the form of code minimizers, packagers, bundlers, and more. At minimum, you'll need a development track where the human readable code is stored and a production track where the highly optimized and compressed machine-readable code is stored.&lt;/p&gt;

&lt;p&gt;How and when we compress, bundle, load, modularize, and execute JavaScript is becoming more and more important to improving performance. The same can be said for CSS. Modular and inline CSS, progressive loading, and other performance techniques are now essential to ensure the style of a site or application doesn't slow down its delivery. &lt;/p&gt;

&lt;p&gt;The modern web platform supports JavaScript modules: separate JavaScript files that export and import objects, functions, and other primitives from each other, so bundling all JavaScript into one big file makes no sense on the modern web.&lt;br&gt;
From a performance perspective, here's what should happen. On initial load, only the critical JavaScript necessary to get the app framework up and running and displaying something above the fold should be loaded. Once that's done and the user has something to look at, any JavaScript modules necessary for functionality should be loaded. From here on out, the browser should progressively load JavaScript modules only when they become relevant. &lt;br&gt;
JavaScript functionality should be modularized as much as possible and split into dedicated files.&lt;/p&gt;

&lt;p&gt;Several immediate benefits to this approach are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React uses components. JavaScript modules serve the exact same purpose, except they run on the web platform itself and you don't need a bundler to make them work.&lt;/li&gt;
&lt;li&gt;Modularization makes ongoing development easier because it provides clear separation of concerns.&lt;/li&gt;
&lt;li&gt;Modularizing, JavaScript and loading modules only when they are needed, brings significant performance benefits on initial load.&lt;/li&gt;
&lt;li&gt;Modularization means updating some feature in a JavaScript app does not require the browser to download the entire app bundle again. It just needs to download the updated module file with its features, which is way smaller. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When and how the browser loads each JavaScript file it encounters has a significant impact on both performance and functionality. &lt;/p&gt;

&lt;p&gt;If we add JavaScript to the head of an HTML document, it will always load and execute as soon as the browser encounters it, which is always before the body is rendered out. This will always cause render blocking. &lt;/p&gt;

&lt;p&gt;To prevent this blocking, JavaScript has traditionally been added at the very bottom of the body element. But this too causes blocking: as soon as the browser encounters the reference to the JavaScript, it stops, downloads the entire script, executes it, and only then goes back to rendering. In effect, the entire page is loaded before the JavaScript is even requested, which just adds to the performance problems.&lt;/p&gt;

&lt;p&gt;We have the &lt;strong&gt;async&lt;/strong&gt; and &lt;strong&gt;defer&lt;/strong&gt; keywords which instruct the browser to either load JavaScript files asynchronously while DOM rendering takes place, and then execute them as soon as they're available, or to load the files asynchronously and defer execution until the DOM rendering is done. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb1v1lvi522drhwo1jol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb1v1lvi522drhwo1jol.png" alt="Screenshot (209)"&gt;&lt;/a&gt; &lt;br&gt;
When we add the async attribute, the browser loads the JavaScript asynchronously, meaning it loads alongside the HTML parsing process. When the script is fully loaded, the browser pauses HTML rendering until the script has executed, then continues. Already we're seeing a significant performance enhancement because parsing isn't paused while the script is being downloaded. &lt;/p&gt;

&lt;p&gt;In JavaScript and other programming languages, a synchronous event means one event happens after another, in a chain. Asynchronous means the events happen independently of one another and one event doesn't have to wait for another to complete before it takes place. &lt;/p&gt;

&lt;p&gt;In the case of async JavaScript loading the loading is asynchronous, while the execution is synchronous.&lt;/p&gt;

&lt;p&gt;Use async anytime you're loading JavaScript and you don't need to wait for the whole DOM to be created first.&lt;/p&gt;

&lt;p&gt;Defer is slightly different. We're still loading the script asynchronously when the browser encounters it without render blocking. And then we literally defer the execution of the JavaScript until the HTML parsing is complete.&lt;/p&gt;

&lt;p&gt;This is effectively the same as placing the script tag at the end of the body element, except the script is loaded asynchronously, and is therefore much better for performance because we don't render out the entire HTML and then go download the JavaScript. The JavaScript is already downloaded. &lt;/p&gt;

&lt;p&gt;Use defer if you need to wait for the whole DOM to be loaded before executing the JavaScript or if the JavaScript can wait. &lt;/p&gt;

&lt;p&gt;So here are the performance-focused JavaScript loading best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Call JavaScript by placing the script tag in the head&lt;/li&gt;
&lt;li&gt;Anytime you load JavaScript in the head, always put async on there unless you have a reason to use defer.&lt;/li&gt;
&lt;li&gt;Defer any scripts that need the DOM to be fully built or scripts that you can defer because they don't need to execute right away.&lt;/li&gt;
&lt;li&gt;If, and only if, you need to support older browsers and you can't allow the browser to wait for things, load your script in the footer the old way and take the performance hit.&lt;/li&gt;
&lt;/ul&gt;
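
&lt;p&gt;Sketched as markup, assuming two hypothetical scripts (analytics.js does not touch the DOM, app.js does):&lt;/p&gt;

```html
&lt;head&gt;
  &lt;!-- Doesn't need the DOM: load alongside parsing, run as soon as it arrives --&gt;
  &lt;script src="analytics.js" async&gt;&lt;/script&gt;

  &lt;!-- Needs the full DOM: download now, execute after parsing finishes --&gt;
  &lt;script src="app.js" defer&gt;&lt;/script&gt;
&lt;/head&gt;
```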

&lt;p&gt;Lazy load JavaScript modules and their associated assets only when they're interacted with and needed using &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import" rel="noopener noreferrer"&gt;import&lt;/a&gt; statements.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import("/path/to/import-module.js")
  .then((module) =&amp;gt; {
    // do something with the module
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this, loading becomes conditional on the user's behavior, so you save the user a ton of data and only push content to the browser when it's needed.&lt;br&gt;
This whole concept can be used with any JavaScript module, including external &lt;a href="https://nodejs.org/api/esm.html#esm_introduction" rel="noopener noreferrer"&gt;ESM modules&lt;/a&gt;. &lt;br&gt;
To rewrite everything into highly optimized, human-unreadable code we can use minifiers and uglifiers. All major bundlers, including webpack, rollup, parcel, etc., ship with minifiers built in. The two most popular minifiers are &lt;a href="https://www.npmjs.com/package/@types/uglify-js" rel="noopener noreferrer"&gt;uglify-js&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/terser" rel="noopener noreferrer"&gt;terser&lt;/a&gt;.&lt;/p&gt;
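
&lt;p&gt;As a sketch, terser can also be run directly from the command line (file names are placeholders):&lt;/p&gt;

```shell
# Compress (-c), mangle names (-m), and write the minified file
npx terser app.js -c -m -o app.min.js
```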




&lt;h2&gt;
  
  
  CSS Optimization
&lt;/h2&gt;

&lt;p&gt;The number one measure of perceived performance is how fast something loads in the viewport of the browser. For a page to render, all the CSS has to be fully loaded, because CSS is a cascade and rule sets at the bottom of a style sheet may well impact rules higher up. If we serve the browser a huge style sheet with all the styles for the site, it takes a long time to load that style sheet, the content waits, and performance suffers. To get around this problem, developers have come up with a clever hack called &lt;strong&gt;critical CSS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;First, inline any styles impacting the content above the fold(in the viewport) in the HTML document itself as a style tag in the head. Then lazy load and defer the rest of the CSS, using a clever JavaScript trick, so it only loads when the page is fully loaded. &lt;br&gt;
&lt;a href="https://www.npmjs.com/package/critical" rel="noopener noreferrer"&gt;Critical&lt;/a&gt; helps us automate this process so that so you don't have to manually copy and paste code every time you update something.&lt;/p&gt;

&lt;p&gt;Critical reads the HTML and CSS, figures out which rule sets should be inlined, automatically inlines that CSS into the HTML document, separates the non-critical CSS into a separate style sheet, and then lazy loads the non-critical CSS. &lt;br&gt;
Because this tool is built into the tool chain, it can be set up to run on every build, so you don't have to keep tabs on which styles are critical. It also has a ton of options, so you can fully customize exactly what happens: the HTML file, the CSS, the viewport you're targeting, all of this can be configured.&lt;br&gt;
For example:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm2e24nfsfbpznibe67e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm2e24nfsfbpznibe67e.png" alt="Screenshot (212)"&gt;&lt;/a&gt; &lt;br&gt;
Critical actually spins up a browser, displays the contents at a defined viewport size, looks at which CSS affects the content inside that viewport, and splits that out into the critical CSS file. The viewport in the example is 320 pixels wide by 480 pixels high. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo7l55v50erj7twdfxpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo7l55v50erj7twdfxpc.png" alt="Screenshot (213)"&gt;&lt;/a&gt;&lt;br&gt;
The critical CSS is inlined so it runs before the DOM is even built, and defines the styling for the content above the fold.&lt;br&gt;
Below that we still have our link element, but it now points at the uncritical CSS, and you'll notice the media attribute is set to print. This is the JavaScript trick.&lt;br&gt;
A regular browser will identify itself as screen, so this style sheet will not be loaded, because it's set to load only for print, meaning when you're actually printing something. Then onload, an event triggered when the page is fully loaded, changes the media attribute to all instead. At that point, once everything else is done, this extra style sheet is loaded.&lt;/p&gt;
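
&lt;p&gt;That deferral trick can be sketched as (the file name is a placeholder):&lt;/p&gt;

```html
&lt;link rel="stylesheet" href="uncritical.css" media="print"
      onload="this.media='all'"&gt;
```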

&lt;p&gt;To see how much of your JavaScript and CSS and other code is loaded unnecessarily into the browser, you can use the coverage view in the browser dev tools. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev2qij6fh254i09ayrg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev2qij6fh254i09ayrg7.png" alt="Screenshot (220)"&gt;&lt;/a&gt; &lt;br&gt;
If you see anything marked in red here, it's a rule that is not currently being used on the page. This is what Critical does: it runs this type of process, identifies which rules are and aren't used within the viewport, and picks and chooses accordingly. &lt;br&gt;
If you have one giant style sheet, you need to compare all of your pages and do a bunch of work.&lt;br&gt;
A better solution would be to modularize our CSS, split it into smaller components, and load them only if they are needed. One way to do that is by deferring the loading of CSS until something happens. You already saw an example of that in Critical: the critical CSS was inlined, and the rest of the styles were put in the uncritical CSS file and deferred. &lt;/p&gt;

&lt;p&gt;So, here's a different way of doing the same thing.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20a9yruwitjp1i4bb1kq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20a9yruwitjp1i4bb1kq.png" alt="Screenshot (221)"&gt;&lt;/a&gt;&lt;br&gt;
Here we set the rel=preload and as=style attributes on the link element to tell the browser to preload this style sheet when there's processing capacity available, meaning loading is delayed to avoid render blocking. Then the onload attribute fires when the CSS is fully loaded and sets the rel attribute to stylesheet, so the browser recognizes it and renders it. The noscript element at the bottom is a fallback for browsers that don't have JavaScript; they will just immediately load the style sheet.&lt;br&gt;&lt;br&gt;
We could also:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p7088njjqzt6pns10i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p7088njjqzt6pns10i6.png" alt="Screenshot (222)"&gt;&lt;/a&gt;&lt;br&gt;
This style sheet will not be loaded by the browser at all until the disabled attribute is removed or set to false. You can then set up a JavaScript function to change the disabled attribute if, and only if, some event occurs, like activating a gallery or triggering some external function, and only then will the browser go out, pull down the style sheet, and mount it in the browser.   &lt;/p&gt;
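
&lt;p&gt;A minimal sketch of the disabled-attribute pattern; the file name and the data-gallery hook are hypothetical:&lt;/p&gt;

```html
&lt;link rel="stylesheet" href="gallery.css" disabled data-gallery&gt;
&lt;script&gt;
  // Run when the gallery is activated; clearing the disabled flag
  // makes the browser fetch and apply the style sheet.
  document.querySelector("link[data-gallery]").disabled = false;
&lt;/script&gt;
```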

&lt;p&gt;Lastly,&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxauyg8563wb2z8gyjs19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxauyg8563wb2z8gyjs19.png" alt="Screenshot (224)"&gt;&lt;/a&gt;&lt;br&gt;
Loading style sheets in body means you can have each component load its own style sheets on the fly. That way the component brings its own styles to the table and you don't have to load any styles you don't need. This makes for much cleaner and more manageable code and it falls in line with modern component-based development practices. &lt;/p&gt;

</description>
      <category>performance</category>
      <category>javascript</category>
      <category>css</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Web Performance Optimization- I</title>
      <dc:creator>Bipul Sharma</dc:creator>
      <pubDate>Mon, 02 Aug 2021 13:43:23 +0000</pubDate>
      <link>https://dev.to/bipul/web-performance-optimization-i-5d39</link>
      <guid>https://dev.to/bipul/web-performance-optimization-i-5d39</guid>
      <description>&lt;p&gt;&lt;strong&gt;About&lt;/strong&gt;&lt;br&gt;
Critical Rendering Path (CRP) and its Optimization, the PRPL pattern and Performance Budget.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Web performance is all about making web sites fast, including making slow processes seem fast. Good or bad website performance correlates powerfully to user experience, as well as the overall effectiveness of most sites. Websites and applications need to be fast and efficient for all users no matter what conditions the users are under. To make that happen we use performance optimizations. The MDN web docs breaks down performance optimization into four major areas. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Reducing overall load time&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compressing and minifying all files.&lt;/li&gt;
&lt;li&gt;Reducing the number of file and other HTTP requests sent back and forth between the server and the user agent.&lt;/li&gt;
&lt;li&gt;Employing advanced loading and caching techniques and conditionally serving the user with only what they need when they actually need it. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Making the site usable as soon as possible&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is done by loading critical components first to give the user initial content and functionality, then deferring less important features using lazy loading, which requests and displays content only when the user gets to or interacts with it, and by pre-loading features the user is likely to interact with next.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Smoothness and Interactivity&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improving the perceived performance of a site through skeleton interfaces, visual loaders and clear indication that something is happening and things are going to work soon. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Performance measurements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools and metrics to monitor performance and validate optimization efforts. The thing to keep in mind here is that not every performance optimization will fit your solution and needs.&lt;/li&gt;
&lt;li&gt;Browser tools measuring performance include Lighthouse (Chrome), the Network monitor, and the Performance monitor. There are also hosted third-party tools like PageSpeed Insights (Google), WebPageTest, and GTmetrix (which uses Lighthouse under the hood) that help measure performance. &lt;/li&gt;
&lt;li&gt;Key indicators that these tools use to describe the performance are:

&lt;ul&gt;
&lt;li&gt;First Paint- The time it takes before the user sees changes happening in the browser.&lt;/li&gt;
&lt;li&gt;Largest Contentful Paint (LCP)- The time it takes before the user sees the largest piece of content, such as text or images, rendered in the browser.&lt;/li&gt;
&lt;li&gt;First Meaningful Paint (FMP)- The time it takes before the user sees content that is actually meaningful, i.e. when above-the-fold content and web fonts are loaded and the user can derive meaning from what they are seeing.&lt;/li&gt;
&lt;li&gt;Time To Interactive (TTI)- The time it takes before the content has finished loading and the UI can be interacted with, so the user can click buttons, fill in forms, or do whatever else is going to happen on the site. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
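&lt;p&gt;As a small illustration of how such tools score a metric, here is a sketch that rates an LCP reading against the commonly used 2.5-second and 4-second thresholds (the exact cut-offs and labels vary by tool, so treat these as assumptions):&lt;/p&gt;

```javascript
// Hypothetical helper: classify a Largest Contentful Paint reading
// against the "good / needs improvement / poor" thresholds commonly
// used by tools like Lighthouse (2.5 s and 4 s).
function rateLcp(seconds) {
  if (seconds > 4) return "poor";
  if (seconds > 2.5) return "needs improvement";
  return "good";
}

console.log(rateLcp(1.8)); // "good"
console.log(rateLcp(3.0)); // "needs improvement"
console.log(rateLcp(5.2)); // "poor"
```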

&lt;p&gt;The longer it takes for a site to hit each of these points, the higher the chance of the user either getting annoyed or abandoning the user experience altogether. So good performance is better for your visitors, better for you because you don't have to pay as much for your hosting, better for your Google rankings, and finally, better for the environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Critical Rendering Path (CRP)
&lt;/h2&gt;

&lt;p&gt;To understand performance optimization, you first need a solid understanding of how typing something into the address bar of a browser results in the page being rendered in the viewport.&lt;/p&gt;

&lt;p&gt;It all starts with the browser sending a request for somesite.com to your Internet Service Provider.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0whp661zstc0r2in69t5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0whp661zstc0r2in69t5.png" alt="Screenshot (137)"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The ISP then sends the request on to a DNS (Domain Name System) server, the phone book of the web, which maps the website name you're seeking to that website's IP address.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu58mh7nmte2jxybxj6mb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu58mh7nmte2jxybxj6mb.png" alt="Screenshot (170)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This DNS lookup is done for each unique hostname. So if the site you're requesting uses externally hosted fonts, JavaScript libraries, images, videos, or other services, this DNS lookup happens for each of those different services. Anytime there's a new domain name, a new DNS lookup has to take place. This is the first major performance bottleneck. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3hghlx511zhbazn2vlq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3hghlx511zhbazn2vlq.png" alt="Screenshot (171)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do away with some of this performance overhead, the domain-name-to-IP-address association will probably be cached at numerous different steps: your ISP will cache this information, and it will also likely be cached in your router and on your computer. That way, when you send a request to a domain you requested before, instead of having to go through the whole DNS lookup again, the browser just pulls the cached answer from somewhere closer to the computer. But that also means that if the DNS record has changed in the meantime, you'll get an incorrect address and things won't work as expected.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftltlow5w40tulns4p1kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftltlow5w40tulns4p1kk.png" alt="Screenshot (172)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the IP address is established, the browser and server perform what's called a TCP handshake, where they exchange identity keys and other information to establish a temporary connection and working relationship. This is also where the type of connection is determined: is this a regular HTTP connection, or an encrypted HTTPS connection? If the latter, encryption keys are exchanged, and if both the browser and the server support it, the connection is upgraded from HTTP/1.1 to HTTP/2, which provides substantial performance enhancements.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosi7i9mutvo2m9whsbwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosi7i9mutvo2m9whsbwn.png" alt="Screenshot (173)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now have a connection and everything is ready to go. At this point, the browser sends an HTTP GET request for the resource it's looking for. This initial GET request will be for whatever the default file on the server location is, typically index.html or index.php or index.js or something similar to that. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h8pwv8h7zdjtcf3r2yg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h8pwv8h7zdjtcf3r2yg.png" alt="Screenshot (174)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The time it takes for the browser to finally receive the first byte of the actual page it's looking for is measured as Time To First Byte (TTFB). The first piece of data the browser receives, called a packet, is typically around 14 kilobytes; the packet size then doubles with every new transfer. That means if you want something to happen right away, you need to cram it into those first 14 kilobytes.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodb7h2rwyx1jghfc2w2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodb7h2rwyx1jghfc2w2q.png" alt="Screenshot (175)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser now receives the HTML document and starts reading it from top to bottom, parsing the data as it goes. The HTML is turned into a DOM tree, and the CSS is turned into a CSSOM tree, an object model for the CSS of the page, which makes it possible for the browser to render the CSS and for JavaScript to interact with it. As the document is parsed, the browser also loads in any external assets as they are encountered. That means anytime it encounters a new CSS file, or a reference to anything else, it sends a new request, the server responds by sending the file back, it gets placed into the system, and the browser starts rendering that as well. &lt;/p&gt;

&lt;p&gt;In the case of JavaScript, though, the browser stops everything else and waits for the file to be fully downloaded. Why? Because there's a good chance the JavaScript wants to make changes to either the DOM or the CSSOM or both. This is what's known as render blocking: whatever rendering was happening stops and is literally blocked for as long as the browser is waiting for the JavaScript to be fully loaded and then fully executed. Once all of this parsing is done, the rendering can begin in earnest, and here the browser combines the DOM and CSSOM to style, layout, paint, and composite the document in the viewport. &lt;/p&gt;

&lt;p&gt;The metric Time to First Contentful Paint refers to how long it takes for all of this to happen. What's important for our purposes is to remember what's actually happening; that way we can identify bottlenecks and add performance enhancements to get past them as quickly as possible.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h8l0bwdqztg0g6ibcei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h8l0bwdqztg0g6ibcei.png" alt="Screenshot (176)"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Optimizing the CRP
&lt;/h2&gt;

&lt;p&gt;When you interact with content on the web today, you're using one of two different versions of the HTTP protocol, either the old HTTP/1.1 or the more modern HTTP/2. Which protocol version is in use has a significant impact on the performance of the site. In HTTP/1.1, all files requested by the browser are loaded synchronously, one after the other. So a typical HTML page with two style sheets, a couple of images, and some JavaScript would require the browser to first load the HTML document, then the CSS files, then the JavaScript files, and finally the image files one after the other. This is slow, inefficient, and a recipe for terrible performance.&lt;/p&gt;

&lt;p&gt;To work around this obvious issue, browsers cheat by opening up to six parallel connections to the server to pull down data. However, this creates what's known as head-of-line blocking, where the first file, the HTML file, holds back the rest of the files from downloading. It also puts enormous strain on the internet connection and on the infrastructure of both the browser and the server, because you're now operating with six connections instead of one single connection. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6hsxf2tn03kp1ie86yk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6hsxf2tn03kp1ie86yk.png" alt="Screenshot (182)"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In HTTP/2, we have what's known as multiplexing. The browser can download many separate files at the same time over one connection, and each download is independent of the others. That means with HTTP/2, the browser can start downloading a new asset as soon as it's encountered, and the whole process happens significantly faster.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshimn8xvwxqtqhfecccn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshimn8xvwxqtqhfecccn.png" alt="Screenshot (177)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, for HTTP/2 to work, a few key conditions need to be met. Number one, the server must support HTTP/2. Number two, the browser must also support HTTP/2. And number three, the connection must be encrypted over HTTPS. If any of these conditions are not met, the connection automatically falls back to HTTP/1.1. So bottom line: for instant performance improvements with minimal work, get an SSL certificate for your domain and ensure your server supports HTTP/2. &lt;/p&gt;

&lt;p&gt;Identifying which bottlenecks cause performance issues for you is the key to performance optimization. The server itself can contribute to poor performance.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl8c6awbw7omopht157c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl8c6awbw7omopht157c.png" alt="Screenshot (178)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next bottleneck is the connection made between the browser and the servers hosting the files necessary to render the page. For each of these connections, that whole DNS and TCP handshake loop needs to take place, which slows down the whole process.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz6u2oj8o29vyhlmguvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz6u2oj8o29vyhlmguvl.png" alt="Screenshot (186)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How many files are downloaded and in what order those files are downloaded has an impact on performance.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzd5bc3pfe73ehmbu0yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzd5bc3pfe73ehmbu0yf.png" alt="Screenshot (185)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Caching (storing copies of assets) is also one of the main methods of performance optimization. It can happen on the server, on the CDN, or in the browser.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching on the Server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're running a site relying on server-side rendering, meaning each page or view is generated on the fly by the server when it is requested, caching may provide a huge performance boost. With caching enabled, the server no longer has to render the page every time it is requested. &lt;br&gt;
Instead, when the page is first rendered, a snapshot of that page is created and stored in the server cache. The next time a visitor comes to the site, they'll be handed this stored cached snapshot instead of a freshly rendered page. This is why static site generators have become so popular: they produce pre-rendered, cacheable static pages and bypass the entire CMS server-side rendering problem. The challenge with this type of caching lies in dynamic features: every time a new comment is added, for example, the cache needs to be cleared and the page regenerated. Even so, caching should be enabled for all sites relying on server-side rendering, because the performance benefits are so significant.  &lt;/p&gt;
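&lt;p&gt;The render-once, serve-many idea can be sketched in a few lines. Everything here is hypothetical (renderPage stands in for expensive template or CMS work), but the shape is the same as what a real server-side cache does:&lt;/p&gt;

```javascript
// Minimal sketch of server-side caching: render once, store the
// snapshot, serve the snapshot afterwards, and clear it when the
// underlying content changes (e.g. a new comment is added).
const cache = new Map();
let renders = 0;

function renderPage(slug) {
  renders += 1; // stand-in for expensive template/CMS work
  return "rendered page for " + slug;
}

function getPage(slug) {
  if (!cache.has(slug)) {
    cache.set(slug, renderPage(slug)); // cache miss: render and store
  }
  return cache.get(slug); // cache hit: serve the snapshot
}

function invalidate(slug) {
  cache.delete(slug); // e.g. called when a comment is added
}

getPage("/blog/post-1"); // rendered (renders = 1)
getPage("/blog/post-1"); // served from cache (renders still 1)
invalidate("/blog/post-1");
getPage("/blog/post-1"); // re-rendered (renders = 2)
```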

&lt;ul&gt;
&lt;li&gt;Caching on the CDN&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CDNs are effectively external caching services for sites. CDNs can also do edge computing: the CDN renders the page when requested and then caches it itself. This edge approach works well with modern static site generators like Gatsby and other JavaScript-based site generators and frameworks, because they serve up static assets by default and are built to work in this modern web architecture. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching in the browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two main things we can do here: one, store existing assets, so if the visitor returns to the site the browser already has all the information cached; and two, push files to the browser early, so by the time the browser requests a file, it is already sitting in the cache. All browsers do some level of caching automatically, and we can then instruct the browser on exactly how we want our assets cached. For assets that are unlikely to change, such as the main style sheets, JavaScript, and images, long cache durations make sense. For assets that are likely to change over time, short cache durations or no caching at all may make more sense.&lt;/p&gt;
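&lt;p&gt;In practice, browser caching is controlled with the Cache-Control response header. A sketch of the policy just described, expressed as header values per asset (the file names and durations are assumptions; tune them per project):&lt;/p&gt;

```javascript
// Cache-Control values matching the strategy above: long-lived caching
// for fingerprinted assets that rarely change, revalidation for HTML.
const cachePolicy = {
  // fingerprinted CSS/JS rarely change in place: cache for a year
  "main.abc123.css": "public, max-age=31536000, immutable",
  "app.def456.js": "public, max-age=31536000, immutable",
  // HTML changes often: always revalidate with the server
  "index.html": "no-cache",
};

console.log(cachePolicy["index.html"]); // "no-cache"
```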

&lt;p&gt;To ensure new and updated assets always make it to the visitor, we can use cache-busting strategies like appending automatic hashes to file names, or we can rely on the server itself to keep track of the file name and file date for each file and handle the caching automatically. You can also split up CSS and JavaScript files into smaller modules, so when you update something in CSS or JavaScript, instead of having to re-cache an entire style sheet for an entire site, you're just re-caching the module that has that update.&lt;/p&gt;




&lt;h2&gt;
  
  
  PRPL and Performance Budget
&lt;/h2&gt;

&lt;p&gt;To achieve the best possible performance for your website or application always keep the &lt;strong&gt;PRPL&lt;/strong&gt; pattern in mind. &lt;br&gt;
This is an acronym that stands for:&lt;br&gt;
&lt;strong&gt;Push or preload&lt;/strong&gt; important resources to the browser, using server push for the initial load and service workers for subsequent loads; the application will run faster.&lt;br&gt;
&lt;strong&gt;Render&lt;/strong&gt; the initial route as soon as possible by serving the browser with critical CSS and JavaScript; the perceived performance of the application will be improved. &lt;br&gt;
&lt;strong&gt;Pre-cache&lt;/strong&gt; remaining assets so they are available when the browser needs them.&lt;br&gt;
&lt;strong&gt;Lazy load&lt;/strong&gt; all non-critical assets so they only load when they are actually needed, such that we reduce the time to initial load and save the visitor from wasting their bandwidth on assets they will never use. &lt;/p&gt;
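&lt;p&gt;The lazy-load step can be sketched as an on-demand loader. The names here are hypothetical (in a real app, loadChartLibrary would typically be a dynamic import()); the point is that the cost is paid only on first use, and the cached promise makes later uses free:&lt;/p&gt;

```javascript
// Sketch of the lazy-loading idea in PRPL: a non-critical module is
// only fetched the first time it is needed, then reused from a cache.
let chartLibraryPromise = null;
let loads = 0;

function loadChartLibrary() {
  loads += 1; // stand-in for a network fetch / dynamic import()
  return Promise.resolve({ draw: () => "chart drawn" });
}

function getChartLibrary() {
  if (chartLibraryPromise === null) {
    chartLibraryPromise = loadChartLibrary(); // first use: actually load
  }
  return chartLibraryPromise; // later uses: cached promise
}

// Only when the user opens the stats view do we pay the loading cost:
getChartLibrary().then((lib) => console.log(lib.draw()));
getChartLibrary().then(() => console.log(loads)); // still 1
```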

&lt;p&gt;The number one metric that determines the performance of your site or app is its weight. &lt;br&gt;
&lt;strong&gt;Performance budget&lt;/strong&gt; gives you a metric to measure every new feature against and a tool to use when hard decisions need to be made. A performance budget may include limits on the total page weight, total image weight, number of HTTP requests, maximum number of fonts or images or external assets, etc.&lt;br&gt;
We now have tools that we can integrate into our build processes, like Webpack's performance options, available directly inside Webpack, and Lighthouse's LightWallet, which gives you the ability to test your builds against the performance budget at any time and get flagged anytime your images, your JavaScript, your CSS, or anything else is too big.&lt;/p&gt;

&lt;p&gt;Some best practice metrics for Performance budget are:   &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Speed Index is under three seconds. &lt;/li&gt;
&lt;li&gt;Time To Interactive is under five seconds.&lt;/li&gt;
&lt;li&gt;The Largest Contentful Paint is under one second.&lt;/li&gt;
&lt;li&gt;The Max Potential First Input Delay is under 130 milliseconds.&lt;/li&gt;
&lt;li&gt;The maximum size of the Gzipped JavaScript bundle is under 170kb.&lt;/li&gt;
&lt;li&gt;The total bundle size is under 250kb, and all of this happens on a low-powered feature phone on 3G.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, these performance budget metrics are strict and really difficult to hit. They're also the metrics used by tools like Lighthouse to test for performance.&lt;/p&gt;

&lt;p&gt;So the question here comes how to create a realistic Performance Budget?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build separate performance budgets for mobile devices on slow networks and for laptop/desktop devices on fast networks.&lt;/li&gt;
&lt;li&gt;Do a performance audit.&lt;/li&gt;
&lt;li&gt;Set reasonable goals based on the audit.&lt;/li&gt;
&lt;li&gt;Test the production version against the performance budget.&lt;/li&gt;
&lt;li&gt;Do a competitor performance audit: make your performance goal better than your competitor's.&lt;/li&gt;
&lt;li&gt;Test all work against the performance budget, though performance budgets are unique to each project and will change over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://dev.to/bipul/web-performance-optimization-ii-2799"&gt;Part- II&lt;/a&gt;&lt;/p&gt;

</description>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>optimization</category>
    </item>
    <item>
      <title>Managing UI State in a React App</title>
      <dc:creator>Bipul Sharma</dc:creator>
      <pubDate>Fri, 16 Jul 2021 11:27:56 +0000</pubDate>
      <link>https://dev.to/bipul/managing-ui-state-in-a-react-app-3kpb</link>
      <guid>https://dev.to/bipul/managing-ui-state-in-a-react-app-3kpb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;UI state is state that is only useful for controlling the interactive parts of our React app. The other kind is server cache: state stored on the server that we keep on the client for quick access, like user data.&lt;/p&gt;

&lt;p&gt;useState is a Hook that lets you add React state to function components. &lt;/p&gt;
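&lt;p&gt;As a rough, non-React sketch of the contract useState gives you (a value plus a setter that triggers a re-render), here is the idea in plain JavaScript. The real hook is tied to React's renderer; this only illustrates the shape:&lt;/p&gt;

```javascript
// Tiny sketch of the idea behind useState: state plus a setter that
// notifies the renderer. createStateHolder is hypothetical, not React.
function createStateHolder(initial, onChange) {
  let value = initial;
  function setValue(next) {
    value = next;
    onChange(value); // in React this would schedule a re-render
  }
  return [() => value, setValue];
}

const renders = [];
const [getCount, setCount] = createStateHolder(0, (v) => renders.push(v));

setCount(getCount() + 1);
setCount(getCount() + 1);
console.log(getCount()); // 2
console.log(renders); // [1, 2] -- one "render" per state change
```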

&lt;h2&gt;
  
  
  Lifting State Up
&lt;/h2&gt;

&lt;p&gt;If we have two sibling components that both need the same state (or functions/methods), we need to lift that state up to their closest common parent component and pass the data down to the components that need it through props.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fTTMSY7n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd2il8izq2st2o2a0bxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fTTMSY7n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd2il8izq2st2o2a0bxg.png" alt="Screenshot (78)" width="880" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prop Drilling
&lt;/h2&gt;

&lt;p&gt;Now, if some component far away in the component tree needs the state, you need to lift the state all the way to the top and pass props to all intermediate components to get the data down to the component that actually needs it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GhEV3Or---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1hkfnue25av28grsw8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GhEV3Or---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1hkfnue25av28grsw8f.png" alt="Screenshot (79)" width="880" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what is called prop drilling- passing data from one part of the React Component tree to another by going through other parts that do not need the data but only help in passing it around.&lt;/p&gt;

&lt;p&gt;Let's take a simple example:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eFKveLFD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rsgpn3djookplatz71e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eFKveLFD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rsgpn3djookplatz71e.png" alt="cont1" width="880" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, UserPanel component and UserPanelContent component are intermediate components acting as tunnels to get the data down to the Welcome component that actually needs the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Composition to avoid Prop Drilling
&lt;/h2&gt;

&lt;p&gt;Instead of making components that render components and wiring props everywhere like this, you can compose things together through the children prop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tEwgRBrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j89kp5b0j56cw5gkvukv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tEwgRBrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j89kp5b0j56cw5gkvukv.png" alt="cont3" width="880" height="1082"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the children prop increases the ability to compose, i.e. it makes the UserPanel customizable (you can choose what goes inside the panel), and it also eliminates prop drilling.&lt;/p&gt;

&lt;p&gt;There are various situations where prop drilling can cause real pain, especially during refactoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Context is designed to share data that can be considered “global” for a tree of React components, such as the current authenticated user, theme, or preferred language. If you only want to avoid passing some props through many levels, component composition is often a simpler solution than context.&lt;br&gt;
 ~React docs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If three components require the same data (or methods), we choose a common parent, wrap it in a Provider, and then wrap the components that require the data/methods in a Consumer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R_rEWhD5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvilr86cfov1f9jm1dvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R_rEWhD5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvilr86cfov1f9jm1dvz.png" alt="Screenshot (80)" width="880" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Taking example from the previous code:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AKlaQ-r5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70kehuzu3s7z966yfj2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AKlaQ-r5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70kehuzu3s7z966yfj2f.png" alt="cont2" width="880" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the useContext hook gives us an extra, much prettier way to consume context.&lt;br&gt;
If you have a really large application and the entire thing is surrounded by a Context Provider, anytime you make a change to that state it is going to re-render everything nested under the Provider, which can make interactions feel slow. Adding useMemo improves performance: it says, if these values don't change, don't re-render these components.  &lt;/p&gt;
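&lt;p&gt;The idea behind useMemo can be illustrated in plain JavaScript: skip recomputing when the inputs haven't changed, just as React skips work when the dependency array is unchanged. The helper below is a hypothetical sketch, not React's implementation:&lt;/p&gt;

```javascript
// Plain-JS sketch of the memoization idea behind useMemo: reuse the
// last result while the inputs stay the same.
function memoizeLast(fn) {
  let lastArgs = null;
  let lastResult = null;
  return function (...args) {
    const unchanged =
      lastArgs === null ? false : args.every((a, i) => a === lastArgs[i]);
    if (!unchanged) {
      lastArgs = args;
      lastResult = fn(...args); // only recompute when the inputs change
    }
    return lastResult;
  };
}

let computations = 0;
function add(a, b) {
  computations += 1; // stand-in for an expensive calculation or render
  return a + b;
}

const memoAdd = memoizeLast(add);
memoAdd(2, 3); // computes: 5
memoAdd(2, 3); // same inputs: cached result, no recompute
memoAdd(2, 4); // inputs changed: recomputes
console.log(computations); // 2
```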

&lt;h2&gt;
  
  
  Outro
&lt;/h2&gt;

&lt;p&gt;Many UI state management tools are available, like Redux, MobX, Recoil, Jotai, and XState, which may be used, but React is all you need to manage your application's UI state most of the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://egghead.io/courses/react-state-management-in-2021-6732"&gt;https://egghead.io/courses/react-state-management-in-2021-6732&lt;/a&gt;&lt;br&gt;
&lt;a href="https://reactjs.org/docs/composition-vs-inheritance.html#so-what-about-inheritance"&gt;https://reactjs.org/docs/composition-vs-inheritance.html#so-what-about-inheritance&lt;/a&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
