<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashutosh Sharma</title>
    <description>The latest articles on DEV Community by Ashutosh Sharma (@iamserverless).</description>
    <link>https://dev.to/iamserverless</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F493932%2F352126e9-77f1-400f-865b-0c6c8e675964.jpg</url>
      <title>DEV Community: Ashutosh Sharma</title>
      <link>https://dev.to/iamserverless</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamserverless"/>
    <language>en</language>
    <item>
      <title>DNS Resolution: Optimization Tools and Opportunities</title>
      <dc:creator>Ashutosh Sharma</dc:creator>
      <pubDate>Wed, 18 Nov 2020 15:59:41 +0000</pubDate>
      <link>https://dev.to/iamserverless/dns-resolution-optimization-tools-and-opportunities-23d</link>
      <guid>https://dev.to/iamserverless/dns-resolution-optimization-tools-and-opportunities-23d</guid>
      <description>&lt;p&gt;DNS resolution is the first thing that happens when a request is made to a remote server. It is a process of finding the computer-friendly address of the remote server using a human-friendly domain name.&lt;/p&gt;

&lt;p&gt;There are a few performance improvement opportunities, such as tuning cache invalidation time and preferring an A record over a CNAME. But before exploring these, let's understand how DNS resolution actually works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiwm5q3k33jodgq3y82mj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiwm5q3k33jodgq3y82mj.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A network request is made to the server using its domain name.&lt;/li&gt;
&lt;li&gt;The browser first checks its DNS cache; if the entry is present it uses that IP address, else it asks the operating system.&lt;/li&gt;
&lt;li&gt;The operating system checks its cache (and a few other things, like the hosts file); if the entry is present it returns it, else it asks the resolver.&lt;/li&gt;
&lt;li&gt;Resolvers are usually ISP (Internet Service Provider) servers, but they can be configured to any other DNS service provider.&lt;/li&gt;
&lt;li&gt;The resolver checks whether the domain is in its cache; if so it returns it, else it asks the root servers.&lt;/li&gt;
&lt;li&gt;Root servers sit at the top of the DNS hierarchy and know the addresses of the TLD (top-level domain, like .com, .net, .org) servers.&lt;/li&gt;
&lt;li&gt;Based on the type of domain, root servers direct the resolver to the corresponding TLD servers. For a .com domain, root servers direct it to the .com TLD servers.&lt;/li&gt;
&lt;li&gt;TLD servers know the addresses of the domain's name servers.&lt;/li&gt;
&lt;li&gt;Name servers know the real IP address of the requested domain.&lt;/li&gt;
&lt;li&gt;The browser receives the resolved IP address.&lt;/li&gt;
&lt;/ol&gt;
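&lt;p&gt;The chain above can be sketched as a tiny in-memory simulation. This only illustrates the delegation order; every record below is fabricated:&lt;/p&gt;

```javascript
// Toy model of the DNS delegation chain:
// cache -> resolver -> root -> TLD -> name server.
const browserCache = new Map();
const rootServers = { com: 'tld-com' }; // root knows the TLD servers
const tldServers = { 'tld-com': { 'example.com': 'ns-example' } }; // TLD knows the name servers
const nameServers = { 'ns-example': { 'example.com': '93.184.216.34' } }; // NS knows the IP

function resolve(domain) {
  if (browserCache.has(domain)) return browserCache.get(domain); // step 2: cache hit
  const tldKey = rootServers[domain.split('.').pop()]; // steps 6-7: root directs to TLD
  const nsKey = tldServers[tldKey][domain]; // step 8: TLD knows the name server
  const ip = nameServers[nsKey][domain]; // step 9: name server returns the IP
  browserCache.set(domain, ip); // remember it for next time
  return ip;
}

console.log(resolve('example.com')); // first call walks the full chain
console.log(resolve('example.com')); // second call is served from cache
```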

&lt;p&gt;That is a lot to remember. Let's keep it simple, focus on what is in our control, and find the scope for performance improvements.&lt;/p&gt;

&lt;h1&gt;
  
  
  DNS Management Tools
&lt;/h1&gt;

&lt;p&gt;A registrar is a place where one buys a domain. The registrar provides name servers and a few other DNS management tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Registrars and DNS management tools&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Web Service(AWS) — Route 53&lt;/li&gt;
&lt;li&gt;Google Cloud — domains.google.com&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The main jobs of DNS management tools&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provide name server details to TLDs.&lt;/li&gt;
&lt;li&gt;Add domain validation and authorization config for third parties.&lt;/li&gt;
&lt;li&gt;Define cache invalidation time, or Time To Live (TTL): the duration for which resolvers, browsers, etc. may cache IP addresses.&lt;/li&gt;
&lt;li&gt;Define how to resolve a particular request: for mail, check the MX record; for HTTP/HTTPS, check the A, AAAA, or CNAME record.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  DNS Performance Optimization Opportunities
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Increase cache invalidation time
&lt;/h2&gt;

&lt;p&gt;Increasing cache invalidation time ensures that domain IP addresses are served from the nearest cache, resulting in lower-latency DNS resolution.&lt;/p&gt;

&lt;p&gt;This becomes a problem in cases where the domain-to-IP mapping changes frequently.&lt;/p&gt;

&lt;p&gt;To handle such cases follow these steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set TTL to 0 so that new lookups are not cached.&lt;/li&gt;
&lt;li&gt;Wait for the previous TTL duration to ensure the old cache entries have expired.&lt;/li&gt;
&lt;li&gt;Make the new changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This avoids downtime from any planned change. But what if the server crashes or something unexpected happens? For such cases, keeping a static IP and reassigning it to a new server will help.&lt;/p&gt;

&lt;p&gt;Let's look at the TTL value of some popular domains.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apple.com — 60 min&lt;/li&gt;
&lt;li&gt;microsoft.com — 60 min&lt;/li&gt;
&lt;li&gt;booking.com — 24 hrs&lt;/li&gt;
&lt;li&gt;google.com — 5 min&lt;/li&gt;
&lt;li&gt;indiatimes.com — 20 min&lt;/li&gt;
&lt;li&gt;godaddy.com — 10 min&lt;/li&gt;
&lt;li&gt;azure.microsoft.com — 60 min&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Increasing TTL is a tradeoff between availability and performance. Controlling availability with high TTL is possible but needs extra effort and care.&lt;/p&gt;
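&lt;p&gt;The TTL tradeoff can be illustrated with a minimal resolver-style cache; the timing logic is the point here, and the record values are made up:&lt;/p&gt;

```javascript
// Minimal TTL-bound cache: entries are reused until their TTL expires,
// mimicking what resolvers and browsers do with DNS answers.
class DnsCache {
  constructor() { this.entries = new Map(); }
  set(domain, ip, ttlSeconds, now) {
    this.entries.set(domain, { ip, expires: now + ttlSeconds * 1000 });
  }
  get(domain, now) {
    const entry = this.entries.get(domain);
    if (!entry) return null;
    if (now >= entry.expires) { // TTL elapsed: the client must re-resolve
      this.entries.delete(domain);
      return null;
    }
    return entry.ip; // fast, local answer
  }
}

const cache = new DnsCache();
cache.set('example.com', '203.0.113.7', 300, 0); // 5 min TTL, like google.com
console.log(cache.get('example.com', 60000));    // within TTL: cached IP
console.log(cache.get('example.com', 301000));   // after TTL: null, re-resolve
```

A higher TTL stretches the window in which the stale IP keeps being served, which is exactly the availability risk described above.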

&lt;h2&gt;
  
  
  2. Use A or AAAA record wherever possible in place of CNAME
&lt;/h2&gt;

&lt;p&gt;CNAME or Canonical Name is like recursion where one domain resolves to another domain. The DNS resolution algorithm keeps looking until it finds the real IP address.&lt;/p&gt;

&lt;p&gt;In most cases, replacing a CNAME will not be possible because you have no control over the resolved IP address. This rule applies only where the IP address is known but a CNAME is still preferred for manageability; DNS records, if not maintained properly, become unmanageable in most mid- to large-scale organizations.&lt;/p&gt;

&lt;p&gt;Some CDN and DNS service providers use the concept of CNAME flattening to resolve the IP directly without going through the whole chain of DNS resolution. Opt for it if your CDN or DNS service provider supports it.&lt;/p&gt;
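&lt;p&gt;The extra lookups a CNAME chain costs, and what flattening removes, can be sketched like this (all records are illustrative):&lt;/p&gt;

```javascript
// Each CNAME hop is one more lookup before the client gets an IP.
// "Flattening" means the DNS provider follows the chain server-side
// and answers with the final A record directly.
const records = {
  'www.example.com': { type: 'CNAME', value: 'cdn.example.net' },
  'cdn.example.net': { type: 'CNAME', value: 'edge.example.net' },
  'edge.example.net': { type: 'A', value: '198.51.100.10' },
};

function resolveChain(name) {
  let hops = 0;
  let current = records[name];
  while (current.type === 'CNAME') { // keep following until an A record
    hops++;
    current = records[current.value];
  }
  return { ip: current.value, lookups: hops + 1 };
}

console.log(resolveChain('www.example.com'));  // { ip: '198.51.100.10', lookups: 3 }
console.log(resolveChain('edge.example.net')); // { ip: '198.51.100.10', lookups: 1 }
```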

&lt;h2&gt;
  
  
  3. Use a CDN that runs its own name servers
&lt;/h2&gt;

&lt;p&gt;CDNs integrate in one of two ways.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They ask you to replace your registrar's name servers with theirs.&lt;/li&gt;
&lt;li&gt;They ask you to add a CNAME for domain resolution.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both approaches have their own pros and cons. The first is good for fast DNS resolution; the second gives more flexibility and control to the maintainer.&lt;/p&gt;

&lt;p&gt;CDNs have other limitations: they are not yet equipped to serve dynamic content. There is some development in this area, like Lambda@Edge (AWS), but there is still a long path to cover.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use custom name servers (only for large-scale applications)
&lt;/h2&gt;

&lt;p&gt;The purpose of name servers is to provide a real IP address that corresponds to a domain. Using custom logic a domain can be resolved to a different IP each time it receives a request.&lt;/p&gt;

&lt;p&gt;CDNs use this approach to serve content from the host nearest to the user, but they can't be used for dynamic content.&lt;/p&gt;

&lt;p&gt;Using custom name servers to resolve IP addresses based on region can significantly reduce latency. For example, with servers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In California for users in North America&lt;/li&gt;
&lt;li&gt;In Paris for users in Europe&lt;/li&gt;
&lt;li&gt;In Mumbai for users in Asia&lt;/li&gt;
&lt;li&gt;In Sydney for users in Australia&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now people in North America get their content directly from servers in California, and people in India get content from Mumbai. Data belonging to a region can be stored within that region, with fallbacks to other regions.&lt;/p&gt;

&lt;p&gt;This gives a lot of flexibility for dynamic logic without compromising on performance.&lt;/p&gt;

&lt;p&gt;There are lots of complexities in this approach. One problem is database sharding: keeping a region's data close to that region while still being able to serve content from other shards.&lt;/p&gt;

&lt;p&gt;Many large-scale organizations use this approach to distribute traffic and serve requests quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Improving DNS resolution can have a huge impact on the performance of a site. But every possible optimization has some cost.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Increasing Time To Live (TTL) reduces latency but impacts availability.&lt;/li&gt;
&lt;li&gt;A CNAME is not always replaceable with the corresponding A record.&lt;/li&gt;
&lt;li&gt;CDNs are not yet ready for dynamic content.&lt;/li&gt;
&lt;li&gt;Custom name servers are hard to put in place.&lt;/li&gt;
&lt;li&gt;Figure out the appetite for performance at your org and tune your DNS settings accordingly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Previously published at &lt;a href="https://ashu.online/blogs/optimize-dns-resolution-for-fast-website" rel="noopener noreferrer"&gt;https://ashu.online/blogs/optimize-dns-resolution-for-fast-website&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>dns</category>
      <category>aws</category>
    </item>
    <item>
      <title>Lighthouse: Expectations vs. Reality</title>
      <dc:creator>Ashutosh Sharma</dc:creator>
      <pubDate>Sun, 08 Nov 2020 19:04:20 +0000</pubDate>
      <link>https://dev.to/iamserverless/lighthouse-expectations-vs-reality-45nn</link>
      <guid>https://dev.to/iamserverless/lighthouse-expectations-vs-reality-45nn</guid>
      <description>&lt;p&gt;When someone starts looking for optimizing the performance of their web application they immediately come across this tool called lighthouse by Google.&lt;/p&gt;

&lt;p&gt;Lighthouse is an awesome tool to quickly find the performance issues in your web application and list actionable items. This list helps you quickly fix issues and see a green performance score on your Lighthouse report. Over time Lighthouse has become a de facto standard for web performance measurement, and Google is pushing it everywhere: Chrome DevTools, browser extensions, PageSpeed Insights, web.dev, and even the Webmaster search console. Anywhere performance is discussed, you will see the Lighthouse auditing tool.&lt;/p&gt;

&lt;p&gt;This article covers the usage of Lighthouse, its strengths and weaknesses, and where to trust it and where not to. Google has eagerly advertised the tool's benefits and integrated it into all of its other major tools, like the search console, PageSpeed Insights, and web.dev. This directly or indirectly forces people to improve their scores, sometimes at the cost of something important. Many teams do weird things to see green ticks in their Lighthouse report without knowing the exact impact on their conversion and usability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issues that need to be tackled
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A) CPU power issue
&lt;/h3&gt;

&lt;p&gt;Lighthouse has made it very easy to generate a performance report for your site. Just open your site, go to DevTools, click the Audit tab, and run the test. Boom, you have the results. But wait, can you trust the score you just got? The answer is a big no. Results vary a lot between a high-end machine and a low-end machine because of the different CPU cycles available to the Lighthouse process. You can check the CPU/memory power available to the Lighthouse process during the test at the bottom of your Lighthouse report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F84rhfzfqie7cikuvzd16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F84rhfzfqie7cikuvzd16.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lighthouse team has done a great job of throttling the CPU to bring computation cycles down to the average of the most-used devices, like the Moto G4 or Nexus 5X. But on a very high-end machine, like a new MacBook Pro, throttling does not bring CPU cycles down to the desired level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Say a high-end processor like an Intel i7 can execute 1200 instructions in a second; throttling it 4x means only 300 instructions get executed.&lt;/p&gt;

&lt;p&gt;Similarly, a low-end processor like an Intel i3 may only execute 400 instructions in a second, and throttling it 4x leaves only 100 instructions.&lt;/p&gt;

&lt;p&gt;It means everything on an Intel i7, or any other higher-end processor, executes faster and results in much better scores. One of the critical metrics in Lighthouse is TBT (Total Blocking Time), which depends heavily on CPU availability. High CPU availability means fewer long tasks (tasks that take more than 50ms), and the fewer the long tasks, the lower the TBT value and the higher the performance score.&lt;/p&gt;
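&lt;p&gt;As a rough sketch, the long-task arithmetic behind TBT looks like this (the task durations are invented, and real TBT is measured over the window between FCP and TTI):&lt;/p&gt;

```javascript
// TBT sums the portion of each main-thread task that exceeds 50 ms.
// A throttled (slower) CPU stretches every task, so more of them cross
// the 50 ms line and the total blocking time grows.
const LONG_TASK_THRESHOLD = 50; // ms

function totalBlockingTime(taskDurations) {
  return taskDurations.reduce(function (sum, d) {
    return sum + Math.max(0, d - LONG_TASK_THRESHOLD);
  }, 0);
}

const tasksFastCpu = [40, 60, 30, 120]; // ms on a fast machine
const tasksThrottled4x = tasksFastCpu.map(function (d) { return d * 4; });

console.log(totalBlockingTime(tasksFastCpu));     // 10 + 70 = 80 ms
console.log(totalBlockingTime(tasksThrottled4x)); // 800 ms: every task is long now
```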

&lt;p&gt;This is not the only problem: Lighthouse scores can differ between multiple executions on the same machine. This is because Lighthouse, or in fact any application, cannot control CPU cycles; that is the job of the operating system. The operating system decides which process gets how many computation cycles and can reduce or increase CPU availability based on a number of factors, like CPU temperature and other high-priority tasks.&lt;/p&gt;

&lt;p&gt;Below are the Lighthouse scores on the same machine when Lighthouse is executed 5 times against housing.com, once serially and once in parallel. The serial results are completely different from the parallel ones, because the CPU cycles available from the operating system are distributed across all 5 processes when run in parallel but are available to a single process when run serially.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;When the lighthouse is executed 5 times on the housing home page serially using the below code&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let numberOfTests = 5;
 let url = 'https://housing.com';
 let resultsArray = [];
 (async function tests() {
  for(let i =1;i &amp;lt;= numberOfTests; i++) {
   let results = await launchChromeAndRunLighthouse(url, opts)
   let score = results.categories.performance.score*100;
   resultsArray.push(score);
  }
  console.log(median(resultsArray));
  console.log(resultsArray);
 }());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Median is 84&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[ 83, 83, 84, 84, 85]&lt;/p&gt;

&lt;p&gt;The results are pretty much consistent.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;When the same test is executed in parallel.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const exec = require('child_process').exec;
const lighthouseCli = require.resolve('lighthouse/lighthouse-cli');
const {computeMedianRun as median} = require('lighthouse/lighthouse-core/lib/median-run.js');

let results = [], j=0;
for (let i = 0; i &amp;lt; 5; i++) {
exec(`node ${lighthouseCli} 
 https://housing.com 
 --output=json`, (e, stdout, stderr) =&amp;gt; {
   j++;
   results.push(JSON.parse(stdout).categories.performance.score);
   if(j === 5) {
    console.log(median(results));
    console.log(results);
    }
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Median is 26&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[ 22, 25, 26, 36, 36 ]&lt;/p&gt;

&lt;p&gt;You can clearly see the difference in scores between the two approaches.&lt;/p&gt;

&lt;h3&gt;
  
  
  B) Lighthouse covers only the most generic issues and does not understand your application's behavior
&lt;/h3&gt;

&lt;p&gt;This is the most complex issue with Lighthouse reporting. Every application is different and optimizes the available resources where it sees the best fit.&lt;/p&gt;

&lt;p&gt;Gmail is the best example of this. It prioritizes email over everything else, and mail becomes interactive as soon as the application loads in the browser. Other applications, like Calendar, Keep, Chat, and Tasks, keep loading in the background.&lt;/p&gt;

&lt;p&gt;If you open the dev tools while Gmail is loading, you might get a heart attack seeing the number of requests it makes to its servers. Calendar, Chat, Keep, etc. add a lot to its application payload, but Gmail's entire focus is on email. Lighthouse fails to understand that and gives the Gmail application a very poor score.&lt;/p&gt;

&lt;p&gt;There are many similar applications, like Twitter and the revamped version of Facebook, that have worked extensively on performance but that Lighthouse marks as poorly performing.&lt;/p&gt;

&lt;p&gt;All of these companies have some of the best brains, who understand the tool's limitations well, so they know what to fix and which Lighthouse suggestions to ignore. The problem is with organizations that do not have the resources and time to explore and understand these limitations.&lt;/p&gt;

&lt;p&gt;Search Google for “perfect lighthouse score” and you will find hundreds of blogs explaining how to achieve 100 on Lighthouse. Most of them have never checked other critical metrics, like conversion or bounce rate.&lt;/p&gt;

&lt;p&gt;One big issue with Google's integrations of Lighthouse is that these tools are mostly used by non-technology people. The Google search console, which helps analyze a site's position in Google search results, is mostly used by marketing teams.&lt;/p&gt;

&lt;p&gt;Marketing teams report performance issues reported in the search console to higher management who do not understand the limitations of the tool and force the tech team to improve performance at any cost (as it may bring more traffic).&lt;/p&gt;

&lt;p&gt;Now the tech team has two options: either push back and explain the tool's limitations to higher management, which happens rarely, or make bad decisions that may hurt other critical metrics like conversion and bounce rate. Many large companies lack processes to regularly check these crucial metrics.&lt;/p&gt;

&lt;p&gt;The only solution to this issue is to measure more and regularly. Define core metrics your organization is concerned about and prioritize them properly. Performance has no meaning if it is at the cost of your core metrics like &lt;strong&gt;conversion&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving the score inconsistency issue
&lt;/h2&gt;

&lt;p&gt;Inconsistency in Lighthouse scores cannot be eliminated entirely, but it can be controlled to a great extent.&lt;/p&gt;

&lt;h3&gt;
  
  
  A) Using hosted services
&lt;/h3&gt;

&lt;p&gt;Cloud services are an awesome way to test your site quickly and get a basic performance idea. Some of Google's implementations, like PageSpeed Insights, try to limit the inconsistency by including both Lighthouse lab data and field data (Google tracks the performance of the sites you visit if you allow it to sync your history). WebPageTest queues test requests to control CPU cycles.&lt;/p&gt;

&lt;p&gt;But again they also have their own limitations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tests cannot all run serially, as that would increase waiting time. Running them in parallel on different machines would push infra costs through the roof, and parallel execution on the same machine results in uneven CPU cycle distribution.&lt;/li&gt;
&lt;li&gt;Different providers have different throttling settings; some prefer not to throttle the CPU when testing desktop sites, which may or may not be the right setting for most people.&lt;/li&gt;
&lt;li&gt;Services need servers all around the world (webpagetest.org already has this feature) to understand latency behavior in your target location.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You will be amazed at the delta between the minimum and maximum of ten test runs of a single page on web.dev. Prefer to take the median of all results, or remove the outliers and average the remaining tests.&lt;/p&gt;
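&lt;p&gt;Aggregating repeated runs might look like this sketch (the sample scores are the parallel-run numbers quoted above):&lt;/p&gt;

```javascript
// Take the median of repeated runs, or drop the min/max outliers and
// average the rest; both damp run-to-run CPU noise.
function medianOf(scores) {
  const sorted = scores.slice().sort(function (a, b) { return a - b; });
  return sorted[Math.floor(sorted.length / 2)];
}

function trimmedMean(scores) {
  const sorted = scores.slice().sort(function (a, b) { return a - b; });
  const trimmed = sorted.slice(1, sorted.length - 1); // drop one low, one high
  return trimmed.reduce(function (s, x) { return s + x; }, 0) / trimmed.length;
}

const runs = [22, 25, 26, 36, 36]; // the parallel-run scores from above
console.log(medianOf(runs));       // 26
console.log(trimmedMean(runs));    // (25 + 26 + 36) / 3 = 29
```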

&lt;h3&gt;
  
  
  B) Self-hosted Lighthouse instance
&lt;/h3&gt;

&lt;p&gt;The Lighthouse team has again done a great job here by providing a CI layer for self-hosting. The product is lighthouse-ci.&lt;/p&gt;

&lt;p&gt;This is an amazing tool that integrates with your CI provider (GitHub Actions, Jenkins, Travis, etc.), and you can configure it to your needs: check the performance diff between two commits, or trigger a Lighthouse test on every new PR. Running it in a Docker instance lets you control CPU availability to some extent and get consistent results. We are doing this at housing.com and are quite happy with the consistency of the results.&lt;/p&gt;

&lt;p&gt;The only problem with this approach is that it is complex to set up. We wasted weeks understanding what exactly was going on. The documentation needs a lot of improvement, and the integration process should be simplified.&lt;/p&gt;

&lt;h3&gt;
  
  
  C) Integrating Web Vitals
&lt;/h3&gt;

&lt;p&gt;Web Vitals are core performance metrics exposed by Chrome's performance APIs and have a clear mapping to Lighthouse. They are used to track field data; send the tracked data to GA or any other tool you use for that purpose. We use perfume.js, as it provides more metrics we are interested in along with all the metrics supported by web-vitals.&lt;/p&gt;

&lt;p&gt;This is the most consistent and reliable of all the approaches, as it reflects the average performance of your entire user base. We were able to make huge progress in optimizing our application by validating against this data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqgeyes3m6dq4oos33434.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqgeyes3m6dq4oos33434.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We worked on improving our Total Blocking Time(TBT) and the Largest Contentful Paint(LCP) after identifying problem areas. We improved TBT by at least 60% and LCP by 20%.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;TBT improvements Graph&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl8ahnns32evg3kx50j9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl8ahnns32evg3kx50j9n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;LCP improvements graph&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fybnn77wu8581qegvzcx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fybnn77wu8581qegvzcx6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above improvements were only possible because we were measuring things. Measuring your critical metrics is the only way to maintain the right balance between performance, conversion, etc. Measuring will help you know when performance improvement is helping your business and when it is creating problems.&lt;/p&gt;

&lt;p&gt;Developers apply all sorts of tricks to improve their lighthouse scores. From lazy loading offscreen content to delaying some critical third-party scripts. In most cases, developers do not measure the impact of their change on user experience or the users lost by the marketing team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Considering lighthouse suggestions
&lt;/h2&gt;

&lt;p&gt;Lighthouse performance scores mostly depend upon three parameters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How fast the page renders (FCP, LCP, Speed Index)&lt;/li&gt;
&lt;li&gt;Page Interactivity (TBT, TTI)&lt;/li&gt;
&lt;li&gt;Stability (CLS)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To improve your performance score, the Lighthouse report provides tons of suggestions. You need to understand each suggestion, check how feasible it is, and determine how much value it will bring to your website.&lt;/p&gt;

&lt;p&gt;Let us take a few suggestions from each category of the Lighthouse report and see the hidden costs of implementing them.&lt;/p&gt;

&lt;h3&gt;
  
  
  How fast the page renders (FCP, LCP, Speed Index)
&lt;/h3&gt;

&lt;p&gt;Lighthouse suggests optimizing images by using modern formats such as WebP or AVIF and resizing them to the dimensions of the image container. This is a very cool optimization that can have a huge impact on your LCP score. You can enhance it further by preloading first-fold images or serving them via server push.&lt;/p&gt;

&lt;p&gt;Building a system where images are resized on the fly, or pre-resized to multiple possible dimensions on upload, is a tedious task. Either way, depending on your scale, you might take on a huge infra burden that needs to be maintained and continuously invested in.&lt;/p&gt;

&lt;p&gt;A better approach is to implement it on a single page for a limited set of images and track your most critical metrics, like conversion and bounce rate. If you are really happy with the ROI, then roll it out for all of your images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Page Interactivity (TBT, TTI)
&lt;/h3&gt;

&lt;p&gt;Lighthouse recommends reducing your JavaScript and CSS size as much as possible. JavaScript or CSS execution can choke the main thread, leaving the CPU unavailable for more important work like handling user interaction. This is a fair idea, and most people understand the limitation of JS being single-threaded.&lt;/p&gt;

&lt;p&gt;But Google took the wrong path here. In an upcoming version, Lighthouse will start suggesting replacing larger libraries with their smaller counterparts. There are multiple problems with this approach.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Most libraries get larger because they solve more corner cases and feature requests. People say webpack is tough because it handles so many edge cases that no other bundler does. Imagine if webpack did not exist: half of us would be stuck understanding the different kinds of module systems JS supports. Similarly, the popular frontend frameworks are large because they handle so many things, from backward compatibility to more bugs. Jumping to a new library may bring issues like weak documentation and bugs, so if you plan to pick this item, get ready to have an expert developer team.&lt;/li&gt;
&lt;li&gt;It is highly unlikely that Google will recommend Preact over React because of the emotional attachment the community has with React. Doing this is unprincipled and unfair to the maintainers of projects whose communities are not aggressive in nature.&lt;/li&gt;
&lt;li&gt;Google itself does not follow the rules it created. Most Google products load way too much JavaScript. A company with the best resources in the world has never focused on its own Lighthouse scores but wants the entire world to take them seriously. There seems to be a hidden agenda here: the faster the web, the better Google's ad revenue and the smaller its crawl infra requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Google should learn from this famous quote&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Be the change that you wish to see in the world.”&lt;br&gt;
Mahatma Gandhi&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before taking any step to reduce the JavaScript on your page, like lazy loading off-screen components, please calculate its impact on your primary metrics, like conversion and user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stability (CLS)
&lt;/h3&gt;

&lt;p&gt;Every website must try to avoid any kind of layout shift that may hurt the user experience. But there will be cases where you do not have many options to avoid CLS.&lt;/p&gt;

&lt;p&gt;Say a website wants to promote app downloads to users who have not already installed the app. Chrome has added support for detecting whether the app is already installed on the device (using the getInstalledRelatedApps API), but this information is not available to the server on the first request.&lt;/p&gt;

&lt;p&gt;What the server can do is guess whether it should append the app download banner to the page. If the server adds it and the app is already present on the device, the download banner must be removed from the page. Similarly, if the server omits the banner and the app is not installed, the banner is appended to the DOM on the client, which triggers a Cumulative Layout Shift (CLS).&lt;/p&gt;

&lt;p&gt;To avoid CLS you can remove the banner from the main layer of the page and show it as a modal or floating element, or find some other way to show it. But what if you get the most downloads when the banner is part of the page? Where will you compromise?&lt;/p&gt;

&lt;p&gt;On a funny note, most people have already experienced CLS on the Google search result page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Femjudge3yht1b0pq7gqn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Femjudge3yht1b0pq7gqn.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Lighthouse is an awesome performance tool built by Google and can help you improve your website's performance.&lt;/li&gt;
&lt;li&gt;There are multiple issues related to how Lighthouse works and the consistency of its results.&lt;/li&gt;
&lt;li&gt;Devices with different configurations can give completely different scores, so it is important to stick to a single device configuration when running Lighthouse.&lt;/li&gt;
&lt;li&gt;The same device can give different scores based on how much CPU is available to the Lighthouse process during the test.&lt;/li&gt;
&lt;li&gt;Using cloud solutions like web.dev gives more consistent results than running Lighthouse on your local machine.&lt;/li&gt;
&lt;li&gt;Running a self-hosted service is better than cloud solutions, whose results can become inconsistent based on the amount of traffic they are handling; Lighthouse settings can also be tuned better in a self-hosted environment.&lt;/li&gt;
&lt;li&gt;A self-hosted environment requires expertise and time because of limited resources and documentation, but it is very scalable and integrates well with the most popular CI providers.&lt;/li&gt;
&lt;li&gt;Tracking real user data is the most reliable way to track web performance. Google's web-vitals and perfume.js are some lovely libraries for tracking real user data.&lt;/li&gt;
&lt;li&gt;Define the metrics critical to your website, like conversion, bounce rate, and user experience. Plan any optimization suggested by Lighthouse after tracking its impact on your critical metrics.&lt;/li&gt;
&lt;li&gt;Never do premature optimization for the sake of a high Lighthouse score. Simple lazy loading of offscreen components to reduce JavaScript size can in some cases drastically degrade the user experience, so exercise caution when making such changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Originally published at &lt;a href="https://ashu.online/blogs/lighthouse-performance-auditing-things-you-should-know" rel="noopener noreferrer"&gt;https://ashu.online/blogs/lighthouse-performance-auditing-things-you-should-know&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>lighthouse</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
