
Jim Medlock


Using the Chrome DevTools Audit Feature to Measure and Optimize Performance (Part 2)

Tuning the Application

Photo by rawpixel.com from Pexels

In Part 1, we presented questions to help frame the application performance problem, defined a performance tuning process, and demonstrated how to use the Performance Audit feature of Chrome Developer Tools to create a performance baseline.

In this article, we’ll use the Meteorite Explorer frontend web application to show how to identify, measure, and correct specific performance issues.

The Meteorite Explorer repo is hosted on GitHub and the starting point, before any tuning has been performed, is located in the feature/01-initial-app branch.

Review the Application’s Performance

Performance tuning starts with understanding how the application operates and having a clear picture of its performance attributes. Both are necessary because even though you might possess in-depth knowledge about the inner workings of the application, how its different parts interact and how well it scales may not be clear.

As the developer, and possibly the creator, of the app, it’s up to you to understand how it does what it does. This includes understanding the application’s performance attributes, a task the Audit feature makes easier.

Every application is unique and has the potential to present special situations that may make deciphering it quite tricky. However, reviewing the DevTools Performance Audit and following the suggestions it provides is a good starting point.

Step 1 — Review Audit Metrics

Let’s reexamine the Metrics section of the Performance Audit report for the Meteorite Explorer application (see Figure 4). Even though Meteorite Explorer’s overall performance score looks acceptable, we can see that two areas need to be corrected — a poor speed index and high input latency.

A poor speed index indicates that the page is being rendered very slowly, while high input latency indicates an application that responds slowly to user input. This is borne out by the accompanying metrics, which show that 6.5 seconds elapse from the time the application starts until it displays meaningful information, and that another 0.9 seconds is needed to respond to user input.

The trace (see Figure 5) compares what the user sees on the screen with the activity on the main thread, which reinforces that rendering speed is an issue. The trace’s summary section shows that 3.7 seconds are spent in JavaScript, and hovering the cursor over the area of the Main thread timeline under the ‘Microtasks’ bar shows that the majority of this time is spent executing DOMNodeInserted events.

These two facts, which identify the areas of the application consuming the most elapsed time, will be revisited when we start to identify possible improvements.

Step 2 — Review Opportunities

Implementing the opportunities highlighted by the Performance Audit (see Figure 6) is “low hanging fruit.” Even though the savings may be minimal, the Performance Audit has done the hard work for us by identifying the problem, suggesting solutions, and providing a savings estimate.

In the case of Meteorite Explorer, reducing the number of round trips between the client and the host to retrieve the meteorite landing data can save an additional 0.3 seconds. Here, the data in question is the JSON file containing meteorite landings retrieved from https://data.nasa.gov.

As with the review of Audit Metrics, at this point no action will be taken to resolve this issue. Instead, we’ll come back to it after we identify possible improvements.

Step 3 — Review Diagnostics

The last step in the review of application performance is to examine the list of diagnostics (see Figure 7). The diagnostics for Meteorite Explorer are:

Uses an excessive DOM size — 10,073 nodes

Recall that we discovered in the review of Audit Metrics that there was a high volume of DOMNodeInserted events. Coupling that fact with this diagnostic indicates that reducing the number of nodes we add to the DOM can reduce elapsed time by up to 3.7 seconds.

Has excessive main thread work — 18,540 ms

In this profile, 18.5 seconds were consumed on the application’s main thread. We know that 3.7 seconds of this was devoted to adding nodes to the DOM, but where did the rest of the time go? Clicking on this diagnostic provides a breakdown.

Figure 10 — Audit Diagnostics

Once again, knowing that inserting DOM nodes takes a considerable amount of time, intuition tells us that a good part of the time on the main thread was devoted to updating the DOM.

JavaScript boot-up time is too high — 4,660 ms

Knowing that JavaScript boot-up time is too high is important, but even more important is understanding why. Clicking the diagnostic displays the URLs accessed during boot-up in descending order of total time.

Figure 11 — Javascript Boot-up Time Diagnostic

The first two URLs account for almost half of the boot-up time. The first is the Grammarly plugin, and the second is our application. The latter is known to be the application since it resides in the build directory of the repo; the npm run build command is used to create the production application in this directory.

Figure 12 — Build Directory

Uses inefficient cache policy on static assets — 3 assets found

This diagnostic indicates that time could be saved by caching the static assets the application requires rather than retrieving them each time they are needed. At this point, we know that all three assets are part of the application since they reside in the build directory of the repo (see Figure 12).

If you aren’t familiar with how to create a cache policy, the ‘Learn more’ link in the diagnostic is an excellent source for both background information and solutions to this issue.

Figure 13 — Inefficient Cache Policy Diagnostic

Critical request chains — 3 chains found

The final diagnostic identifies three request chains that run at a high priority and are somewhat long. Just as we discovered when reviewing the cache policy diagnostic, these three chains are all part of our application logic (see Figure 12).

This audit is intended to point out an opportunity to improve page load time rather than being an absolute “pass” or “fail” condition.

Figure 14 — Critical Request Chain Diagnostic

Identify Possible Improvements

Photo by Kaleidico on Unsplash

After reviewing the Audit metrics, opportunities, and diagnostics, it’s always a good idea to summarize and prioritize the significant findings. In the same way that a coach sets up plays for a sports team, organizing the facts establishes a starting point for identifying the changes we’d like to make to improve the application’s performance.

The task now is to identify and prioritize solutions for each of the performance observations made during the review of Meteorite Explorer’s performance. Prioritization is critical since we will work on performance issues in priority order until we reach the point of diminishing returns.

Poor Speed Index — 6.9 seconds

The trace showing the application spending considerable time processing DOMNodeInserted events, coupled with diagnostics indicating an excessive DOM size, excessive main thread work, and high JavaScript boot-up time, is a clear indicator that Meteorite Explorer is manipulating the DOM inefficiently.

It is at this point that having in-depth knowledge of the application logic comes into play. In App.js, the componentDidMount function synchronously loads the meteorite landing JSON file and sets the isDataLoaded state variable to true when the load completes.

  componentDidMount() {
    fetch(process.env.REACT_APP_METEORITE_STRIKE_DATASET)
    .then((response) => {
      return response.json();
    })
    .then((strikesJSON) => {
      this.setState({ meteoriteStrikes: strikesJSON });
      this.setState({ isDataLoaded: true});
    });
  }
Figure 15 — componentDidMount function in App.js

Once the meteorite landing file has been loaded, its contents, all 46K, are added to the <MeteoriteTable> component.

<section className="App-results">
  <div>
    {this.state.isDataLoaded
       ? ( <MeteoriteTable meteoriteStrikes={ this.state.meteoriteStrikes }
             searchTerms={ this.state.searchTerms } /> )
       : (' ')
    }
  </div>
</section>
Figure 16 — Populating rows in the MeteoriteTable component

Two changes will be made to solve this performance issue.

  1. Solution: Implement a service worker to cache the meteorite landing data to reduce the number of round trips to the NASA host. Expected result: Reduction of the high JavaScript boot-up time to an acceptable level.
  2. Solution: Change the <MeteoriteTable> component to support pagination to reduce the number of DOM nodes. Expected result: Decrease the total number of DOM nodes and time required on the main thread to an acceptable level.

In both of these cases, the measure of success is returning the Speed Index to green status and eliminating the excessive DOM size, excessive main thread work, and high JavaScript boot-up time diagnostics.

Input Latency — 0.9 seconds

Reading the supplemental information for this issue reveals that the Performance Audit feature defines a laggy application as one taking longer than 100ms to respond to user input.

Again, knowledge of the application is an essential factor in identifying possible solutions. Meteorite Explorer‘s search function uses the Lodash debounce function to space out calls resulting from user keypresses, so a rapid burst of keystrokes produces a single call instead of one call per keystroke.

this.emitChangeDebounce = debounce(this.queryName, 150);

This issue is the result of the 150ms wait interval used when processing the input of search terms.
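For context, the sketch below shows how this wiring typically looks in a React class component; the component, handler, and prop names are assumptions for illustration rather than the exact code in the repo.

  import React from 'react';
  import debounce from 'lodash/debounce';

  class SearchBar extends React.Component {
    constructor(props) {
      super(props);
      // Coalesce a burst of keystrokes into a single query; the 150ms wait
      // interval is what contributes to the input latency metric.
      this.emitChangeDebounce = debounce(this.queryName.bind(this), 150);
      this.handleChange = this.handleChange.bind(this);
    }

    // Invoked once the wait interval expires, with the most recent input value
    queryName(value) {
      this.props.onSearchTermsChange(value);
    }

    handleChange(event) {
      this.emitChangeDebounce(event.target.value);
    }

    render() {
      return <input type="text" onChange={this.handleChange} />;
    }
  }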

Solution: Reduce the debounce wait time to 95ms. Expected result: Elimination of the input latency issue with no impact on the user experience.

Inefficient Cache Policy & Critical Request Chains — Unknown

The most likely cause of this issue is that the cache-control policy isn’t correctly specified. Cache control is a complicated subject, and you can learn more in this tutorial.

Figure 17 shows that the cache-control setting has a maximum age of 600 seconds. This setting is too low for a production environment where the application doesn’t change all that often.

Figure 17 — Cache-Control Setting

Solution: Increase the maximum age setting in the cache-control header to 86,400 seconds, or one day. Expected result: Elimination of the ‘Inefficient cache policy’ diagnostic.
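GitHub Pages doesn’t let us control response headers (something we’ll run into when testing this solution), but on a host we do control, the change amounts to a one-line server setting. The following is a minimal sketch assuming a plain Express static server; it is not part of the Meteorite Explorer repo.

  // Hypothetical Express server for the production build directory; GitHub
  // Pages doesn't allow this, but a self-hosted deployment would.
  const express = require('express');
  const app = express();

  // Serve static assets with a one-day max-age so browsers cache them
  // instead of re-fetching on every visit (Cache-Control: public, max-age=86400).
  app.use(express.static('build', { maxAge: '1d' }));

  app.listen(3000);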

To correct the ‘Critical request chains’ diagnostic, a change in the way App.js retrieves the meteorite landing data is necessary. Since JavaScript is single-threaded, there is no option to create new threads for specific subtasks as there is in other languages. However, executing this activity asynchronously keeps it from blocking UI activity.

Solution: Change the function componentDidMount located in App.js to retrieve meteorite landing data asynchronously. Expected result: Elimination of the ‘Critical request chains’ diagnostic.
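A minimal sketch of what that change might look like follows, reusing the names from the existing componentDidMount in Figure 15; the try/catch error handling is an illustrative addition, and the change actually committed to the repo may differ.

  // Sketch only: retrieve the meteorite landing data with async/await
  async componentDidMount() {
    try {
      const response = await fetch(process.env.REACT_APP_METEORITE_STRIKE_DATASET);
      const strikesJSON = await response.json();
      this.setState({ meteoriteStrikes: strikesJSON, isDataLoaded: true });
    } catch (error) {
      // Illustrative error handling; the original code has none
      console.error('Unable to load meteorite landing data', error);
    }
  }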

Test Improvement Ideas

Photo by Nicolas Thomas on Unsplash

Once the solutions, expected results, and success criteria for the top three issues are identified, it is time to implement them and see whether they produce the expected outcomes. The following steps will be used to resolve each issue:

  1. Create a branch for each solution.
  2. Test the application to ensure the change hasn’t impacted its functionality.
  3. Run a new Performance Audit and compare it against the original audit to determine the performance impact.

Solution #1 — Implement Service Worker (branch feature/02-service-worker)

Create React App already includes service worker support for applications running in production mode. By default, the service worker is disabled in index.js, but it can be enabled by replacing serviceWorker.unregister() with serviceWorker.register().
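For reference, here is a minimal sketch of that change as it appears in Create React App projects of this vintage; the serviceWorker module referenced below is the one Create React App generates.

  // src/index.js (bottom of the file generated by Create React App)
  import * as serviceWorker from './serviceWorker';

  // The generated default is serviceWorker.unregister(); switching it to
  // register() opts the production build into service worker caching.
  serviceWorker.register();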

To test the service worker’s impact, publish the feature/02-service-worker branch to GitHub Pages using the npm run publish command and rerun the Performance Audit.

Rerunning the Performance Audit shows the service worker has had the desired effect of not just reducing the Speed Index time, but returning it to an acceptable (i.e., green) status.

Figure 18 — Service Worker Results

Notice that the Estimated Input Latency increased by 192 ms. But since we’ll be reducing the debounce wait time later, we’ll wait to see what effect that change has on latency.

Solution #2 — Add Pagination to the <MeteoriteTable> Component (branch feature/03-pagination)

The number of nodes in the DOM is excessive because Meteorite Explorer unconditionally adds every meteorite landing to the <MeteoriteTable> component. Adding pagination to the table corrects this by limiting the number of rows displayed at any one time.
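A minimal sketch of the pagination idea follows; the currentPage and rowsPerPage state, the field names, and the omission of search-term filtering are assumptions for illustration rather than the exact code in the feature/03-pagination branch.

  // Sketch: render only the current page of landings instead of every row.
  // currentPage and rowsPerPage are assumed pieces of MeteoriteTable state;
  // filtering by searchTerms is omitted for brevity.
  renderRows() {
    const { meteoriteStrikes } = this.props;
    const { currentPage, rowsPerPage } = this.state; // e.g., page 0, 25 rows

    const start = currentPage * rowsPerPage;
    const pageOfStrikes = meteoriteStrikes.slice(start, start + rowsPerPage);

    return pageOfStrikes.map((strike) => (
      <tr key={strike.id}>
        <td>{strike.name}</td>
        <td>{strike.mass}</td>
        <td>{strike.year}</td>
      </tr>
    ));
  }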

Figure 19 — Pagination UI Changes

After implementing this change, the ‘Uses an excessive DOM size’ diagnostic is no longer included as one requiring correction.

Figure 20 — Pagination Results

Solution #3 — Reduce debounce wait time to 95ms (branch feature/04-debounce)

Sometimes solutions don’t achieve their desired results. In the case of this solution, reducing the debounce wait time from 150ms to 95ms, and again to 75ms, didn’t diminish the input latency as expected. However, examining the Metrics trace shows that the Grammarly browser extension is consuming a significant amount of time.

Figure 21 — Grammarly Main Thread Trace

Disabling this extension didn’t have the desired effect of resolving the input latency issue, but the overall health indicator has risen to 98, and the JavaScript boot-up time and ‘Critical request chains’ diagnostics have been reduced to acceptable levels.

Figure 22 — Results after disabling Grammarly

Solution #4 — Inefficient Cache Policy & Critical Request Chains

The 'Inefficient cache policy' diagnostic should be corrected by increasing the max-age setting for Meteorite Explorer’s cache-control to 86,400 seconds by changing the HTTP response headers sent from the server. However, this is a client-only application, and since we’re using GitHub Pages as our host, that option isn’t available.

Instead, adding the following to index.html was attempted, but it had no effect since the max-age is being set to 600 by the server.

<meta http-equiv="Cache-Control" content="must-revalidate, public, max-age=86400">

Similarly, changes to componentDidMount in App.js didn’t have the desired result of eliminating the 'Critical request chains' diagnostic. This diagnostic remains even after changing the fetch of meteorite landing data to be asynchronous instead of synchronous.

Figure 23 — Asynchronous Meteorite Landing Data Retrieval

Even though we weren’t able to eliminate these diagnostics, application performance is significantly better as a result of the tuning efforts. The impact is that the overall performance rating, as measured by the Performance Audit, has increased from 72 to 99.

Figure 24 — Ending Results

However, the failure to resolve the 'Inefficient cache policy' and 'Critical request chains' diagnostics shouldn’t be ignored, since doing so increases technical debt. Even though we are at a point of diminishing returns for the application as it currently stands, these diagnostics are likely to become more critical as the application is enhanced.

To prepare for the future, additional research and study are needed to learn more about these diagnostics and to identify how to solve them. Stay tuned…

Wrapping It Up

Creating a performant application doesn’t just happen. It takes thought, methodology, and hard work. The commitment to maintaining performance over the life of an application requires that each new release be measured, monitored, and improved. More importantly, it requires recognizing performance as a crucial aspect of user experience and satisfaction.

An additional benefit, just as important as maintaining the quality of the user experience, is the impact an objective, repeatable process has on the developer’s personal development. Observing, developing ideas, and testing them reinforces what you know to be correct and exposes what is not.

As we saw in the previous section, some of the assumptions made about how to fix two of the diagnostics were incorrect. This shouldn’t be a de-motivator. Instead, it’s a learning opportunity and a motivator to learn more.

The ability to make and learn from mistakes is one of the most powerful tools in your development arsenal. Although it has been said that “You are your worst enemy,” conversely, it is true that you are your most valuable asset. Take advantage of this and allow yourself the freedom to be blinded by science!

Disclosure: This article is based on an earlier article, “Using the Chrome DevTools Audit Feature to Measure and Optimize Performance,” which has been refactored into two parts.

