This post was originally published on my blog
Recently at Forward Digital, we have had a big push on SEO. A key part of this is the performance of your website. There are a lot of resources supplied by Google that go into detail on how performant your website should be and which metrics are used to track performance.
So, we have spent a lot of time optimising our site for maximum performance, and we thought we would share some useful tips and how we did it.
To give some context, our website is built using Next.js and hosted on Vercel.
Tools for tracking performance
The process of optimising your website is incremental. It is important to track how each change is affecting performance. By doing this you can gain a better understanding of where to prioritise your efforts. Thankfully, Chrome DevTools is great for this: it has Lighthouse built in, which enables you to quickly run reports and calculate a performance score.
This is what we used to track our progress optimising our site, and what I will keep referring to in this post.
Now, some important things to note. The first is that you should track your performance based on your mobile score.
Generally speaking, desktop performance is much better than mobile, so when analysing sites in Lighthouse you will notice the performance score on mobile is much lower than on desktop. Therefore, if we can get good mobile performance we know the desktop performance should be really good.
The next important thing to note is that your Lighthouse score can change each time you run a report, even if you have made no changes. If you run your website locally, open localhost and run a Lighthouse report, you will notice the score is significantly lower than when you run Lighthouse on your live site. Then, if you open an incognito window and run the reports again, the scores may change again.
This is because of two things. Firstly, when you run your code locally with next dev, you are not running it in production mode. This means lots of extra debugging and dev tooling is included in the bundle, and files are not minified. This causes slower performance.
Secondly, when you run your browser outside of an incognito window you have all of your browser extensions loaded. Some extensions require extra resources and inject things onto the page; a good example of this is React Dev Tools. This gives you a worse score than users without any extensions installed would experience.
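As an aside on the first point, if you want a local score that is closer to what the live site will get, you can build and serve the production bundle before running Lighthouse:

# Build the production bundle, then serve it locally
next build
next start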
Process for tracking performance
In our case, to try and make sure we had a good understanding of our current performance we always ran our lighthouse checks in an incognito window. We also ran two reports after each change. We ran one on localhost, and then we would push our changes to a branch which had a Vercel preview deployment attached to it and run a report on the preview URL in an incognito window. This helps give a clear understanding of how our code changes have affected performance, and what we can expect our performance to be on a live deployment.
Before you start making changes to your code, run these two reports and make note of the mobile performance scores. Each time we make a change we can run the reports again and compare the results.
It can become a tedious task making changes, pushing to preview and then waiting for it to deploy, just to run the performance report again. So when you run the report on your localhost, if your performance score has not increased by at least 2, then I would not bother pushing it, as the change is unlikely to have made much difference.
However, it is important to note that even though your score on localhost may only have increased by 2, it does not mean that the preview deployment will have only increased by 2. In our case, we would sometimes see a change of 2 on localhost, which was more like 8/10 when on the preview URL.
How does Google measure performance?
There are 6 key metrics that are measured, and these are:
- First Contentful Paint (FCP)
- Time to Interactive (TTI)
- Speed Index
- Total Blocking Time (TBT)
- Largest Contentful Paint (LCP)
- Cumulative Layout Shift (CLS)
I won’t go into too much detail on what each of these means, but Google actually supply us with numbers for what they consider Good, Needs Improvement or Poor. We can use these as our targets.
You can read more about what each of these metrics means and how they are measured here. I have attached some more resources at the end of this post if you want to dive in further on each of them.
You will notice that these metrics include FID, which is not one of the Lighthouse metrics. FID stands for First Input Delay, and it is very similar to Total Blocking Time. So, we should aim to get our Total Blocking Time to less than 100ms.
These targets are NOT for our localhost reports. These are the targets for the preview deployment report. It will be almost impossible to achieve these on our localhost report, and the preview deployment will be much closer to what the live deploy report will be. The ultimate goal is to achieve these targets with the live URL.
Now let’s get into actually improving our scores!
Image Optimisation
Next.js includes image optimisation via the next/image component. If you deploy your site using next export then you may want to skip this section, as the component cannot be used in the same way with exported sites.
Try to make sure that you are using this component instead of <img> tags everywhere.
You can read the docs here
It may cause some slight pains with sizing your images on the page but it is worth it for the benefits. You can also use the fill prop with a relatively positioned container to help make this easier.
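For example, here is a minimal sketch of the fill approach, assuming the newer next/image component (the image path and container dimensions are just placeholders):

// The parent must be relatively positioned so the image can fill it
<div style={{ position: 'relative', width: '100%', height: '300px' }}>
  <Image src="/hero.png" alt="Hero" fill style={{ objectFit: 'cover' }} />
</div>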
When using the Image component there is a priority prop, which can help improve your FCP and LCP scores. If you have any images “above the fold” when your site first loads, these should all have priority set to true. It ensures that these images are loaded first.
You can also use the quality prop to set a number between 1 and 100. It defaults to 75 and can be used to help make larger images load faster.
Here is an example:
<Image
  src="/me.png"
  alt="Picture of the author"
  width={500}
  height={500}
  quality={60}
  priority
/>
Custom Fonts and Google Font Optimisation
Whether you are loading your own font files or loading from a CDN like Google Fonts you can optimise this. Loading in fonts can increase your Total Blocking Time and your Time to Interactive.
In our case, we use a custom font that is included in our website bundle.
Next.js has a package called @next/font that allows you to optimise your fonts. It is not included with Next, so you must install this package separately. You can read the documentation here.
As we host our own font files, we needed to use the Local Fonts method of optimising fonts. This required us to define the paths to our files in an object and inject the generated class into the main element of our app.
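As a rough sketch of the Local Fonts method (the font paths, names and weights here are placeholders for your own files):

import localFont from '@next/font/local'

// Hypothetical font files; swap in your own paths and weights
const brandFont = localFont({
  src: [
    { path: './fonts/Brand-Regular.woff2', weight: '400' },
    { path: './fonts/Brand-Bold.woff2', weight: '700' },
  ],
})

export default function MyApp({ Component, pageProps }) {
  return (
    <main className={brandFont.className}>
      <Component {...pageProps} />
    </main>
  )
}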
The process is similar for Google Fonts; however, you can import these directly from the package.
As the fonts are injected via a className, you can include a font on only the specific pages or components where it is used.
Example:
import { Roboto } from '@next/font/google'

const roboto = Roboto({
  weight: '400',
  subsets: ['latin'],
})

export default function MyApp({ Component, pageProps }) {
  return (
    <main className={roboto.className}>
      <Component {...pageProps} />
    </main>
  )
}
Lazy Loading Content
When a page is first loaded we want to prioritise the content above the fold. The LCP refers to the largest element the user sees in the initial viewport, so to get this time down we want it to be one of the first things to load. This is where next/dynamic can help. You can read the documentation here.
In our case, our pages are broken down into smaller React components. Each section of our homepage is its own component, and all of the content above the fold is wrapped in a Fold component. This approach makes using dynamic imports really easy.
We wrapped all of the components below the fold in dynamic imports. This means that all of the content in the FCP and LCP is prioritised, reducing those all-important times.
Here is a quick code snippet to show you how it looks:
import dynamic from 'next/dynamic';
import { Fold } from 'Components';

const WhatWeDo = dynamic(() => import('../components/what-we-do'));
const AboutUs = dynamic(() => import('../components/about-us'));
const Services = dynamic(() => import('../components/services'));

export default function Home() {
  return (
    <>
      <Fold />
      <WhatWeDo />
      <AboutUs />
      <Services />
    </>
  );
}
3rd Party Scripts
You may use 3rd party scripts for certain libraries or things like Google Analytics. These need to be loaded into the page, and if they are required then they can increase your TTI and Total Blocking Time.
You can use the next/script component to optimise these imports (documentation is here).
Some scripts do not need to be loaded before the page renders, so you can utilise the strategy prop on the Script component. By default the strategy is afterInteractive, which means the script is not loaded until after the page becomes interactive.
You can also use lazyOnload, which is useful for any scripts that serve content below the fold or that can be loaded while the user is already using the site.
If there are certain scripts that must be loaded up front, you can use beforeInteractive, but you should do this as little as possible as it will have a negative impact on performance.
Google Analytics can actually be loaded later. However, one thing to note is that your analytics will then not track users who leave before the page has finished loading.
Here is an example of the script component in action:
<Script src="https://example.com/script.js" strategy="lazyOnload" />
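And a sketch of how Google Analytics might be loaded with the afterInteractive strategy (G-XXXXXXX is a placeholder measurement ID, not a real one):

import Script from 'next/script'

export function Analytics() {
  // gtag.js only loads once the page has become interactive
  return (
    <Script
      src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
      strategy="afterInteractive"
    />
  )
}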
Don’t use the Babel compiler, use the Next.js compiler
In version 12 of Next.js a new compiler was introduced to replace the Babel compiler. The new compiler is much more performant and effective at optimising your final bundle.
When we first built our website we were using version 11 of Next, which came with the Babel compiler. Even after upgrading to the latest version we still had a custom .babelrc file in our project. Just having this file meant that when we compiled our website it would use the Babel compiler instead of the Next.js one. So we removed the file entirely, re-compiled, and saw a huge increase in performance.
As of Next 13, the Next.js swc compiler handles all minification instead of Terser, which makes minification 7x faster. So make sure you are running the latest version of Next without any Babel config.
You can read up more about the compiler here.
Another thing to check is that you do not have the swcMinify option disabled in your next.config.js.
For example, this is bad as it will use the slower Terser minifier:
// next.config.js
module.exports = {
  swcMinify: false,
}
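If the option is present, either remove it entirely or set it to true so the swc minifier is used:

// next.config.js
module.exports = {
  swcMinify: true,
}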
Bundle size
Here is the big one! Most of the previous tips try to reduce the number of render-blocking resources and optimise what gets loaded and when. However, ultimately one of the most important things you need to maximise performance is a smaller bundle size.
Most of us know what it’s like to wait for a download, and the dreaded feeling when you start one and see a size of 100GB+. Websites are not so different: the smaller the bundle, the faster it downloads. The main difference with websites is that we are working with much smaller numbers.
If you were using the Babel compiler and moved to the Next.js compiler then you will have probably already managed to get your bundle down somewhat. I will run you through some more ways of getting the bundle size down.
next bundle analyzer
The first step to reducing the bundle size is analysing what you currently have. This way you can see what are the biggest contributors to your bundle, and look for ways to reduce them. In some cases, you may even want to remove them.
Follow the instructions to install @next/bundle-analyzer here. Then run a build. You will see a few windows open in your browser.
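For reference, the setup from the package docs looks roughly like this, gated behind an environment variable so it only runs when you ask for it:

// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // Only analyse when ANALYZE=true is set on the build command
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // your existing Next.js config goes here
})

With this in place, running ANALYZE=true next build generates the reports.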
The first one to look at is client.html. This shows you a visual representation of all the files included in your bundle. The size of a file or directory on the screen is proportionate to its size in the bundle.
Take a look at an example from our analyser output. There are a few files that stand out to me. These are:
- react-fullpage.js
- react-dom.production.min.js
- data-protection-policy.mdx
- prism.js
You will notice that 3 of these are inside node_modules. This means they are libraries that we have installed in our project.
For react-dom there is not much we can do, as we need React installed.
However, react-fullpage.js is a 3rd party library we installed for some fancy slide animations. When I hover over it I can see it has a parsed size of 73.36KB. This is significantly larger than most of the other files and directories in the bundle, so this would be a great place to start.
The first step is to see if I can remove it entirely. If I remove it then I will save 73.36KB, which would reduce my total bundle size from 836.78KB to 763.42KB, a reduction of around 8.8%. This is a significant saving.
How do I know if I can remove it? This is where depcheck comes into play.
depcheck
depcheck is a package that analyses your code and helps determine if you have any packages installed that are not being used. You can read the docs here.
You do not need to install the package; you can run this command in the root of your project:
npx depcheck
This will output a list of your dependencies to the console. There is a section with the heading Unused dependencies; this is where you should focus your attention.
According to depcheck, the packages under this section are not being used in your app, so in theory you can remove all of them.
However, be careful. Just because depcheck can’t find any reference to them it doesn’t mean they are not being used. I would recommend uninstalling them one by one and re-running your app to check it still builds and works as expected.
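Something like this, one package at a time (the package name is just a placeholder):

# Remove one suspected-unused package, then verify the app still builds
npm uninstall some-unused-package
npm run build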
Removing any unused packages can help reduce the bundle size without having any effect on the app, but in our example we are using react-full-page, which means we can’t just remove the dependency. So the next step is to see if we can reduce it.
Light versions
If you have a package that is taking up most of your bundle size then it may be a good idea to look for an alternative. There are often a lot of packages out there that do similar jobs but have different bundle sizes. A good example of this is Lottie animations.
lottie-web is the official package; however, it has a massive bundle size. We were using react-lottie-player, which has a dependency on lottie-web, and together they took up almost 40% of our total bundle size when we ran the analyser.
After looking through the docs, we found that react-lottie-player provides a light version of Lottie in its distribution. So we updated all of our Lottie references to use the light version, which reduced its share of the bundle from around 40% to around 20%.
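At the time, the light player could be imported from the package's dist folder. A rough sketch, assuming a hypothetical animation file:

// The light build drops some lottie-web features but is much smaller
import Lottie from 'react-lottie-player/dist/LottiePlayerLight'
import animationData from './animation.json'

export default function Animation() {
  return <Lottie loop play animationData={animationData} />
}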
Switching to the light version was a big performance boost, and after re-running our performance reports we saw a big difference. But even at 20% it was still one of the biggest contributors to our bundle size, so we pushed on to see if there were any other ways to reduce it.
Introducing dotlottie. Dotlottie is a new format that lets you convert Lottie .json files into the smaller .lottie format. We could then install the @dotlottie/player-component package instead of react-lottie-player. This removed the dependency on lottie-web entirely and got the bundle size of the Lottie animations so small I can’t even see it on the analyser!
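Here is a minimal sketch of how the player component can be used, assuming a converted .lottie file at a hypothetical path:

// Importing the package registers the <dotlottie-player> custom element
import '@dotlottie/player-component'

export default function Hero() {
  return (
    // src points at a hypothetical converted .lottie file
    <dotlottie-player src="/animations/hero.lottie" autoplay loop />
  )
}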
Not all packages have light versions, so you may need to try something else. What next?
Tree shaking
Tree shaking is the term used to describe when dead code is removed from a bundle. Lots of packages will implement tree shaking, and in this context, it means that when your app is compiled it will only include the specific files and assets that you are using in that package.
Some packages do not have a good implementation of tree shaking. An example of this (at the time of writing) is react-icons.
We used react-icons across our website, pulling various icons from the Material Design icon set. Across the entire site we probably used about 10 different icons, which is a very small subset of a large icon set.
When we ran the bundle analyser we could see that react-icons was including the entire material design icon set in the final bundle. This is an example of failed tree shaking. We should only be including the 10 icons we used.
After looking around online, it seemed that this was a known issue with react-icons and Next.js. The solution was to install a new package that provides all of the icons as individual files (@react-icons/all-files). Using this package we could import each icon by specifying the exact path to its file.
import { MdMail } from '@react-icons/all-files/md/MdMail';
This then meant that only the 10 icons we used were included in the final bundle.
When you are looking at some of the larger packages in your bundle, you should try to see if it uses an effective tree shaking approach.
Last resort
If you find that you are using a package that has no light version or alternative, then you may want to consider writing your own implementation.
In our case, react-fullpage-js is being used for some nice CSS animations. It includes a lot of other functionality, but we only use a small part of it. So it wouldn’t be too big a job to write our own CSS animations to replace it and remove the dependency.
However, this may be too much of an undertaking for some more complex third-party packages, and it also becomes more code for you to maintain yourself. So tread lightly with this one!
Summary
After implementing all of the above techniques on our site, we were able to see massive performance gains. Check out our website at forward.digital and you can see what I mean.
I have gathered a list of some of the resources that I found useful when researching and optimising our website performance.
Useful links
- Minimizing main thread work issues
- How to use the dev tools performance tab to determine main thread work
- PRPL design patterns for optimisation
- More info on First Contentful Paint (FCP)
- More info on Speed Index
- More info on Time to Interactive (TTI)
- More info on Total Blocking Time
- More info on Largest Contentful Paint (LCP)