David

5 quick steps to improve the performance of your website

Have you come across a project or website that you haven't bothered to optimize much and now need to speed up in a short time?
Then this article might help you!

There are a lot of techniques and patterns for speeding up a web application, but in this article I will cover the methods that seemed to me personally the most effective, fast and easy to implement.

TL;DR

  1. Optimize media
  2. Compress text files
  3. Cache static files
  4. Use code splitting
  5. Eliminate render-blocking resources

Optimize media

Media files are often among the largest assets the browser downloads, so optimizing them can significantly speed up your website.

  • Compress your PNGs and JPEGs with services like TinyPNG or similar ones
  • Resize your images to the size at which they will actually be displayed (see the sketch below)

[Image: file size comparison of the original and the resized kitten image]

As we can see, the kittens on top and bottom look the same (cute). Therefore, there is no need to ship a 3700x3700px image if our image container is significantly smaller. By shrinking the image to the 400x400px size of its div, we make it load about five times faster and cut more than 1 MB off the file size.
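
If you'd rather resize in code than in an image editor, a library like sharp can resize and recompress in one pass. A minimal sketch, with hypothetical file names:

const sharp = require('sharp');

// Shrink the 3700x3700 original to its 400x400 display size
// and recompress it as an 80%-quality JPEG
sharp('kitten-original.jpg')
  .resize(400, 400)
  .jpeg({ quality: 80 })
  .toFile('kitten-400.jpg');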

  • Display different images for different viewport sizes. If you have a large background image that you don't need on mobile, you can load a smaller one or just set a background color. For that you'll need media queries.
@media screen and (min-width: 1900px)  {
  .main {
    background-image: url('./large-background.jpeg');
    background-repeat: no-repeat;
    /*...*/
  }
}

Thanks to the media query, the browser knows that the large background only needs to be loaded at viewport widths of 1900px and above.

  • Convert media to WebP and WebM and use them.

When you use the WebP format, images are smaller in size but almost never compromise on quality, allowing your page to load faster.
Most newer browser versions support the WebP/WebM formats.
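
If you don't already have WebP versions of your images, the conversion is easy to automate; a minimal sketch using sharp again (file names are hypothetical):

const sharp = require('sharp');

// Convert a JPEG to WebP at 80% quality
sharp('images/kitten.jpg')
  .webp({ quality: 80 })
  .toFile('images/kitten.webp');
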
And for older browsers you can always use other formats as fallbacks:

<picture>
  <source srcset="images/kitten.webp" type="image/webp">

  <!-- fallback -->
  <source srcset="images/kitten.jpg" type="image/jpeg"> 
  <img src="images/kitten.jpg" alt="Alt Text!">
</picture>

The only disadvantage is that you will have to store at least two image formats, which will cost you extra space. Most of the time that is not a problem.

  • Remove image metadata (if you want to go crazy)

On average, image metadata makes up 16% of a typical JPEG file on the web. It is used to store information about image rights and administration. Image metadata can also be harvested and used by AIs.
So if you don't care about copyright issues or AI and want to go crazy, just cut the metadata out using tools like verexif.
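
As a side note, if you already re-encode images with sharp as sketched above, you get this for free: sharp strips input metadata by default, and you have to opt back in with .withMetadata() if you want to keep it.

const sharp = require('sharp');

// Re-encoding drops EXIF/ICC metadata by default
sharp('kitten.jpg').jpeg({ quality: 80 }).toFile('kitten-clean.jpg');

// Opt back in only if you actually want to keep the metadata
sharp('kitten.jpg').withMetadata().jpeg({ quality: 80 }).toFile('kitten-meta.jpg');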

Compress text files (gzip)

When we use websites, we receive HTML, CSS and JS files from the server. These are all text files and they can be compressed.

If we send a gzipped file to the browser instead of plain old index.html, we save on bandwidth and download time. The browser downloads the gzipped file, extracts it, and then shows it to the user much faster.

Ok, so how do we make our files gzipped?

The answer is that our server (backend) compresses the response and adds the HTTP header Content-Encoding: gzip.

It works simply: if the browser supports any compression schemes, it adds the header Accept-Encoding: gzip, deflate to its requests, where gzip and deflate are two possible compression schemes.

If the server doesn't send the Content-Encoding response header, it means the body is not compressed, which is the default behaviour on many servers.
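
A simplified exchange, trimmed to the relevant headers, looks like this:

GET /index.html HTTP/1.1
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip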

So all we need to do is compress the response body on the server and set the Content-Encoding header.

For example, in Go we can wrap our handler like this (using compress/gzip from the standard library):

// gzipResponseWriter sends response writes through the gzip writer
type gzipResponseWriter struct {
    io.Writer
    http.ResponseWriter
}

func (w gzipResponseWriter) Write(b []byte) (int, error) {
    return w.Writer.Write(b)
}

func gzipHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Serve uncompressed if the client doesn't accept gzip
        if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
            h.ServeHTTP(w, r)
            return
        }
        w.Header().Set("Content-Encoding", "gzip")
        gz := gzip.NewWriter(w)
        defer gz.Close()
        h.ServeHTTP(gzipResponseWriter{Writer: gz, ResponseWriter: w}, r)
    })
}

In Node.js we can use the compression library (be aware that you can configure which files/routes to compress):

// server.js
const compression = require('compression');
const express = require('express');
const app = express();

// Compress all HTTP responses
app.use(compression());
app.get('/', (req, res) => {
  const animal = 'elephant';
  // It will repeatedly send the word 'elephant' in a 
  // 'text/html' format file
  res.send(animal.repeat(1000));
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

Enabling compression is one of the fastest and most effective ways to improve website performance. So go for it and enjoy the speed boost.

Cache static files

I would divide caching into two parts:

  • Browser cache
  • Server side cache

Browser cache

As we know, browser caching improves and speeds up browsing. Once you've downloaded an asset, it lives (for a time) on your machine. Retrieving files from your hard drive will always be faster than retrieving them from a remote server, no matter how fast your connection is.

Because of that we should definitely use it for static assets like JavaScript, CSS and Images.

So what do we need to set up browser caching?

Basically, for practical use we need only two HTTP headers! Simple, right?

Those headers are Cache-Control and ETag.

  • Cache-Control. The server can return a Cache-Control header to specify how, and for how long, the browser should cache the individual response.
  • ETag. When the browser finds an expired cached response, it can send a small token (usually a hash of the file's contents) to the server to check whether the file has changed. If the token still matches on the server, the file is the same and the server answers 304, so there's no need to re-download it.

Note that you can use Expires instead of Cache-Control. But Cache-Control was introduced in HTTP/1.1 and offers more options than Expires.

If you use Express.js you're mostly good to go, because Express sets an ETag header on responses by default, and express.static lets you configure Cache-Control.
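
A minimal sketch, assuming your static assets live in a hypothetical public/ folder:

const express = require('express');
const app = express();

// Serve static files with a 30-day Cache-Control max-age;
// ETag generation is on by default
app.use(express.static('public', { maxAge: '30d' }));

app.listen(3000);

If you use pure Node.js, your code could look like this: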

// Importing the http module
const http = require('http');
const etag = require('etag');

// Simple in-memory store for the responses we want to cache
const NodeCache = require('node-cache');
const myCache = new NodeCache();

// Setting up PORT
const PORT = process.env.PORT || 3000;

// Browser cache lifetime in seconds (30 days)
const period = 60 * 60 * 24 * 30;

// Creating the http server
const httpServer = http.createServer(function (request, response) {
  const url = request.url;
  if (url === '/some_url_that_dont_need_caching') {
    // Do something and return the result
    /* ....... */
    response.end('no need for caching, return result');
    return;
  }

  // URLs that need caching

  // Setting up headers
  if (request.method === 'GET') {
    response.setHeader('Cache-Control', `public, max-age=${period}`);
  } else {
    // For the other request methods set strict no-caching parameters
    response.setHeader('Cache-Control', 'no-store');
  }

  const token = url;
  // Fall back to freshly generated content on a cache miss
  const result = myCache.get(token) || 'ok';

  // Check the ETag and answer 304 if the browser's copy is still fresh
  setEtag(result, request, response);
});

// Listening on the http server
httpServer.listen(PORT, () => {
  console.log(`Server is running at port ${PORT}...`);
});

function setEtag(result, req, res) {
  const jresult = JSON.stringify(result);
  const hash = etag(jresult);
  const noneMatch = req.headers['if-none-match'];
  if (hash === noneMatch) {
    res.writeHead(304, 'Not Modified');
    res.end();
  } else {
    res.setHeader('ETag', hash);
    res.end(jresult);
  }
}

In Go it could look like:

func Handler(w http.ResponseWriter, r *http.Request) {
    e := "Some hash, could be file checksum"
    w.Header().Set("Etag", e)
    w.Header().Set("Cache-Control", "max-age=2592000") // 30 days

    if match := r.Header.Get("If-None-Match"); match != "" {
        if strings.Contains(match, e) {
            w.WriteHeader(http.StatusNotModified)
            return
        }
    }
    /* ... */
}

The mechanism is the same in both languages: if the If-None-Match header matches the key you generated on the server, there is no need to rebuild the response, because the browser already has it. In that case, set the HTTP status to StatusNotModified, i.e. 304, and return.

Server side cache

If we have a complex and heavy page that takes a lot of time to render into HTML, our server will have a hard time. Caching it in the browser doesn't help much: if the page changes, users won't see the new content anytime soon, and the server still has to regenerate the page for each different user accessing our web application.
Think about a large news portal: do they render their HTML over and over again for every visitor?

This is where server-side cache comes in handy.

The difficult part is to decide where you want to cache the page.

Never cache POST, PUT or DELETE requests. These are used to change resources and not retrieve data, so it doesn’t make sense to cache it.

  • In-memory: Use part of the server's RAM as the cache. It's the fastest cache you'll ever have and the easiest to implement; the only cost is the extra RAM needed for caching. The drawback is that if you have multiple servers (which you probably should), you'll end up with N copies of the cached content. And if the process restarts for any reason, it loses everything it has cached, slowing the first requests down again.

  • Centralized cache: For example Redis. It's a high-performance, battle-tested in-memory database that assures data consistency and is widely used. It's not as fast as an in-process cache, because it requires network calls, but the content is shared across all servers, so it is not duplicated and doesn't consume resources on the application server.

From a code perspective it basically doesn't matter whether we store our cached data in Redis or in memory, because either way we just set and get data from the storage.

So a Node.js and Redis cache could look like this:

const express = require('express');
const app = express();
const redis = require('redis');

// node-redis v4: create the client and connect once at startup
const client = redis.createClient({ url: 'redis://localhost:6379' });
client.connect();

// Middleware: serve the page straight from the cache if we have it
const checkInCache = async (req, res, next) => {
  const data = await client.get('blogposts');
  if (data !== null) {
    res.send(data);
  } else {
    next();
  }
};

app.get('/blogposts', checkInCache, async (req, res) => {
  // some function that generates the HTML
  const blogpostsHTML = generateHTML('blogposts');

  // save the data (key/value pair) in Redis as a cache
  await client.set('blogposts', blogpostsHTML);

  res.send(blogpostsHTML);
});

app.listen(8080, () => {
  console.log('Server started!');
});
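And if you'd rather start with the in-memory option, the same middleware can be sketched with node-cache instead of Redis; only the storage calls change:

const NodeCache = require('node-cache');

// Entries expire after 60 seconds (stdTTL), so stale pages age out
const pageCache = new NodeCache({ stdTTL: 60 });

const checkInCache = (req, res, next) => {
  const data = pageCache.get('blogposts'); // undefined on a cache miss
  if (data !== undefined) {
    res.send(data);
  } else {
    next();
  }
};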

Use code splitting

When you load your website for the first time, the browser downloads files like HTML, CSS and JS. Content that is not needed yet may be downloaded too, making the bundle large and slow to download; this increases the website's load time.

Another problem to consider is that every time you change the content of a resource, for example release a new version of your JavaScript bundle, the browser needs to re-fetch it to get the new version into the cache. That is slow.

Here is where code splitting comes into play.

If you instead split your bundle into many smaller bundles with code splitting and you change one of them, the others don't need to be reloaded because they haven't changed.

Great, right?

Code splitting can be achieved with any bundler, like Webpack, Browserify, Rollup, etc. For example, the Webpack documentation tells us that there are three general approaches to code splitting (a minimal config sketch follows the list):

  • Entry Points: Manually split code using entry configuration.
  • Prevent Duplication: Use Entry dependencies or SplitChunksPlugin to dedupe and split chunks.
  • Dynamic Imports: Split code via inline function calls within modules.
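
To give a feel for the configuration-driven approaches, here is a minimal sketch of a webpack config using SplitChunksPlugin (paths are hypothetical):

// webpack.config.js
module.exports = {
  entry: './src/index.js',
  optimization: {
    splitChunks: {
      // Split vendor and shared code into separate chunks
      chunks: 'all',
    },
  },
};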

Since our task is to speed up the website as quickly as possible, I suggest taking a closer look at dynamic imports.
Using dynamic imports is quite simple.
Most libraries like React, Vue and Angular have great documentation on how to do it.

For example, in React we can make use of the JS dynamic import (it looks exactly like a pure JS dynamic import):

import('./Users.js').then(({ default: User, userDetail }) => {
    // This will be the code that depends on the module...
});

// Cleaner with await
const { default: User, userDetail } = await import('./Users.js');

or use the methods built into the React library for components:

import React, { Suspense, lazy } from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';

const Home = lazy(() => import('./routes/Home'));
const About = lazy(() => import('./routes/About'));

const App = () => (
  <Router>
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
      </Routes>
    </Suspense>
  </Router>
);

The example above is code splitting by routes.
Basically, you can decide to split anything anywhere, but remember that the main purpose is to make the first load of the website faster without disrupting the user experience.

And because deciding where to start splitting can be tricky, I would personally suggest starting with routes (using dynamic imports). This is a safe choice, because users are unlikely to be interacting with other elements on the page at the same time, and they are used to waiting for a new route to load.

After that, you could split more and more using the rest of the methods.

Eliminate render-blocking resources

What are render-blocking resources?

Render-blocking resources are scripts, stylesheets, and HTML imports that block or delay the browser from rendering the web page.

One of the most powerful techniques to eliminate render-blocking resources is deferring or delaying non-critical resources. The browser will spend less time loading resources that aren’t crucial for the user experience (images not in the viewport, CSS styling for non-critical content, etc.).

For images:

We can always defer images that are not in the viewport with native lazy loading:

<img src="image.jpg" alt="..." loading="lazy">
<iframe src="video-player.html" title="..." loading="lazy"></iframe>

For CSS:

You may not need all of your CSS to render the critical part of your page (above-the-fold content, other pages' CSS).
So you can split the CSS into critical and non-critical parts.
Actually, this is a performance optimization technique that has gained a lot of popularity since the introduction of Core Web Vitals, as it also improves LCP scores.

You can defer CSS loading like this:

<!-- NOT DEFERRED -->
<link href='/css/custom-react-select.css' rel='stylesheet' />

<!-- DEFERRED -->
<link rel="preload" href="/css/custom-react-select.css" as="style" onload="this.onload=null;this.rel='stylesheet'"/>

<noscript><link rel="stylesheet" href="/css/custom-react-select.css"/></noscript>

For JavaScript:

Once you've identified the critical code, you can apply one of two attributes to your non-critical scripts: defer or async. async downloads the script in the background and runs it as soon as it's ready, while defer also downloads in the background but waits to execute until the HTML has been parsed.

<!-- async -->
<script src='https://yourwebsite.com/scripts.js' async type='text/javascript'></script>

<!-- defer -->
<script src='https://yourwebsite.com/scripts.js' defer type='text/javascript'></script>

All of the techniques above aim at eliminating render-blocking resources and avoiding long chains of critical requests.

Conclusion

Website optimization is actually an endless and very exciting process.
The 5 steps described above will help you to start optimizing and significantly speed up your slow website in a relatively short time.
The more you do, the harder it will be to speed up your site even more, but never despair and keep going.

In your process, I strongly recommend using:

  • the Performance insights and Lighthouse tabs in DevTools
  • "Even Faster Web Sites" by Steve Souders
  • this article ;)
