
Front End Development Automation with Puppeteer. Part 3

Intro

This post is quite ironic: this week at work, I deployed some changes to production that could have been prevented by what you are about to read. Hopefully, you'll learn from my mistakes.

Scenario 3: Compare a snapshot of local vs. production.

You are changing the core of your app, perhaps some of the endpoints consumed from the back end, but the business logic and core functionality are supposed to stay the same, and the rendering of the app must not change. Now we'll make a script that compares both environments and guarantees that it stays that way.

Application outline.

As a boilerplate, we'll use a React app scaffolded with Parcel, just to have some content rendered on a page; the starter page will be enough.
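
The starter page only needs to render something we can wait for later. A minimal src/App.js could look like the sketch below; the heading text is an assumption, but the h1 matters, because the screenshot script further down waits for that selector.

// src/App.js
import React from 'react';

// A minimal page for the test. The screenshot script waits for an `h1`,
// so we make sure to render one.
const App = () => (
  <div>
    <h1>Visual Regression Testing</h1>
  </div>
);

export default App;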

app
├── src
│   └── App.js                 # The single component we'll render.
└── scripts
    └── visual-regresion-tests
        ├── actions            # All the DOM traversing functions.
        │   ├── getPageScreenshot.js
        │   ├── generateDateString.js
        │   └── compareScreenshots.js
        ├── images             # Here we will store our evidence.
        ├── index.js           # The main script where we run our tests.
        └── config.json        # For the URL, viewport sizes, etc.

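The post doesn't show config.json itself, so here is a sketch of the shape the scripts below expect. The URLs and viewport sizes are assumptions (Parcel serves on port 1234 by default), and note that the stagging key is spelled exactly as the main script reads it.

{
  "env": {
    "local": "http://localhost:1234",
    "stagging": "https://visual-regresion-testing.firebaseapp.com/"
  },
  "browser": {
    "clientName": "Chromium"
  },
  "viewport": {
    "default": { "width": 1280, "height": 800 },
    "mobile": { "width": 375, "height": 667 }
  }
}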

Project setup.

We'll use parcel-react-app to scaffold our project.

  1. Install parcel-react-app
    npm i -g parcel-react-app

  2. Install dependencies
    yarn add puppeteer chalk signale pixelmatch pngjs

We use chalk and signale in order to get a fancier console.log. If you need a refresher, check out part 1 of this series.

We also use pngjs to encode/decode our images, and pixelmatch to do the actual image comparison.

  3. Add our test script to the package.json
// package.json
"scripts": {
  //...
  "vrt": "node --experimental-modules ./scripts/visual-regresion-tests/index.js"
}


Note: We are using --experimental-modules in order to use ESM without additional setup 🤓

Then we'll deploy it to Firebase at https://visual-regresion-testing.firebaseapp.com/

Writing our action script

  1. It looks quite similar to the scripts from parts one and two of this series: we'll make a function that takes some parameters and captures a screenshot.
// actions/getPageScreenshot.js
import puppeteer from 'puppeteer';
import signale from 'signale';

// dateString is passed in so both environments share the same timestamp.
export const getPageScreenshot = async (url, env, viewportConfig, dateString) => {
  const { height, width } = viewportConfig;
  const selector = 'h1'; // This could be any valid CSS selector.

  signale.success('Initializing browser');

  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setViewport({ width, height });
  signale.success('Opening browser...');
  signale.success('Navigating to the site');
  await page.goto(url);
  await page.waitForSelector(selector);
  signale.success('Selector was rendered successfully');
  await page.screenshot({
    path: `./scripts/visual-regresion-tests/images/${env}_${dateString}.png`
  });
  await browser.close();
};

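As a quick sanity check, you can call it on its own from any async context; the values below are made up for illustration:

// Hypothetical one-off call: capture the local app at a mobile viewport.
await getPageScreenshot(
  'http://localhost:1234',      // url
  'Test',                       // env label, used in the file name
  { width: 375, height: 667 },  // viewportConfig
  '7_21h30'                     // dateString, shared across both runs
);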

After getPageScreenshot is run in both environments, we'll have two files named something like this:
Production_7_21h30.png
Test_7_21h30.png
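
generateDateString.js isn't shown in this post; a minimal sketch that produces timestamps in that shape (day of the month, then hours and minutes) could be:

// actions/generateDateString.js
// Builds a short timestamp such as "7_21h30".
export const generateDateString = () => {
  const now = new Date();
  const day = now.getDate();
  const hours = now.getHours();
  const minutes = String(now.getMinutes()).padStart(2, '0');
  return `${day}_${hours}h${minutes}`;
};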

Comparing both images

For this to work, we need two images of exactly the same size, which is why we have the viewport defined in config.json.

In order to compare both images, we'll take the example from the pixelmatch documentation as-is and adapt the code to ES6.

If you want to know what it does under the hood, here is the explanation:

  1. It takes two images of the same size as input.
  2. It decodes them and processes them as streams.
  3. Once done, it compares both streams and creates a third one, which is transformed into an image where we can better appreciate the differences. We can also use information from the third stream to know how many pixels are different and act on that.
// actions/compareScreenshots.js
import fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';
import 'colors';

// Decode a PNG file into its raw pixel data.
const imageFromFile = filename =>
  new Promise(resolve => {
    const img = fs
      .createReadStream(filename)
      .pipe(new PNG())
      .on('parsed', () => {
        resolve(img.data);
      });
  });

export const compareScreenShots = async (FILENAME_A, FILENAME_B, viewportConfig) => {
  const IMAGES_FOLDER_PATH = './scripts/visual-regresion-tests/images/';
  const { height, width } = viewportConfig;

  const newLayout = await imageFromFile(`${IMAGES_FOLDER_PATH}${FILENAME_A}.png`);
  const oldLayout = await imageFromFile(`${IMAGES_FOLDER_PATH}${FILENAME_B}.png`);

  // Third image that will hold the visual diff.
  const diff = new PNG({ width, height });
  const diffPixels = pixelmatch(newLayout, oldLayout, diff.data, width, height, {
    threshold: 0
  });

  if (diffPixels === 0) {
    console.log('Success! No difference in rendering'.green);
  } else {
    console.log(
      `Uh-oh! There are ${diffPixels} different pixels in the new render!`.bgRed
    );
  }
};

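One optional extra: the diff image above only lives in memory. If you also want to save it for inspection, pngjs can serialize it back to disk from inside compareScreenShots, right after pixelmatch runs; the file name here is an assumption.

// Optional: write the diff image next to the screenshots for inspection.
diff
  .pack()
  .pipe(fs.createWriteStream(`${IMAGES_FOLDER_PATH}Diff_${FILENAME_A}.png`));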

Putting it all together

Thanks for sticking around this long. Now we need to put everything in one file and simply run our tests. We'll do that in scripts/visual-regresion-tests/index.js. This is the file we point to when running yarn vrt.

Here is what the code looks like in one file:

// scripts/visual-regresion-tests/index.js
const signale = require('signale')
const colors = require('colors')
const config = require('./config.json')
const { generateDateString } = require('./actions/generateDateString.js');
const { getPageScreenshot } = require('./actions/getPageScreenshot.js');
const { compareScreenShots } = require('./actions/compareScreenshots.js');

let testImage;
let productionImage;

const runLocalTest = async (device = 'default', config, dateString) => {
  const { env } = config
  signale.success(
    `Running local test on ${device} on a ${
      config.browser.clientName
    } viewport`
  )
  await getPageScreenshot(env.local, 'Test', config.viewport[device], dateString)
  signale.success('Files are now created')
}

const runProductionTest = async (device = 'default', config, dateString) => {
  const { env } = config
  signale.success(
    `Running production test on ${device} on a ${
      config.browser.clientName
    } viewport`
  )
  await getPageScreenshot(env.stagging, 'Production', config.viewport[device], dateString)
  signale.success('Files are now created')
}

const runItAll = async (config) => {
  const dateString = generateDateString();
  console.log(`Generating date for ${dateString}`.green);
  // These names must match the paths getPageScreenshot writes to.
  productionImage = `Production_${dateString}`;
  testImage = `Test_${dateString}`;

  await runLocalTest('mobile', config, dateString);
  await runProductionTest('mobile', config, dateString);
  // Compare with the same viewport the screenshots were taken at.
  await compareScreenShots(testImage, productionImage, config.viewport.mobile);
}

runItAll(config)
  .catch(error => console.log('error'.red, error));




What we are doing here:

  1. First, we declare the names for the test and production files. We declare them in the outermost scope because we need the date to be consistent between the function that takes the screenshots and the one that compares both images.
  2. We declare runProductionTest and runLocalTest. The only difference is the environment: each one initializes Puppeteer, goes to the corresponding URL, and generates a screenshot for its environment. Note that both functions take dateString as an argument and must use the same viewport in order for the images to be comparable.
  3. We define the runItAll function, which generates the shared date string and runs the tests for both environments.
  4. We execute runItAll(config) with the configuration defined in config.json.

As I stated at the very beginning of this post, the idea is to be able to verify, with a single command, that our changes don't introduce any visual changes to the application.

Now we can run yarn vrt and should see something like this:

Console screenshot

Conclusion

There is a lot of potential in Puppeteer and Node.js. Over the next weeks, I'll write a post about how to use these tools with GitHub hooks and Continuous Integration for the front end.

During the development of this project, I tried to use ES Modules, but they don't play that well with some libraries, at least not yet. I'm working on a blog post with my impressions.


Thanks for reading, guys.

Cheers.

Top comments (4)

Pete Gordon:

Thanks for the Puppeteer series!

I used this Chrome Extension to record and generate Puppeteer code last week. It worked pretty well—definitely not perfect—but useful. I’m hoping to fork and contribute. I found it after trying the segmentio daydream project it’s based on—which didn’t work as well.

github.com/checkly/puppeteer-recorder

Jaime Rios:

Sweet, thanks for sharing.

Clay Stewart:

Thanks for your post too! Been looking for visual regression ideas after the death of PhantomCSS.