Sergey Labut for FocusReactive

Originally published at focusreactive.com

Getting Started with Automated Testing

Testing ensures that your web application works as expected, and it becomes even more important as your application grows. Setting up and maintaining tests often seems like unnecessary overhead. In this article, we'll look at the benefits of automated testing and learn how to quickly start testing your application.

What is Automated Testing?

Automated testing is the process of using software tools to run tests on code automatically, rather than manually. It involves creating scripts and test cases that simulate user interactions, validate functionality, and ensure that the software behaves as expected. The primary benefits of automated testing are speed, efficiency, and the ability to consistently repeat tests without human intervention.

Importance of Automated Testing in Web Development

We can highlight several reasons why automated testing is important:

  1. Consistent and Full Coverage: tests run the same way every time, reducing human error and ensuring consistent results. They can cover a wide range of scenarios and edge cases that might be missed in manual testing.
  2. Early Bug Detection and Rapid Feedback: tests can identify bugs early in the development process, providing immediate feedback to developers. This reduces the cost and effort required to fix issues later.
  3. Faster Test Execution and Parallel Testing: tests run significantly faster than manual tests and multiple tests can run in parallel, saving time and resources.
  4. Continuous Monitoring and 24/7 Testing: tests can continuously monitor the application’s performance and functionality, and can be scheduled to run at any time, including overnight, ensuring continuous testing.
  5. Seamless CI/CD Integration and Continuous Deployment: tests can be integrated into CI/CD pipelines, ensuring that every code change is tested before deployment. This enables continuous deployment by ensuring that the application is always in a releasable state.
  6. Refactoring Confidence and Regression Testing: tests provide confidence for developers to refactor code, knowing that tests will catch any regressions or new issues. Regression tests ensure that new code changes do not break existing functionality.

Types of Automated Tests

The three primary types of automated tests are unit tests, integration tests, and end-to-end (E2E) tests. Here is a comparison with example use cases:

| Type of Test | Scope | Purpose | Example Use Case |
| --- | --- | --- | --- |
| Unit Test | Individual components | Ensure each unit performs correctly | Testing a utility function |
| Integration Test | Interactions between units | Ensure combined units work together | Testing service-to-database interaction |
| End-to-End Test | Complete application flow | Ensure the application works from the user's perspective | Testing the entire user login process |

Apart from these, we can also mention:

  • Regression Testing: re-running previously conducted tests to ensure that recent code changes have not adversely affected existing functionality. This helps maintain the integrity of the application as it evolves.
  • Performance Testing: assessing the speed, responsiveness, and stability of an application under various conditions. It includes load testing, stress testing, and scalability testing.

Overview of Popular Testing Tools and Frameworks

Unit Testing

  • Jest: Jest is a delightful JavaScript testing framework with a focus on simplicity. It works seamlessly with projects using Babel, TypeScript, Node, React, Angular, and Vue.
  • Mocha: Mocha is a feature-rich JavaScript test framework running on Node.js, making asynchronous testing simple and fun.
  • Jasmine: Jasmine is a behavior-driven development framework for testing JavaScript code. It has no dependencies and does not require a DOM.
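
To give a feel for what a unit test looks like in one of these frameworks, here is a minimal sketch using Jest (it assumes Jest is already configured for TypeScript; the sum module is a hypothetical utility used purely for illustration):

import { sum } from "./sum"; // hypothetical utility module

describe("sum", () => {
  it("adds 1 + 2 to equal 3", () => {
    // Jest's expect/toBe matcher checks the returned value
    expect(sum(1, 2)).toBe(3);
  });
});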

Integration Testing

  • Jest: Jest can also be used for integration testing by grouping related units and testing their interactions.
  • Mocha: Mocha is flexible enough to handle integration tests, often paired with assertion libraries like Chai and HTTP request libraries like Supertest.
  • Ava: Ava is a test runner that helps you write tests in JavaScript with a minimalistic syntax, supporting asynchronous testing and providing clean error output.
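
As an example of the Mocha + Chai + Supertest combination mentioned above, here is a rough sketch of an API integration test (it assumes an Express-style app exported from a hypothetical ../src/app module):

import { expect } from "chai";
import request from "supertest";
import { app } from "../src/app"; // hypothetical Express app export

describe("GET /users", () => {
  it("responds with JSON and status 200", async () => {
    // Supertest calls the app directly, without starting a separate server
    const response = await request(app).get("/users");
    expect(response.status).to.equal(200);
    expect(response.type).to.equal("application/json");
  });
});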

End-to-End Testing

  • Cypress: Cypress is a fast, easy, and reliable testing framework for anything that runs in a browser. It is designed to make it easy to set up, write, and debug tests.
  • Puppeteer: Puppeteer is a Node library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol.
  • Playwright: Playwright is a Node.js library to automate Chromium, Firefox, and WebKit with a single API. It enables reliable end-to-end testing for modern web applications.

How to Choose a Testing Tool

Choosing the right testing tool for your project depends on several factors, including the type of application you're testing, your team's expertise, and the specific testing requirements. Here are some steps and considerations to help you choose a testing tool:

  1. Understand Your Testing Needs: Determine whether you need to perform unit tests, integration tests, end-to-end tests, or a combination of these. Consider whether you need to test on different browsers, devices, or operating systems.
  2. Evaluate Tool Capabilities: Review the features offered by each testing tool, such as assertions, test runners, mocking capabilities, and reporting. Ensure the tool supports the technologies and frameworks used in your project (e.g., React, Angular, Node.js).
  3. Assess Learning Curve: Check the availability and quality of documentation, tutorials, and community support. Evaluate how easy it is to set up, write tests, and integrate with your existing development workflow.
  4. Consider Performance and Scalability: Assess the execution speed of tests, especially for end-to-end tests that may involve UI interactions. Determine how well the tool handles large test suites and distributed testing.
  5. Community and Support: Look for active communities and forums where you can get help and share experiences. Consider the availability of commercial support, if needed, for enterprise-level projects.
  6. Integration with CI/CD: Check how easily the tool integrates with your continuous integration and deployment pipelines (e.g., Jenkins, GitHub Actions).
  7. Cost and Licensing: Evaluate whether you prefer open-source tools with community support or commercial tools with additional features and support.

For this guide, we choose Playwright for several reasons:

  • Although Playwright's main focus is e2e testing, nothing prevents you from using it for other kinds of tests – one tool for all of them,
  • Native TypeScript support,
  • Playwright-specific features such as performance metrics, headless mode, and device emulation (a short sketch of these follows below).
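
To give a feel for these features, here is a rough sketch of device emulation combined with a simple performance reading in a Playwright test. The load-time value comes from the browser's own Navigation Timing data rather than a dedicated Playwright API, and the URL is just a placeholder:

import test, { expect, devices } from "@playwright/test";

// Emulate a mobile device for every test in this file.
// Tests run headless by default; pass --headed to watch the browser.
test.use({ ...devices["iPhone 13 Pro"] });

test("home page loads within a reasonable time", async ({ page }) => {
  await page.goto("https://example.com");
  // Read the browser's Navigation Timing entry as a rough load-time metric (ms)
  const loadTime = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType("navigation");
    return nav ? nav.duration : 0;
  });
  expect(loadTime).toBeLessThan(5000);
});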

Setting Up Your Testing Environment

To install Playwright, run npm init playwright@latest.

After running this command, a playwright.config.ts file should be created in your repository.

Here is an example of playwright.config.ts.

import { defineConfig, devices } from '@playwright/test';
// Preview or production URL
const URL = process.env.PLAYWRIGHT_TEST_BASE_URL;
export default defineConfig({
  testDir: './tests',
  /* Run tests in files in parallel */
  fullyParallel: true,
  /* Fail the build if you accidentally left test.only in the source code (enabled when a deployed URL is set, i.e. on CI). */
  forbidOnly: !!URL,
  /* Retry only when running against a deployed URL (CI) */
  retries: URL ? 1 : 0,
  /* Opt out of parallel tests when running against a deployed URL (CI). */
  workers: URL ? 1 : 4,
  /* Reporter to use. See https://playwright.dev/docs/test-reporters */
  reporter: 'html',
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Base URL to use in actions like `await page.goto('/')`. */
    baseURL: URL || 'http://localhost:3000',
    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: 'on-first-retry',
  },
  /*
   * `projects` stores an array of devices that represent real devices. Each device
   * can have its own browser, width, height and other settings. Every test runs on
   * every device listed here, so the final number of test runs equals the number of
   * tests multiplied by the number of projects/devices:
   * 20 (total runs) = 5 (tests) x 4 (projects/devices).
   */
  projects: [
    {
      name: 'Desktop Chrome',
      use: {
        ...devices['Desktop Chrome'],
      },
    },
    {
      name: 'Moto G4',
      use: {
        ...devices['Moto G4'],
      },
    },
    {
      name: 'iPhone 13 Pro',
      use: {
        ...devices['iPhone 13 Pro'],
      },
    },
    {
      name: 'Desktop Safari',
      use: {
        ...devices['Desktop Safari'],
      },
    },
  ],
  expect: {
    // Timeout (in milliseconds) for each expect() assertion
    timeout: 10000,
    toMatchSnapshot: {
      // Snapshot comparison may differ by at most 7% of the pixels
      maxDiffPixelRatio: 0.07,
    },
    toHaveScreenshot: {
      // Screenshot comparison may differ by at most 7% of the pixels
      maxDiffPixelRatio: 0.07,
    },
  },
  // Maximum time (in milliseconds) a single test can run
  timeout: 30 * 1000,
});

Writing Your First Test

We will start with the simplest example – a unit test.

import test, { expect } from "@playwright/test";

const sum = (a: number, b: number) => a + b;

test.describe("Test sum", () => {
  test("adds 1 + 2 to equal 3", () => {
    expect(sum(1, 2)).toBe(3);
  });
});

This test suite tests the result of a function for a specific set of arguments.

To run the tests, run npx playwright test. This will run all your tests. To run a single test file, add its name after the command: npx playwright test yourTest. If your test file is named sum.spec.ts, the command would be npx playwright test sum.
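
Besides filtering by file name, you can temporarily focus a single test with test.only while developing. This is a small variation of the test above; note that the forbidOnly option in playwright.config.ts exists precisely to stop a stray .only from reaching CI:

import test, { expect } from "@playwright/test";

const sum = (a: number, b: number) => a + b;

test.describe("Test sum", () => {
  // Only this test runs while .only is present; remove it before committing,
  // otherwise forbidOnly in playwright.config.ts will fail the run.
  test.only("adds 1 + 2 to equal 3", () => {
    expect(sum(1, 2)).toBe(3);
  });
});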

Now let's write an integration test.

import test, { expect } from "@playwright/test";

test.describe("GET /users", () => {
  test("responds with json", async ({ request }) => {
    const response = await request.get("/users");

    expect(response.ok()).toBeTruthy();
  });
});

Note: for this kind of test, Playwright may be slower than the simpler and lighter Jest or Mocha, but we simply want to show that Playwright is capable of such tests too. The number of projects/devices in the config is arbitrary; to skip a test for a specific project or device, you can use test.skip(skipCondition).

import test, { expect } from "@playwright/test";
test.describe("GET /users", () => {
  test.skip(({ browserName }) => browserName !== "chromium", "Chromium only!");
  test("responds with json", async ({ request }) => {
    const response = await request.get("/users");
    expect(response.ok()).toBeTruthy();
  });
});

And now – e2e testing.

For e2e testing, you can script a scenario of user interaction with a web page or test the basic functionality of interactive components on the page.

Imagine we have a form with multiple fields such as name, phone number, and email address. Each field has input validation and an error state when the input is invalid. Instead of checking this by hand every time, we can write a test to check it automatically.

Our form has several fields:

  • Name,
  • Email,
  • Text,
  • Checkbox,
  • File uploader,
  • Date picker.

[Screenshot: the empty form]

Note: in the code examples we will limit ourselves to three fields – name, file uploader, and checkbox – because all text fields are handled in essentially the same way.

We can first try to submit a blank form to trigger validation errors and check if errors occur.

import test, { expect } from "@playwright/test";
import { join } from "path";

test.beforeEach(async ({ page }) => {
  await page.goto("https://example.com/contact");
});

test.describe("Contact form", () => {
  test("Check the validation", async ({ page }) => {
    // First, before interacting, we check that there are no errors on the page.
    const nameFieldError = "Name required";
    await expect(page.getByText(nameFieldError)).not.toBeAttached();
    const policyFieldError =
      "Please approve our privacy policy by ticking the box";
    await expect(page.getByText(policyFieldError)).not.toBeAttached();
    // Clicking the submit button triggers validation, so errors should appear on the page.
    await page.getByRole("button").click();
    // Check that the errors have appeared
    await expect(page.getByText(nameFieldError)).toBeAttached();
    await expect(page.getByText(policyFieldError)).toBeAttached();
  });
});

[Screenshot: validation errors after submitting the empty form]

Okay, the next step might be to populate the fields with data and then make sure there are no errors. Right at the end of the previous test we can add the following lines of code.

    // fill out the fields one by one and check that each value is correct
    const nameField = page.locator("#name");
    const nameValue = "Name";
    await nameField.fill(nameValue);
    await expect(nameField).toHaveValue(nameValue);
    const policyField = page.locator('input[id="important checkbox"]');
    await policyField.check();
    await expect(policyField).toBeChecked();
    const fileName = "test.pdf";
    await page
      .locator("#fileInput")
      .setInputFiles(join(process.cwd(), "/public/uploads/" + fileName));
    await expect(page.getByText(fileName)).toBeAttached();
    await expect(page.getByText(fileName)).toContainText(fileName);
    // Here we expect to see no more validation errors.
    await expect(page.getByText(nameFieldError)).not.toBeAttached();
    await expect(page.getByText(policyFieldError)).not.toBeAttached();

We can also add a UI regression test: if you are using git or any other version control system, before pushing changes to master you can check whether there are any UI changes in your branch (master screenshots are required for comparison).
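
Since the config above already sets maxDiffPixelRatio for toHaveScreenshot, one option is to use that assertion directly instead of combining page.screenshot() with toMatchSnapshot. A minimal sketch, with an arbitrary snapshot file name:

import test, { expect } from "@playwright/test";

test("form error state matches the stored screenshot", async ({ page }) => {
  await page.goto("https://example.com/contact");
  await page.getByRole("button").click();
  // Compares the page against a stored baseline image; the baseline is created
  // on the first run or when --update-snapshots is passed.
  await expect(page).toHaveScreenshot("form-error-state.png", { fullPage: true });
});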

We can also submit the form now and check how our API handles it. Note that this is closer to integration testing and is beyond the scope of this article.

Here is the code with all the pieces. I think it's best to keep this in one test to avoid race conditions.

import test, { expect } from "@playwright/test";
import { join } from "path";
test.beforeEach(async ({ page }) => {
  await page.goto("https://example.com/contact");
});
test.describe("Contact form", () => {
  test("Check the form", async ({ page }) => {
    // first, before interacting, we must check that there are no errors on the page.
    const nameFieldError = "Name required";
    await expect(page.getByText(nameFieldError)).not.toBeAttached();
    const policyFieldError =
      "Please approve our privacy policy by ticking the box";
    await expect(page.getByText(policyFieldError)).not.toBeAttached();
    // here, when clicked, we launch a check, and errors should appear on the page.
    await page.getByRole("button").click();
    // check if errors have appeared
    await expect(page.getByText(nameFieldError)).toBeAttached();
    await expect(page.getByText(policyFieldError)).toBeAttached();
    // check error state UI
    expect(await page.screenshot({ fullPage: true })).toMatchSnapshot(
      "form-error-state.png"
    );
    // fill out the fields one by one and check that the value is correct
    const nameField = page.locator("#name");
    const nameValue = "Name";
    await nameField.fill(nameValue);
    await expect(nameField).toHaveValue(nameValue);
    const policyField = page.locator('input[id="important checkbox"]');
    await policyField.check();
    await expect(policyField).toBeChecked();
    const fileName = "test.pdf";
    await page
      .locator("#fileInput")
      .setInputFiles(join(process.cwd(), "/public/uploads/" + fileName));
    await expect(page.getByText(fileName)).toBeAttached();
    await expect(page.getByText(fileName)).toContainText(fileName);
    // here we expect to see no more validation errors.
    await expect(page.getByText(nameFieldError)).not.toBeAttached();
    await expect(page.getByText(policyFieldError)).not.toBeAttached();
    // check filled form UI
    expect(await page.screenshot({ fullPage: true })).toMatchSnapshot(
      "fille-from.png"
    );
    // Submit the form
    await page.getByRole("button").click();
  });
});

Since we have UI tests, we essentially have to run our tests twice: first on master and then on our branch. Alternatively, we can use some kind of cache for the master screenshots.

Before running the tests, start the development environment: yarn run dev.
To run the tests locally, run npx playwright test --update-snapshots on master and npx playwright test on your current branch. To simplify things a bit, we can automate this with a single command:

  "scripts": {
    "test:local": "git checkout master && npx playwright test --update-snapshots && git checkout - && npx playwright test",
  },

Note: to make this work, your screenshots should be git-ignored: add /tests/*/*.png to your .gitignore.

Conclusion

As a result, we can say that Playwright is suitable not only for e2e testing but also for many other purposes: unit testing (you can test regular functions with the expect matcher), snapshot/UI regression testing (you can test specific isolated components by removing the rest from the page), and integration testing (by sending data to other APIs/services).
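
For the isolated-component case, one approach is to scope the screenshot assertion to a single locator instead of removing the rest of the page; a rough sketch, where the #contact-form selector is hypothetical:

import test, { expect } from "@playwright/test";

test("contact form UI has not changed", async ({ page }) => {
  await page.goto("https://example.com/contact");
  // Screenshot only the form element and compare it with its stored baseline
  await expect(page.locator("#contact-form")).toHaveScreenshot("contact-form.png");
});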

Playwright is also suitable for monitoring the performance of your site. We have a separate article on this topic.

In this article we covered only local testing, but Playwright really starts to shine on CI. We will cover CI with Playwright using GitHub Actions in a separate article.
