Bojan for Subito, posted with Alessandro Grosselle

How We Automate Accessibility Testing with Playwright and Axe

At Subito, accessibility (a11y) is an important requirement for ensuring all users, regardless of their abilities or disabilities, can use our platform effectively. An accessible website improves the user experience for everyone, including those using assistive technologies like screen readers or alternative input devices.

In this article, we'll show you how we use Playwright combined with Axe (@axe-core/playwright) to automatically catch accessibility issues and integrate them into our CI pipeline.

Our Toolkit: Axe + Playwright

We chose Axe, an open-source library from Deque Systems, as our accessibility testing engine. It's well-regarded, easy to use, and provides a JavaScript API to run tests directly in the browser. The @axe-core/playwright package makes integration seamless.

And since we already rely on Playwright for visual regression testing and our end-to-end suite, adding accessibility checks right on top of that felt like the obvious next step.
No new tools to learn, just extending a setup we know well with Axe’s engine running inside the same Playwright workflows.

Configuration

First, we created a helper to get a pre-configured Axe instance. Our configuration focuses on WCAG 2.1 Level A and AA criteria.

What is WCAG? The Web Content Accessibility Guidelines (WCAG) are developed by the W3C to make web content more accessible.

  • Level A: The minimum level of conformance.
  • Level AA: The mid-range level we (and many others) target, as it addresses more advanced barriers.
  • Level AAA: The highest, most stringent level.

We also explicitly exclude certain elements that are outside our direct control, such as third-party advertisement slots, to avoid false positives.

// From /test/utils/axe.ts
import { Page } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

const getAxeInstance = (page: Page) => {
  return new AxeBuilder({ page })
    // We decided to target WCAG 2.1 A and AA success criteria
    .withTags(['wcag2a', 'wcag2aa'])
    // We exclude elements we don't control, like ads
    .exclude('[id^="google_ads_iframe_"]')
    .exclude('#skinadvtop2')
    .exclude('#subito_skin_id');
};

Implementation: Generating and Saving Reports

Next, we implemented another helper function, generateAxeReport, to run the analysis and save the results.

// From /test/utils/axe.ts
import { Page } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
import { Result } from 'axe-core';
import * as fs from 'fs';
import * as path from 'path';

// ... getAxeInstance code from above ...

export const generateAxeReport = async (
  name: string,
  page: Page,
  isMobile: boolean,
  includeSelector?: string
) => {
  let axe = getAxeInstance(page);

  // Optionally scope the analysis to a specific selector
  if (includeSelector) {
    axe = axe.include(includeSelector);
  }

  const results = await axe.analyze();
  const violations = results.violations;

  // Save the results to a JSON file
  await saveAccessibilityResults(name, violations, isMobile);

  return violations;
};

async function saveAccessibilityResults(
  fileName: string,
  violations: Array<Result>,
  isMobile: boolean
) {
  const outputDir = 'test/a11y/output';

  if (!fs.existsSync(outputDir)) {
    fs.mkdirSync(outputDir, { recursive: true }); // Create directory if it doesn't exist
  }

  const filePath = path.join(
    outputDir,
    `${fileName}-${isMobile ? 'mobile' : 'desktop'}.json`
  );

  // We map the violations to a clean object for serialization
  const escapedViolations = violations.map((violation) => {
    return {
      id: violation.id,
      impact: violation.impact,
      description: violation.description,
      help: violation.help,
      helpUrl: violation.helpUrl,
      nodes: violation.nodes, // The specific elements that failed
    };
  });

  fs.writeFileSync(filePath, JSON.stringify(escapedViolations, null, 2));
  console.log(`Accessibility results saved to ${filePath}`);
}

The A11y Test

With these helpers in place, adding an accessibility check to any Playwright test is incredibly simple.

// From /test/a11y/example.spec.ts
import { test } from '@playwright/test';
import { generateAxeReport } from '../utils/axe';

test('check Login page', async ({ page }) => {
  await page.goto('/login_form');
  await page.waitForLoadState('domcontentloaded');

  // Just call our helper!
  await generateAxeReport('login-page', page, false);
});
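Because generateAxeReport also accepts an isMobile flag and an optional includeSelector, the same helper can label a report for a mobile run and scope the analysis to a single component. Here's a sketch of what that could look like, with a hypothetical page and selector rather than one from our real suite:

// Hypothetical usage of the isMobile and includeSelector parameters
import { test } from '@playwright/test';
import { generateAxeReport } from '../utils/axe';

test('check search form', async ({ page, isMobile }) => {
  await page.goto('/');
  await page.waitForLoadState('domcontentloaded');

  // Only elements inside #search-form are analyzed;
  // isMobile comes from the Playwright project configuration
  await generateAxeReport('search-form', page, isMobile, '#search-form');
});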

Each such test generates a JSON report (for example, login-page-desktop.json): every run produces a structured JSON file with all the accessibility findings.
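To give an idea of the shape of that file, here is a trimmed, illustrative entry; the values are invented, but the fields match what saveAccessibilityResults serializes:

[
  {
    "id": "image-alt",
    "impact": "critical",
    "description": "Ensure <img> elements have alternative text or a role of none or presentation",
    "help": "Images must have alternative text",
    "helpUrl": "https://dequeuniversity.com/rules/axe/4.10/image-alt",
    "nodes": [
      {
        "html": "<img src=\"logo.png\">",
        "target": ["#header > img"],
        "failureSummary": "Fix any of the following: Element does not have an alt attribute"
      }
    ]
  }
]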

Integration with Continuous Integration (CI)

Our workflow is triggered every time our staging environment is updated. The action performs the following steps (a sketch of a comparable workflow follows the list):

  1. Runs the accessibility tests against a predefined list of critical pages.
  2. Generates the JSON reports.
  3. Updates or creates a dedicated GitHub Issue with the results whenever violations are detected.
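For illustration, a minimal GitHub Actions workflow along these lines might look like the following; the trigger condition, job layout, and reporting script are assumptions for the sketch, not our actual configuration:

# Illustrative sketch, not our real workflow
name: a11y-checks

on:
  deployment_status: # fires whenever a deployment changes state

jobs:
  a11y:
    # Only run after a successful deployment to staging
    if: github.event.deployment_status.state == 'success' && github.event.deployment.environment == 'staging'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      # Runs the a11y specs, which write the JSON reports to test/a11y/output
      - run: npx playwright test test/a11y
      # Hypothetical script that creates or updates the GitHub Issue from those reports
      - run: node scripts/report-a11y-issue.js
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}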

Here's what the automated report looks like when posted to our GitHub Issue, along with the detail of each violation found:

[Image: an automatically generated accessibility report in a GitHub issue]

Why a GitHub Issue? (And Not a Failing Build)

This is a key difference from our visual regression tests, which automatically open a PR and send a Slack notification to the engineer who introduced the visual change.

Since we’ve only recently introduced automated a11y checks, there’s naturally a lot of work to catch up on. We’re fixing issues progressively, but until the overall accessibility debt gets closer to zero, blocking or slowing down the pipeline wouldn’t be sustainable.

At the same time, a GitHub Issue gives us a persistent record of the accessibility debt; the repo owner is then responsible for triaging these findings, assessing their priority, and scheduling the fixes.

Below is an example of a pull request where we address a record previously logged in the GitHub Issue:

[Image: a pull request addressing a violation logged in the GitHub Issue]

What Automation Really Finds

We had high hopes for catching complex navigation issues, but the reality is that automated tests are best at finding basic problems.

What our tests do catch:

  • Missing alternative text for images (alt attributes)
  • Color contrast problems
  • Semantic HTML errors (e.g., improper heading structure)

What our tests don't catch:

  • Complex keyboard navigability issues
  • Clarity or comprehensibility of content

These more complex issues still require manual testing and review by accessibility experts (for now).

What's Next?

We plan to add Slack notifications to our GitHub Action; these notifications will fire only when new violations are introduced.
While the GitHub Issue tracks our overall a11y debt, a new problem introduced by a recent deployment to staging needs to be fixed immediately.
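One possible way to detect "new" violations, shown here purely as a sketch rather than our actual implementation, is to diff the current report against the previous run and alert only on entries that weren't there before:

// Hypothetical diffing script; file paths and the key format are illustrative
import * as fs from 'fs';

type SavedViolation = { id: string; nodes: Array<{ target: string[] }> };

const violationKeys = (file: string): Set<string> => {
  const violations: SavedViolation[] = JSON.parse(fs.readFileSync(file, 'utf-8'));
  // One key per failing element, so a new occurrence of an already-known rule also counts
  return new Set(
    violations.flatMap((v) => v.nodes.map((n) => `${v.id}:${n.target.join(' ')}`))
  );
};

const previous = violationKeys('baseline/login-page-desktop.json');
const current = violationKeys('test/a11y/output/login-page-desktop.json');

const newViolations = [...current].filter((key) => !previous.has(key));
if (newViolations.length > 0) {
  // This is where the Slack notification would go; the sketch just logs and exits
  console.error('New a11y violations introduced:', newViolations);
  process.exit(1);
}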

Conclusion

Automating accessibility testing with Playwright and Axe doesn’t find every a11y problem, but it gives us a baseline that runs on every staging update and helps us catch the obvious issues before they ever reach production.

There’s still plenty we want to explore, but this already feels like a solid step forward.

Check Out Our Code
