Chaàbane LEMARED

How I automated 62% of Europe's RGAA accessibility criteria

In Europe, web accessibility became mandatory in June 2025 under the European Accessibility Act (EAA). But the EU doesn't define how to test — each country has its own standard. The French standard is RGAA 4.1: 106 criteria, published by the government, with precise test procedures.

Every tool out there (Axe, WAVE, Lighthouse) speaks WCAG. None of them map to RGAA natively.

So I built one.

The problem: RGAA ≠ WCAG

RGAA 4.1 is based on WCAG 2.1 AA, but it goes further:

  • 106 criteria instead of 78, each with a specific test methodology
  • 13 thematic areas (images, colors, links, forms, navigation...)
  • Precise test procedures — not just "what to achieve" but "how to verify"

When a public agency or a company under the EAA needs an audit, they need RGAA — not WCAG. And there was no SaaS tool for that.

What's automatable?

Out of 106 criteria, 66 (62%) can be tested automatically. The best coverage by theme:

  • Mandatory elements — 9/10 automatable (90%) — missing lang attribute, invalid HTML, missing page titles
  • Structure — 3/4 automatable (75%) — heading hierarchy, lists, ARIA landmarks
  • Navigation — 8/11 automatable (73%) — skip links, landmarks, consistent nav
  • Colors — 2/3 automatable (67%) — contrast ratios
  • Images — 5/9 automatable (56%) — alt text, decorative images
  • Forms — 7/13 automatable (54%) — labels, autocomplete, error messages

The lowest: Multimedia at 23% — most video/audio checks need human judgment.

The highest ROI comes from mandatory elements — 90% automatable. Missing lang attributes, invalid HTML, missing page titles. Easy wins that affect every single page.

The architecture

I used axe-core as the detection engine, but added a critical layer on top:

1. RGAA mapping

Every axe-core rule is mapped to one or more RGAA criteria. This isn't a simple 1:1 — some WCAG success criteria map to multiple RGAA criteria, and vice versa.

```js
// Example: axe rule "image-alt" maps to RGAA criteria 1.1 AND 1.2
{
  "image-alt": {
    rgaaCriteria: ["1.1", "1.2"],
    theme: "Images",
    testMethod: "automatic"
  }
}
```

The mapping is stored in a database with full referential integrity — criteria, themes, test methods, and the relationship to WCAG success criteria.
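To make the mapping concrete, here is a minimal sketch of how axe-core results could be folded into per-criterion counts. The table shape mirrors the example above, but the function name and the "html-has-lang" entry are my own illustrative assumptions, not the tool's actual code:

```typescript
// Hypothetical sketch: the mapping table mirrors the example above;
// the "html-has-lang" entry and all names are illustrative assumptions.
type RuleMapping = {
  rgaaCriteria: string[];
  theme: string;
  testMethod: "automatic" | "manual";
};

const mapping: Record<string, RuleMapping> = {
  "image-alt": { rgaaCriteria: ["1.1", "1.2"], theme: "Images", testMethod: "automatic" },
  "html-has-lang": { rgaaCriteria: ["8.3"], theme: "Mandatory elements", testMethod: "automatic" },
};

// axe-core reports one entry per rule, with one node per offending element.
function countByCriterion(violations: { id: string; nodes: unknown[] }[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const v of violations) {
    // A single axe rule can feed several RGAA criteria: count it under each.
    for (const criterion of mapping[v.id]?.rgaaCriteria ?? []) {
      counts.set(criterion, (counts.get(criterion) ?? 0) + v.nodes.length);
    }
  }
  return counts;
}
```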

2. Pattern detection — the killer feature

This is what makes the tool actually useful. Instead of reporting:

"287 images without alt text across 50 pages"

...it detects patterns:

"Your product images use the same component — fixing the alt attribute in this one place resolves 287 violations across 50 pages."

One fix = hundreds of pages fixed.

This is the difference between a 500-line CSV dump and an actionable plan. The pattern detection works by comparing the DOM structure of each violation — same CSS selector, same component hierarchy, same violation type → same pattern.
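The grouping step can be sketched roughly like this. Types and the normalization rule are hypothetical; the real comparison also considers component hierarchy, as described above:

```typescript
// Hypothetical sketch of pattern grouping: violations sharing a rule and the
// same structural selector are assumed to come from the same component.
type Violation = { ruleId: string; selector: string; page: string };

function groupIntoPatterns(violations: Violation[]): Map<string, Violation[]> {
  const patterns = new Map<string, Violation[]>();
  for (const v of violations) {
    // Drop positional pseudo-classes so the 3rd card and the 7th card
    // collapse into one structural key.
    const structural = v.selector.replace(/:nth-child\(\d+\)/g, "");
    const key = `${v.ruleId}|${structural}`;
    const group = patterns.get(key) ?? [];
    group.push(v);
    patterns.set(key, group);
  }
  return patterns;
}
```

A group that spans many pages is exactly the "287 violations, one fix" case: report the group once and point at the shared selector.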

3. Multi-page scanning

Real audits aren't one page. A proper RGAA audit requires testing a representative sample: homepage, contact page, login flow, content pages.

The tool:

  • Crawls the site and discovers pages
  • Detects page templates via DOM fingerprinting
  • Avoids scanning duplicate pages (e.g., 200 product pages with the same template → scan one, extrapolate)
  • Supports authenticated pages (dashboard, admin, settings)
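DOM fingerprinting can be approximated by hashing a page's tag skeleton. This is a sketch under the assumption that two pages built from the same template share the same tag sequence; the tool's actual heuristic may be more forgiving:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: two pages with the same tag skeleton (same template,
// different text and attributes) get the same fingerprint and are scanned once.
function fingerprint(html: string): string {
  // Keep only opening tag names, in order; text content varies per page,
  // the skeleton usually does not.
  const skeleton = [...html.matchAll(/<([a-z][a-z0-9-]*)/gi)]
    .map((m) => m[1].toLowerCase())
    .join(">");
  return createHash("sha256").update(skeleton).digest("hex").slice(0, 12);
}
```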

The stack

  • Nuxt 3 — SSR for marketing, SPA for dashboard
  • axe-core + Puppeteer — headless accessibility scanning
  • Prisma + PostgreSQL — audit data, users, subscriptions
  • Stripe — billing (Free → Enterprise tiers)
  • OpenAI — AI-powered fix suggestions adapted to the detected framework (Vue, React, Angular, WordPress)

CI/CD integration

An audit tool is only useful if it's part of the development workflow:

GitHub Action — runs RGAA checks on every PR:

```yaml
- uses: rgaaudit/action@v1
  with:
    url: ${{ env.STAGING_URL }}
    threshold: 75
    api-key: ${{ secrets.RGAA_KEY }}
```

If the score drops below the threshold, the PR is blocked.
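Conceptually, the gate is just a threshold comparison whose non-zero exit code fails the CI step. A minimal sketch, not the action's actual implementation:

```typescript
// Hypothetical sketch: CI marks the check as failed when the process
// exits non-zero, which blocks the PR.
function gate(score: number, threshold: number): number {
  if (score < threshold) {
    console.error(`RGAA score ${score}/100 is below threshold ${threshold}, blocking merge.`);
    return 1; // non-zero exit code fails the workflow step
  }
  return 0;
}
```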

REST API — for Jenkins, GitLab CI, or custom pipelines:

```bash
curl -X POST https://rgaaudit.fr/api/ci/scan \
  -H "X-API-Key: sk_..." \
  -d '{"url": "https://staging.mysite.com"}'
```

MCP Server — run audits from Claude Code, Cursor, or Copilot:

```bash
npx @rgaaudit/mcp-server
> rgaa_audit https://mysite.com
# Score RGAA: 72/100
# Patterns detected: 6
# Priority fixes: 3
```

What I learned

62% automation is the sweet spot. It handles the repetitive checks (contrast, alt text, heading structure) and frees up expert time for the 38% that requires human judgment — semantic relevance, keyboard navigation quality, screen reader experience.

Pattern grouping changes everything. Developers don't want a list of 500 violations. They want to know: "Fix this one component, and 200 pages are fixed." That's the insight that turns an audit report into an action plan.

Local standards are a real market gap. Every international tool speaks WCAG. Europe has 27 countries, many with their own accessibility standards. There's room for localized tools that speak the language auditors and regulators actually use.

Try it

The tool is live at rgaaudit.fr — free tier available (3 audits/month, 15 pages). The MCP server is on npm: @rgaaudit/mcp-server.

If you're dealing with European accessibility requirements, or just curious about how RGAA compares to WCAG, I'd love your feedback.


The European Accessibility Act has been in effect since June 2025. If you sell digital products or services in the EU, accessibility compliance is now mandatory.
