So I built an accessibility scanner that uses Playwright + axe-core in a headless browser to scan websites for WCAG 2.2 violations, then uses the Anthropic API to generate AI code fixes for each one.
To test it, I ran crawls against 5 well-known UK brand websites (10 pages each):
Gymshark — 112 violations (25 critical, 48 serious)
Missguided — 58 violations (21 critical, 16 serious)
The Body Shop — 68 violations (10 critical, 11 serious)
JD Sports — 35 violations (1 critical, 13 serious)
Trainline — 26 violations (1 critical, 10 serious)
Most common issues: missing alt text on product images, form fields without labels, broken keyboard navigation, and insufficient colour contrast. All basic, well-understood violations — and they showed up on every single site.
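For a sense of scale, most of these categories come down to a one-attribute fix. A hypothetical before/after pair (the markup is illustrative only, not taken from any of the scanned sites):

```typescript
// Two of the most common axe failures and their minimal fixes,
// shown as HTML strings purely for illustration.
const missingAlt = {
  before: `<img src="/product.jpg">`,
  after: `<img src="/product.jpg" alt="Grey oversized hoodie, front view">`,
};

const unlabelledField = {
  before: `<input type="email" placeholder="Email">`,
  after: `<label for="newsletter-email">Email</label>
<input id="newsletter-email" type="email" placeholder="Email">`,
};

console.log(missingAlt.after, unlabelledField.after);
```

That's what makes the numbers above frustrating: none of these fixes require redesigns, just attention.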
The stack:
Next.js + TypeScript frontend
Playwright + axe-core via Browserless for scanning
Anthropic API (Claude) for generating code fixes
Supabase for auth and data
Stripe for payments
Vercel for deployment
The scanner injects axe-core 4.10.2 via page.evaluate (to bypass Trusted Types), runs the audit, then passes each violation plus its surrounding HTML context to Claude to generate a specific fix — not a generic suggestion, but the actual corrected code. Honestly, it's quite easy to replicate imo, but the combination works well.
It's live at viascan.dev if anyone wants to try it. Free scan, no signup.
Curious what other devs think — is accessibility something you actively test for in your workflow, or does it always get deprioritised?