

Your accessibility score is lying to you

Chris on April 07, 2026

Automated accessibility testing tools, such as axe-core by Deque, WAVE, and Lighthouse, are a bit like a spellcheck for web accessibility. They are really...
Archit Mittal

This is an important distinction that too many teams miss. A Lighthouse 100 gives a false sense of security because automated tools can only catch about 30% of real accessibility issues. The ones they miss - logical tab order, meaningful focus management, screen reader announcement timing - are often the ones that actually block users. I've started adding manual keyboard-only navigation testing as a CI gate in my projects. It's not automated accessibility testing, but spending 2 minutes tabbing through your critical flows catches more real issues than any score.
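One part of that manual keyboard pass can at least be pre-screened mechanically: positive `tabindex` values override DOM order and are a common cause of illogical tab order. The sketch below is purely illustrative (the function name and data shape are made up; real testing means actually tabbing through a browser):

```javascript
// Hypothetical sketch: flag positive tabindex values, which jump the
// natural tab queue and commonly break logical tab order.
// This does NOT replace a real keyboard-only walkthrough.
function findTabOrderHazards(elements) {
  // elements: [{ tag, tabindex }] — tabindex may be undefined
  return elements
    .filter(el => Number(el.tabindex) > 0)
    .map(el => `${el.tag} has positive tabindex=${el.tabindex}`);
}

const hazards = findTabOrderHazards([
  { tag: 'a' },                      // natural DOM order: fine
  { tag: 'button', tabindex: '3' },  // jumps the queue: flagged
  { tag: 'input', tabindex: '0' },   // explicit 0: fine
]);
console.log(hazards); // → [ 'button has positive tabindex=3' ]
```

A check like this could run as a cheap CI gate, but it only catches one failure mode; focus management and announcement timing still need a human at the keyboard.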

Elmar Chavez

This is an eye opener for me. I've been using Lighthouse to check accessibility in most of my projects. Good read, I'll keep this in mind.

Chris

I'm glad! This is what worries me. It's not your fault; it's down to the Google team. Why set up the scoring like this?

AgentKit

This matches what we found running weekly scans on AI product landing pages. We audited 29 of them over a few months -- every single one passed axe-core's color contrast and alt-text checks. Perfect scores on those specific rules. But then we did keyboard-only walkthroughs and screen reader testing, and all 29 had structural WCAG failures that the automated tools either missed entirely or flagged as "needs review" (which nobody reviews).

The 30-40% detection range you mention feels right from our data too. The scary part isn't the score itself, it's that teams use it as a stopping point. "We got 95, ship it."

Chris

😔 Exactly, so many disabled people are still blocked.

Victor Okefie

The table is the evidence. A 90% automated score is actually 51% of real issues. That's not a measurement gap. That's a lie told in percentages. The tools keep the scale because 57% doesn't sell. 100% does. The real fix isn't better automation. It's admitting that accessibility can't be reduced to a number. But that doesn't fit on a dashboard. So the illusion persists.
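The arithmetic behind "a 90% automated score is actually 51% of real issues" is just multiplication, assuming (as the comment implies) that automated tools can only see roughly 57% of real-world issues. That coverage figure is the comment's premise, not a measured constant:

```javascript
// Assumption from the comment above, not a universal constant:
// automated tools can detect roughly 57% of real accessibility issues.
const detectableShare = 0.57; // fraction of real issues tools can see
const automatedScore = 0.90;  // fraction of *detectable* issues passed

// Share of ALL real issues that the 90% score actually covers:
const realCoverage = automatedScore * detectableShare;
console.log(`${Math.round(realCoverage * 100)}% of real issues`); // → "51% of real issues"
```

Swap in the lower 30-40% detection estimates mentioned elsewhere in the thread and a "perfect" 100 covers barely a third of what blocks users.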

Chris

"That's a lie told in percentages". Exactly, you've nailed the problem around what "sells".

Bhavin Sheth

I have seen teams celebrate a 90% Lighthouse score while basic keyboard navigation was completely broken 😅

Automated tools are great for quick wins, but real issues only show up when you actually try using the product like a user. Score ≠ accessibility.

Chris

Exactly. Why does Google Lighthouse do this? They're not selling people on scores; they're just miseducating a whole generation of developers about accessibility.

ShaynaProductions

I use axe-jest to ensure my components pass minimal testing. It's useful for catching common programming mistakes as a sanity check. I do think a tool like Accessibility Insights (free) can help guide developers and testers toward better accessibility. While it has an automated segment, most of it requires manual interactions, and each test scenario maps to WCAG.

Chris • Edited

They're great tools, but they're only part of the solution. The worry is that people fix for the tool and think they're done, when this is not the case.