Automated accessibility testing tools, such as axe-core by Deque, WAVE, and Lighthouse, are a bit like a spellcheck for web accessibility. They are really...
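To make the spellcheck comparison concrete, here is a minimal sketch of what one of these automated checks looks like, using the @axe-core/playwright wrapper (the URL is a placeholder):

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage has no automatically detectable violations', async ({ page }) => {
  // "example.com" is a placeholder; point this at your own app.
  await page.goto('https://example.com/');

  // Runs the axe-core rule set against the rendered page.
  const results = await new AxeBuilder({ page }).analyze();

  // An empty list here means "nothing the tool can see",
  // not "accessible".
  expect(results.violations).toEqual([]);
});
```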
This is an important distinction that too many teams miss. A Lighthouse 100 gives a false sense of security because automated tools can only catch about 30% of real accessibility issues. The ones they miss - logical tab order, meaningful focus management, screen reader announcement timing - are often the ones that actually block users. I've started adding manual keyboard-only navigation testing as a CI gate in my projects. It's not automated accessibility testing, but spending 2 minutes tabbing through your critical flows catches more real issues than any score.
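A rough sketch of what that kind of gate can look like in Playwright - it doesn't replace the manual pass, it just pins down a tab order you've already verified by hand. Everything here (the /login route, field ids, button label) is hypothetical and stands in for your own critical flow:

```ts
import { test, expect } from '@playwright/test';

test('login is operable with the keyboard alone', async ({ page }) => {
  // Assumes a baseURL in playwright.config.
  await page.goto('/login');

  // Tab through the form and assert focus lands where a
  // mouse user would click.
  await page.keyboard.press('Tab');
  await expect(page.locator('#email')).toBeFocused();

  await page.keyboard.press('Tab');
  await expect(page.locator('#password')).toBeFocused();

  await page.keyboard.press('Tab');
  await expect(page.getByRole('button', { name: 'Sign in' })).toBeFocused();

  // Activate with Enter rather than a click.
  await page.keyboard.press('Enter');
  await expect(page).toHaveURL(/dashboard/);
});
```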
This is an eye opener for me. I've been using Lighthouse to check accessibility on most of my projects. Good read, I'll keep this in mind.
I'm glad! This is what worries me. It's not your fault, it's down to the Google team - why set up the scoring like this?
This matches what we found running weekly scans on AI product landing pages. We audited 29 of them over a few months -- every single one passed axe-core's color contrast and alt-text checks. Perfect scores on those specific rules. But then we did keyboard-only walkthroughs and screen reader testing, and all 29 had structural WCAG failures that the automated tools either missed entirely or flagged as "needs review" (which nobody reviews).
The 30-40% detection range you mention feels right from our data too. The scary part isn't the score itself, it's that teams use it as a stopping point. "We got 95, ship it."
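That "needs review" bucket is at least easy to surface. axe-core returns those checks under `incomplete`, separately from `violations`, so a small sketch like this (the route is a placeholder) can print them where someone might actually read them:

```ts
import { test } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('log the "needs review" bucket instead of dropping it', async ({ page }) => {
  await page.goto('/'); // assumes a baseURL in playwright.config
  const results = await new AxeBuilder({ page }).analyze();

  // axe-core files rules it can't decide under `incomplete`.
  // CI setups that only assert on `violations` silently
  // discard this whole bucket.
  for (const check of results.incomplete) {
    console.warn(
      `NEEDS REVIEW: ${check.id} - ${check.help} (${check.nodes.length} nodes)`
    );
  }
});
```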
😔 Exactly, so many disabled people are still blocked.
The table is the evidence. A 90% automated score is actually 51% of real issues: if the tools only detect about 57% of what's there, then 0.90 × 0.57 ≈ 0.51. That's not a measurement gap. That's a lie told in percentages. The tools keep the 0-100 scale because 57% doesn't sell. 100% does. The real fix isn't better automation. It's admitting that accessibility can't be reduced to a number. But that doesn't fit on a dashboard. So the illusion persists.
"That's a lie told in percentages" Exactly you've nailed the problem around what "sells"
I have seen teams celebrate a 90% Lighthouse score while basic keyboard navigation was completely broken 😅
Automated tools are great for quick wins, but real issues only show up when you actually try using the product like a user. Score ≠ accessibility.
Exactly, why does Google Lighthouse do this? They're not selling people on scores; they're just miseducating a whole generation of developers about accessibility.
I use jest-axe to ensure my components pass minimal testing. It's useful as a sanity check for catching common programming mistakes. I do think a tool like Accessibility Insights (free) can help guide developers and testers toward better accessibility. While it has an automated segment, most of it requires manual interaction, and each test scenario maps to WCAG.
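For anyone who hasn't tried jest-axe, the setup is minimal - roughly this (the markup is just a placeholder; in a real suite you'd render your component and pass its container):

```ts
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

it('renders with no detectable violations', async () => {
  // Placeholder markup standing in for a rendered component.
  const html = '<button aria-label="Close dialog">×</button>';

  // Same caveat as every axe check: "no violations" means
  // "nothing the automated rules can see", not "accessible".
  expect(await axe(html)).toHaveNoViolations();
});
```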
They're great tools, but they're only part of the solution. The worry is that people fix for the tool and think they're done, when that's not the case.