a11ySolutions
Accessibility issues that automated tools won't catch (from real audits)

I run accessibility audits regularly. One pattern keeps showing up:

The most critical issues are never in the static markup. They're in the interaction.

Automated tools are genuinely useful — they catch structural and semantic
issues fast. But they don't simulate real user behavior. And that's where
things actually break.

Here are the failure patterns I find most often:

Keyboard traps in modals — focus gets locked inside a dialog with no
way out unless you know to press Escape (and even then, sometimes it doesn't
work).
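A minimal sketch of what a working trap looks like. The selector list is simplified and the function names (`nextFocusIndex`, `trapFocus`) are illustrative, not from any particular library; the wrap-around logic is the part scanners never exercise:

```javascript
// Pure wrap-around logic: Tab moves forward, Shift+Tab backward,
// and focus cycles at either end instead of escaping the dialog.
function nextFocusIndex(current, count, shiftKey) {
  return shiftKey
    ? (current - 1 + count) % count
    : (current + 1) % count;
}

// Browser-only wiring (simplified focusable-element selector).
function trapFocus(dialog) {
  const selector =
    'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';
  dialog.addEventListener('keydown', (event) => {
    if (event.key !== 'Tab') return;
    const focusable = Array.from(dialog.querySelectorAll(selector));
    if (focusable.length === 0) return;
    const current = focusable.indexOf(document.activeElement);
    event.preventDefault();
    focusable[
      nextFocusIndex(Math.max(current, 0), focusable.length, event.shiftKey)
    ].focus();
  });
}
```

Escape handling and returning focus to the trigger element on close are separate pieces the dialog still needs; trapping Tab alone is not enough.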

Unpredictable focus jumps — after a button click or route change, focus
lands somewhere random. A screen reader user loses their place entirely.
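The fix is usually a deliberate focus target after the navigation completes. A sketch, assuming the new view has a main heading to land on (`focusAfterNavigation` is a hypothetical helper name):

```javascript
// After a client-side route change, move focus to the new view's
// main heading so a screen reader announces where the user landed.
function focusAfterNavigation(heading) {
  // Headings aren't focusable by default; tabindex="-1" allows
  // programmatic focus without adding the heading to the tab order.
  heading.setAttribute('tabindex', '-1');
  heading.focus();
}
```

Called after render, e.g. `focusAfterNavigation(document.querySelector('h1'))`, this keeps the reading position anchored to the new content instead of wherever the browser dropped it.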

Silent error messages — form validation runs, the error renders visually,
but aria-live is missing. Screen readers announce nothing.
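A sketch of the missing piece. It assumes an error container that exists in the DOM from initial render (live regions added at announcement time are often ignored); `announceError` is an illustrative name:

```javascript
// Assumed markup, present from first render:
//   <div id="form-errors" role="alert"></div>
// role="alert" implies aria-live="assertive", so updating the
// region's text content triggers a screen reader announcement.
function announceError(liveRegion, message) {
  liveRegion.textContent = message;
}
```

In practice, re-announcing the identical message may require clearing the region on a previous tick first; the essential point is that the visual error and the announced error come from the same state change.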

Forms that validate but can't be completed — the markup passes the
scanner, but real users hit a wall at step 2.

None of these show up in axe, Lighthouse, or WAVE. You only find them
when you actually use the product — keyboard-only, screen reader on,
cognitive load simulated.

Accessibility isn't about passing a scan. It's about whether people can
complete critical flows.

How are you testing beyond automated tools in your projects?
Keyboard-only passes? Screen reader sessions? Something else?
