Most frontend developers today use tools like Axe, WAVE, or Lighthouse to test accessibility.
And that’s a good start.
But there’s a gap that these tools don’t really cover:
They don’t tell you what a screen reader user actually hears.
The Problem: Accessibility ≠ Announcement
Let’s take a simple example:
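The snippet itself didn't survive into this version of the post, but based on the announcements below it was a plain, well-named button, something like:

```html
<!-- a semantic button with a visible, accessible name -->
<button type="submit">Submit Payment</button>
```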
From a rule-based perspective, this passes:
- It has an accessible name
- It uses a semantic element
But what does a screen reader actually announce?
NVDA: “Submit Payment, button”
VoiceOver: “Submit Payment, button”
JAWS: “Submit Payment, button”
So far, so good.
Now let’s tweak it slightly:
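The tweaked snippet is also missing here, but one common pattern that produces exactly this behavior is a clickable `<div>` styled to look like a button:

```html
<!-- looks like a button, but isn't one: no button role,
     no keyboard support, no "button" announcement -->
<div class="btn" onclick="submitPayment()">Submit</div>
```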
This might still pass some basic checks.
But a screen reader might announce:
“Submit”
Not:
“Submit, button”
That’s a completely different experience.
The Real Pain: Testing Screen Readers Is Hard
If you’ve ever tried to properly test this, you know the workflow:
Install NVDA (Windows)
Maybe use JAWS (paid, also Windows)
Use VoiceOver (macOS)
Navigate everything with a keyboard
Listen carefully to announcements
Repeat across browsers and flows
It works, but it's slow, manual, hard to automate, and OS-dependent.
In practice, what happens is:
Run Axe/Lighthouse and fix the obvious issues, then
do minimal screen reader testing (if any), then
ship.
The problem?
Most accessibility issues that affect real users show up in how things are announced, not just whether rules pass.
A Missing Layer in the Tooling
What’s missing is something in between:
Not just rule-based linting.
Not full manual screen reader testing.
But a way to preview how your markup will be interpreted by assistive technologies.
Fed up with this, I recently started building a tool to help with this exact problem:
👉 getspeakable.dev
You can paste in HTML and see predicted announcements for:
• NVDA
• JAWS
• VoiceOver
For example:
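Again, the example markup didn't survive extraction; an icon-only button with no accessible name reproduces the output below:

```html
<!-- icon-only button: the SVG is hidden from assistive tech,
     so the button has no accessible name at all -->
<button type="button">
  <svg aria-hidden="true" width="16" height="16" viewBox="0 0 16 16">
    <circle cx="6" cy="6" r="5" fill="none" stroke="currentColor"/>
    <line x1="10" y1="10" x2="15" y2="15" stroke="currentColor"/>
  </svg>
</button>
```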
Might produce something like:
“button”
Instead of:
“Search, button”
Which immediately tells you:
the element has no accessible name
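The fix is to give the control an accessible name; an `aria-label` is one option (visually hidden text, or a `<title>` inside the SVG, also work):

```html
<!-- aria-label gives the icon-only button an accessible name,
     so screen readers can announce "Search, button" -->
<button type="button" aria-label="Search">
  <svg aria-hidden="true" width="16" height="16" viewBox="0 0 16 16">
    <circle cx="6" cy="6" r="5" fill="none" stroke="currentColor"/>
    <line x1="10" y1="10" x2="15" y2="15" stroke="currentColor"/>
  </svg>
</button>
```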
To be clear, this isn't a replacement for real screen reader testing.
Instead, it fits earlier in the process: run linting tools (Axe, Lighthouse) first, use Speakable for announcement preview and regression checks, then do manual screen reader testing to validate the real user experience.
Catching announcement issues early is significantly cheaper than fixing them later.
