As front-end developers, we know the accessibility docs for our frameworks exist. But in practice, there's often a gap between those guides and the code that ships.
I am conducting a comparative analysis of this "practical gap." My first step was to systematically test the effectiveness of the tools themselves. My question: What do the "official" accessibility linters for React, Vue, and Angular actually detect?
The Test:
I first selected 9 specific WCAG 2.2 Success Criteria based on two main factors:
- High Impact on Users (e.g., labels, keyboard focus)
- Direct Impact on Coding (things a developer physically types)
I then built three identical, intentionally broken prototypes: one in React, one in Vue, and one in Angular. Each prototype was filled with the same set of violations (sketched in code below), including:
- `<span>`s used as labels
- `<label>`s with no `for` attribute
- `<div>`s with `onClick` handlers (fake buttons)
- Dynamic, low-contrast error messages
- No keyboard focus on the fake buttons
- No focus management on error
- No `role="alert"` for success messages
I then ran the recommended linters (jsx-a11y, vuejs-accessibility, @angular-eslint) and axe-core against each one.
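For context, the linter side of the setup is minimal. The sketch below shows an ESLint config using eslint-plugin-jsx-a11y's recommended preset for the React prototype; the exact configuration used in the study may differ, and the Vue and Angular prototypes use eslint-plugin-vuejs-accessibility and @angular-eslint in the same way.

```js
// .eslintrc.js for the React prototype (sketch, not the exact thesis config);
// the Vue and Angular prototypes swap in eslint-plugin-vuejs-accessibility
// and @angular-eslint respectively.
module.exports = {
  plugins: ["jsx-a11y"],
  extends: ["plugin:jsx-a11y/recommended"],
};
```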
The Linter Scorecard
Here are the results from the automated checks (Green = the tool CAUGHT it, Red = the tool MISSED it):
Finding 1: Linter effectiveness is not uniform.
This was the most interesting discovery. Look at Row 1 (a `<span>` used as a label).
This is a critical, non-semantic labeling pattern, yet the linters for React and Angular were completely silent.
The linter for Vue (vuejs-accessibility) was the only one to correctly flag that the input was missing a programmatic label.
Finding 2: Linters and Axe-core have different, complementary strengths.
The tools test different things:
- Linters (all three) were great at catching syntax and interaction problems, like the `<div>` with `onClick` (Rows 3 and 4).
- Axe-core (in all three frameworks) missed those "fake buttons," but was perfect at finding the semantic violations the linters missed, like the `<span>` labels and the `<div>` used as a `<form>` (Rows 1, 2, and 9).

This strongly indicates that relying on only one of these tools means you are shipping significant bugs; a sketch of the axe-core runtime check follows below.
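Unlike the linters, axe-core audits the rendered DOM at runtime, which is exactly why it sees the semantic problems the linters miss. Here is a minimal sketch of that kind of runtime check; the wiring details are an assumption, not the exact setup from the study.

```js
// run-axe.js: minimal runtime audit using the axe-core API (sketch)
import axe from "axe-core";

// axe.run() audits the DOM as currently rendered, so it checks the real HTML
// output rather than the framework templates.
axe.run(document).then((results) => {
  for (const violation of results.violations) {
    console.log(violation.id, "-", violation.help);
    for (const node of violation.nodes) {
      console.log("  ", node.html);
    }
  }
});
```

Because the audit runs at a single point in time, anything that only appears after user interaction can slip through, which leads directly to Finding 3.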
Finding 3: All automated tools have a critical blind spot for logic.
This is the key finding. Look at the wall of red at the bottom (Rows 5-8).
Neither the linters nor axe-core, in any framework, was able to detect any of the following (a sketch of the missing runtime logic follows this list):
1. Dynamic Low-Contrast Errors: Axe ran on page load, before the low-contrast error message appeared. It missed the bug.
2. Weak Focus Indicator: No tool complained about using the browser's default, barely-visible focus ring.
3. No Focus Management: No tool detected that when the form failed, the user's focus wasn't programmatically moved to the first error.
4. No Success Alert: No tool detected that the "Success!" message appeared silently, leaving screen reader users unaware.
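To illustrate what the tools cannot see, here is a sketch of the runtime logic whose absence went undetected: moving focus to the first error on failure, and announcing success with `role="alert"`. The names and the validation logic are illustrative, not the thesis code.

```jsx
// FixedForm.jsx: the runtime behaviour no tool asked for (illustrative names)
import { useRef, useState } from "react";

export default function FixedForm() {
  const [error, setError] = useState("");
  const [success, setSuccess] = useState(false);
  const errorRef = useRef(null);

  function handleSubmit(event) {
    event.preventDefault();
    const failed = true; // placeholder for real validation
    if (failed) {
      setError("Please enter a valid email address.");
      // Focus management: move keyboard and screen reader users to the error.
      requestAnimationFrame(() => errorRef.current && errorRef.current.focus());
    } else {
      setSuccess(true);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="email">Email</label>
      <input id="email" type="email" />
      <button type="submit">Submit</button>

      {/* Focusable error target: tabIndex={-1} lets us move focus here on failure */}
      {error && (
        <p ref={errorRef} tabIndex={-1}>
          {error}
        </p>
      )}

      {/* role="alert" makes the success message announce itself to screen readers */}
      {success && <p role="alert">Success!</p>}
    </form>
  );
}
```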
So, Why Does This Keep Happening? (My Research Question)
My technical analysis is done, and it's clear the tools are imperfect.
But this doesn't answer the human question: Why do these bugs (especially the logic-based ones) make it to production? Is it because we trust these tools too much? Is it impossible deadlines? Is it simply not a priority in our teams? Do we just add an `eslint-disable` comment and move on?
To complete my thesis, I need to cross-reference this technical data with the real-world experience of developers. If you're a dev with experience in React, Vue, or Angular, I would be extremely grateful for your perspective.
You can answer an anonymous, 3 to 5 minute academic survey on the practical challenges and perceptions of accessibility:
➡️ Survey Link (Google Forms)
Thanks for reading! I'll be happy to share the full results (technical and human) here when the study is done. =)
