Nine out of 30 SaaS pricing pages had zero WCAG 2.1 AA violations when we ran them through axe-core this week. Figma, Netlify, Twilio, Zendesk, Calendly, Loom, Miro, Grammarly, Webflow. Clean results across the board.
The other 21 didn't come close.
What we tested and how
We pointed axe-core 4.11 at the pricing page of 30 SaaS products -- not their homepages, not their docs, specifically their pricing pages. The standard was WCAG 2.1 AA, and we ran every scan in a consistent headless Chromium environment with no browser extensions or user scripts that might interfere.
Why pricing pages? Because they're where the money changes hands. They tend to get heavy custom design treatment: comparison tables with intricate layouts, toggle switches between monthly and annual billing, gradient backgrounds behind plan names, animated feature lists. All of that increases the surface area for accessibility failures in ways that a simpler marketing page might not.
The scan completed on April 12, 2026. Every result below comes directly from axe-core output -- no manual evaluation, no subjective judgment calls.
The numbers
Across 30 sites, axe-core flagged 65 total violations touching 548 DOM nodes. The average was 2.2 violations per site, but that average hides a wide spread. Nine sites had nothing. Three sites -- Linear, Render, and Intercom -- had five violations each.
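If you want to reproduce this kind of roll-up yourself, the totals fall straight out of axe-core's JSON output. Here's a sketch in Node -- the `summarize` helper and the sample data are illustrative, not our actual cohort results:

```javascript
// Aggregate axe-core results across sites: total violations, affected
// DOM nodes, per-site average, and clean-site count. Each entry mirrors
// the shape of axe-core's output, where results.violations[i].nodes
// lists the affected elements.
function summarize(scans) {
  let violations = 0;
  let nodes = 0;
  for (const scan of scans) {
    violations += scan.violations.length;
    for (const v of scan.violations) nodes += v.nodes.length;
  }
  return {
    sites: scans.length,
    violations,
    nodes,
    avgPerSite: +(violations / scans.length).toFixed(1),
    clean: scans.filter((s) => s.violations.length === 0).length,
  };
}

// Hypothetical sample data, not the cohort in this post.
const sample = [
  { violations: [] }, // a clean site
  { violations: [{ id: "color-contrast", nodes: [{}, {}, {}] }] },
  {
    violations: [
      { id: "list", nodes: [{}, {}] },
      { id: "button-name", nodes: [{}] },
    ],
  },
];
console.log(summarize(sample));
// → { sites: 3, violations: 3, nodes: 6, avgPerSite: 1, clean: 1 }
```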
Color contrast was the most common single violation by a wide margin, appearing on 12 of the 30 pricing pages (40%). That tracks with what we see in basically every cohort we scan, but the prevalence on pricing pages is worth noting. These aren't obscure blog posts with inherited styles. Pricing pages get direct design attention, and still, 40% of them had text that didn't meet minimum contrast ratios against its background.
The second most common violation was list -- malformed list structures -- appearing on 8 sites. This one's a pricing page special. When you build a feature comparison table or a bulleted list of what's included in each tier, it's easy to use <div> elements styled to look like lists without actually being lists. Screen readers can't parse the structure, and the user loses the ability to navigate between items.
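The check behind axe's list rules is essentially a parent-child contract: a list item must sit directly inside a list container. A simplified sketch of that idea -- a toy node tree stands in for the real DOM, and this is not axe-core's actual implementation:

```javascript
// Simplified sketch of the "listitem" contract: every <li> must have a
// list element as its direct parent. Real DOM traversal is replaced
// here with a plain object tree of { tag, children }.
const LIST_CONTAINERS = new Set(["ul", "ol", "menu"]);

function findOrphanListItems(node, parent = null, orphans = []) {
  if (node.tag === "li" && (!parent || !LIST_CONTAINERS.has(parent.tag))) {
    orphans.push(node);
  }
  for (const child of node.children || []) {
    findOrphanListItems(child, node, orphans);
  }
  return orphans;
}

// A pricing-page feature "list" built from divs, with one stray <li>.
const tree = {
  tag: "div",
  children: [
    { tag: "li", children: [] }, // orphan: its parent is a div
    { tag: "ul", children: [{ tag: "li", children: [] }] }, // fine
  ],
};
console.log(findOrphanListItems(tree).length); // → 1
```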
After that: aria-allowed-attr on 5 sites, link-name and button-name each on 4 sites, and then a longer tail of ARIA-related issues.
Who had the most violations
Linear's pricing page had 5 violations affecting 14 DOM nodes, including a critical aria-required-parent issue and color contrast failures across 3 elements. The list markup was malformed too -- <li> elements outside proper list containers, which affected 8 nodes.
Render also had 5 violations, but its node count was the highest in the entire cohort: 82 affected DOM elements. The bulk of that was 45 nodes failing color contrast and 34 buttons without accessible names. When axe-core flags 34 buttons on a single page without discernible text, that usually points to an icon button pattern where the icons are decorative and no aria-label was added.
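"No discernible text" means the computed accessible name came back empty. The real accessible-name algorithm in the ARIA spec has many steps (aria-labelledby, title, associated labels, and more); a heavily simplified sketch of the two most common cases looks like this -- illustrative, not axe-core's implementation:

```javascript
// Heavily simplified accessible-name resolution for a button.
// The full ARIA accname algorithm handles many more sources;
// this sketch keeps just aria-label and text content.
function buttonName(btn) {
  if (btn.ariaLabel && btn.ariaLabel.trim()) return btn.ariaLabel.trim();
  return (btn.textContent || "").trim();
}

// Icon-only button: a decorative SVG contributes no text, no aria-label.
const iconButton = { ariaLabel: "", textContent: "" };
// The fix axe-core is asking for: name the control explicitly.
const fixedButton = { ariaLabel: "Compare plans", textContent: "" };

console.log(buttonName(iconButton) === ""); // → true (flagged)
console.log(buttonName(fixedButton)); // → Compare plans
```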
Intercom rounded out the top three with 5 violations and 28 affected nodes. Three of those violations were critical severity, including 17 elements with ARIA roles that lacked required parent roles and 6 buttons without accessible names.
Below the top three, Vercel, PlanetScale, Stripe, SendGrid, HubSpot, Monday, and Asana each had 4 violations. Asana's case is interesting -- it only had 4 violation types, but 73 DOM nodes were affected, nearly all of them from invalid ARIA roles (70 nodes). That's likely a single component pattern replicated across every feature row in their pricing table.
Slack had just 2 violations, but one of them -- aria-command-name -- hit 96 DOM nodes. Sometimes a low violation count masks a large blast radius.
Who got it right
The nine clean sites: Figma, Netlify, Twilio, Zendesk, Calendly, Loom, Miro, Grammarly, Webflow.
What do they have in common? Honestly, not as much as you'd hope for a neat narrative. They span different industries (design tools, communications, scheduling, writing, web hosting). They use different tech stacks. Some have elaborate pricing pages with feature grids; others keep it simple.
If there's a through-line, it might be that several of these companies have publicly stated accessibility commitments. Figma has talked about accessibility in their design tool itself. Webflow literally sells website building and has a vested interest in demonstrating that their own output is accessible. Twilio and Zendesk both operate in spaces where enterprise customers with accessibility requirements are a significant part of their revenue.
But I'm speculating. The data just says they passed. Drawing causal conclusions from nine data points would be overreach.
Why pricing pages specifically
We've scanned other cohorts before -- landing pages, documentation sites, forms. Pricing pages consistently perform worse, and we think there are a few structural reasons.
Pricing pages are marketing pages that behave like application interfaces. They have interactive elements (toggles, sliders, accordions for FAQ sections, comparison table filters) layered on top of heavy visual design. That combination creates accessibility failure modes that a static marketing page wouldn't have.
There's also the custom-build problem. When a product team builds a comparison table for their three pricing tiers, they're often working from a one-off Figma design rather than pulling from a shared accessible component system. Custom means untested. Untested means aria-allowed-attr violations slip through.
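To make aria-allowed-attr concrete: it fires when an ARIA attribute lands on a role that doesn't support it, for example aria-selected on a button (it belongs on tabs, options, and the like). A sketch of the check against a tiny hand-picked subset of the ARIA spec -- the real role-to-attribute table is far larger:

```javascript
// Tiny, hand-picked subset of which ARIA attributes each role supports.
// Illustrative only; the authoritative table lives in the WAI-ARIA spec.
const ALLOWED = {
  button: new Set(["aria-expanded", "aria-pressed", "aria-label"]),
  tab: new Set(["aria-selected", "aria-controls", "aria-label"]),
};

function disallowedAriaAttrs(role, attrs) {
  const allowed = ALLOWED[role] || new Set();
  return attrs.filter((a) => a.startsWith("aria-") && !allowed.has(a));
}

// A custom pricing toggle built as a button but given tab semantics:
console.log(disallowedAriaAttrs("button", ["aria-selected", "aria-label"]));
// → [ 'aria-selected' ]
```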
And then there's the visual hierarchy pressure. You want the recommended plan to stand out. You want the CTA button to pop. That pressure toward visual emphasis creates exactly the conditions where contrast ratios get sacrificed -- a light gray "per user/month" label against a white card background, or a pastel-colored "most popular" badge that doesn't meet the 4.5:1 ratio.
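That 4.5:1 threshold is mechanical, not aesthetic: WCAG defines contrast as (L1 + 0.05) / (L2 + 0.05) over relative luminance. Here's the standard WCAG 2.x formula as code -- the gray-on-white example is hypothetical, not taken from any scanned site:

```javascript
// WCAG 2.x relative luminance: linearize each sRGB channel, then
// weight by the standard coefficients.
function luminance([r, g, b]) {
  const lin = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// A light gray "per user/month" label (#999999) on a white card:
const gray = contrastRatio([153, 153, 153], [255, 255, 255]);
console.log(gray.toFixed(2), gray >= 4.5 ? "passes AA" : "fails AA");
// → 2.85 fails AA
```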
The list violations tell the same story from a different angle. Feature lists on pricing pages are almost never actual <ul> elements. They're styled <div> stacks with checkmark icons, because that's what looks good in the design. The visual result is fine. The semantic result is invisible to assistive technology.
What this doesn't tell you
Automated scanning catches a specific category of issues. axe-core is good at finding color contrast failures, missing labels, malformed ARIA, and structural problems. It's not good at evaluating whether a screen reader user can actually complete the task of understanding and comparing pricing tiers. It can't tell you whether the tab order makes sense, or whether the plan toggle between monthly and annual actually announces its state change.
So the nine sites with zero violations aren't necessarily fully accessible. And the 21 sites with violations aren't necessarily unusable. What the data does tell you is the minimum bar -- these are issues that an automated tool can catch in under 10 seconds per page, and 70% of well-funded SaaS companies haven't cleared that bar on one of their most important pages.
We'll run this cohort again in a few months and see what changes.
We run automated accessibility scans weekly on different website cohorts. Read more at blog.a11yfix.dev.
Top comments (6)
Really solid methodology here — testing pricing pages specifically is a smart choice since they're where design ambition and accessibility requirements collide the hardest.
The Slack stat is wild. 2 violations but 96 DOM nodes affected — perfect example of how violation count alone is misleading. One bad pattern replicated across a page can do more damage than five isolated issues.
This actually made me think about my own project. I'm building an AI sales platform (Provia) and the chat interface has custom-styled product cards that are definitely not screen-reader friendly. Adding this to my backlog.
Would love to see the same scan on AI product landing pages — I'd bet the failure rate is even higher given how fast most of us ship.
yeah the slack number was the one that surprised us the most — same kind of thing where a single component pattern replicated everywhere creates most of the damage.
AI product landing pages is a great idea, we've been circling that cohort for our next run. the "generated UI that looks right but is unusable" angle we wrote about a couple days ago feels like it'd overlap hard with what you're describing on the provia chat interface side.
quick q: when you say "custom-styled product cards that are definitely not screen-reader friendly" — are the cards rendered with aria attributes at all, or totally naked divs? curious where the gap actually is before we design the scan.
Totally naked divs, honestly. The product cards are rendered as styled <div> blocks inside the chat — product name, price, description, and an image. No role, no aria-label, no semantic structure. A screen reader would see a wall of text with no way to distinguish one product from another.
No alt on the image, no role="article" or role="listitem" on the card, no aria-label grouping. It's a visual-only component.
If you do run the AI product cohort scan, I'd be curious to see how chat-embedded product cards perform across the board — I doubt anyone is doing this well right now.
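For concreteness, here's a sketch of what an accessible version of that card could look like: a list container, an item boundary per card, a labelled group, and real alt text. Hypothetical markup, not the actual Provia component:

```javascript
// Render chat product cards with the semantics the naked-div version
// lacks: a list container, a listitem boundary per card, a group
// label, and image alt text. Hypothetical structure for illustration.
function renderCards(products) {
  const items = products
    .map(
      (p) => `  <li class="card" aria-label="${p.name}, ${p.price}">
    <img src="${p.image}" alt="${p.name}">
    <h3>${p.name}</h3>
    <p class="price">${p.price}</p>
    <p>${p.description}</p>
  </li>`
    )
    .join("\n");
  return `<ul class="cards" aria-label="Product results">\n${items}\n</ul>`;
}

const html = renderCards([
  { name: "Classic Hoodie", price: "$49", image: "/hoodie.png", description: "Cotton fleece." },
]);
console.log(html.includes('alt="Classic Hoodie"')); // → true
```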
Happy to share more about the Provia chat interface if it's useful for designing the scan.
yeah — we actually ran the cohort last night. 29 sites, zero clean. writing it up now.
the card structure you showed is the exact pattern we saw across the category. naked divs dressed up, no landmark, no item boundary. the scans say 100% failure rate but cohort-level numbers hide the actual user experience. "3 products, which is cheapest" is where the divs really collapse.
would it be useful if we ran a proper axe pass on a live provia page + a short screen reader walkthrough, and sent you the report before anything gets published? happy to keep the provia name out of it or in it, whichever is more useful for you right now.
100% failure rate across 29 sites — that's a headline.
Yes, I'd love the axe pass + screen reader walkthrough. That's incredibly useful — I'd rather know exactly what's broken before I fix it than guess.
And keep Provia's name in it. If we're the case study for "here's what a real AI chat interface looks like to a screen reader, and here's what fixing it takes" — that's more valuable than hiding. The whole point of building in public is showing the real state, not the polished version.
Here's a live page you can scan: provia-saas.vercel.app
The product cards render after an AI response, so you'll need to trigger a search (try "show me hoodies") to get the cards in the DOM.
Looking forward to the report. And the article — "3 products, which is cheapest" collapsing on naked divs is exactly the kind of thing people need to see.
appreciated — and we love the case-study framing. build-in-public with real names is the only version of this that actually helps other builders.
perfect — provia-saas.vercel.app works, and the "show me hoodies" trigger gets 2-3 product cards into the DOM without login, which is exactly what we need. we'll run it within 48 hours: axe-core pass + a 10-minute VoiceOver/NVDA walkthrough where we literally tab through the interface and record what a screen reader user would hear. the report will be raw — failures, fixes, what's cheap vs expensive — and you get it before we publish anything.
after that, your call on framing. could be a full provia case study (before/after style, like the one we just did on ourselves), or we fold it into a broader "AI chat interfaces and screen readers" piece with provia as the specific example. whichever serves you better.
one flag: if the chat is login-gated, let us know and we'll work around it.