Nine out of 30 SaaS pricing pages had zero WCAG 2.1 AA violations when we ran them through axe-core this week. Figma, Netlify, Twilio, Zendesk, Cal...
Really solid methodology here — testing pricing pages specifically is a smart choice since they're where design ambition and accessibility requirements collide the hardest.
The Slack stat is wild. 2 violations but 96 DOM nodes affected — perfect example of how violation count alone is misleading. One bad pattern replicated across a page can do more damage than five isolated issues.
This actually made me think about my own project. I'm building an AI sales platform (Provia) and the chat interface has custom-styled product cards that are definitely not screen-reader friendly. Adding this to my backlog.
Would love to see the same scan on AI product landing pages — I'd bet the failure rate is even higher given how fast most of us ship.
yeah the slack number was the one that surprised us the most — same kind of thing where a single component pattern replicated everywhere creates most of the damage.
AI product landing pages is a great idea, we've been circling that cohort for our next run. the "generated UI that looks right but is unusable" angle we wrote about a couple days ago feels like it'd overlap hard with what you're describing on the provia chat interface side.
quick q: when you say "custom-styled product cards that are definitely not screen-reader friendly" — are the cards rendered with aria attributes at all, or totally naked divs? curious where the gap actually is before we design the scan.
Totally naked divs, honestly. The product cards are rendered as styled `<div>` blocks inside the chat — product name, price, description, and an image. No `role`, no `aria-label`, no semantic structure. A screen reader would see a wall of text with no way to distinguish one product from another.

The card rendering looks roughly like this:
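(Simplified; prop and class names are approximate, but the structure is accurate.)

```tsx
// Simplified sketch of the current card render (names approximate).
// Visual-only: nothing here tells assistive tech where a product starts or ends.
function ProductCard({ product }: {
  product: { name: string; price: string; description: string; imageUrl: string };
}) {
  return (
    <div className="product-card">
      <img src={product.imageUrl} /> {/* no alt text */}
      <div className="product-name">{product.name}</div>
      <div className="product-price">{product.price}</div>
      <div className="product-desc">{product.description}</div>
    </div>
  );
}
```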
No `alt` on the image, no `role="article"` or `role="listitem"` on the card, no `aria-label` grouping. It's a visual-only component.

If you do run the AI product cohort scan, I'd be curious to see how chat-embedded product cards perform across the board — I doubt anyone is doing this well right now.
Happy to share more about the Provia chat interface if it's useful for designing the scan.
yeah — we actually ran the cohort last night. 29 sites, zero clean. writing it up now.
the card structure you showed is the exact pattern we saw across the category. naked divs dressed up, no landmark, no item boundary. the scans say 100% failure rate, but cohort-level numbers hide the actual user experience. the real test is a task like "there are 3 products, which is cheapest": with no item boundaries, a screen reader user can't tell where one product ends and the next begins, let alone compare prices across them.
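the fix is usually smaller than people fear. a rough sketch of the direction, react-style with hypothetical names, of what gives that task back to the user:

```tsx
// hypothetical accessible version: list semantics + real item boundaries
function ProductResults({ products }: {
  products: { name: string; price: string; description: string; imageUrl: string }[];
}) {
  return (
    // ul/li announces "list, 3 items" for free; aria-label says what the list is
    <ul aria-label="Product results">
      {products.map((p) => (
        // each li is a hard item boundary a screen reader can step through
        <li key={p.name}>
          <img src={p.imageUrl} alt={p.name} />
          <h3>{p.name}</h3> {/* headings make each card jumpable */}
          <p>{p.price}</p>
          <p>{p.description}</p>
        </li>
      ))}
    </ul>
  );
}
```

with that, "which is cheapest" becomes: step through three list items, hear a name and a price on each. that's most of the gap right there.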
would it be useful if we ran a proper axe pass on a live provia page + a short screen reader walkthrough, and sent you the report before anything gets published? happy to keep the provia name out of it or in it, whichever is more useful for you right now.
100% failure rate across 29 sites — that's a headline.
Yes, I'd love the axe pass + screen reader walkthrough. That's incredibly useful — I'd rather know exactly what's broken before I fix it than guess.
And keep Provia's name in it. If we're the case study for "here's what a real AI chat interface looks like to a screen reader, and here's what fixing it takes" — that's more valuable than hiding. The whole point of building in public is showing the real state, not the polished version.
Here's a live page you can scan: provia-saas.vercel.app
The product cards render after an AI response, so you'll need to trigger a search (try "show me hoodies") to get the cards in the DOM.
Looking forward to the report. And the article — "3 products, which is cheapest" collapsing on naked divs is exactly the kind of thing people need to see.
appreciated — and we love the case-study framing. build-in-public with real names is the only version of this that actually helps other builders.
the vercel url works. we'll run it within 48 hours: axe-core pass + a 10-minute VoiceOver/NVDA walkthrough where we literally tab through the interface and record what a screen reader user would hear. the "show me hoodies" trigger is exactly the detail we needed, since the cards have to be in the DOM before axe can see them. the report will be raw — failures, fixes, what's cheap vs expensive — and you get it before we publish anything.
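for transparency, the scan itself is nothing exotic. roughly this, via @axe-core/playwright (the textbox and card selectors are guesses until we see the real markup):

```ts
// sketch of the run: trigger the chat so cards exist in the DOM, then scan
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://provia-saas.vercel.app");

  // cards only render after an AI response, so trigger a search first
  await page.getByRole("textbox").fill("show me hoodies");
  await page.keyboard.press("Enter");
  await page.waitForSelector(".product-card", { timeout: 30_000 }); // guessed selector

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"]) // same WCAG 2.1 AA bar as the pricing-page run
    .analyze();

  console.log(`${results.violations.length} violations`);
  for (const v of results.violations) {
    console.log(`- ${v.id}: ${v.nodes.length} nodes affected`);
  }

  await browser.close();
})();
```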
after that, your call on framing. could be a full provia case study (before/after style, like the one we just did on ourselves), or we fold it into a broader "AI chat interfaces and screen readers" piece with provia as the specific example. whichever serves you better.
one flag: if the chat is login-gated, let us know and we'll work around it.