<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergi Sánchez</title>
    <description>The latest articles on DEV Community by Sergi Sánchez (@ssmancha).</description>
    <link>https://dev.to/ssmancha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3846505%2F87969161-37d6-4c0c-8666-56e86f1b2dcb.jpeg</url>
      <title>DEV Community: Sergi Sánchez</title>
      <link>https://dev.to/ssmancha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ssmancha"/>
    <language>en</language>
    <item>
      <title>The accessibility tree is the new API</title>
      <dc:creator>Sergi Sánchez</dc:creator>
      <pubDate>Wed, 15 Apr 2026 10:51:27 +0000</pubDate>
      <link>https://dev.to/ssmancha/the-accessibility-tree-is-the-new-api-1hm4</link>
      <guid>https://dev.to/ssmancha/the-accessibility-tree-is-the-new-api-1hm4</guid>
      <description>&lt;p&gt;There is a function call that every major AI browsing agent makes before it can do anything useful on a web page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;accessibility&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;snapshot&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It returns a JSON tree. Not the DOM, not a screenshot, but the accessibility tree: roles, names, states, and relationships, stripped of all visual styling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WebArea"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Checkout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"children"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"children"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"heading"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Your order"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"textbox"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Email address"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"textbox"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Card number"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"button"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pay now"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is what an AI agent sees when it looks at your checkout page. Not pixels, not CSS classes, not a visual rendering. Roles and names. It is also, almost exactly, what a screen reader sees. The same data structure, built for two entirely different audiences, thirty years apart.&lt;/p&gt;
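&lt;p&gt;To make this concrete, here is a minimal sketch of the traversal an agent performs over that snapshot: walk the tree and collect everything it can act on. The role set and helper name are illustrative, not any particular framework's API.&lt;/p&gt;

```javascript
// Sketch: enumerate actionable nodes in an accessibility snapshot.
// The snapshot shape matches the JSON above; the ACTIONABLE set is illustrative.
const ACTIONABLE = new Set(["button", "textbox", "link", "checkbox", "combobox"]);

function actionableNodes(node, found = []) {
  if (ACTIONABLE.has(node.role)) {
    found.push({ role: node.role, name: node.name });
  }
  for (const child of node.children ?? []) {
    actionableNodes(child, found);
  }
  return found;
}
```

&lt;p&gt;Run against the checkout snapshot above, this yields the two textboxes and the "Pay now" button; the heading is context, not a target. An unlabelled textbox would still appear, but with an empty name the agent has nothing to reason about.&lt;/p&gt;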

&lt;h2&gt;The same tree, two consumers&lt;/h2&gt;

&lt;p&gt;The accessibility tree was designed in the 1990s to give assistive technologies a structured way to understand user interfaces. Browsers build it from HTML semantics and ARIA attributes, screen readers like NVDA and VoiceOver traverse it to announce what is on screen, and platform APIs on every operating system expose it to any program that asks. For thirty years, the primary audience was assistive technology users: a population measured in the low millions, often underserved, and almost always treated as an edge case by the teams building the products they needed to use.&lt;/p&gt;

&lt;p&gt;Then AI agents started browsing the web. OpenAI's &lt;a href="https://openai.com/index/computer-using-agent/" rel="noopener noreferrer"&gt;Operator&lt;/a&gt;, Anthropic's computer use, Google's Project Mariner, Microsoft's Copilot Vision, and the open-source libraries that power thousands of custom agents all faced the same problem: they needed a machine-readable representation of web pages. The first approach was screenshots, and screenshots turned out to be slow, expensive in tokens, and fundamentally ambiguous. A button that says "Submit" on screen is just a rectangle of pixels until the model guesses correctly.&lt;/p&gt;

&lt;p&gt;The accessibility tree already had the answer. Every button has a role, every input has a name, every landmark has boundaries. The entire semantic structure of the page, in a format that fits in a few hundred tokens instead of the thousands required by a screenshot or a full DOM dump.&lt;/p&gt;

&lt;p&gt;So they used it. &lt;a href="https://github.com/browser-use/browser-use" rel="noopener noreferrer"&gt;browser-use&lt;/a&gt;, the open-source agent framework with over 85,000 GitHub stars, calls &lt;code&gt;page.accessibility.snapshot()&lt;/code&gt; as its primary observation method. Its tagline is literally "Make websites accessible for AI agents." Microsoft's &lt;a href="https://github.com/microsoft/playwright-mcp" rel="noopener noreferrer"&gt;Playwright MCP&lt;/a&gt; reads the accessibility tree as structured YAML, deliberately choosing accessibility data over visual rendering for their browser automation standard. &lt;a href="https://www.rtrvr.ai/blog/dom-intelligence-architecture" rel="noopener noreferrer"&gt;rtrvr.ai&lt;/a&gt; built a DOM Intelligence Library on ARIA roles and achieved an 81% success rate on WebBench, ahead of every screenshot-based approach tested, with screenshot-based agents taking two to three seconds per action while DOM-based agents operate in sub-second time.&lt;/p&gt;

&lt;p&gt;The testing industry arrived at the same conclusion years earlier. &lt;a href="https://playwright.dev/docs/locators" rel="noopener noreferrer"&gt;Playwright's&lt;/a&gt; recommended locator strategy is not CSS selectors, not XPath, not test IDs — it is &lt;code&gt;getByRole('button', { name: 'Submit' })&lt;/code&gt;, querying the accessibility tree directly. &lt;a href="https://testing-library.com/docs/queries/about#priority" rel="noopener noreferrer"&gt;Testing Library&lt;/a&gt; adopted the same priority independently. Both frameworks chose accessibility semantics because they are more stable than CSS classes, survive refactoring, work across frameworks, and implicitly assert that the element is accessible. If the locator fails, either the element does not exist or it is not accessible — the test failure is an accessibility bug report. AI agents are now making the same architectural choice, for the same reasons.&lt;/p&gt;

&lt;p&gt;The infrastructure that was built for people who cannot see the screen turns out to be the best interface for machines that do not have eyes at all.&lt;/p&gt;

&lt;h2&gt;What breaks when the tree is broken&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://agentlens.in/blog/agent-usability/" rel="noopener noreferrer"&gt;study by AgentLens&lt;/a&gt; found that &lt;strong&gt;68% of websites had barriers preventing AI agents from completing basic tasks&lt;/strong&gt; like submitting a contact form or browsing products. The agents are already visiting, and most sites are already failing them.&lt;/p&gt;

&lt;p&gt;The failure patterns will look familiar to anyone who has tested with a screen reader. An &lt;code&gt;&amp;lt;input&amp;gt;&lt;/code&gt; without a &lt;code&gt;&amp;lt;label&amp;gt;&lt;/code&gt; appears in the accessibility tree as &lt;code&gt;{ role: "textbox", name: "" }&lt;/code&gt;: a screen reader says "edit blank" and moves on, and an AI agent has no idea what to type. A &lt;code&gt;&amp;lt;div class="dropdown" onclick="toggle()"&amp;gt;&lt;/code&gt; does not register as an interactive element in the accessibility tree at all, which means a screen reader cannot operate it and an AI agent does not know it exists. Without &lt;code&gt;&amp;lt;nav&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;main&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;aside&amp;gt;&lt;/code&gt;, the accessibility tree is flat: screen reader users cannot jump between regions, and AI agents cannot distinguish navigation from content.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://arxiv.org/html/2602.09310" rel="noopener noreferrer"&gt;CHI 2026 paper&lt;/a&gt; measured the gap from the other direction. Their AI agent succeeded on 78% of standard tasks, but when users relied on assistive technology like screen magnifiers or keyboard-only navigation, that rate collapsed to 28%. The agent could handle the page, but it could not handle the page the way people with disabilities actually use it. The accessibility layer was not just an interface for the agent; it was the shared ground between the agent and the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Invisible to both screen readers and AI agents --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"btn"&lt;/span&gt; &lt;span class="na"&gt;onclick=&lt;/span&gt;&lt;span class="s"&gt;"submit()"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"btn-icon"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;svg&amp;gt;&lt;/span&gt;...&lt;span class="nt"&gt;&amp;lt;/svg&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!-- Visible to both --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt; &lt;span class="na"&gt;aria-label=&lt;/span&gt;&lt;span class="s"&gt;"Submit order"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;svg&lt;/span&gt; &lt;span class="na"&gt;aria-hidden=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt; &lt;span class="na"&gt;focusable=&lt;/span&gt;&lt;span class="s"&gt;"false"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;...&lt;span class="nt"&gt;&amp;lt;/svg&amp;gt;&lt;/span&gt;
  Submit order
&lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix for the AI agent is the same fix for the screen reader user. It has always been the same fix.&lt;/p&gt;

&lt;h2&gt;The triple win has data behind it&lt;/h2&gt;

&lt;p&gt;There used to be three separate arguments for structuring your HTML well: accessibility for disabled users, SEO for search engines, and usability for everyone. They were always the same argument wearing different hats, and now the data is catching up.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://accessibe.com/blog/knowledgebase/aeo-and-web-accessibility" rel="noopener noreferrer"&gt;Adobe Digital Insights&lt;/a&gt;, AI-driven traffic to US retail sites from platforms like ChatGPT and Perplexity surged &lt;strong&gt;4,700% year-over-year&lt;/strong&gt; by mid-2025. An &lt;a href="https://wellows.com/blog/google-ai-overviews-ranking-factors/" rel="noopener noreferrer"&gt;analysis of over 15,000 AI Overview results&lt;/a&gt; found that content scoring high on semantic completeness was 4.2 times more likely to be cited than content that buried its meaning in unstructured prose. A &lt;a href="https://www.semrush.com/news/242494-study-is-web-accessibility-key-to-driving-organic-traffic/" rel="noopener noreferrer"&gt;SEMrush study of 847 websites&lt;/a&gt; found that over 73% of those that implemented accessibility improvements saw measurable increases in organic traffic, with an average gain of 12%.&lt;/p&gt;

&lt;p&gt;Google has said it directly: structured data is "critical for modern search features because it is efficient, precise, and easy for machines to process." Microsoft's &lt;a href="https://news.microsoft.com/source/features/company-news/introducing-nlweb-bringing-conversational-interfaces-directly-to-the-web/" rel="noopener noreferrer"&gt;NLWeb project&lt;/a&gt;, led by R.V. Guha, the creator of Schema.org, takes this further by turning schema-marked-up content into AI-queryable data, with every NLWeb instance doubling as an MCP server. Search engines, screen readers, and AI agents all parse the same semantic layer. They always did. The difference is that the audience just scaled from millions to hundreds of millions.&lt;/p&gt;

&lt;h2&gt;The market multiplier&lt;/h2&gt;

&lt;p&gt;Screen reader users worldwide number in the low millions. That has been the accessibility market for thirty years: important, underserved, and easy for executives to deprioritise when the budget conversation starts.&lt;/p&gt;

&lt;p&gt;ChatGPT alone reported over 200 million weekly active users in mid-2024, and by early 2026 that number is approaching a billion. Even a small fraction of those users delegating tasks to agents means tens of millions of agent sessions per week interacting with real websites, far more than the entire global screen reader population. Add Operator, Mariner, Copilot, and the open-source ecosystem, and the number of non-human consumers of your accessibility tree is about to dwarf the number of human ones. &lt;a href="https://www.gartner.com/en/articles/intelligent-agent-in-ai" rel="noopener noreferrer"&gt;Gartner predicts&lt;/a&gt; that by 2028, 33% of enterprise software will include agentic AI, up from less than 1% in 2024.&lt;/p&gt;

&lt;p&gt;This is not a hypothetical. Shopify's CEO has been publicly building toward what he calls "agentic commerce": AI agents shopping, comparing, and purchasing on behalf of consumers, with Shopify launching Agentic Storefronts and a Universal Commerce Protocol with Google. Perplexity launched AI-powered shopping in late 2024 and expanded it with direct checkout through PayPal in 2025. Amazon hired the founding team of Adept, a company whose entire thesis was autonomous web browsing, and licensed its technology. If an AI agent cannot parse your product page, it cannot recommend your product. If it cannot fill your checkout form, it will fill a competitor's. The same economics that drove SEO adoption in the 2000s will drive accessibility adoption in the 2020s, not out of virtue, but because the checkout page that works for agents gets the sale.&lt;/p&gt;

&lt;p&gt;The W3C is already asking the question that follows from this. They held sessions in &lt;a href="https://www.w3.org/2025/03/26-ai-agents-minutes.html" rel="noopener noreferrer"&gt;March&lt;/a&gt; and &lt;a href="https://www.w3.org/2025/11/12-ai-and-web-minutes.html" rel="noopener noreferrer"&gt;November 2025&lt;/a&gt; on how AI agents will change the web platform, debating whether ARIA, designed for human assistive technologies, is sufficient for machine agents or whether the web needs a new semantic layer entirely. The early consensus is that ARIA already covers most of what agents need. The infrastructure has been here for years. The question is whether sites bother to implement it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The accessibility tree was built for people who cannot see. It is becoming the way machines see the web. The teams that invested in one are accidentally ready for the other.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>ai</category>
      <category>playwright</category>
    </item>
    <item>
      <title>The in-house illusion: why hiring an accessibility lead isn't a compliance strategy</title>
      <dc:creator>Sergi Sánchez</dc:creator>
      <pubDate>Wed, 08 Apr 2026 10:35:00 +0000</pubDate>
      <link>https://dev.to/ssmancha/the-in-house-illusion-why-hiring-an-accessibility-lead-isnt-a-compliance-strategy-5207</link>
      <guid>https://dev.to/ssmancha/the-in-house-illusion-why-hiring-an-accessibility-lead-isnt-a-compliance-strategy-5207</guid>
      <description>&lt;p&gt;A Forbes Tech Council post this week put a number on something the industry has been quietly avoiding. Accessibility settlements typically run &lt;strong&gt;$25,000 to $100,000 per case&lt;/strong&gt;, before legal fees, executive time, and the remediation work that has to happen anyway as part of any settlement.&lt;/p&gt;

&lt;p&gt;That's the bill. The interesting question is who keeps writing the cheque.&lt;/p&gt;

&lt;p&gt;It's almost never the company without an accessibility programme. It's the company that has one.&lt;/p&gt;

&lt;h2&gt;The reassuring hire&lt;/h2&gt;

&lt;p&gt;The pattern goes like this. Legal flags exposure. A board member reads an article. Someone forwards a link about the European Accessibility Act. The response is to hire an accessibility lead, or appoint a champion from the existing team, or stand up a working group.&lt;/p&gt;

&lt;p&gt;The role gets a title. The title gets a slide in the all-hands. The slide reassures everyone that accessibility is &lt;em&gt;being managed&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This is the in-house illusion. The work of &lt;em&gt;managing&lt;/em&gt; accessibility, the meetings, the audits, the quarterly reports, starts to feel indistinguishable from the work of &lt;em&gt;being accessible&lt;/em&gt;. They are not the same thing. One produces calendar invites. The other produces compliant software.&lt;/p&gt;

&lt;h2&gt;One person, every pull request&lt;/h2&gt;

&lt;p&gt;Pick any reasonably active codebase. Count the pull requests merged last month. Now imagine one person manually reviewing each one for keyboard traps, focus order, contrast ratios, ARIA misuse, heading hierarchy, form labelling, reflow behaviour, motion sensitivity, and the forty other things WCAG 2.2 actually asks for.&lt;/p&gt;

&lt;p&gt;They can't. Nobody can. The work scales with the codebase, not with headcount.&lt;/p&gt;

&lt;p&gt;So the accessibility lead does what any reasonable person would do. They prioritise. They focus on the highest-risk surfaces. They run quarterly audits. They write a style guide. They train the team. They do all the things a single human can do, and the codebase keeps shipping, and most of what ships never gets reviewed by them at all.&lt;/p&gt;

&lt;p&gt;Then a settlement letter arrives, and the question from the board is: "But we hired someone for this. What happened?"&lt;/p&gt;

&lt;p&gt;What happened is that hiring a person was never going to be enough. The work needed a system.&lt;/p&gt;

&lt;h2&gt;Where the lawsuits actually live&lt;/h2&gt;

&lt;p&gt;The gap between "we have an accessibility programme" and "every merge is compliant" is where the $25k to $100k cheques get written. We've written before about &lt;a href="https://jeikin.com/blog/the-enforcement-gap" rel="noopener noreferrer"&gt;the enforcement gap&lt;/a&gt;, the space between an AI finding issues and being able to prove they were fixed. The in-house illusion is the same gap, dressed up in headcount.&lt;/p&gt;

&lt;p&gt;A team with an accessibility lead and no enforcement pipeline is in the exact same legal position as a team with neither. The audit log doesn't say "but we tried." The plaintiff's lawyer doesn't care that there was a champion. The regulator wants to know which components were checked, when, against what criterion, and whether the fixes were verified. If the answer lives in the head of one person who reviewed twelve PRs out of three hundred, there is no answer.&lt;/p&gt;

&lt;p&gt;This is the part the Forbes piece gets right and most internal accessibility programmes get wrong. Settlements aren't priced on intent. They're priced on outcomes that can be evidenced, or the absence of them.&lt;/p&gt;

&lt;h2&gt;What a pipeline looks like&lt;/h2&gt;

&lt;p&gt;The companies not getting sued aren't the ones with the biggest accessibility teams. They're the ones who moved the work out of a single role and into the build itself.&lt;/p&gt;

&lt;p&gt;That looks like rules running on every pull request, not in a quarterly audit. It looks like findings going into a tracked system the moment they're detected, not into a chat transcript that disappears at the end of the session. It looks like fixes being verified against criteria before they merge, not waved through because someone said they were done. And it looks like an audit trail that exists by default, because every step of the cycle wrote to it automatically.&lt;/p&gt;
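&lt;p&gt;As a sketch of what "rules running on every pull request" can look like, here is a minimal CI job that scans each PR with an automated checker. It assumes a project that can build and serve itself locally; the commands and tool choice are illustrative, and automated scanning covers only part of WCAG.&lt;/p&gt;

```yaml
# .github/workflows/a11y.yml - illustrative sketch; adapt the build and serve
# steps to your stack. @axe-core/cli is one option among several scanners.
name: accessibility-scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      # Serve the built site in the background on its default port.
      - run: npx serve dist &amp;
      - run: sleep 3
      # --exit makes the job fail when violations are found.
      - run: npx @axe-core/cli http://localhost:3000 --exit
```

&lt;p&gt;The point is not the specific tool. It is that the check runs on every merge, unconditionally, and its failure blocks the merge instead of landing in a quarterly report.&lt;/p&gt;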

&lt;p&gt;The accessibility lead, in this version, isn't the bottleneck. They're the person who designs the pipeline, decides what the rules should be, handles the 40% of issues that genuinely need human judgement, and presents the evidence when someone asks for it. They're a force multiplier instead of a single point of failure.&lt;/p&gt;

&lt;h2&gt;The compliance framing&lt;/h2&gt;

&lt;p&gt;The other thing the Forbes piece does, almost incidentally, is reframe the conversation away from accessibility-as-virtue and towards accessibility-as-risk-management. This is the framing executives actually respond to. Nobody opens a budget meeting saying "make us more accessible." They open it saying "make us compliant with the law."&lt;/p&gt;

&lt;p&gt;That's not cynicism. It's how compliance gets funded. And it's why the in-house illusion is so dangerous. A virtue framing tolerates "we're trying our best." A risk framing doesn't, because the cheque amount is the same whether you tried or not.&lt;/p&gt;

&lt;p&gt;Compliance isn't a role. It's a pipeline. The role is what designs and runs the pipeline, but the pipeline is what produces the evidence that keeps the settlement letters in the drawer.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;Look at your last twenty merged pull requests. For each one, ask: who reviewed it for accessibility, against which criteria, and where is that record stored?&lt;/p&gt;

&lt;p&gt;If the answer is "our accessibility lead, when they have time, and it's in their head," the illusion is in the room.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jeikin.com" rel="noopener noreferrer"&gt;Jeikin&lt;/a&gt; is what we built to close that gap. Every PR gets the same review. Every finding gets tracked. Every fix gets verified. The accessibility lead stops being the bottleneck and starts being the person who proves it works.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Hiring someone to care about accessibility is a good first step. Treating that hire as the strategy is where the cheques start getting written.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>compliance</category>
      <category>wcag</category>
    </item>
    <item>
      <title>The enforcement gap: why finding issues was never the problem</title>
      <dc:creator>Sergi Sánchez</dc:creator>
      <pubDate>Wed, 08 Apr 2026 10:32:14 +0000</pubDate>
      <link>https://dev.to/ssmancha/the-enforcement-gap-why-finding-issues-was-never-the-problem-4b6p</link>
      <guid>https://dev.to/ssmancha/the-enforcement-gap-why-finding-issues-was-never-the-problem-4b6p</guid>
      <description>&lt;p&gt;Eightfold, a talent intelligence platform, recently shared something remarkable: they used AI agents to achieve WCAG 2.2 AA compliance in two months. The same work would have taken six to ten months manually.&lt;/p&gt;

&lt;p&gt;The headline is impressive. But the interesting part isn't how fast they found the issues. It's what happened after they found them: every fix was reviewed by humans, verified against criteria, and tracked through to completion. The AI did the finding. A system did the enforcement.&lt;/p&gt;

&lt;p&gt;Most teams trying the same approach today will get the first part right and miss the second entirely.&lt;/p&gt;

&lt;h2&gt;Everyone is shipping accessibility agents&lt;/h2&gt;

&lt;p&gt;The landscape changed fast. In the last few months alone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An open-source project called Community Access released &lt;a href="https://github.com/Community-Access/accessibility-agents" rel="noopener noreferrer"&gt;57 accessibility agents&lt;/a&gt; for Claude Code, GitHub Copilot, and Claude Desktop. They enforce WCAG 2.2 AA by intercepting every prompt and delegating to specialist reviewers.&lt;/li&gt;
&lt;li&gt;BrowserStack launched accessibility DevTools that lint code in real time, detecting WCAG violations before a commit even happens.&lt;/li&gt;
&lt;li&gt;Deque shipped an &lt;a href="https://www.deque.com/axe/ai/" rel="noopener noreferrer"&gt;axe MCP server&lt;/a&gt;, connecting their scanning engine directly to AI coding assistants.&lt;/li&gt;
&lt;li&gt;Siteimprove published a framework for what they call "agentic accessibility," where AI agents handle compliance tasks autonomously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is clear: accessibility tooling is moving into the places where code is written. The era of running a separate scan after the fact is ending.&lt;/p&gt;

&lt;p&gt;This is genuinely good. AI coding agents are better at finding structural accessibility issues than most developers working from memory. They don't forget to check heading hierarchy. They don't skip alt text on the fifteenth image. They apply rules consistently, across every file, every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding issues is a solved problem. The hard part was never finding.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What happens after finding&lt;/h2&gt;

&lt;p&gt;Here's the question nobody is answering well: once the AI finds 40 accessibility issues in your codebase, then what?&lt;/p&gt;

&lt;p&gt;The AI fixes them. Some of them. In this session. And tomorrow, a new conversation starts with no memory of what happened. A different developer runs a different agent and gets a different set of findings. Nobody knows which issues from Tuesday were actually resolved. Nobody knows if the fixes introduced new problems. Nobody can produce a list of what was reviewed and when.&lt;/p&gt;

&lt;p&gt;This is the enforcement gap: the space between "an AI found issues" and "we can prove those issues were fixed, verified, and tracked."&lt;/p&gt;

&lt;p&gt;It shows up in three specific ways.&lt;/p&gt;

&lt;h3&gt;Fixes without verification&lt;/h3&gt;

&lt;p&gt;An AI agent adds &lt;code&gt;aria-label="navigation"&lt;/code&gt; to a &lt;code&gt;&amp;lt;nav&amp;gt;&lt;/code&gt; element. Is that the right label? Does it conflict with an existing &lt;code&gt;aria-labelledby&lt;/code&gt;? Does the computed accessible name make sense in context? The AI made a change and moved on. Nobody checked whether the change actually improved anything.&lt;/p&gt;

&lt;p&gt;Automated scanning tools can verify some of these fixes. APCA can check contrast ratios against perceptual thresholds. axe-core can validate the rendered DOM. But if these verification steps aren't connected to the fix workflow, they don't happen.&lt;/p&gt;
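&lt;p&gt;Connecting verification to the fix workflow does not require anything exotic. As a minimal sketch, here is the classic WCAG 2.x contrast-ratio computation (the older formula, not APCA's perceptual model) wired into a pass/fail gate that could run before a finding is closed:&lt;/p&gt;

```javascript
// Sketch: WCAG 2.x contrast ratio as a verification gate.
// Colours are [r, g, b] arrays with 0-255 channels.
function luminance(rgb) {
  const linear = rgb.map(function (v) {
    const s = v / 255;
    // sRGB linearisation per the WCAG 2.x relative luminance definition.
    return s > 0.03928 ? Math.pow((s + 0.055) / 1.055, 2.4) : s / 12.92;
  });
  return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2];
}

function contrastRatio(fg, bg) {
  const hi = Math.max(luminance(fg), luminance(bg));
  const lo = Math.min(luminance(fg), luminance(bg));
  return (hi + 0.05) / (lo + 0.05);
}

// AA for normal-size text requires at least 4.5:1.
function passesAA(fg, bg) {
  return contrastRatio(fg, bg) >= 4.5;
}
```

&lt;p&gt;If the ratio after a fix still fails the threshold, the finding stays open. The shape generalises to any automatable check: compute, compare against the criterion, record the result.&lt;/p&gt;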

&lt;h3&gt;Findings without persistence&lt;/h3&gt;

&lt;p&gt;Every AI coding session starts from zero. The agent doesn't know that it found a keyboard trap in the modal component last Wednesday, that a developer partially fixed it on Thursday, or that the fix broke focus management in a different component on Friday.&lt;/p&gt;

&lt;p&gt;Without persistent tracking, the same issues get rediscovered, re-reported, and re-fixed. Or worse: they get found once, partially addressed, and forgotten.&lt;/p&gt;

&lt;h3&gt;Evidence without structure&lt;/h3&gt;

&lt;p&gt;A compliance officer asks: "Which components were reviewed for accessibility? When? What issues were found? Were they resolved?"&lt;/p&gt;

&lt;p&gt;If the answer lives in chat transcripts scattered across multiple AI sessions, it's not evidence. It's archaeology.&lt;/p&gt;

&lt;h2&gt;Why this matters now&lt;/h2&gt;

&lt;p&gt;Since June 2025, the European Accessibility Act has been in active enforcement across Europe. The penalties are real and vary by country: up to €250,000 in France, up to €600,000 in Spain, with some member states allowing consumers to initiate civil proceedings directly. In the US, ADA lawsuits continue to climb. The UK Equality Act already covers digital services.&lt;/p&gt;

&lt;p&gt;The regulatory question has shifted from "should products be accessible?" to &lt;strong&gt;"can you demonstrate that they are?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"We use AI agents that follow WCAG rules" is a statement about intent. Regulators want evidence of outcomes: what was checked, what was found, what was fixed, how it was verified. That requires a system, not a prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  What closing the loop looks like
&lt;/h2&gt;

&lt;p&gt;The teams getting this right share a common pattern. They don't just find issues. They enforce a cycle:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find.&lt;/strong&gt; AI agents and automated scanners identify accessibility barriers in the code. Static analysis catches structural issues. Runtime scanning catches rendered DOM issues. Guided review handles the 40% that resists automation entirely: focus order, cognitive load, content reflow, color vision accessibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Report.&lt;/strong&gt; Every finding goes into a persistent tracking system with the specific WCAG criterion, the affected component, the severity, and a plain-language explanation of who is impacted. Not a chat transcript. Not a terminal log. A structured record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix.&lt;/strong&gt; The developer (or the AI, with guidance) addresses the issue. The fix is scoped to the specific finding, not a vague "make it accessible" pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify.&lt;/strong&gt; The fix is checked against quality criteria. Did the contrast ratio actually improve? Does the screen reader announce the correct label? Does keyboard navigation work through the component without traps? If the verification fails, the issue stays open.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evidence.&lt;/strong&gt; The entire cycle is preserved. What was found, when, by whom, how it was fixed, whether it passed verification. This is what an auditor can review. This is what survives team changes, deadline pressure, and the six months between audits.&lt;/p&gt;

&lt;p&gt;Skip any step and the gap reappears. Find without report means invisible work. Fix without verify means false confidence. Verify without evidence means unprovable compliance.&lt;/p&gt;
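&lt;p&gt;The cycle is small enough to sketch as data plus one guard. The field names below are illustrative, not any particular tool's schema; the point is that a failed verification reopens the finding instead of closing it, and the history array is the evidence trail:&lt;/p&gt;

```javascript
// Minimal sketch of the find/report/fix/verify loop as a state machine.
// Field names are hypothetical; only the transitions matter.
function createFinding(criterion, component, severity) {
  return { criterion, component, severity, status: "open", history: [] };
}

function recordFix(finding, description) {
  finding.history.push({ step: "fix", description });
  finding.status = "fixed-unverified";
}

function verify(finding, checkPassed) {
  finding.history.push({ step: "verify", passed: checkPassed });
  // A failed verification sends the finding back to open, not to done.
  finding.status = checkPassed ? "verified" : "open";
  return finding.status;
}

const finding = createFinding("1.4.3", "SubmitButton", "serious");
recordFix(finding, "raised text color contrast");
console.log(verify(finding, false)); // "open" — the fix did not clear the check
```

&lt;p&gt;Everything the auditor asks for in the Evidence step is a query over records like this one.&lt;/p&gt;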

&lt;h2&gt;
  
  
  The WCAG 3.0 signal
&lt;/h2&gt;

&lt;p&gt;The W3C published a &lt;a href="https://www.w3.org/TR/2026/WD-wcag-3.0-20260303/" rel="noopener noreferrer"&gt;new Working Draft of WCAG 3.0&lt;/a&gt; in March 2026, introducing 174 new outcomes. The shift from "success criteria" to "outcomes" isn't just naming. It reflects a move toward measuring results, not just checking boxes.&lt;/p&gt;

&lt;p&gt;WCAG 3.0 won't be finalized until 2028 at the earliest. But the direction is clear: the next version of the world's accessibility standard will ask not just "does this element have an alt attribute?" but "does this content achieve an accessible outcome for the people who use it?"&lt;/p&gt;

&lt;p&gt;That's an enforcement question, not a finding question. And it makes the gap between "AI found issues" and "we can prove outcomes" even wider for teams without a system to close it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The landscape converging
&lt;/h2&gt;

&lt;p&gt;Something interesting is happening. The accessibility tool market and the AI coding tool market are merging.&lt;/p&gt;

&lt;p&gt;Deque, the biggest name in accessibility scanning, ships an MCP server. Community Access builds agents that run inside coding assistants. BrowserStack adds real-time accessibility checking to its DevTools. TestParty embeds remediation into GitHub workflows.&lt;/p&gt;

&lt;p&gt;At the same time, MCP itself is maturing. The ecosystem hit 97 million monthly SDK downloads. Marketplace platforms like MCP-Hive are adding billing layers for commercial tool servers. The infrastructure for connecting AI agents to specialized tools is becoming production-ready.&lt;/p&gt;

&lt;p&gt;The teams that treat accessibility as something the AI handles "by default" will discover the gap when a regulator or client asks for evidence. The teams that connect their AI to an enforcement system will have the answer ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Ask your AI coding assistant to review a component for accessibility. It will probably find real issues. Now ask it: which components have already been reviewed? What was found last week? Can you show me the evidence?&lt;/p&gt;

&lt;p&gt;The silence after those questions is the enforcement gap.&lt;/p&gt;

&lt;p&gt;Closing it is what turns "we care about accessibility" from a claim into a fact.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building &lt;a href="https://jeikin.com" rel="noopener noreferrer"&gt;Jeikin&lt;/a&gt;, an accessibility compliance tool that works inside AI coding agents. Instead of overlays or separate audits, it checks your actual code and tracks evidence on a dashboard. Try it with &lt;code&gt;npx jeikin&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>ai</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Accessibility belongs where developers already work</title>
      <dc:creator>Sergi Sánchez</dc:creator>
      <pubDate>Mon, 30 Mar 2026 11:51:58 +0000</pubDate>
      <link>https://dev.to/ssmancha/accessibility-belongs-where-developers-already-work-4mf4</link>
      <guid>https://dev.to/ssmancha/accessibility-belongs-where-developers-already-work-4mf4</guid>
      <description>&lt;p&gt;A developer I spoke with recently said something that stuck with me: "I can just add accessibility rules to my CLAUDE.md. Why would I need a tool for that?"&lt;/p&gt;

&lt;p&gt;He's right that you can. You can write "follow WCAG AA" in your AI instructions and your coding assistant will try. It will add alt text sometimes. It will use semantic HTML when it remembers. It will suggest aria attributes in contexts where it has seen them before.&lt;/p&gt;

&lt;p&gt;But try this: ask the same AI to review a codebase for accessibility tomorrow. It won't remember what it found yesterday. Ask it whether a fix it applied last week actually passed. It can't tell you. Ask it to prove to your compliance officer that every component was reviewed. There's nothing to show.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instructions without enforcement are suggestions.&lt;/strong&gt; And suggestions don't survive deadlines, team changes, or the EAA inspector who shows up asking for evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The enforcement gap
&lt;/h2&gt;

&lt;p&gt;Since June 2025, the European Accessibility Act has been in active enforcement. France issued legal notices to four major retailers within days. Penalties reach up to three million euros or 4% of annual revenue. The ADA lawsuit count in the US keeps climbing. The UK Equality Act already covers websites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question for development teams has shifted from "should we care about accessibility?" to "can we prove we do?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that's where the gap appears. An AI coding assistant can write accessible code. But it can't track what it reviewed, verify that fixes actually passed quality checks, or produce an evidence trail for an auditor. Those are system-level capabilities, not instruction-level ones.&lt;/p&gt;

&lt;p&gt;The difference matters. "We told our AI to follow WCAG" is not compliance evidence. "Here's a dashboard showing 86 criteria evaluated, 12 issues found, 12 fixes verified" is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the AI sees versus what it misses
&lt;/h2&gt;

&lt;p&gt;WCAG 2.2 has 86 success criteria across three conformance levels. Some are things AI handles well: structural issues like heading hierarchy, missing alt text, buttons without labels. A well-prompted AI catches these reliably.&lt;/p&gt;
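&lt;p&gt;The structural checks in that first group are mechanical. A heading-hierarchy check, for example, reduces to scanning levels in source order; this is a sketch, not any particular linter's rule:&lt;/p&gt;

```javascript
// Flag heading-level skips (e.g. an h2 followed directly by an h4),
// one of the structural issues static analysis handles well. Takes the
// document's heading levels in source order.
function headingSkips(levels) {
  const issues = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] - levels[i - 1] > 1) {
      issues.push(`h${levels[i - 1]} jumps to h${levels[i]} at position ${i}`);
    }
  }
  return issues;
}

console.log(headingSkips([1, 2, 4])); // [ 'h2 jumps to h4 at position 2' ]
```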

&lt;p&gt;But many criteria resist automation entirely. Focus order only shows up at runtime. Contrast ratios depend on computed styles, not source code. Keyboard traps require simulating tab traversal. Content reflow at 320px needs a real viewport. Color vision accessibility requires simulating what deuteranopia, protanopia, and tritanopia actually look like.&lt;/p&gt;
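&lt;p&gt;The keyboard-trap case illustrates why. In practice an automation tool would press &lt;code&gt;Tab&lt;/code&gt; repeatedly in a real browser and record which element receives focus each time; the trap test itself then reduces to cycle detection over that recording. The function and its inputs below are illustrative:&lt;/p&gt;

```javascript
// Given the ids of elements focused on successive Tab presses (recorded
// from a real browser session), a keyboard trap is a focus cycle that
// never leaves the component under test.
function isKeyboardTrap(visitedIds, componentIds) {
  const inside = new Set(componentIds);
  const firstSeenAt = new Map();
  for (let i = 0; i < visitedIds.length; i++) {
    const id = visitedIds[i];
    if (firstSeenAt.has(id)) {
      // Focus revisited an element: everything since its first visit is the cycle.
      const cycle = visitedIds.slice(firstSeenAt.get(id), i);
      return cycle.every((member) => inside.has(member));
    }
    firstSeenAt.set(id, i);
  }
  return false; // no cycle observed in this sample
}

// Focus bounces between the two modal buttons and never escapes: a trap.
console.log(
  isKeyboardTrap(
    ["skip-link", "modal-ok", "modal-cancel", "modal-ok"],
    ["modal-ok", "modal-cancel"]
  )
); // true
```

&lt;p&gt;The logic is trivial; the input only exists at runtime. That is the whole argument for runtime scanning as its own layer.&lt;/p&gt;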

&lt;p&gt;The industry is converging on a layered approach. Static analysis (ESLint rules, code patterns) catches about 30% of issues. Runtime scanning (&lt;a href="https://github.com/dequelabs/axe-core" rel="noopener noreferrer"&gt;axe-core&lt;/a&gt; against the rendered DOM) adds another 30%. The remaining 40% requires guided human review: reading order, cognitive load, sensory characteristics, orientation handling.&lt;/p&gt;

&lt;p&gt;No single tool covers everything. The question is how these layers coordinate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow that actually works
&lt;/h2&gt;

&lt;p&gt;We've been thinking about this problem for months, and we keep arriving at the same conclusion: accessibility tools need to be invisible. Not invisible in the sense of overlays that hide problems, but invisible in the sense that the developer never switches context.&lt;/p&gt;

&lt;p&gt;Today we shipped Jeikin to two places where developers already spend their time.&lt;/p&gt;

&lt;h3&gt;
  
  
  In the editor
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://marketplace.visualstudio.com/items?itemName=jeikin.jeikin-accessibility" rel="noopener noreferrer"&gt;Jeikin extension for VS Code&lt;/a&gt; connects your AI coding assistant to a compliance system. Install it, click Connect, pick your compliance level. From that point forward, every AI interaction has your project's accessibility rules loaded via MCP.&lt;/p&gt;

&lt;p&gt;This isn't just "instructions in a file." When the AI finds an issue, it reports it to a tracking system. When it fixes something, it has to verify the fix against quality checks. When the checks fail, it can't mark the issue as done. There's a system enforcing the loop: find, report, fix, verify. Skip a step and the dashboard shows the gap.&lt;/p&gt;

&lt;p&gt;The extension itself is tiny (under 40 KB). It handles onboarding and shows your open issue count in the status bar. The real work happens through MCP, which means it works with Claude Code, GitHub Copilot, Cursor, Windsurf, and Cline.&lt;/p&gt;

&lt;h3&gt;
  
  
  In the pull request
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://jeikin.com/github" rel="noopener noreferrer"&gt;Jeikin GitHub App&lt;/a&gt; reviews every PR before merge. Issues appear as inline annotations in the code diff, not in a separate report. Each annotation includes the severity, the specific WCAG criterion, a plain-language explanation of who is affected, and a one-click link to fix it in Cursor or VS Code.&lt;/p&gt;

&lt;p&gt;Critical violations block the merge. You can't ship inaccessible code by accident.&lt;/p&gt;

&lt;p&gt;This catches what the AI in the editor missed. Maybe a junior developer didn't use the AI for that component. Maybe the AI hallucinated an aria attribute that doesn't exist. The PR review is the safety net.&lt;/p&gt;

&lt;h3&gt;
  
  
  On the dashboard
&lt;/h3&gt;

&lt;p&gt;Everything flows to a &lt;a href="https://jeikin.com" rel="noopener noreferrer"&gt;central dashboard&lt;/a&gt;. Issues found, fixes verified, criteria evaluated, evidence preserved. The developer never needs to open it. But their engineering manager, their compliance officer, or their client can see exactly what was reviewed and what the results were.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this is different from adding rules to a file
&lt;/h2&gt;

&lt;p&gt;There's a specific objection worth addressing directly, because it's the most reasonable one: "My AI already follows accessibility rules I've written in my project instructions."&lt;/p&gt;

&lt;p&gt;Here's what project instructions can do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tell the AI to use semantic HTML&lt;/li&gt;
&lt;li&gt;Remind it to add alt text to images&lt;/li&gt;
&lt;li&gt;Set a general standard like "follow WCAG AA"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what they can't do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Track&lt;/strong&gt; which files were actually reviewed and which were skipped&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforce&lt;/strong&gt; that a fix was verified before marking it done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prove&lt;/strong&gt; to an auditor what was checked and when&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordinate&lt;/strong&gt; across tools (editor AI, PR bot, runtime scanner) into one view&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remember&lt;/strong&gt; findings across sessions. Every new conversation starts from zero&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run quality gates&lt;/strong&gt; like APCA contrast, readability scoring, color vision simulation, or focus order tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Block&lt;/strong&gt; a merge when a critical accessibility violation is present&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instructions are the input. A compliance system is the loop: input, review, evidence, enforcement. The loop is what auditors need to see. The loop is what prevents regressions. The loop is what makes "we care about accessibility" into a provable claim.&lt;/p&gt;

&lt;h2&gt;
  
  
  The landscape is moving
&lt;/h2&gt;

&lt;p&gt;Accessibility tooling is converging with AI coding tools faster than most teams realize. Deque shipped an &lt;a href="https://www.deque.com/" rel="noopener noreferrer"&gt;axe MCP server&lt;/a&gt;. AccessiMind does real-time WCAG analysis in VS Code. TestParty embeds remediation into GitHub workflows. The era of accessibility as a separate activity is ending.&lt;/p&gt;

&lt;p&gt;The tools that win will be the ones developers don't have to think about. Not because accessibility doesn't matter, but because it matters too much to depend on someone remembering to run a scan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=jeikin.jeikin-accessibility" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt; and ask your AI to review your code for accessibility.&lt;/p&gt;

&lt;p&gt;Or install the &lt;a href="https://jeikin.com/github" rel="noopener noreferrer"&gt;GitHub App&lt;/a&gt; and open your next PR.&lt;/p&gt;

&lt;p&gt;The first review usually surfaces things you didn't expect. Not because your code is bad, but because accessibility barriers are invisible until something looks for them systematically.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building &lt;a href="https://jeikin.com" rel="noopener noreferrer"&gt;Jeikin&lt;/a&gt;, an accessibility compliance tool that works inside AI coding agents. Instead of overlays or separate audits, it checks your actual code and tracks evidence on a dashboard. Try it with &lt;code&gt;npx jeikin&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>ai</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Accessibility overlays: why painting over problems doesn't fix them</title>
      <dc:creator>Sergi Sánchez</dc:creator>
      <pubDate>Fri, 27 Mar 2026 17:19:34 +0000</pubDate>
      <link>https://dev.to/ssmancha/accessibility-overlays-why-painting-over-problems-doesnt-fix-them-2anb</link>
      <guid>https://dev.to/ssmancha/accessibility-overlays-why-painting-over-problems-doesnt-fix-them-2anb</guid>
      <description>&lt;p&gt;There's a category of product that promises to make any website accessible by adding a single line of JavaScript. A floating widget appears in the corner, offering controls for contrast, font size, reading guides, and color adjustments. Some even claim compliance with WCAG, the EAA, or the ADA.&lt;/p&gt;

&lt;p&gt;The pitch is appealing: paste a script tag, get a toolbar, check the compliance box. No code changes, no design work, no developer time.&lt;/p&gt;

&lt;p&gt;It sounds too good because it is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What overlays actually do
&lt;/h2&gt;

&lt;p&gt;These widgets typically inject CSS and JavaScript that modifies how a page looks and behaves on the client side. Common features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Font size and spacing controls&lt;/strong&gt; that adjust text display properties&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contrast modes&lt;/strong&gt; that swap color palettes to higher-contrast themes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reading aids&lt;/strong&gt; like a line guide that follows the cursor, link highlighting, and title emphasis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Animation controls&lt;/strong&gt; for pausing motion on the page&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Profile presets&lt;/strong&gt; ("seizure-safe", "ADHD-friendly", "vision-impaired") that bundle several adjustments together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of these features sound genuinely helpful. The problem isn't the idea of giving users control over their experience. The problem is where these controls live, and what they replace.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core issue: overlays don't fix the code
&lt;/h2&gt;

&lt;p&gt;If a form has no labels, an overlay can't add meaningful ones. If images have no alt text, the widget can't describe what's in the photo. If the tab order is broken, injecting CSS won't fix keyboard navigation. If heading levels jump from &lt;code&gt;h1&lt;/code&gt; to &lt;code&gt;h4&lt;/code&gt;, no client-side script can rebuild your document structure.&lt;/p&gt;

&lt;p&gt;Overlays address &lt;em&gt;presentation&lt;/em&gt;: how things look. But most accessibility barriers are &lt;em&gt;structural&lt;/em&gt;: how things are built. A screen reader doesn't read CSS. It reads the DOM. And if the DOM is broken, the overlay can't help.&lt;/p&gt;

&lt;p&gt;This is like putting a fresh coat of paint on a building with no wheelchair ramp and calling it accessible. The paint looks nice. The ramp is still missing.&lt;/p&gt;

&lt;h2&gt;
  
  
  They can make things worse
&lt;/h2&gt;

&lt;p&gt;This isn't theoretical. Research and user reports consistently show problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conflicts with assistive technology.&lt;/strong&gt; People who use screen readers, magnification software, or custom stylesheets already have their own tools configured for their needs. An overlay that injects its own focus management, color overrides, or font changes can interfere with these existing configurations. Users end up fighting two systems instead of one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance impact.&lt;/strong&gt; Overlays load additional JavaScript, sometimes significantly. For users on slower connections or older devices (the people who often need accessibility features most), this adds load time and can delay the page becoming interactive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inconsistent behavior.&lt;/strong&gt; The overlay might work on one page but break on another where the site's own JavaScript conflicts with the injected modifications. Users can't predict when their adjustments will hold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A separate experience.&lt;/strong&gt; When accessibility depends on a widget instead of the native interface, users with disabilities get a fundamentally different experience from everyone else. The goal of accessible design is one good experience for everyone, not two separate paths.&lt;/p&gt;

&lt;h2&gt;
  
  
  The compliance question
&lt;/h2&gt;

&lt;p&gt;Some overlay vendors market their products as making sites legally compliant. This claim doesn't hold up.&lt;/p&gt;

&lt;p&gt;The European Accessibility Act (EAA), which took effect in June 2025, requires that products and services &lt;em&gt;be&lt;/em&gt; accessible, not that they offer a separate tool for accessibility. WCAG conformance is measured against the site itself, not against what a third-party widget adds on top.&lt;/p&gt;

&lt;p&gt;In the United States, courts have consistently ruled that overlays don't constitute compliance with the ADA. Organizations using overlays have been defendants in lawsuits &lt;em&gt;because of&lt;/em&gt; their overlays, not despite them.&lt;/p&gt;

&lt;p&gt;An overlay can't produce conformance with standards that require the underlying HTML to be correct. &lt;code&gt;aria-label&lt;/code&gt; attributes, semantic heading structure, keyboard operability, form associations: these are code-level requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why overlays exist
&lt;/h2&gt;

&lt;p&gt;This isn't about calling anyone negligent. Overlays became popular because accessibility is genuinely hard, tooling has historically been poor, and most teams face real constraints: limited budgets, tight deadlines, developers who weren't taught accessible patterns in school.&lt;/p&gt;

&lt;p&gt;When someone offers a drop-in solution that promises compliance in five minutes, it's understandable that teams reach for it. The marketing is convincing, the urgency is real, and the alternative (learning and implementing accessible patterns across your entire codebase) sounds overwhelming.&lt;/p&gt;

&lt;p&gt;The problem isn't that people want a shortcut. The problem is that this particular shortcut doesn't lead where it promises.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually works
&lt;/h2&gt;

&lt;p&gt;Accessibility needs to be built into the code, not layered on top. That means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic HTML from the start.&lt;/strong&gt; Use the right elements: &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt; for actions, &lt;code&gt;&amp;lt;nav&amp;gt;&lt;/code&gt; for navigation, &lt;code&gt;&amp;lt;label&amp;gt;&lt;/code&gt; for form fields. This handles a surprising amount of accessibility without any extra effort.&lt;/p&gt;
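&lt;p&gt;The form-field case is typical of how much of this is checkable in code. Here is a sketch of a label-association check over plain objects; the names are illustrative, not a real scanner's API:&lt;/p&gt;

```javascript
// Report form fields that no label's `for` attribute points at, using
// plain objects in place of DOM nodes.
function unlabeledFields(fields, labels) {
  const labeledIds = new Set(labels.map((label) => label.htmlFor));
  return fields.filter((field) => !labeledIds.has(field.id)).map((field) => field.id);
}

console.log(
  unlabeledFields(
    [{ id: "email" }, { id: "phone" }],
    [{ htmlFor: "email" }]
  )
); // [ 'phone' ]
```

&lt;p&gt;An overlay can restyle both fields, but only a code change can make the unlabeled one announce itself to a screen reader.&lt;/p&gt;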

&lt;p&gt;&lt;strong&gt;Testing with real assistive technology.&lt;/strong&gt; Screen readers, keyboard-only navigation, and magnification tools reveal issues that no automated scan or overlay will find.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated scanning in the development workflow.&lt;/strong&gt; Tools like &lt;a href="https://github.com/dequelabs/axe-core" rel="noopener noreferrer"&gt;axe-core&lt;/a&gt; can catch structural issues during development, before they reach production. Catching a missing &lt;code&gt;alt&lt;/code&gt; attribute during a code review is vastly better than patching it with an overlay after launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous tracking.&lt;/strong&gt; Accessibility isn't a one-time audit. Sites change constantly. Every new feature, every content update, every redesign can introduce new barriers. Teams need a way to verify that what was fixed stays fixed.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://webaim.org/projects/million/" rel="noopener noreferrer"&gt;WebAIM Million&lt;/a&gt; report finds that sites using overlays actually have &lt;em&gt;more&lt;/em&gt; detectable accessibility errors on average than sites without them. This statistic alone tells the story.&lt;/p&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;If you're evaluating accessibility solutions, be cautious of anything that promises compliance without touching your codebase. Real accessibility is structural. It lives in your HTML, your ARIA attributes, your keyboard interactions, your color choices, your content hierarchy.&lt;/p&gt;

&lt;p&gt;The good news: building accessible products is more achievable than it's ever been. Modern frameworks have better defaults, design systems are incorporating accessible patterns, and tools that integrate into development workflows can catch issues before they ship.&lt;/p&gt;

&lt;p&gt;The work is worth doing properly. The people who depend on it deserve better than a widget.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building &lt;a href="https://jeikin.com" rel="noopener noreferrer"&gt;Jeikin&lt;/a&gt;, an accessibility compliance tool that works inside AI coding agents (Claude Code, Cursor, Windsurf). Instead of an overlay, it checks your actual code and tracks evidence on a dashboard. If you're interested, you can try it with &lt;code&gt;npx jeikin&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>ux</category>
    </item>
  </channel>
</rss>
