<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cameron Cundiff</title>
    <description>The latest articles on DEV Community by Cameron Cundiff (@cameron-accesslint).</description>
    <link>https://dev.to/cameron-accesslint</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3785515%2F40bf0dd0-1ae1-465f-b595-ef3f73fdb47d.jpg</url>
      <title>DEV Community: Cameron Cundiff</title>
      <link>https://dev.to/cameron-accesslint</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cameron-accesslint"/>
    <language>en</language>
    <item>
      <title>Accessibility Tooling for Agentic Coding Loops</title>
      <dc:creator>Cameron Cundiff</dc:creator>
      <pubDate>Mon, 23 Feb 2026 00:41:23 +0000</pubDate>
      <link>https://dev.to/cameron-accesslint/accessibility-tooling-for-agentic-coding-loops-5b1h</link>
      <guid>https://dev.to/cameron-accesslint/accessibility-tooling-for-agentic-coding-loops-5b1h</guid>
      <description>&lt;p&gt;Coding agents are writing and modifying front-end code at scale. If your team maintains a design system or owns accessibility on a product, this changes the calculus: code that once went through a human review loop is now generated in seconds, often without any accessibility check at all.&lt;/p&gt;

&lt;p&gt;Existing tools weren't designed for this. They assume a human in the loop: run a scan, read the report, interpret the findings, figure out the fix. That workflow requires domain expertise at every step.&lt;/p&gt;

&lt;p&gt;An agent operating in a coding loop needs something different. Not a report to interpret, but structured diagnostics it can act on directly: machine-executable fix instructions, DOM context for reasoning, a fixability classification for triage, and a verification mechanism to confirm fixes landed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AccessLint/mcp" rel="noopener noreferrer"&gt;@accesslint/mcp&lt;/a&gt; is an MCP server built on &lt;a href="https://github.com/AccessLint/core" rel="noopener noreferrer"&gt;@accesslint/core&lt;/a&gt;, a rule engine designed from the ground up for agent consumption. It exposes tools for auditing HTML (as a string, file, or URL), diffing before-and-after audits, and listing rules; for Claude Code, Cursor, Windsurf, or any MCP-compatible agent.&lt;/p&gt;

&lt;p&gt;This post walks through the design decisions behind the tool: how violations are structured, how fixability classification works, what context collection looks like per rule, and how the diff loop closes the audit-fix-verify cycle.&lt;/p&gt;

&lt;h2&gt;Does it actually help?&lt;/h2&gt;

&lt;p&gt;Before getting into the design, here's the evidence. Both approaches, an agent with the MCP tools and an agent alone, were benchmarked across 25 HTML test cases covering 67 fixable WCAG violations (3 runs each, using Claude Opus):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;With @accesslint/mcp&lt;/th&gt;
&lt;th&gt;Agent alone&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Violations fixed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;99.5% (200/201)&lt;/td&gt;
&lt;td&gt;93.5% (188/201)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Regressions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.7 / run&lt;/td&gt;
&lt;td&gt;2.0 / run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$0.56 / run&lt;/td&gt;
&lt;td&gt;$0.62 / run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Duration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;270s / run&lt;/td&gt;
&lt;td&gt;377s / run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Timeouts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0 / 63 tasks&lt;/td&gt;
&lt;td&gt;2 / 63 tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The MCP-assisted path uses 23% fewer output tokens per run. Without tools, the agent has to recall WCAG rules from training data, reason about which rules apply to which elements, and then fix them. The MCP server replaces that open-ended reasoning with structured output: specific rule IDs, CSS selectors pointing to exact elements, and concrete fix suggestions. The agent skips straight to applying fixes. Fewer reasoning steps, fewer tokens, less time, lower cost.&lt;/p&gt;

&lt;p&gt;The largest gains are on complex cases. A test case with 6 violations across nested landmark structures completed in 25-38 seconds with MCP tooling. The agent alone timed out at 90 seconds in 2 of 3 runs.&lt;/p&gt;

&lt;h2&gt;Anatomy of a violation&lt;/h2&gt;

&lt;p&gt;When an agent calls &lt;code&gt;audit_html&lt;/code&gt;, each violation includes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. [CRITICAL] labels-and-names/button-name
   Button has no discernible text.
   Element: button.icon-search
   HTML: &amp;lt;button class="icon-search" onclick="openSearch()"&amp;gt;&amp;lt;svg aria-hidden="true"&amp;gt;...&amp;lt;/svg&amp;gt;&amp;lt;/button&amp;gt;
   Fix: add-text-content
   Fixability: contextual
   Browser hint: Screenshot the button to identify its icon or visual label,
   then add a matching aria-label.
   Context: Classes: icon-search
   Guidance: Screen reader users need to know what a button does. Add visible
   text content, aria-label, or aria-labelledby. For icon buttons, use
   aria-label describing the action (e.g., aria-label='Close'). If the button
   contains an image, ensure the image has alt text describing the button's
   action.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each field is deliberate:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt; is a structured instruction from a closed set: &lt;code&gt;add-attribute&lt;/code&gt;, &lt;code&gt;set-attribute&lt;/code&gt;, &lt;code&gt;remove-attribute&lt;/code&gt;, &lt;code&gt;add-element&lt;/code&gt;, &lt;code&gt;remove-element&lt;/code&gt;, &lt;code&gt;add-text-content&lt;/code&gt;, or &lt;code&gt;suggest&lt;/code&gt;. The first six are mechanically executable. &lt;code&gt;suggest&lt;/code&gt; is the escape hatch for violations where the fix depends on intent. About 75% of rules provide a mechanical fix; the remaining 25% use &lt;code&gt;suggest&lt;/code&gt;.&lt;/p&gt;
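&lt;p&gt;As a sketch, that closed set might be modeled as a string union with &lt;code&gt;suggest&lt;/code&gt; split out for triage. The type and helper names here are illustrative, not the library's actual exports:&lt;/p&gt;

```typescript
// Hypothetical model of the closed fix vocabulary described above;
// the names are illustrative, not @accesslint/core's actual types.
type FixType =
  | "add-attribute"
  | "set-attribute"
  | "remove-attribute"
  | "add-element"
  | "remove-element"
  | "add-text-content"
  | "suggest";

// The first six fix types can be applied without further reasoning;
// "suggest" signals that the fix depends on authorial intent.
function isMechanicallyExecutable(fix: FixType): boolean {
  return fix !== "suggest";
}
```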

&lt;p&gt;&lt;strong&gt;Fixability&lt;/strong&gt; classifies the violation, not the fix. The &lt;code&gt;button-name&lt;/code&gt; rule provides a mechanical fix type (&lt;code&gt;add-text-content&lt;/code&gt;) that satisfies the rule. But its &lt;code&gt;contextual&lt;/code&gt; classification signals that the agent should use the collected context to determine &lt;em&gt;what&lt;/em&gt; text to add. More on this below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context&lt;/strong&gt; is collected per-rule from the DOM. &lt;strong&gt;Guidance&lt;/strong&gt; provides the remediation principle behind the rule, written for direct LLM consumption. &lt;strong&gt;Browser hint&lt;/strong&gt;, present on 26 of 92 rules, tells agents with browser access how to verify or improve a fix using screenshots or DevTools.&lt;/p&gt;

&lt;h2&gt;Determining "Fixability"&lt;/h2&gt;

&lt;p&gt;Every rule carries a fixability classification that the MCP server surfaces on each violation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mechanical&lt;/strong&gt; (20 rules): Deterministic. A positive &lt;code&gt;tabindex&lt;/code&gt; gets set to &lt;code&gt;"0"&lt;/code&gt;. An invalid ARIA role gets flagged with the correct value. No ambiguity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contextual&lt;/strong&gt; (65 rules): Requires surrounding context, but an LLM can reason about it. The violation's context and guidance fields provide the inputs. The structured fix provides a safe floor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual&lt;/strong&gt; (4 rules): Requires rendered output. Color contrast, primarily. Browser hints tell the agent how to inspect computed styles or screenshot the element.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interaction between fixability and the structured fix is the core design decision. Take &lt;code&gt;button-name&lt;/code&gt;: the fix type is &lt;code&gt;add-text-content&lt;/code&gt;, which is mechanically executable. But what text? The &lt;code&gt;contextual&lt;/code&gt; classification is the signal to use the collected context. The violation reports &lt;code&gt;Classes: icon-search&lt;/code&gt;. That's developer intent that never made it into the accessible name. The agent reads the class, infers the action, and adds &lt;code&gt;aria-label="Search"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This also maps to &lt;code&gt;list_rules&lt;/code&gt; filtering. An agent or workflow can query rules by fixability to scope an audit pass: mechanical-only for automated batch remediation, contextual for agent-assisted passes, visual for flagging to human review.&lt;/p&gt;
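&lt;p&gt;A consumer of &lt;code&gt;list_rules&lt;/code&gt; output might scope a pass along these lines; the rule shape here is an assumption, not the server's actual schema:&lt;/p&gt;

```typescript
// Sketch of scoping an audit pass by fixability, as a list_rules
// consumer might; RuleSummary is an assumed shape, not the real schema.
type Fixability = "mechanical" | "contextual" | "visual";

interface RuleSummary {
  id: string;
  fixability: Fixability;
}

// Return the rule IDs eligible for a given kind of pass, e.g.
// mechanical-only for automated batch remediation.
function scopeRules(rules: RuleSummary[], pass: Fixability): string[] {
  return rules.filter((r) => r.fixability === pass).map((r) => r.id);
}
```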

&lt;h2&gt;Gathering context&lt;/h2&gt;

&lt;p&gt;Each rule gathers the specific context its violation type needs. This is where the design diverges most from existing tools, which tend to report the element and stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;button-name&lt;/code&gt;&lt;/strong&gt; reports CSS class names (&lt;code&gt;btn-close&lt;/code&gt;, &lt;code&gt;icon-search&lt;/code&gt;), the enclosing form's label, and the nearest heading. Class names are the key signal. A button with class &lt;code&gt;icon-search&lt;/code&gt; inside a form labeled "Site search" gives the agent two independent signals pointing to &lt;code&gt;aria-label="Search"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;img-alt&lt;/code&gt;&lt;/strong&gt; checks whether the image is inside a link (and captures the &lt;code&gt;href&lt;/code&gt;), looks for a &lt;code&gt;figcaption&lt;/code&gt;, and captures adjacent text. If a figcaption already describes the image, &lt;code&gt;alt=""&lt;/code&gt; avoids redundant adjacent text. If the image is a standalone link, the &lt;code&gt;href&lt;/code&gt; helps the agent infer purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;form-label&lt;/code&gt;&lt;/strong&gt; reports the input's &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;placeholder&lt;/code&gt;, and &lt;code&gt;id&lt;/code&gt;, plus the full accessible name computation chain: &lt;code&gt;aria-labelledby&lt;/code&gt; resolution, &lt;code&gt;aria-label&lt;/code&gt;, associated &lt;code&gt;&amp;lt;label&amp;gt;&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, and &lt;code&gt;placeholder&lt;/code&gt; fallback.&lt;/p&gt;
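&lt;p&gt;That fallback chain can be sketched as a simple resolver. The field names are a simplification of the real computation, which the library implements as &lt;code&gt;getAccessibleName&lt;/code&gt;:&lt;/p&gt;

```typescript
// Simplified sketch of the accessible-name fallback order reported for
// form controls; the real computation (getAccessibleName in
// @accesslint/core) handles many more cases.
interface InputContext {
  ariaLabelledbyText?: string; // text resolved from aria-labelledby targets
  ariaLabel?: string;
  labelText?: string;          // text of the associated label element
  title?: string;
  placeholder?: string;
}

function resolveAccessibleName(input: InputContext): string {
  const candidates = [
    input.ariaLabelledbyText,
    input.ariaLabel,
    input.labelText,
    input.title,
    input.placeholder,
  ];
  for (const c of candidates) {
    if (c) {
      if (c.trim() !== "") return c.trim();
    }
  }
  return ""; // no accessible name: a form-label violation
}
```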

&lt;p&gt;&lt;strong&gt;&lt;code&gt;link-name&lt;/code&gt;&lt;/strong&gt; captures the &lt;code&gt;href&lt;/code&gt;, nearby heading text, and parent element context. For a link wrapping only an icon, the &lt;code&gt;href&lt;/code&gt; and surrounding headings are often enough to infer purpose.&lt;/p&gt;

&lt;p&gt;The goal is to front-load enough information that the agent can reason about the fix in a single pass, without a round-trip to read more of the document.&lt;/p&gt;

&lt;h2&gt;Looping on diffs&lt;/h2&gt;

&lt;p&gt;The diff loop is the verification mechanism. The workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The agent calls &lt;code&gt;audit_html&lt;/code&gt; with &lt;code&gt;name: "before"&lt;/code&gt; to audit and store the result.&lt;/li&gt;
&lt;li&gt;The agent applies fixes.&lt;/li&gt;
&lt;li&gt;The agent calls &lt;code&gt;diff_html&lt;/code&gt; with the updated markup and &lt;code&gt;before: "before"&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The server audits the new HTML, diffs against the stored result, and returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summary: 2 fixed, 1 new, 3 remaining

FIXED:
  - [CRITICAL] text-alternatives/img-alt at img[src="photo.jpg"]
  - [CRITICAL] labels-and-names/button-name at button.icon-search

NEW:
  - [SERIOUS] aria/aria-roles at div[role="buton"]
    ARIA role "buton" is not a valid role value.
    Fix: set-attribute role="button"

REMAINING:
  - [MODERATE] navigable/heading-order at h4
  - [MODERATE] distinguishable/link-in-text-block at a.subtle
  - [MINOR] text-alternatives/image-alt-words at img[alt="image of logo"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Violations are matched by &lt;code&gt;ruleId + selector&lt;/code&gt;. The agent gets a clear signal: what was fixed, what regressed (with the diagnosis and fix instruction for self-correction), and what remains.&lt;/p&gt;

&lt;p&gt;The NEW category is critical. An agent that adds &lt;code&gt;role="buton"&lt;/code&gt; (a typo) gets the regression surfaced immediately, with a structured fix to correct it. The loop is: audit, fix, diff, self-correct. No human in the middle.&lt;/p&gt;
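&lt;p&gt;Matching by &lt;code&gt;ruleId + selector&lt;/code&gt; makes the diff itself straightforward to sketch; the violation shape here is assumed:&lt;/p&gt;

```typescript
// Sketch of diffing two audits by ruleId + selector, mirroring the
// FIXED / NEW / REMAINING categories; the Violation shape is assumed.
interface Violation {
  ruleId: string;
  selector: string;
}

function diffAudits(before: Violation[], after: Violation[]) {
  const key = (v: Violation) => v.ruleId + "@" + v.selector;
  const beforeKeys = before.map(key);
  const afterKeys = after.map(key);
  return {
    fixed: before.filter((v) => !afterKeys.includes(key(v))),     // gone after the edit
    added: after.filter((v) => !beforeKeys.includes(key(v))),     // regressions
    remaining: after.filter((v) => beforeKeys.includes(key(v))),  // still present
  };
}
```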

&lt;h2&gt;Opening up the browser&lt;/h2&gt;

&lt;p&gt;Twenty-six rules carry a browser hint: an instruction for agents with browser access (screenshots, DevTools MCP) on how to verify or improve a fix.&lt;/p&gt;

&lt;p&gt;For a visual rule like &lt;code&gt;color-contrast&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Violation context includes computed colors and ratio. After changing colors, use JavaScript to read getComputedStyle() on the element and recalculate the contrast ratio. Screenshot the element to verify the fix looks correct in context.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For a contextual rule like &lt;code&gt;button-name&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Screenshot the button to identify its icon or visual label, then add a matching aria-label.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These are opt-in. An agent without browser tools ignores them. An agent with browser MCP tools (Chrome DevTools, Playwright) can use them to bridge the gap between static analysis and rendered output, particularly for the 4 visual rules where static analysis alone can't fully verify the fix.&lt;/p&gt;
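&lt;p&gt;The recalculation step for contrast is mechanical once the computed colors are known. This is the WCAG 2.1 relative luminance and contrast ratio formula, which an agent could run over values read from &lt;code&gt;getComputedStyle()&lt;/code&gt;:&lt;/p&gt;

```typescript
// WCAG 2.1 relative luminance and contrast ratio, for sRGB channels 0-255.
function channelToLinear(c: number): number {
  const s = c / 255;
  // Per WCAG 2.1: linearize low values with a divide, higher ones with a power curve.
  return 0.03928 >= s ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r: number, g: number, b: number): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// Ratio ranges from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(fg[0], fg[1], fg[2]);
  const l2 = relativeLuminance(bg[0], bg[1], bg[2]);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}
```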

&lt;h2&gt;Handling page fragments&lt;/h2&gt;

&lt;p&gt;When the input HTML lacks &lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/code&gt; or &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt;, the server auto-enables component mode, suppressing 22 page-level rules (document title, landmarks, &lt;code&gt;lang&lt;/code&gt; attribute, etc.) that would produce false positives on isolated markup. The agent can override this with the &lt;code&gt;component_mode&lt;/code&gt; parameter.&lt;/p&gt;

&lt;p&gt;This means an agent auditing a React component, a partial template, or a code snippet gets relevant results without noise.&lt;/p&gt;
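&lt;p&gt;The detection heuristic can be sketched in a few lines; this is an approximation, not the server's actual check:&lt;/p&gt;

```typescript
// Heuristic sketch of fragment detection: input that does not open with
// a doctype or a root html tag is treated as a component. The opening
// angle bracket is built from its char code (60) so this snippet stays
// safe to embed in markup.
const LT = String.fromCharCode(60);

function looksLikeFragment(html: string): boolean {
  const head = html.trimStart().toLowerCase();
  if (head.startsWith(LT + "!doctype")) return false;
  if (head.startsWith(LT + "html")) return false;
  return true;
}
```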

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;The MCP server is a thin integration layer. The rule engine is &lt;a href="https://github.com/AccessLint/core" rel="noopener noreferrer"&gt;@accesslint/core&lt;/a&gt;: 92 rules, zero runtime dependencies, synchronous execution. The API is &lt;code&gt;runAudit(doc: Document): AuditResult&lt;/code&gt;. The server handles HTML parsing (happy-dom), fragment detection, audit state for diffing, and violation enrichment (joining each violation with its rule's fixability, browser hint, and guidance).&lt;/p&gt;

&lt;p&gt;The core library covers 23 WCAG 2.1 success criteria across Level A and AA, scoped to what static DOM analysis can meaningfully check. Rules that throw during execution are caught and skipped; the audit always completes.&lt;/p&gt;

&lt;p&gt;The library also exports lower-level primitives (&lt;code&gt;getAccessibleName&lt;/code&gt;, &lt;code&gt;getComputedRole&lt;/code&gt;, &lt;code&gt;isAriaHidden&lt;/code&gt;) and a declarative rule engine for authoring rules as JSON with &lt;code&gt;validateDeclarativeRule&lt;/code&gt; and &lt;code&gt;compileDeclarativeRule&lt;/code&gt;. An agent can write, validate, and register new rules at runtime.&lt;/p&gt;
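&lt;p&gt;For illustration only, a declarative rule and its compiled check might look like this. The JSON shape is invented for the sketch; the real schema is whatever &lt;code&gt;validateDeclarativeRule&lt;/code&gt; accepts:&lt;/p&gt;

```typescript
// Illustrative declarative rule and compiler; the actual JSON schema is
// defined by @accesslint/core's validateDeclarativeRule, not this sketch.
interface DeclarativeRule {
  id: string;
  selector: string;          // tag name the rule applies to
  requiredAttribute: string; // attribute that must be present and non-empty
}

type Check = (el: { tag: string; attrs: { [name: string]: string } }) => boolean;

// Compile the rule into a predicate: true means the element passes.
function compileRule(rule: DeclarativeRule): Check {
  return (el) => {
    if (el.tag !== rule.selector) return true; // rule does not apply
    const value = el.attrs[rule.requiredAttribute];
    if (value === undefined) return false;     // violation: attribute missing
    return value.trim() !== "";                // violation if empty
  };
}
```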

&lt;h2&gt;A note on the core engine&lt;/h2&gt;

&lt;p&gt;Every MCP tool call is a round-trip through the protocol, and an agent running audit-fix-diff in a loop may make dozens of them. Latency per call matters. Engines like axe-core were designed from the outset for the browser; their execution model is async and demonstrably slower&lt;sup id="fnref1"&gt;1&lt;/sup&gt;, and that latency accrues across the loop.&lt;/p&gt;

&lt;p&gt;@accesslint/core, by contrast, runs its 92 rules synchronously with zero runtime dependencies. A typical component audit completes in single-digit milliseconds. At that speed, the bottleneck is the LLM, not the tooling.&lt;/p&gt;

&lt;p&gt;The structured outputs described throughout this post (fix suggestions, fixability classifications, per-rule context, browser hints) are first-class fields on every violation, not bolted on after the fact. The rules are authored with agent consumption in mind.&lt;/p&gt;




&lt;p&gt;To add the MCP server to Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add accesslint &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @accesslint/mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or add it to any MCP client configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"accesslint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@accesslint/mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;Testing in a real DOM matters for many accessibility checks, and it doesn't &lt;em&gt;have&lt;/em&gt; to be slow. Keep an eye out for upcoming improvements to the &lt;a href="https://www.npmjs.com/package/@accesslint/storybook-addon" rel="noopener noreferrer"&gt;AccessLint Storybook Addon&lt;/a&gt; that will help here.&lt;/p&gt;

&lt;p&gt;I'd love to hear from you! Please share questions and comments, or drop me a note; I'm always happy to nerd out on accessibility.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://observablehq.com/d/e26301f8709bf07a" rel="noopener noreferrer"&gt;@accesslint/core vs axe-core benchmarks&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>webdev</category>
      <category>a11y</category>
      <category>javascript</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
