<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: snapsynapse</title>
    <description>The latest articles on DEV Community by snapsynapse (@snapsynapse).</description>
    <link>https://dev.to/snapsynapse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3798402%2Fe4a3f553-b7cb-4cb9-953f-cd6be18c82d7.png</url>
      <title>DEV Community: snapsynapse</title>
      <link>https://dev.to/snapsynapse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/snapsynapse"/>
    <language>en</language>
    <item>
      <title>Teach Your AI Coding Agent to Run Accessibility Audits</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Tue, 14 Apr 2026 15:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/teach-your-ai-coding-agent-to-run-accessibility-audits-3hn9</link>
      <guid>https://dev.to/snapsynapse/teach-your-ai-coding-agent-to-run-accessibility-audits-3hn9</guid>
      <description>&lt;p&gt;You've got an AI coding agent. It can scaffold components, write tests, refactor modules. But ask it to check whether your app is accessible and you get one of two things: a vague summary of WCAG principles with no actionable output, or a hallucinated audit that references guidelines it didn't actually check against.&lt;/p&gt;

&lt;p&gt;The problem isn't that the agent is incapable. It's that nobody told it how. What tool to install. What rules to run. How to map a violation to a specific WCAG success criterion. What to put in the report. What still needs a human.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/snapsynapse/skill-a11y-audit" rel="noopener noreferrer"&gt;skill-a11y-audit&lt;/a&gt; is a reusable agent skill that solves this. Drop it into your project, and your agent gets a structured protocol for running real accessibility audits with real tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;The skill packages a complete audit workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs &lt;strong&gt;axe-core&lt;/strong&gt; against your pages, targeting WCAG 2.1 AA&lt;/li&gt;
&lt;li&gt;Optionally runs &lt;strong&gt;Lighthouse&lt;/strong&gt; accessibility checks for a second signal&lt;/li&gt;
&lt;li&gt;Maps every violation to the specific WCAG success criterion it fails&lt;/li&gt;
&lt;li&gt;Generates a &lt;strong&gt;markdown report&lt;/strong&gt; with severity, affected elements, and remediation guidance&lt;/li&gt;
&lt;li&gt;Flags what automation can't catch and provides &lt;strong&gt;manual follow-up guidance&lt;/strong&gt; for the things that need a human (keyboard navigation, screen reader behavior, content meaning)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output is a report you can hand to a developer, file as a ticket, or use as a punch list. Not a score. Not a badge. A list of specific things that are broken and what to do about them.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install it
&lt;/h2&gt;

&lt;p&gt;For Claude Code, copy the &lt;code&gt;a11y-audit/&lt;/code&gt; folder into &lt;code&gt;.claude/skills/&lt;/code&gt; in your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/snapsynapse/skill-a11y-audit.git
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; skill-a11y-audit/a11y-audit .claude/skills/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Codex, copy or symlink the same folder into your Codex skills directory. The skill is platform-agnostic by design -- any agent that reads a SKILL.md file can use it.&lt;/p&gt;

&lt;p&gt;Once installed, you can trigger it with natural language:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Run an accessibility audit on this project."&lt;/li&gt;
&lt;li&gt;"Audit this app for WCAG 2.1 AA issues and generate a markdown report."&lt;/li&gt;
&lt;li&gt;"Use $a11y-audit to scan the homepage, about page, and contact page."&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why a skill instead of just a tool
&lt;/h2&gt;

&lt;p&gt;axe-core exists. Lighthouse exists. pa11y exists. You could tell your agent to install and run any of them. But here's what actually happens when you do that:&lt;/p&gt;

&lt;p&gt;The agent installs axe-core. Runs it with default settings. Dumps raw JSON. Doesn't map violations to WCAG criteria. Doesn't distinguish between things it can verify and things that need manual checking. Doesn't structure the output so anyone can act on it.&lt;/p&gt;

&lt;p&gt;A skill isn't a wrapper around a CLI. It's the knowledge of how to use the tool well. Which rules to run. How to structure the output for a developer who needs to fix things. When to stop and say "a human needs to check this part." That operational knowledge is what the agent is missing, and it's what this skill provides.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the report looks like
&lt;/h2&gt;

&lt;p&gt;The skill generates structured markdown. Each violation gets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The WCAG success criterion it fails (e.g., 1.4.3 Contrast (Minimum))&lt;/li&gt;
&lt;li&gt;Impact level (critical, serious, moderate, minor)&lt;/li&gt;
&lt;li&gt;The specific elements affected&lt;/li&gt;
&lt;li&gt;What to fix and how&lt;/li&gt;
&lt;/ul&gt;
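&lt;p&gt;As a rough illustration of the mapping step, here is a minimal JavaScript sketch. The violation object follows axe-core's results format; the &lt;code&gt;WCAG_NAMES&lt;/code&gt; lookup table and the exact report layout are hypothetical stand-ins, not the skill's actual code.&lt;/p&gt;

```javascript
// Sketch: turn one axe-core violation into a markdown report entry.
// The violation shape matches axe-core's results API; WCAG_NAMES is a
// hypothetical stand-in for the skill's own criterion mapping.
const WCAG_NAMES = {
  wcag143: "1.4.3 Contrast (Minimum)",
  wcag111: "1.1.1 Non-text Content",
};

function toReportEntry(violation) {
  // axe tags WCAG criteria as e.g. "wcag143"; pick those out of the tag list.
  const criteria = violation.tags
    .filter((t) => /^wcag\d{3,4}$/.test(t))
    .map((t) => WCAG_NAMES[t] || t);
  return [
    `### ${violation.help}`,
    `- WCAG: ${criteria.join(", ")}`,
    `- Impact: ${violation.impact}`,
    `- Elements: ${violation.nodes.map((n) => n.target.join(" ")).join("; ")}`,
    `- Fix: ${violation.nodes[0].failureSummary}`,
  ].join("\n");
}
```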

&lt;p&gt;The report also includes a manual review section: things like keyboard trap testing, focus order verification, and screen reader behavior that no automated tool can fully assess. The skill doesn't pretend automation covers everything. It draws a clear line between what it checked and what still needs a human.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope and constraints
&lt;/h2&gt;

&lt;p&gt;This skill targets &lt;strong&gt;WCAG 2.1 Level AA&lt;/strong&gt;, which is the standard most organizations are working toward and the baseline for most legal compliance requirements. It uses axe-core's rule set, which is the same engine behind Google Lighthouse's accessibility score and Deque's commercial products.&lt;/p&gt;

&lt;p&gt;Automated tooling catches a meaningful portion of accessibility issues, but not all of them. The skill is explicit about this boundary. It will find missing alt text, insufficient contrast, missing form labels, improper ARIA usage, and similar machine-verifiable failures. It won't tell you whether your tab order makes sense to a human, whether your error messages are comprehensible, or whether your modal traps keyboard focus. That's what the manual follow-up section is for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Developers who want accessibility checks as part of their agent-assisted workflow, not as a separate step they forget&lt;/li&gt;
&lt;li&gt;Teams using Claude Code or Codex who want consistent, structured audit output across projects&lt;/li&gt;
&lt;li&gt;Anyone building agent skills who wants a reference implementation for how to package tool knowledge as a reusable skill&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill follows the &lt;a href="https://agentskills.io" rel="noopener noreferrer"&gt;agentskills.io&lt;/a&gt; open standard and uses &lt;a href="https://github.com/snapsynapse/skill-provenance" rel="noopener noreferrer"&gt;skill-provenance&lt;/a&gt; for cross-platform version tracking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get it
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/snapsynapse/skill-a11y-audit" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; -- MIT licensed.&lt;br&gt;
&lt;a href="https://skilla11y.dev/" rel="noopener noreferrer"&gt;Homepage&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're building agent skills and want to see how this one is structured, the SKILL.md is the whole thing. No build step, no dependencies beyond the audit tools themselves. Read it, fork it, improve it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://snapsynapse.com/" rel="noopener noreferrer"&gt;SnapSynapse&lt;/a&gt;. If you want to see this skill in action on a real project, check out the &lt;a href="https://aitool.watch/" rel="noopener noreferrer"&gt;AI Tool Watch&lt;/a&gt;, which was audited using it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>EveryAILaw.com: A Structured Database of AI Compliance Obligations Across Jurisdictions</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Tue, 07 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/ai-regulation-reference-a-structured-database-of-ai-compliance-obligations-across-jurisdictions-3g06</link>
      <guid>https://dev.to/snapsynapse/ai-regulation-reference-a-structured-database-of-ai-compliance-obligations-across-jurisdictions-3g06</guid>
      <description>&lt;p&gt;If your organization builds or deploys AI systems, you are already subject to overlapping regulations from multiple jurisdictions — and the list is growing. Figuring out which rules apply, what they require, and when enforcement begins means cross-referencing dozens of documents that use different terminology for the same obligations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://everyailaw.com/" rel="noopener noreferrer"&gt;EveryAILaw.com&lt;/a&gt; is a free, structured, database-less reference (built on the &lt;a href="https://knowledge-as-code.com/" rel="noopener noreferrer"&gt;Knowledge-as-Code template&lt;/a&gt;)that tracks compliance obligations across all known AI regulations globally, spanning the EU, US federal agencies, and the key six US states. Instead of organizing by regulation, it uses an obligation-first ontology: stable compliance concepts like transparency, human oversight, and bias prevention serve as anchors, with specific regulatory provisions mapped to them.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use it
&lt;/h2&gt;

&lt;p&gt;You need a single structured source to check which AI regulations apply to your organization, compare obligations across jurisdictions, and track enforcement deadlines. The JSON API and MCP server make it practical to integrate compliance lookups into developer tooling or AI-assisted workflows rather than maintaining a spreadsheet.&lt;/p&gt;

&lt;h2&gt;
  
  
  When not to use it
&lt;/h2&gt;

&lt;p&gt;This is a reference tool, not legal advice. It does not replace regulatory counsel. If you need binding compliance assessments or jurisdiction-specific legal interpretation, consult an attorney. While the coverage is global, the current iteration is weighted toward US state laws and the EU AI Act.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it covers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;51 regulations&lt;/strong&gt;: EU AI Act, Colorado ADMT, California CCPA ADMT, CMS Medicare Advantage, and others across 8 regulatory authorities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10 core obligations&lt;/strong&gt;: transparency, human oversight, risk assessment, bias prevention, incident reporting, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 access layers&lt;/strong&gt;: HTML site for browsing, JSON API for integration, MCP server for AI assistants&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://everyailaw.com/" rel="noopener noreferrer"&gt;Live site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://knowledge-as-code.com/" rel="noopener noreferrer"&gt;Knowledge-as-Code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;The obligation-first structure makes an interesting bet: that the compliance concepts are more stable than the regulations themselves. If you work in AI governance, I'm curious whether that matches your experience.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>showdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Your site works fine in a browser. AI agents can't use it. 🔍</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Thu, 02 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/your-site-works-fine-in-a-browser-ai-agents-cant-use-it-1ipm</link>
      <guid>https://dev.to/snapsynapse/your-site-works-fine-in-a-browser-ai-agents-cant-use-it-1ipm</guid>
      <description>&lt;p&gt;I was building agent workflows for clients when I noticed a pattern that drove me nuts: agents would hit a site, get a 200 OK, and then... nothing useful. No structured data. No clear navigation path. Sometimes a WAF would silently block the request. The agent would fail, the logs would look fine, and I'd waste hours figuring out why.&lt;/p&gt;

&lt;p&gt;The thing is, these weren't bad websites. They ranked well on Google. They looked great in a browser. They just weren't built for anything that wasn't a human clicking around in Chrome.&lt;/p&gt;

&lt;p&gt;I kept running into the same invisible wall across different clients, then I hit it with &lt;a href="https://paice.work/" rel="noopener noreferrer"&gt;my own startup&lt;/a&gt;. Ouch. So I built the diagnostic I wished existed, because I needed it too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meet Siteline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Siteline&lt;/strong&gt; is a free scanner that grades how usable your public website is for AI agents. You give it a URL, it tells you what works, what's broken, and what to fix — in about 10 seconds.&lt;/p&gt;

&lt;p&gt;It's live right now at &lt;a href="https://siteline.to/" rel="noopener noreferrer"&gt;siteline.to&lt;/a&gt;. Type any URL and see the grade. Totally free.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it checks
&lt;/h2&gt;

&lt;p&gt;Siteline evaluates four pillars — what I call the &lt;strong&gt;SNAP rubric&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signal&lt;/strong&gt;&lt;br&gt;
Can an agent even reach your site? This checks whether your server responds to non-browser clients, whether &lt;code&gt;robots.txt&lt;/code&gt; blocks agent user-agents like &lt;code&gt;ClaudeBot&lt;/code&gt; or &lt;code&gt;GPTBot&lt;/code&gt;, and whether HTTPS is in place. You'd be surprised how many sites return 403 to anything that isn't Chrome.&lt;/p&gt;
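&lt;p&gt;If the Signal check flags your &lt;code&gt;robots.txt&lt;/code&gt;, one minimal fix is an explicit allow for the agent user-agents named above. A sketch -- adjust to your own crawl policy:&lt;/p&gt;

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /
```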

&lt;p&gt;&lt;strong&gt;Navigate&lt;/strong&gt;&lt;br&gt;
Once an agent lands on the page, can it figure out where things are? This looks for JSON-LD, site identity signals, clear navigation to key pages (About, Services, Contact), and machine discovery paths like sitemaps and RSS feeds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Absorb&lt;/strong&gt;&lt;br&gt;
Is the actual content machine-readable? Siteline checks whether the initial HTML has meaningful content or if everything hides behind JavaScript rendering. It looks at heading hierarchy, semantic markup, and whether the content model is clear or confused.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perform&lt;/strong&gt;&lt;br&gt;
Can the agent figure out what the user should do next? This checks for interpretable CTAs, form labels, button text, and whether next steps are clear enough for an agent to relay back to a human.&lt;/p&gt;

&lt;p&gt;Each pillar gets a weighted score. The final grade is A through F.&lt;/p&gt;
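&lt;p&gt;To make the grading idea concrete, here is a hypothetical sketch in JavaScript. The weights and letter-grade cutoffs below are invented for illustration; Siteline's actual rubric values aren't published in this post.&lt;/p&gt;

```javascript
// Hypothetical pillar weights summing to 100; the real SNAP weights and
// grade cutoffs are Siteline's, not shown here.
const WEIGHTS = { signal: 35, navigate: 20, absorb: 25, perform: 20 };

function grade(scores) {
  // scores: each pillar on a 0-100 scale
  let total = 0;
  for (const pillar of Object.keys(WEIGHTS)) {
    total += WEIGHTS[pillar] * scores[pillar];
  }
  total /= 100; // weighted average back onto a 0-100 scale
  if (total >= 90) return "A";
  if (total >= 80) return "B";
  if (total >= 70) return "C";
  if (total >= 60) return "D";
  return "F";
}
```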
&lt;h2&gt;
  
  
  What I learned building this
&lt;/h2&gt;

&lt;p&gt;The biggest surprise: &lt;strong&gt;bot-blocking is the #1 failure mode&lt;/strong&gt;, not content quality. Most sites I scanned during development had decent content structure. But their WAF or hosting provider was silently blocking anything that didn't look like a browser. The site owner had no idea.&lt;/p&gt;

&lt;p&gt;The second surprise: most sites that invested heavily in SEO still scored &lt;em&gt;very&lt;/em&gt; poorly. SEO optimizes for Google's crawler. AI agents have different constraints. They need structured data, clear action paths, and machine-readable policy signals that search crawlers don't care about.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Web:&lt;/strong&gt; &lt;a href="https://siteline.to/" rel="noopener noreferrer"&gt;siteline.to&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx siteline scan yoursite.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;API:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://siteline.to/api/scan?url=yoursite.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;MCP Server&lt;/strong&gt; (for agent developers):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx siteline mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This exposes four tools — &lt;code&gt;scan_url&lt;/code&gt;, &lt;code&gt;self_scan&lt;/code&gt;, &lt;code&gt;describe_rubric&lt;/code&gt;, and &lt;code&gt;explain_score&lt;/code&gt; — so your agents can assess sites programmatically before attempting workflows on them.&lt;/p&gt;
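&lt;p&gt;If your client uses the common &lt;code&gt;mcpServers&lt;/code&gt; configuration format (Claude Desktop and several other MCP clients do), registering Siteline might look like this -- the &lt;code&gt;"siteline"&lt;/code&gt; key name is arbitrary:&lt;/p&gt;

```json
{
  "mcpServers": {
    "siteline": {
      "command": "npx",
      "args": ["siteline", "mcp"]
    }
  }
}
```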

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Vanilla HTML/CSS/JS — no framework, no build step&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js on Vercel serverless functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; Supabase PostgreSQL (results cached 24h per domain)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; One — &lt;code&gt;@vercel/og&lt;/code&gt; for dynamic social images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One architectural decision worth mentioning: Siteline tests with a non-browser user-agent first, then falls back to a headless browser if blocked. This lets it distinguish between "your content is bad" and "your firewall is blocking agents" — which is the diagnostic that matters most.&lt;/p&gt;
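&lt;p&gt;The decision logic behind that fallback can be sketched as a small pure function. The status codes and verdict strings here are illustrative, not Siteline's actual output:&lt;/p&gt;

```javascript
// Sketch of the two-phase diagnosis: compare what a plain non-browser
// fetch saw against what a headless browser saw for the same URL.
// Inputs are { status, hasContent }; verdict strings are illustrative.
function diagnose(agentFetch, browserFetch) {
  if (agentFetch.status === 403 || agentFetch.status === 429) {
    // The agent was refused, but a browser gets real content: a WAF issue.
    if (browserFetch.hasContent) return "firewall-blocks-agents";
  }
  if (agentFetch.status === 200) {
    if (agentFetch.hasContent) return "agent-readable";
    // 200 OK but empty HTML that only a browser can render: a JS issue.
    if (browserFetch.hasContent) return "content-behind-javascript";
  }
  return "unreachable-or-empty";
}
```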

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;I'm working on multi-page analysis (right now it evaluates the landing page only) and a comparison mode for benchmarking against competitors. The rubric itself is versioned and will evolve as agent capabilities change.&lt;/p&gt;

&lt;p&gt;How do you currently test whether AI agents can actually use your site, or do you just assume it works?&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How Do You Measure Whether Someone Is Actually Good at Working With AI?</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/how-do-you-measure-whether-someone-is-actually-good-at-working-with-ai-2ofo</link>
      <guid>https://dev.to/snapsynapse/how-do-you-measure-whether-someone-is-actually-good-at-working-with-ai-2ofo</guid>
      <description>&lt;p&gt;Here's a question that sounds simple and isn't: is your team actually good at working with AI, or are they just using it?&lt;/p&gt;

&lt;p&gt;"Using" means generating output. "Good at working with" means the human added judgment, caught errors, maintained context, and produced something the organization can defend. The difference matters because when something goes wrong, accountability doesn't attach to the AI. It attaches to the person who signed off.&lt;/p&gt;

&lt;p&gt;Every organization deploying AI needs to answer this question. And almost none of them can, because the tools they're using to measure AI skills don't measure collaboration. They measure knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  The quiz problem
&lt;/h2&gt;

&lt;p&gt;The dominant approach to measuring AI capability in organizations is some form of quiz: multiple choice, scenario-based questions, self-assessment surveys. These tell you whether someone knows what good collaboration looks like. They don't tell you whether someone does it.&lt;/p&gt;

&lt;p&gt;This is the same gap that exists between knowing you should write tests and actually writing tests. Between knowing you should review PR diffs line by line and actually reviewing them. Knowledge and behavior diverge under real conditions, especially when the behavior is effortful and the shortcut is invisible.&lt;/p&gt;

&lt;p&gt;The shortcut with AI is accepting output without meaningful verification. It looks like productivity. It feels like efficiency. And it's undetectable by any assessment that asks what you &lt;em&gt;would&lt;/em&gt; do rather than observing what you &lt;em&gt;actually&lt;/em&gt; do.&lt;/p&gt;

&lt;h2&gt;
  
  
  What behavioral measurement looks like
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://paice.work" rel="noopener noreferrer"&gt;PAICE&lt;/a&gt; takes a different approach. Instead of asking people about AI collaboration, it puts them in one.&lt;/p&gt;

&lt;p&gt;The assessment is a 25-minute conversation with an AI system. It looks and feels like a normal working session: you're given a realistic task, you collaborate with the AI to complete it, and you produce a deliverable. What you don't know is that the AI's outputs contain strategically injected errors -- factual mistakes, logical inconsistencies, subtle hallucinations calibrated to the domain.&lt;/p&gt;

&lt;p&gt;The assessment isn't measuring whether you can use AI. It's measuring what happens when the AI is wrong and you're responsible for the output.&lt;/p&gt;

&lt;p&gt;Do you catch the error? Do you verify claims that sound plausible? When you find a problem, do you fix it or work around it? When the AI pushes back on your correction, do you hold your ground or defer? These behavioral signals are what the scoring model captures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dimensional scoring
&lt;/h2&gt;

&lt;p&gt;Collaboration quality isn't a single number. Someone might be excellent at iterative prompting but terrible at verification. Another person might catch every error but struggle to give the AI useful feedback. A single score flattens these differences into noise.&lt;/p&gt;

&lt;p&gt;PAICE measures across multiple dimensions independently:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accountability&lt;/strong&gt; measures whether someone verifies outputs, detects injected errors, and takes ownership of the final work product. This is consistently the lowest-scoring dimension across all populations tested. People know they should verify. Under real working conditions, most don't verify thoroughly enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrity&lt;/strong&gt; measures whether someone maintains factual standards, catches logical inconsistencies, and refuses to use AI-generated content that doesn't meet quality thresholds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaboration quality&lt;/strong&gt; measures the effectiveness of the human-AI interaction itself: whether feedback is specific and actionable, whether iteration actually improves the output, whether the person understands when AI adds value and when it introduces friction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evolution&lt;/strong&gt; measures adaptive capacity: whether someone builds mental models of AI strengths and weaknesses over time and adjusts their approach accordingly.&lt;/p&gt;

&lt;p&gt;Each dimension produces an independent score. For L&amp;amp;D teams designing targeted training, a dimensional profile is vastly more actionable than a percentage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The engineering problem
&lt;/h2&gt;

&lt;p&gt;Building this required solving several problems that don't have obvious precedents:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error injection that doesn't break immersion.&lt;/strong&gt; The injected errors have to be realistic enough that catching them requires domain judgment, not pattern recognition. If the errors are obviously wrong, you're measuring attention, not expertise. If they're too subtle, the signal-to-noise ratio collapses. The calibration is adaptive -- the system adjusts based on how the participant is performing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavioral signal extraction from conversation.&lt;/strong&gt; The scoring model doesn't grade the deliverable. It analyzes the collaboration process: what the participant questioned, what they accepted, how they responded to pushback, whether their verification was systematic or sporadic. This requires a multi-model architecture where the assessment AI and the scoring model operate independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-model bias prevention.&lt;/strong&gt; When the AI that runs the conversation is also the AI that scores it, you get circular reasoning. PAICE uses separate models for assessment delivery and scoring, with the scoring model evaluating behavioral signals rather than output quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-post comparison for training ROI.&lt;/strong&gt; The most valuable use case isn't a one-time score. It's administering the assessment before and after a training intervention and measuring whether actual behavior changed. This requires scoring stability across sessions and dimensional granularity fine enough to detect movement in specific skill areas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;PAICE is built for leaders and organizational decision-makers who are deploying AI and need to know whether their people are collaborating with it effectively or just using it as a faster copy-paste.&lt;/p&gt;

&lt;p&gt;If you're a developer interested in the measurement architecture, the &lt;a href="https://paice.work/whitepapers" rel="noopener noreferrer"&gt;Closing the Collaboration Gap&lt;/a&gt; whitepaper covers the technical framework, and the &lt;a href="https://paice.work/blog" rel="noopener noreferrer"&gt;daily blog&lt;/a&gt; explores the intersection of trust, verification, and performance measurement in human-AI systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://paice.work" rel="noopener noreferrer"&gt;paice.work&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;PAICE.work PBC is a public benefit corporation focused on making human-AI collaboration measurable, teachable, and governable.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I built an open spec because every bad 429 was costing me twice</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Thu, 26 Mar 2026 15:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/i-built-an-open-spec-because-every-bad-429-was-costing-me-twice-2md7</link>
      <guid>https://dev.to/snapsynapse/i-built-an-open-spec-because-every-bad-429-was-costing-me-twice-2md7</guid>
      <description>&lt;p&gt;I was building an AI agent readiness scanner called &lt;a href="https://siteline.to/" rel="noopener noreferrer"&gt;Siteline&lt;/a&gt; when I noticed something embarrassing: my own rate limiting was making things worse.&lt;/p&gt;

&lt;p&gt;An agent would hit a &lt;code&gt;429 Too Many Requests&lt;/code&gt;. It would get back &lt;code&gt;Retry-After: 60&lt;/code&gt;. So it would wait 60 seconds and try again. Reasonable. But it had no idea whether a cached result already existed for that domain. It had no idea what the actual limit was before it hit it. It had no way to know &lt;em&gt;why&lt;/em&gt; the limit existed -- was this a temporary cooldown, or was it burning through a daily quota?&lt;/p&gt;

&lt;p&gt;Every vague refusal generated follow-up traffic. The rate limit meant to protect the service was creating load on the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern that kept showing up
&lt;/h2&gt;

&lt;p&gt;I started looking at how other APIs handle this, and the same gap appeared everywhere. Rate limits exist. Communication about rate limits usually doesn't. And when it does, it's just kinda... mean? Like there's a lot of "Stop, don't do this!" but no "Hey, here's the right way to do this."&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;429&lt;/code&gt; with &lt;code&gt;Retry-After: 60&lt;/code&gt; tells a retry loop what to do. It doesn't tell an autonomous agent whether to retry, use a cached result, try a different endpoint, or inform the human. It doesn't tell a developer what the limits are before they hit them. It doesn't tell anyone &lt;em&gt;why&lt;/em&gt; the limit exists.&lt;/p&gt;

&lt;p&gt;When the caller is a person, they shrug and wait. When the caller is an agent, it retries faster, probes more systematically, and lacks the judgment to know when to stop. The waste compounds.&lt;/p&gt;

&lt;p&gt;So I wrote a spec.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Graceful Boundaries&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/snapsynapse/graceful-boundaries/blob/main/spec.md" rel="noopener noreferrer"&gt;Read the full spec on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Graceful Boundaries addresses three gaps that existing standards cover separately but nothing combines:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive discovery&lt;/strong&gt; -- limits are machine-readable before they are hit&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured refusal&lt;/strong&gt; -- when a limit is exceeded, the response explains what happened, which limit applies, when to retry, and why&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Constructive guidance&lt;/strong&gt; -- the refusal includes a useful next step, not just a block&lt;/p&gt;

&lt;p&gt;The spec defines four conformance levels, from "you added five fields to your 429s" (Level 1) to "agents can discover your limits, understand your refusals, follow constructive alternatives, and self-throttle on success responses" (Level 4).&lt;/p&gt;
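&lt;p&gt;Level 1 is deliberately easy to adopt. Here is a sketch of building such a refusal body, using the field names from the refusal example later in this post -- the helper function and its arguments are hypothetical:&lt;/p&gt;

```javascript
// Level 1 sketch: attach the spec's descriptive fields to a 429 body.
// Field names follow the Graceful Boundaries example; this helper and
// its argument shape are invented for illustration.
function gracefulRefusal({ limitText, retryAfterSeconds, why, alternativeEndpoint }) {
  const body = {
    error: "rate_limit_exceeded",
    detail: `You can run up to ${limitText}. Try again in ${retryAfterSeconds} seconds.`,
    limit: limitText,
    retryAfterSeconds,
    why,
  };
  // Only include a constructive alternative when one actually exists.
  if (alternativeEndpoint) body.alternativeEndpoint = alternativeEndpoint;
  return body;
}
```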

&lt;h2&gt;
  
  
  What a bad refusal looks like
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Too Many Requests"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The caller learns nothing. It retries. You get more traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Graceful Boundaries refusal looks like
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rate_limit_exceeded"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You can run up to 10 scans per hour. Try again in 2400 seconds."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"limit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10 scans per IP per hour"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"retryAfterSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"why"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Siteline is a free service. Rate limits keep it available for everyone and prevent abuse."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alternativeEndpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/result?id=example.com"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The caller knows the limit. Knows when to retry. Knows &lt;em&gt;why&lt;/em&gt; the limit exists (a security signal, not a courtesy). And knows there's a cached result endpoint it can try right now instead of waiting.&lt;/p&gt;

&lt;p&gt;Zero follow-up requests generated from that refusal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proactive discovery: let agents plan before they fail
&lt;/h2&gt;

&lt;p&gt;Level 2 adds a discovery endpoint. Any agent can hit &lt;code&gt;/api/limits&lt;/code&gt; and get back every enforced limit as structured JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://siteline.to/api/limits | jq &lt;span class="s1"&gt;'{service, limits: .limits.scan}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Siteline"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"limits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/scan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"limits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ip-rate"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"maxRequests"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"windowSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10 scans per IP per hour."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An agent that reads this before making any requests can budget its calls. No discovery-through-failure.&lt;/p&gt;
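As a sketch of what "budgeting" could look like, a client might derive a safe pacing interval from the discovered limits. The discovery shape matches the `/api/limits` example above; the pacing helper itself is illustrative, not part of the spec.

```javascript
// Derive a self-imposed pacing budget from a discovered limit.
// Illustrative helper, not part of the Graceful Boundaries spec.
function paceFromLimits(discovery) {
  const ipLimit = discovery.limits.scan.limits.find((l) => l.type === "ip-rate");
  // Spreading maxRequests evenly across the window guarantees
  // the caller never trips the limit.
  const minIntervalMs = (ipLimit.windowSeconds / ipLimit.maxRequests) * 1000;
  return { maxRequests: ipLimit.maxRequests, minIntervalMs };
}

const discovery = {
  service: "Siteline",
  limits: {
    scan: {
      endpoint: "/api/scan",
      method: "GET",
      limits: [
        { type: "ip-rate", maxRequests: 10, windowSeconds: 3600,
          description: "10 scans per IP per hour." },
      ],
    },
  },
};

console.log(paceFromLimits(discovery)); // { maxRequests: 10, minIntervalMs: 360000 }
```

One GET to `/api/limits` up front replaces an unknown number of 429s later.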

&lt;h2&gt;
  
  
  Self-throttling on success
&lt;/h2&gt;

&lt;p&gt;Level 4 adds proactive headers to successful responses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;RateLimit: limit=10, remaining=9, reset=3540
RateLimit-Policy: 10;w=3600
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A caller seeing &lt;code&gt;remaining=1&lt;/code&gt; self-throttles before the next request. A caller seeing &lt;code&gt;remaining=9&lt;/code&gt; knows it has budget and won't add artificial delays. This is the highest-leverage traffic reduction mechanism in the spec, and it follows the IETF RateLimit header fields draft (&lt;code&gt;draft-ietf-httpapi-ratelimit-headers&lt;/code&gt;).&lt;/p&gt;
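The consuming side of those headers can be sketched in a few lines. The header syntax matches the example above; the "wait out the window when the budget is nearly gone" policy is an assumption, not something the spec prescribes.

```javascript
// Parse a draft-style RateLimit header, e.g. "limit=10, remaining=9, reset=3540".
function parseRateLimit(headerValue) {
  const fields = {};
  for (const part of headerValue.split(",")) {
    const [key, value] = part.trim().split("=");
    fields[key] = Number(value);
  }
  return fields;
}

// Illustrative self-throttling policy (not prescribed by the spec):
// if the budget is nearly exhausted, wait out the window.
function delayBeforeNextMs(fields) {
  return fields.remaining > 1 ? 0 : fields.reset * 1000;
}

const fields = parseRateLimit("limit=10, remaining=1, reset=3540");
console.log(delayBeforeNextMs(fields)); // 3540000
```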

&lt;h2&gt;
  
  
  It applies beyond rate limits
&lt;/h2&gt;

&lt;p&gt;One thing that surprised me during development: the pattern works for every class of HTTP response, not just 429s.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;400&lt;/code&gt; with &lt;code&gt;"why": "Blocks requests to non-public addresses to prevent server-side request forgery"&lt;/code&gt; tells the caller the security model. A &lt;code&gt;404&lt;/code&gt; with &lt;code&gt;"scanAvailable": true&lt;/code&gt; and a &lt;code&gt;scanUrl&lt;/code&gt; tells the caller it can create the resource instead of giving up. A &lt;code&gt;503&lt;/code&gt; with &lt;code&gt;retryAfterSeconds&lt;/code&gt; and a &lt;code&gt;statusUrl&lt;/code&gt; tells the caller when to come back and where to check status.&lt;/p&gt;

&lt;p&gt;The spec covers five response classes (Limit, Input, Access, Not Found, Availability) with specific required and optional fields for each.&lt;/p&gt;

&lt;h2&gt;
  
  
  The security model
&lt;/h2&gt;

&lt;p&gt;Transparency and security are in tension. The spec handles this with a simple principle: &lt;strong&gt;be transparent about rules, not mechanisms.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"10 requests per hour" is a rule. Safe to disclose. "We use Redis with a sliding window" is an implementation. Not safe. The spec includes eight security considerations (SC-1 through SC-8) covering limit calibration attacks, validation oracles, URL origin restrictions, and more. There's a full &lt;a href="https://github.com/snapsynapse/graceful-boundaries/blob/main/SECURITY-AUDIT.md" rel="noopener noreferrer"&gt;security audit&lt;/a&gt; in the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adopting it
&lt;/h2&gt;

&lt;p&gt;Level 1 takes about 20 minutes. Add five fields to your existing 429 responses: &lt;code&gt;error&lt;/code&gt;, &lt;code&gt;detail&lt;/code&gt;, &lt;code&gt;limit&lt;/code&gt;, &lt;code&gt;retryAfterSeconds&lt;/code&gt;, and &lt;code&gt;why&lt;/code&gt;. The &lt;code&gt;why&lt;/code&gt; field is the one that matters most -- it must explain the purpose of the limit, not restate the error.&lt;/p&gt;
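Composing those five fields can be as small as one function. The helper name and message wording below are illustrative; only the field names come from the spec.

```javascript
// Build a Level 1 Graceful Boundaries refusal body.
// Helper name and message wording are illustrative, not from the spec.
function buildRefusal({ limit, windowSeconds, retryAfterSeconds, why }) {
  const hours = windowSeconds / 3600;
  return {
    error: "rate_limit_exceeded",
    detail: `You can run up to ${limit} requests per ${hours} hour(s). ` +
            `Try again in ${retryAfterSeconds} seconds.`,
    limit: `${limit} requests per IP per ${hours} hour(s)`,
    retryAfterSeconds,
    why, // must explain the purpose of the limit, not restate the error
  };
}

const body = buildRefusal({
  limit: 10,
  windowSeconds: 3600,
  retryAfterSeconds: 2400,
  why: "Rate limits keep the service available for everyone and prevent abuse.",
});
console.log(body.retryAfterSeconds); // 2400
```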

&lt;p&gt;Level 2 adds a discovery endpoint. Level 3 adds constructive guidance to refusals. Level 4 adds proactive headers to successes.&lt;/p&gt;

&lt;p&gt;The conformance checker validates any public URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node evals/check.js https://your-service.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;104 tests, zero dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The full spec is at &lt;a href="https://github.com/snapsynapse/graceful-boundaries/blob/main/spec.md" rel="noopener noreferrer"&gt;spec.md&lt;/a&gt;. The reference implementation is &lt;a href="https://siteline.to/" rel="noopener noreferrer"&gt;Siteline&lt;/a&gt;, a Level 4 conformant AI agent readiness scanner you can verify yourself.&lt;/p&gt;

&lt;p&gt;Licensed CC-BY-4.0. Use it, adapt it, build on it.&lt;/p&gt;

&lt;p&gt;How do your APIs currently communicate limits? Is it the structured kind, or the "good luck figuring out what just happened" kind?&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>api</category>
      <category>webdev</category>
      <category>agents</category>
    </item>
    <item>
      <title>Your AI Agent Skills Have a Version Control Problem</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/your-ai-agent-skills-have-a-version-control-problem-5g1g</link>
      <guid>https://dev.to/snapsynapse/your-ai-agent-skills-have-a-version-control-problem-5g1g</guid>
      <description>&lt;p&gt;You build a skill for your AI coding agent. You refine it across five sessions. You upload it to a new conversation and immediately hit the question you can't answer: is this the one from yesterday, or the one from Tuesday?&lt;/p&gt;

&lt;p&gt;If you're working in Claude Chat, there's no persistent filesystem. If you're in Codex, the session is stateless. If you moved the skill from Claude Code to Gemini CLI, you probably renamed a file and forgot which copy is canonical. And if someone else is using your skill, they have no way to know whether their copy matches yours.&lt;/p&gt;

&lt;p&gt;Git solves this for code repositories. But agent skills don't always live in repos. They live in chat uploads, settings panels, Obsidian vaults, &lt;code&gt;.skill&lt;/code&gt; files, and directories that get copied between surfaces with no audit trail. The version information needs to travel with the files, not alongside them in a system the files might never touch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What skill-provenance does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/snapsynapse/skill-provenance" rel="noopener noreferrer"&gt;skill-provenance&lt;/a&gt; is a meta-skill, or a skill that manages &lt;em&gt;other&lt;/em&gt; skills. Load it alongside any skill project and it handles the versioning bookkeeping at session boundaries.&lt;/p&gt;

&lt;p&gt;Three conventions make it work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version identity lives inside the files.&lt;/strong&gt; Not in a filename suffix, not in a folder name, not in your memory of when you last edited it. The SKILL.md frontmatter carries the version, and the manifest confirms it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A changelog travels with the bundle.&lt;/strong&gt; Every session close appends what changed. When the next session opens, it can read the history without asking you to remember.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A manifest lists every file with roles and hashes.&lt;/strong&gt; Open a session, the skill reads the manifest, checks that all files are present, verifies hashes, and tells you what's stale or missing before you start working.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happens at session boundaries
&lt;/h2&gt;

&lt;p&gt;When you &lt;strong&gt;open a session&lt;/strong&gt;, it reads the manifest, compares hashes, flags missing files, and reports what needs attention.&lt;/p&gt;

&lt;p&gt;When you &lt;strong&gt;close a session&lt;/strong&gt;, it updates version headers, recomputes hashes, appends to the changelog, and flags files that should have been updated but weren't.&lt;/p&gt;

&lt;p&gt;When you &lt;strong&gt;hand off between sessions&lt;/strong&gt;, it generates a handoff note: current state, what was accomplished, stale files, next steps. Because the next instance of Claude has no memory of what you just did, and "I think I left off around..." is not version control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-platform portability
&lt;/h2&gt;

&lt;p&gt;The skill works on any platform that supports the &lt;a href="https://agentskills.io" rel="noopener noreferrer"&gt;agentskills.io&lt;/a&gt; standard. But different platforms have different opinions about SKILL.md frontmatter:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Frontmatter rules&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude&lt;/td&gt;
&lt;td&gt;Full metadata block with version info&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini CLI&lt;/td&gt;
&lt;td&gt;Name and description only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Codex&lt;/td&gt;
&lt;td&gt;Name and description only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;Follows agentskills.io spec&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The manifest tracks a &lt;code&gt;frontmatter_mode&lt;/code&gt; field (&lt;code&gt;claude&lt;/code&gt; or &lt;code&gt;minimal&lt;/code&gt;) so the skill knows whether to embed version info in SKILL.md or keep it manifest-only. The repo ships in &lt;code&gt;minimal&lt;/code&gt; mode for maximum portability.&lt;/p&gt;

&lt;p&gt;This means you can author a skill in Claude Code, export it for Gemini CLI, and the version identity carries over without manual conversion.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's in the bundle
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skill-provenance.skill           # Install this in Claude Settings
skill-provenance/
├── SKILL.md                     # The skill definition
├── README.md                    # User guide with worked examples
├── MANIFEST.yaml                # File inventory: roles, versions, hashes
├── CHANGELOG.md                 # Change history
├── evals.json                   # 13 evaluation scenarios
└── validate.sh                  # Local hash verification script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;.skill&lt;/code&gt; file is a ZIP for Claude's Settings UI. The directory is the same content for Claude Code, git repos, and Obsidian vaults. Use whichever format fits your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  The evals
&lt;/h2&gt;

&lt;p&gt;13 structured evaluation scenarios covering: bootstrapping an unversioned bundle, detecting missing and stale files on session open, conflict detection between version headers and manifests, cross-platform bootstrapping for Codex and Gemini CLI, frontmatter mode toggling, generating git commit messages, and handoff notes with per-file change summaries.&lt;/p&gt;

&lt;p&gt;These aren't unit tests. They're prompt-and-expected-behavior pairs you can use to verify the skill works correctly on your platform, or as a reference for how to write evals for your own skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;This is for anyone who builds or maintains agent skills and has been bitten by the "which version is this?" problem. If you're a solo author working across multiple surfaces, it catches the drift you'd otherwise miss. If you're handing skills to a team, it gives the next person a manifest they can verify instead of a folder they have to trust.&lt;/p&gt;

&lt;p&gt;It's also a reference implementation for how to structure a skill bundle. If you're building your first skill and wondering what files to include, how to write evals, or how to handle cross-platform compatibility, this is one answer to those questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;

&lt;p&gt;For Claude Chat: download &lt;code&gt;skill-provenance.skill&lt;/code&gt; from the &lt;a href="https://github.com/snapsynapse/skill-provenance/releases" rel="noopener noreferrer"&gt;latest release&lt;/a&gt;, go to Settings, Skills, Add Skill, select the file.&lt;/p&gt;

&lt;p&gt;For Claude Code, Codex, or Gemini CLI: use the &lt;code&gt;skill-provenance/&lt;/code&gt; directory directly.&lt;/p&gt;

&lt;p&gt;Then tell your agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use the skill-provenance skill to bootstrap this bundle.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://skillprovenance.dev/" rel="noopener noreferrer"&gt;Homepage&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/snapsynapse/skill-provenance" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; -- MIT licensed, v4.0.0.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://snapsynapse.com" rel="noopener noreferrer"&gt;SnapSynapse&lt;/a&gt;. Used as the versioning backbone for &lt;a href="https://github.com/snapsynapse/skill-a11y-audit" rel="noopener noreferrer"&gt;skill-a11y-audit&lt;/a&gt;, &lt;a href="https://github.com/snapsynapse/ai-capability-reference" rel="noopener noreferrer"&gt;ai-capability-reference&lt;/a&gt;, and the rest of the SnapSynapse skill ecosystem.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Knowledge as Code: A Pattern for Knowledge Bases That Verify Themselves</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Wed, 18 Mar 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/knowledge-as-code-a-pattern-for-knowledge-bases-that-verify-themselves-2lhd</link>
      <guid>https://dev.to/snapsynapse/knowledge-as-code-a-pattern-for-knowledge-bases-that-verify-themselves-2lhd</guid>
      <description>&lt;p&gt;Documentation rots. You know this. You've seen internal wikis with pages last updated in 2023 that everyone still treats as authoritative. You've inherited a knowledge base where half the links are dead and nobody knows which facts have drifted.&lt;/p&gt;

&lt;p&gt;The usual response is to assign someone to "maintain the docs." This works until that person gets busy, changes roles, or leaves. Then the decay resumes, silently, until the wrong person relies on the wrong fact.&lt;/p&gt;

&lt;p&gt;What if the knowledge base could detect its own decay?&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;Knowledge as Code applies software engineering practices to knowledge management. The knowledge lives in version-controlled plain text files. It is validated by automated processes. It produces multiple outputs from a single source. And it actively resists becoming outdated.&lt;/p&gt;

&lt;p&gt;This pattern emerged from building what was originally known as the AI Capability Reference, now &lt;a href="https://github.com/snapsynapse/ai-tool-watch" rel="noopener noreferrer"&gt;AI Tool Watch&lt;/a&gt;, an open-source site that tracks AI capabilities, pricing tiers, and platform support across 12 products. The data changes constantly: vendors update pricing, features move between tiers, platforms add or deprecate capabilities. A traditional static site would be stale within weeks. Knowledge as Code is how it stays current.&lt;/p&gt;

&lt;h2&gt;
  
  
  Six properties
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Plain text canonical.&lt;/strong&gt; Knowledge lives in human-readable, version-controlled files. No database, no CMS, no vendor lock-in. In this project: markdown and YAML files in &lt;code&gt;data/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-healing.&lt;/strong&gt; Automated verification detects when the knowledge has drifted from reality. The system flags decay before humans notice it. In this project: a multi-model AI cascade cross-checks all data twice weekly and opens GitHub issues for human review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-output.&lt;/strong&gt; One source produces every format needed. The results are human-readable, machine-readable, agent-queryable, and search-optimized. In this project: HTML site, JSON API, MCP server, 125 SEO bridge pages, sitemap, &lt;code&gt;llms.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero-dependency.&lt;/strong&gt; No external packages. The build uses only language built-ins. Nothing breaks when you come back in a year. In this project: one Node.js script, no &lt;code&gt;package.json&lt;/code&gt;, no &lt;code&gt;node_modules&lt;/code&gt;.&lt;/p&gt;
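The flavor of a zero-dependency build, in miniature: plain text in, HTML out, language built-ins only. The tiny paragraph renderer below is illustrative and much simpler than the project's actual build script.

```javascript
// A zero-dependency build step in miniature: turn plain text sources
// into HTML using only language built-ins. Illustrative only -- the
// project's real build.js does far more than this.
function renderPage(title, plainTextBody) {
  // Minimal renderer: blank-line-separated paragraphs, nothing else.
  const body = plainTextBody
    .split(/\n{2,}/)
    .map((p) => `<p>${p.trim()}</p>`)
    .join("\n");
  return `<!doctype html><title>${title}</title>\n${body}`;
}

const html = renderPage("Example", "First paragraph.\n\nSecond paragraph.");
console.log(html.includes("<p>Second paragraph.</p>")); // true
```

No package.json means there is nothing to `npm install` and nothing to go out of date.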

&lt;p&gt;&lt;strong&gt;Git-native.&lt;/strong&gt; Git is the collaboration layer, the audit trail, the deployment trigger, and the contribution workflow. Issues, PRs, CI/CD, version history -- all through Git.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ontology-driven.&lt;/strong&gt; A vendor-neutral taxonomy of concepts maps to vendor-specific implementations. The structure is the data model. In this project: 18 capabilities map to 72 implementations across 12 products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why these compound
&lt;/h2&gt;

&lt;p&gt;Any one of these is a reasonable design choice. The value is in the combination.&lt;/p&gt;

&lt;p&gt;Plain text plus Git means anyone can contribute with no dev environment. Edit a file, open a PR. Plain text plus zero-dependency build means the project still builds in five years. Nothing to update, nothing to break.&lt;/p&gt;

&lt;p&gt;Ontology plus multi-output means one correction fixes the site, the API, the MCP server, and every bridge page at once. Self-healing plus Git means verification results are tracked as issues with full audit trail. Nothing is silently changed. Zero-dependency plus self-healing means maintenance cost stays low even as the knowledge grows. The system scales through automation, not staffing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The self-healing mechanism
&lt;/h2&gt;

&lt;p&gt;This is the piece that makes Knowledge as Code more than "docs as code with a new name."&lt;/p&gt;

&lt;p&gt;Twice a week, a three-layer multi-model verification cascade runs. Gemini, Perplexity, Grok, and Claude each cross-check every tracked data point: pricing tiers, platform availability, feature status, gating, regional restrictions. To prevent provider bias, models are skipped when verifying their own platform (Gemini doesn't verify Google features). A change only gets flagged when at least three models agree on a discrepancy.&lt;/p&gt;

&lt;p&gt;Flagged changes become GitHub issues for human review. Nothing auto-merges. Every data point carries a &lt;code&gt;Checked&lt;/code&gt; date; anything not re-verified within seven days is treated as stale.&lt;/p&gt;
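The agreement threshold amounts to a quorum check over per-model verdicts, with the provider-bias exclusion applied first. The function and field names here are illustrative, not the project's actual code.

```javascript
// Flag a data point only when at least `quorum` models independently
// report the same discrepancy, skipping any model verifying its own
// vendor's platform. Names are illustrative, not the project's code.
function shouldFlag(verdicts, { vendor, quorum = 3 }) {
  const eligible = verdicts.filter((v) => v.modelVendor !== vendor);
  const disagree = eligible.filter((v) => v.discrepancy);
  return disagree.length >= quorum;
}

const verdicts = [
  { model: "gemini",     modelVendor: "google",     discrepancy: true },
  { model: "perplexity", modelVendor: "perplexity", discrepancy: true },
  { model: "grok",       modelVendor: "xai",        discrepancy: true },
  { model: "claude",     modelVendor: "anthropic",  discrepancy: false },
];

// Verifying a Google feature: Gemini is excluded, only two eligible
// models agree, so nothing is flagged.
console.log(shouldFlag(verdicts, { vendor: "google" })); // false
console.log(shouldFlag(verdicts, { vendor: "openai" })); // true
```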

&lt;p&gt;Link integrity gets checked every week too, using both automated CI checks and a browser-based checker that runs through real Chrome to bypass bot protection.&lt;/p&gt;

&lt;p&gt;This is anti-entropy for knowledge. In distributed systems like Dynamo and Cassandra, anti-entropy is the process that detects and repairs divergence from desired state. The verification cascade does the same thing: it finds where reality has moved away from what the files say and flags the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standing on shoulders
&lt;/h2&gt;

&lt;p&gt;This pattern draws from established work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File over app.&lt;/strong&gt; Steph Ango's &lt;a href="https://stephango.com/file-over-app" rel="noopener noreferrer"&gt;argument&lt;/a&gt; that durable digital artifacts must be files you can control, in formats that are easy to retrieve and read. Derek Sivers on &lt;a href="https://sive.rs/plaintext" rel="noopener noreferrer"&gt;plain text permanence&lt;/a&gt;. The permacomputing movement on resilient, minimal-dependency software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docs as code.&lt;/strong&gt; Managing documentation with the same tools as software -- version control, pull requests, CI, plain text formats. Popularized by the Write the Docs community. Tom Preston-Werner (Jekyll, 2008), Eric Holscher (Read the Docs), Anne Gentle (&lt;em&gt;Docs Like Code&lt;/em&gt;, 2017), Andrew Etter (&lt;em&gt;Modern Technical Writing&lt;/em&gt;, 2016).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Living documentation.&lt;/strong&gt; Cyrille Martraire's framework for documentation that evolves at the same pace as the system it describes. His approach generates docs from code annotations and tests. This pattern extends the idea: the knowledge isn't derived from code, and verification uses AI models rather than test suites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitOps.&lt;/strong&gt; Coined by Weaveworks (2017). Git as single source of truth, with automated agents that detect drift between declared state and actual state, then reconcile. Originally for infrastructure, but it maps directly to knowledge:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;GitOps (infrastructure)&lt;/th&gt;
&lt;th&gt;Knowledge as Code&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;YAML declares desired state&lt;/td&gt;
&lt;td&gt;Markdown declares what's true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller detects drift&lt;/td&gt;
&lt;td&gt;AI cascade detects drift from reality&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-reconciliation or alert&lt;/td&gt;
&lt;td&gt;GitHub issues for human review&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Git as single source of truth&lt;/td&gt;
&lt;td&gt;Git as single source of truth&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Multi-model verification.&lt;/strong&gt; Academic foundations for using multiple AI models as cross-checking judges: Zheng et al.'s "Judging LLM-as-a-Judge" (NeurIPS 2023), Verga et al.'s "PoLL: Panel of LLM Evaluators" (2024), Du et al.'s "Multiagent Debate" (2023), Huang &amp;amp; Zhou's "LLMs Cannot Self-Correct Reasoning Yet" (ICLR 2024).&lt;/p&gt;

&lt;h2&gt;
  
  
  What we think is new
&lt;/h2&gt;

&lt;p&gt;We haven't found prior art for these specific applications. If you know of any, we'd like to hear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Knowledge as Code" as a named pattern -- the "-as-code" lineage is well-established, but this specific application to maintained knowledge bases doesn't appear to be named&lt;/li&gt;
&lt;li&gt;AI verification cascades for documentation -- multi-model evaluation exists in academic literature, but applying it as a scheduled process to maintain a knowledge base's factual accuracy&lt;/li&gt;
&lt;li&gt;Multi-format output from the same plain text -- HTML, JSON API, MCP endpoints, and SEO bridge pages, all from markdown/YAML, with zero dependencies&lt;/li&gt;
&lt;li&gt;Ontology-driven static site generation -- using a formal taxonomy to drive site structure, navigation, and programmatic pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The entire project is open source. There is nothing to install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/snapsynapse/knowledge-as-code.git
&lt;span class="nb"&gt;cd &lt;/span&gt;knowledge-as-code
node scripts/build.js
open docs/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/snapsynapse/knowledge-as-code-template" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; | &lt;a href="https://knowledge-as-code.com/" rel="noopener noreferrer"&gt;Homepage&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is a working title and an active discussion. If you've seen this pattern elsewhere, named or unnamed, &lt;a href="https://github.com/snapsynapse/knowledge-as-code-template/discussions" rel="noopener noreferrer"&gt;tell us&lt;/a&gt;. Built by &lt;a href="https://snapsynapse.com" rel="noopener noreferrer"&gt;SnapSynapse&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>showdev</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>HardGuard25: A 25-character alphabet for human-readable unique IDs</title>
      <dc:creator>snapsynapse</dc:creator>
      <pubDate>Tue, 10 Mar 2026 12:01:00 +0000</pubDate>
      <link>https://dev.to/snapsynapse/hardguard25-a-25-character-alphabet-for-human-readable-unique-ids-1652</link>
      <guid>https://dev.to/snapsynapse/hardguard25-a-25-character-alphabet-for-human-readable-unique-ids-1652</guid>
      <description>&lt;p&gt;Standard ID alphabets include characters that look identical at normal reading sizes. O and 0. I and 1 and l. S and 5. B and 8. For dyslexic readers, d and b, q and p. Every one of these is a support ticket, a failed lookup, or a phone call where someone spells the same code three times.&lt;/p&gt;

&lt;p&gt;Crockford Base32 has been the go-to fix since 2002, but it removes only 4 characters. HardGuard25 removes 11: the digit lookalikes, the dyslexia mirror pairs, and operator lookalikes like T (+) and X (*) that break spreadsheets and URLs.&lt;/p&gt;

&lt;p&gt;The alphabet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 1 2 3 4 5 6 7 8 9 A C D F G H J K M N P R U W Y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rule: when a letter and a digit compete for the same visual slot, the digit always wins.&lt;/p&gt;
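One consequence of that rule: input handling can fold any excluded lookalike a human types back to the digit that won its slot. The mapping below illustrates the idea only; it is not a documented feature of the hardguard25 package.

```javascript
// Fold excluded lookalike characters to the digit that "won" the slot.
// This mapping illustrates the rule above; it is NOT a documented
// feature of the hardguard25 package.
const LOOKALIKES = { O: "0", I: "1", L: "1", S: "5", B: "8", Z: "2" };

function foldLookalikes(input) {
  return input
    .toUpperCase()
    .replace(/-/g, "") // hyphens are display grouping, not identity
    .replace(/[OILSBZ]/g, (ch) => LOOKALIKES[ch]);
}

console.log(foldLookalikes("acoh-7piw")); // "AC0H7P1W"
```

A code read over the phone as "oh" or "ell" still resolves to the ID that was actually issued.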

&lt;h2&gt;
  
  
  Quickstart
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;JavaScript&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @snapsynapse/hardguard25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;checkDigit&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@snapsynapse/hardguard25&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;                          &lt;span class="c1"&gt;// "AC3H7PUW"&lt;/span&gt;
&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;checkDigit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;    &lt;span class="c1"&gt;// "AC3H7PUW" + check char&lt;/span&gt;
&lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AC3H-7PUW&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;               &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;span class="nf"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ac3h-7puw&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;              &lt;span class="c1"&gt;// "AC3H7PUW"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;hardguard25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;hardguard25&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;check_digit&lt;/span&gt;

&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                           &lt;span class="c1"&gt;# "AC3H7PUW"
&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;check_digit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;         &lt;span class="c1"&gt;# "AC3H7PUW" + check char
&lt;/span&gt;&lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AC3H-7PUW&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                &lt;span class="c1"&gt;# True
&lt;/span&gt;&lt;span class="nf"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ac3h-7puw&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;               &lt;span class="c1"&gt;# "AC3H7PUW"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Go&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"github.com/snapsynapse/hardguard25/go"&lt;/span&gt;

&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;hardguard25&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;hardguard25&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"AC3H-7PUW"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;hardguard25&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ac3h-7puw"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No library needed? The alphabet is the standard. Use it directly: &lt;code&gt;0123456789ACDFGHJKMNPRUWY&lt;/code&gt;.&lt;/p&gt;
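&lt;p&gt;To make that concrete, here is a minimal Python sketch built only on the alphabet itself. This is not the reference library: the function names mirror the examples above, but the real &lt;code&gt;hardguard25&lt;/code&gt; packages also implement check digits and the full normalization rules from the spec.&lt;/p&gt;

```python
import secrets

# The HardGuard25 alphabet: digits plus uppercase letters,
# with look-alike characters (O, I, L, ...) removed. 25 chars.
ALPHABET = "0123456789ACDFGHJKMNPRUWY"

def generate(length=8):
    """Generate a random ID from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def normalize(raw):
    """Uppercase and strip common separators before validating."""
    return raw.upper().replace("-", "").replace(" ", "")

def validate(raw):
    """True if the normalized string is non-empty and alphabet-only."""
    s = normalize(raw)
    return s != "" and all(c in ALPHABET for c in s)

print(normalize("ac3h-7puw"))  # "AC3H7PUW"
print(validate("AC3H-7PUW"))   # True
```
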

&lt;h2&gt;
  
  
  Where it fits
&lt;/h2&gt;

&lt;p&gt;Order numbers, tracking codes, license keys, patient IDs, booking references, device IDs, promo codes, QR payloads, one-time passcodes. If it gets printed on a label, read over the phone, entered by hand, or scanned by OCR, it should be HardGuard25.&lt;/p&gt;

&lt;h2&gt;
  
  
  How long should IDs be?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Length&lt;/th&gt;
&lt;th&gt;Unique IDs&lt;/th&gt;
&lt;th&gt;Good for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;390,625&lt;/td&gt;
&lt;td&gt;Small inventory, tickets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;244 million&lt;/td&gt;
&lt;td&gt;Medium businesses&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;152 billion&lt;/td&gt;
&lt;td&gt;Large systems&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;2.33 x 10^22&lt;/td&gt;
&lt;td&gt;Cross-system identifiers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each character provides 4.64 bits of entropy.&lt;/p&gt;
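&lt;p&gt;Both the entropy figure and the capacity column fall straight out of the alphabet size: log2 of 25 bits per character, and 25^n unique IDs at length n. A quick check in Python:&lt;/p&gt;

```python
import math

ALPHABET_SIZE = 25  # HardGuard25 uses a 25-character alphabet

# Entropy contributed by each character
bits_per_char = math.log2(ALPHABET_SIZE)
print(f"{bits_per_char:.2f} bits/char")  # 4.64 bits/char

# Capacity at each length from the table
print(25 ** 4)   # 390625
print(25 ** 6)   # 244140625 (about 244 million)
print(25 ** 8)   # 152587890625 (about 152 billion)
print(25 ** 16)  # about 2.33 x 10^22
```
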

&lt;h2&gt;
  
  
  When not to use it
&lt;/h2&gt;

&lt;p&gt;Cryptographic keys (use proper key derivation), blockchain consensus (use domain-specific formats), systems requiring UUID guarantees (use UUIDv7 or ULID), or machine-only contexts where no human ever sees the ID.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hardguard25.com/" rel="noopener noreferrer"&gt;Homepage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/snapsynapse/hardguard25" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://snapsynapse.github.io/hardguard25/" rel="noopener noreferrer"&gt;Interactive generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/snapsynapse/hardguard25/blob/main/SPEC.md" rel="noopener noreferrer"&gt;Full specification (SPEC.md)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Spec is CC BY 4.0. Code is MIT.&lt;/p&gt;




&lt;p&gt;What character set are you using for human-facing IDs? Curious how many people have hit the O/0 problem in production.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>python</category>
    </item>
  </channel>
</rss>
