<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ryan Nelson</title>
    <description>The latest articles on DEV Community by Ryan Nelson (@commonguy).</description>
    <link>https://dev.to/commonguy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881475%2Fb8711398-c6c0-47da-a120-a7f47a89ce78.png</url>
      <title>DEV Community: Ryan Nelson</title>
      <link>https://dev.to/commonguy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/commonguy"/>
    <language>en</language>
    <item>
      <title>Authproof</title>
      <dc:creator>Ryan Nelson</dc:creator>
      <pubDate>Thu, 16 Apr 2026 02:22:22 +0000</pubDate>
      <link>https://dev.to/commonguy/authproof-kdm</link>
      <guid>https://dev.to/commonguy/authproof-kdm</guid>
      <description>&lt;p&gt;When an AI agent does something it shouldn't, the company running it can say anything. "The user authorized this." "The model went rogue." "We have no record of that."&lt;/p&gt;

&lt;p&gt;Right now, there is no cryptographic record of what a user actually authorized before an agent acted. The operator — the company running the agent — is a trusted third party with no binding commitment. Every AI agent deployment in the world has this gap.&lt;/p&gt;

&lt;p&gt;I kept thinking about it like the early internet. For years there was no SSL. Websites just asked you to trust them with your credit card. Then someone built the cryptographic primitive that made blind trust unnecessary. The padlock in your browser is that primitive: SSL, and today its successor, TLS.&lt;/p&gt;

&lt;p&gt;AI agents need the same thing. Not monitoring. Not logs. A cryptographic receipt that existed before the first action.&lt;/p&gt;
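&lt;p&gt;A minimal sketch of what such a pre-action receipt could look like, using only Python's standard library. The key handling and field names here are illustrative assumptions, not Authproof's actual design: a real system would use asymmetric signatures so the operator cannot forge receipts, but HMAC keeps the sketch self-contained.&lt;/p&gt;

```python
import hashlib
import hmac
import json

# Illustrative only: a shared secret stands in for a real user keypair.
USER_KEY = b"demo-user-secret"

def issue_receipt(scope, key=USER_KEY):
    """Commit to an authorization scope BEFORE any agent action runs."""
    payload = json.dumps(
        {"scope": scope, "issued_at": "2026-04-16T02:22:22Z"},
        sort_keys=True,
    ).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_receipt(payload, tag, key=USER_KEY):
    """Check, after the fact, exactly what the user authorized."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = issue_receipt(["read:calendar", "send:email"])
print(verify_receipt(payload, tag))  # True
```

&lt;p&gt;The point of the sketch is the ordering: the receipt is created and verifiable before the first action, so the operator can no longer claim anything it likes afterward.&lt;/p&gt;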

</description>
      <category>agents</category>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>security</category>
    </item>
    <item>
      <title>I built the authorization primitive AI agents are missing — here's what a week of building in public taught me</title>
      <dc:creator>Ryan Nelson</dc:creator>
      <pubDate>Thu, 16 Apr 2026 02:21:15 +0000</pubDate>
      <link>https://dev.to/commonguy/i-built-the-authorization-primitive-ai-agents-are-missing-heres-what-a-week-of-building-in-4cno</link>
      <guid>https://dev.to/commonguy/i-built-the-authorization-primitive-ai-agents-are-missing-heres-what-a-week-of-building-in-4cno</guid>
      <description>&lt;p&gt;A week ago I posted a question on Reddit asking how people cryptographically prove what an AI agent was authorized to do.&lt;br&gt;
I had an idea. I had no code. I wanted to see if the problem was real before I built anything.&lt;/p&gt;

&lt;p&gt;The thread got technical fast. A senior developer called it vibecoded junk and asked five hard questions. I answered them. He asked five more. By the end he was stress-testing the design instead of dismissing it. That told me the problem was real.&lt;br&gt;
So I built it.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
