<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nnaa</title>
    <description>The latest articles on DEV Community by Nnaa (@dividebyzerogt).</description>
    <link>https://dev.to/dividebyzerogt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842398%2Fca146e65-410d-4b8b-8ce6-c7adfa401874.jpeg</url>
      <title>DEV Community: Nnaa</title>
      <link>https://dev.to/dividebyzerogt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dividebyzerogt"/>
    <language>en</language>
    <item>
      <title>What Happens When an AI Agent Goes Rogue</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:06:29 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/what-happens-when-an-ai-agent-goes-rogue-47he</link>
      <guid>https://dev.to/dividebyzerogt/what-happens-when-an-ai-agent-goes-rogue-47he</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/what-happens-when-ai-agent-goes-rogue" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It is 2:47 AM on a Tuesday. Your monitoring dashboard lights up. One of your AI agents, a customer service bot that normally handles 200 requests per hour, has suddenly spiked to 5,000 requests per hour. It is accessing customer records it has never touched before. It is making API calls to services outside its normal workflow. Something is very wrong.&lt;/p&gt;

&lt;p&gt;What do you do?&lt;/p&gt;

&lt;p&gt;If your agents use shared API keys, your options are limited and painful. You can rotate the API key, but that kills every agent using that key, not just the one that is misbehaving. You can try to identify and block the specific container or IP address, but in a containerized environment with dynamic networking, that is a game of whack-a-mole. You can shut down the entire service, but that takes everything offline.&lt;/p&gt;

&lt;p&gt;None of these options are good. They are all too slow, too broad, or too destructive.&lt;/p&gt;

&lt;h2&gt;The Kill Switch&lt;/h2&gt;

&lt;p&gt;With machine identity, you have a better option: the kill switch. One click in the console, one API call from your code, or one automated policy trigger, and the specific agent's identity is revoked. Here is what happens in the next few seconds:&lt;/p&gt;

&lt;p&gt;The agent's DID is marked as revoked in the trust registry. This is an atomic operation that takes milliseconds.&lt;/p&gt;

&lt;p&gt;All active sessions for the agent are immediately invalidated. Any system that validates the agent's session token will get a "revoked" response on the next check.&lt;/p&gt;

&lt;p&gt;A revocation event is broadcast through the transparency log's real-time synchronization channel. Connected systems receive the revocation notification within seconds.&lt;/p&gt;

&lt;p&gt;The revocation is recorded in the transparency log with the timestamp, the identity of whoever triggered it, and the reason. This creates a permanent, tamper-evident record for post-incident investigation.&lt;/p&gt;

&lt;p&gt;Every other agent in your environment continues operating normally. The kill switch is surgical. It targets one identity, not a credential shared by many.&lt;/p&gt;
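&lt;p&gt;From a downstream service's perspective, honoring the kill switch is just a matter of respecting the validation result on each request. As a rough sketch (the &lt;code&gt;res&lt;/code&gt; object is a hypothetical Express-style response, and the exact shape of the validation result may differ from the real SDK):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Validate the session token on every incoming request.
const verification = await client.sessions.validate(token);

if (!verification.valid) {
  // The identity was revoked or the session expired: reject immediately.
  return res.status(403).json({ error: 'agent_identity_revoked' });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;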

&lt;h2&gt;Triggering the Kill Switch&lt;/h2&gt;

&lt;p&gt;There are three ways to trigger the kill switch, designed for different scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual activation from the console.&lt;/strong&gt; A security analyst sees something suspicious and clicks the kill switch button on the agent's detail page. The analyst provides a reason, confirms the action, and the revocation executes immediately. This is the human-in-the-loop option for situations where judgment is needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API activation from code.&lt;/strong&gt; Your monitoring system detects anomalous behavior and calls the revocation API programmatically. This enables integration with your existing incident response automation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;revoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;anomalous_behavior_detected&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;triggeredBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;automated_monitoring&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Policy based automatic activation.&lt;/strong&gt; You define rules in advance: "if any agent's trust score drops below 20, revoke immediately" or "if any agent exceeds 10 scope violations in an hour, revoke immediately." The platform evaluates these rules continuously and triggers the kill switch automatically when conditions are met. This is the fastest response option because there is no human delay.&lt;/p&gt;
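&lt;p&gt;A rule like the first one above might be expressed programmatically along these lines. This is an illustrative sketch only: the &lt;code&gt;policies.create&lt;/code&gt; method and its parameters are assumptions, not documented SDK calls.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical policy: revoke automatically when trust drops below 20.
await client.policies.create({
  name: 'auto-revoke-low-trust',
  condition: { metric: 'trust_score', operator: 'lt', threshold: 20 },
  action: 'revoke',
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;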

&lt;h2&gt;After the Kill Switch&lt;/h2&gt;

&lt;p&gt;Stopping the agent is step one. Understanding what happened is step two.&lt;/p&gt;

&lt;p&gt;The transparency log contains a complete record of the agent's activity leading up to the revocation. Security teams can query the log by agent DID and time range to see every action the agent took: which APIs it called, which data it accessed, which scopes it used, and at what timestamps.&lt;/p&gt;
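&lt;p&gt;A post-incident query might look something like the following (the &lt;code&gt;logs.query&lt;/code&gt; method and its parameters are illustrative, not documented SDK calls; the DID is the sample format from the registration flow):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical query: every action the agent took in the hour before revocation.
const entries = await client.logs.query({
  agentDid: 'did:truthlock:agent:a1b2c3d4',
  from: '2026-04-12T01:47:00Z',
  to: '2026-04-12T02:47:00Z',
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;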

&lt;p&gt;The trust score history shows when the agent's behavior started deviating from its baseline. The individual trust factors pinpoint which dimension changed: was it scope violations? Anomalous request patterns? Session management issues? This narrows the investigation to the specific behavioral change that triggered the concern.&lt;/p&gt;

&lt;p&gt;If the agent was compromised through a prompt injection attack, the log entries show the moment the agent's behavior shifted. If a key was stolen, the authentication patterns may reveal unusual source characteristics. If it was a software bug, the trust factor breakdown will show which aspect of behavior changed.&lt;/p&gt;

&lt;h2&gt;Recovery&lt;/h2&gt;

&lt;p&gt;Once the root cause is identified and remediated, you can register a new agent identity with fresh cryptographic keys. The old identity remains permanently revoked in the trust registry. This is by design: revocation is irreversible because you can never be fully certain that a compromised identity is clean. A new identity is always safer.&lt;/p&gt;

&lt;p&gt;The new agent starts with a fresh trust score and builds its reputation from scratch. This is the trust model working as intended: a new agent should not inherit the trust of an old agent, because the old agent's history includes the period when it was compromised.&lt;/p&gt;

&lt;h2&gt;Building Your Response Plan&lt;/h2&gt;

&lt;p&gt;Every organization deploying AI agents should have an incident response plan that covers agent compromise scenarios. At minimum, the plan should define who has authority to trigger the kill switch, what thresholds trigger automatic revocation policies, what the investigation workflow looks like after a revocation, and how recovered agents are redeployed with new identities.&lt;/p&gt;

&lt;p&gt;The kill switch is not just a feature. It is the last line of defense in your agent security architecture. Everything else, from trust scores to scope enforcement to behavioral monitoring, is designed to prevent the situation where you need to use it. But when you do need it, you need it to work instantly and decisively.&lt;/p&gt;

&lt;p&gt;Configure your kill switch policies in the &lt;a href="https://console.truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks Console&lt;/a&gt; under Agent Management. The &lt;a href="https://docs.truthlocks.com/guides/machine-identity" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; includes a complete guide to incident response for AI agent compromises.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>killswitch</category>
      <category>aiagents</category>
      <category>incidentresponse</category>
    </item>
    <item>
      <title>Ten Minutes to Your First Machine Identity</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:05:53 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/ten-minutes-to-your-first-machine-identity-2c1h</link>
      <guid>https://dev.to/dividebyzerogt/ten-minutes-to-your-first-machine-identity-2c1h</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/ten-minutes-to-your-first-machine-identity" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is a hands-on guide. By the end of it, you will have registered an AI agent with a cryptographic identity, created a bounded session, and verified the identity programmatically. The whole thing takes about ten minutes.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;You need a Truthlocks account (free tier works), an API key from the console, and Node.js 18 or later. If you prefer Python, the same steps work with the Python SDK. I will show both.&lt;/p&gt;

&lt;h2&gt;Step 1: Install the SDK&lt;/h2&gt;

&lt;p&gt;JavaScript:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install @truthlocks/sdk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Python:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install truthlocks&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Step 2: Register Your Agent&lt;/h2&gt;

&lt;p&gt;Every agent needs an identity before it can operate. Registration creates a DID, generates a key pair, and records the agent in the trust registry.&lt;/p&gt;

&lt;p&gt;JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TruthlockClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@truthlocks/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TruthlockClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRUTHLOCK_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;onboarding-assistant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Handles customer onboarding workflow&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers:read&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers:write&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;email:send&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Agent ID:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Agent DID:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;did&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;truthlocks&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TruthlockClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TruthlockClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;onboarding-assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Handles customer onboarding workflow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customers:read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customers:write&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email:send&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent ID: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent DID: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;did&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response includes the agent's unique identifier, its DID (something like &lt;code&gt;did:truthlock:agent:a1b2c3d4&lt;/code&gt;), and the public key that other systems can use to verify the agent's identity.&lt;/p&gt;

&lt;h2&gt;Step 3: Create a Session&lt;/h2&gt;

&lt;p&gt;Sessions are bounded execution contexts. They define when the agent can operate, what scopes it has for this specific run, and when the session expires.&lt;/p&gt;

&lt;p&gt;JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers:read&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;email:send&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;durationMinutes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Session token:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Expires at:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;expiresAt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customers:read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email:send&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;duration_minutes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Session token: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Expires at: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;expires_at&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the session scopes are a subset of the agent's registered scopes. The session requests only &lt;code&gt;customers:read&lt;/code&gt; and &lt;code&gt;email:send&lt;/code&gt;, even though the agent is also authorized for &lt;code&gt;customers:write&lt;/code&gt;. This is the principle of least privilege applied at the session level: the agent requests only what it needs for this specific execution context.&lt;/p&gt;

&lt;h2&gt;Step 4: Verify the Identity&lt;/h2&gt;

&lt;p&gt;When your agent calls another service, that service needs to verify the agent's identity. The receiving service takes the session token from the request header and validates it:&lt;/p&gt;

&lt;p&gt;JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;verification&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Valid:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Agent:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agentName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Trust score:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;trustScore&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Scopes:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;verification&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Valid: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Trust score: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trust_score&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Scopes: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The validation checks the cryptographic signature, confirms the session has not expired, verifies the agent's identity against the trust registry, and returns the current trust score and authorized scopes. All of this happens in a single API call.&lt;/p&gt;

&lt;h2&gt;Step 5: Check the Trust Score&lt;/h2&gt;

&lt;p&gt;Your agent now has a trust score that reflects its behavioral history. As the agent operates cleanly over time, the score rises. You can check it anytime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTrustScore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Trust score:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Trend:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;trend&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Factors:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;factors&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A brand new agent starts with a baseline score. The score adjusts based on the five trust factors: behavioral compliance, scope adherence, anomaly detection, peer attestations, and session hygiene.&lt;/p&gt;

&lt;h2&gt;What Just Happened&lt;/h2&gt;

&lt;p&gt;In about ten minutes, you gave an AI agent a cryptographic identity that can be independently verified by any system, scoped authorization that enforces what the agent can and cannot do, a bounded session that limits when and how long the agent operates, a trust score that tracks the agent's reliability over time, and an audit trail in the transparency log recording every step.&lt;/p&gt;

&lt;p&gt;This is the foundation that everything else builds on: anti fraud detection, compliance reporting, cross tenant delegation, automated kill switches. It all starts with giving the agent an identity.&lt;/p&gt;

&lt;p&gt;Explore the full &lt;a href="https://docs.truthlocks.com/api-reference/machine-identity" rel="noopener noreferrer"&gt;API reference&lt;/a&gt; for advanced features, or visit the &lt;a href="https://console.truthlocks.com" rel="noopener noreferrer"&gt;console&lt;/a&gt; to see your agent in the dashboard.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>sdk</category>
      <category>machineidentity</category>
      <category>quickstart</category>
    </item>
    <item>
      <title>AI Agents Are Your Enterprise's Newest Security Blind Spot</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:05:17 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/ai-agents-are-your-enterprises-newest-security-blind-spot-1od3</link>
      <guid>https://dev.to/dividebyzerogt/ai-agents-are-your-enterprises-newest-security-blind-spot-1od3</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/ai-agents-enterprise-security-blind-spot" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your security operations center monitors every human login. Your SIEM correlates events across every server, endpoint, and network device. Your EDR watches every process on every workstation. Your DLP scans every file that leaves the perimeter. You have spent millions building visibility into what happens inside your enterprise.&lt;/p&gt;

&lt;p&gt;But right now, somewhere in your environment, an AI agent is accessing a customer database, calling an external API, or making a decision that affects your business. And your security team probably has no idea it is happening.&lt;/p&gt;

&lt;p&gt;AI agents are the fastest growing category of unmonitored actors in enterprise environments. They are not endpoints, so EDR does not see them. They are not humans, so identity governance does not cover them. They authenticate with service accounts or shared API keys, so they blend into legitimate service to service traffic. They are invisible to the security stack you built for a world where only humans needed watching.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Visibility Problem
&lt;/h2&gt;

&lt;p&gt;Ask your security team these questions. How many AI agents are operating in your environment right now? Who authorized them? What data can they access? What actions can they take? When was the last time their permissions were reviewed?&lt;/p&gt;

&lt;p&gt;If your organization is typical, the answer to most of these questions is "we do not know." This is not because your security team is negligent. It is because the tools and processes they use were designed for a different threat model.&lt;/p&gt;

&lt;p&gt;Traditional security monitoring works by establishing a baseline of normal human behavior and flagging deviations. AI agents do not fit this model. They operate at machine speed, making hundreds or thousands of API calls per minute. They do not follow human activity patterns. They do not have working hours. They do not take lunch breaks. The behavioral models that detect a compromised human account are useless against a compromised AI agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Attack Scenarios
&lt;/h2&gt;

&lt;p&gt;The lack of visibility creates attack surfaces that sophisticated adversaries are already exploring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt injection for lateral movement.&lt;/strong&gt; An attacker compromises a low privilege AI agent through a prompt injection attack. The agent then uses its API access to query internal systems, exfiltrate data, or escalate privileges. Because the agent is using legitimate credentials, the activity looks like normal service to service communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shadow agents.&lt;/strong&gt; Developers deploy AI agents for productivity without going through security review. These agents have broad API access, no monitoring, and no incident response plan. They are shadow IT, but faster and more autonomous than any previous generation of shadow IT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supply chain agent compromise.&lt;/strong&gt; A third party AI agent integrated into your workflow is compromised at the vendor level. The agent continues to operate normally for most tasks but exfiltrates data or modifies transactions on specific triggers. Because you do not control the agent's code, you cannot inspect it. Because it authenticates with valid credentials, you cannot distinguish it from the legitimate version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing the Blind Spot
&lt;/h2&gt;

&lt;p&gt;Closing the AI agent blind spot requires three capabilities that most security stacks lack today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent inventory.&lt;/strong&gt; You need a complete, authoritative registry of every AI agent operating in your environment. Not a spreadsheet maintained by each team. A centralized registry where agents must be registered before they can operate, with metadata about their purpose, owner, capabilities, and authorization level. The Truthlocks trust registry provides this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent specific detection.&lt;/strong&gt; You need monitoring rules designed for agent behavior patterns. This means baselining each agent's normal API call patterns, data access patterns, and interaction sequences, then alerting on deviations. Traditional SIEM rules designed for human behavior will generate either too many false positives (because agent behavior looks "abnormal" by human standards) or too few true positives (because the attacker's behavior looks "normal" by agent standards). Trust scores address this by building agent specific behavioral baselines.&lt;/p&gt;
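
&lt;p&gt;As a hedged illustration of what per agent baselining can look like (this is a sketch, not the Truthlocks detector), here is a minimal Go example that learns one agent's request rate online and flags a spike far outside that agent's own history:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"math"
)

// Baseline tracks one agent's request-rate profile using Welford's
// online mean/variance algorithm. The 3-sigma rule is illustrative; a
// real detector would also model call sequences and data-access patterns.
type Baseline struct {
	n    float64
	mean float64
	m2   float64
}

func (b *Baseline) Observe(ratePerMin float64) {
	b.n++
	delta := ratePerMin - b.mean
	b.mean += delta / b.n
	b.m2 += delta * (ratePerMin - b.mean)
}

// Anomalous reports whether a rate sits more than 3 standard deviations
// above this agent's own baseline, not a human baseline.
func (b *Baseline) Anomalous(ratePerMin float64) bool {
	if b.n > 1 {
		std := math.Sqrt(b.m2 / (b.n - 1))
		return ratePerMin > b.mean+3*std
	}
	return false
}

func main() {
	b := new(Baseline)
	for _, r := range []float64{200, 210, 195, 205, 198} {
		b.Observe(r) // normal traffic for this agent
	}
	fmt.Println(b.Anomalous(5000)) // sudden spike well above baseline
}
```

&lt;p&gt;The same agent at 205 requests per minute would not trip the threshold, while 5,000 does, which is exactly the distinction a human-centric SIEM rule misses.&lt;/p&gt;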

&lt;p&gt;&lt;strong&gt;Agent specific response.&lt;/strong&gt; When you detect a compromised agent, you need to be able to revoke its access immediately without disrupting other agents or services. Rotating a shared API key is a sledgehammer that breaks everything. The kill switch provides surgical revocation: one agent's identity is revoked, all of its sessions are terminated, and every connected system is notified, all within seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration With Your Security Stack
&lt;/h2&gt;

&lt;p&gt;The Truthlocks transparency log integrates with existing security infrastructure through webhook notifications and structured log export. Trust score changes, scope violation attempts, and kill switch activations can be forwarded to your SIEM as structured events. This means your SOC can incorporate AI agent events into their existing correlation rules, dashboards, and incident response playbooks.&lt;/p&gt;

&lt;p&gt;You do not need to replace your security stack. You need to extend it to cover the actors it was never designed to see.&lt;/p&gt;

&lt;p&gt;Start by getting visibility. Register your agents in the &lt;a href="https://console.truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks Console&lt;/a&gt;, enable trust scoring, and connect the event feed to your SIEM. Once you can see what your agents are doing, you can start making informed security decisions about them.&lt;/p&gt;

&lt;p&gt;The agents are already in your environment. The question is whether you can see them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>enterprise</category>
      <category>aiagents</category>
      <category>observability</category>
    </item>
    <item>
      <title>How We Built a Trust Registry in Go</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:04:40 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/how-we-built-a-trust-registry-in-go-45o3</link>
      <guid>https://dev.to/dividebyzerogt/how-we-built-a-trust-registry-in-go-45o3</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/building-trust-registry-in-go" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The trust registry is the most critical service in the Truthlocks platform. Every verification request, every agent identity check, every trust score query, every scope validation passes through it. If the trust registry is slow, everything is slow. If it is down, everything is down. If it has a bug, every verification decision is potentially wrong.&lt;/p&gt;

&lt;p&gt;We built it in Go. This post explains why, and walks through the engineering patterns that let us serve verification requests with P99 latency under 40 milliseconds while maintaining strict multi tenant isolation and a complete audit trail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Go
&lt;/h2&gt;

&lt;p&gt;We evaluated Go, Rust, Java, and TypeScript for the trust registry. We chose Go for three reasons.&lt;/p&gt;

&lt;p&gt;First, Go's concurrency model matches our workload perfectly. The trust registry handles thousands of concurrent verification requests, each of which involves multiple I/O operations: database queries, cache lookups, and occasionally external service calls. Goroutines let us handle this concurrency without the thread pool tuning that Java requires or the async complexity that Rust and TypeScript introduce.&lt;/p&gt;

&lt;p&gt;Second, Go produces small, statically linked binaries that start in milliseconds. Our Docker images are under 30 MB. This matters for our deployment model (AWS ECS Fargate) where container startup time directly affects scaling responsiveness and cost.&lt;/p&gt;

&lt;p&gt;Third, Go's standard library is excellent for the kind of service we are building. The &lt;code&gt;net/http&lt;/code&gt; server, the &lt;code&gt;database/sql&lt;/code&gt; interface, the &lt;code&gt;encoding/json&lt;/code&gt; package, the &lt;code&gt;crypto&lt;/code&gt; packages for Ed25519 operations: these are production quality, well tested, and performant. We added relatively few external dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service Architecture
&lt;/h2&gt;

&lt;p&gt;The trust registry follows a clean layered architecture. The HTTP handler layer parses requests, validates input, and delegates to the service layer. The service layer contains business logic and orchestrates calls to the repository layer. The repository layer handles all database interactions through a PostgreSQL connection pool.&lt;/p&gt;

&lt;p&gt;Each layer has a defined interface. Handlers depend on service interfaces. Services depend on repository interfaces. The concrete implementations are wired together at startup through dependency injection. This is not a framework. It is plain Go interfaces and struct composition. No magic, no reflection, no code generation.&lt;/p&gt;
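
&lt;p&gt;In miniature, that wiring looks like this. The names are illustrative, not the actual registry code:&lt;/p&gt;

```go
package main

import "fmt"

// Repository layer interface: all the service layer knows about storage.
type AgentRepo interface {
	FindName(id string) (string, error)
}

// Service layer: business logic, depending only on the repo interface.
type AgentService struct {
	repo AgentRepo
}

func (s AgentService) Describe(id string) (string, error) {
	name, err := s.repo.FindName(id)
	if err != nil {
		return "", err
	}
	return "agent " + name, nil
}

// Concrete repository. In production this would wrap the pgx pool.
type memRepo struct{ data map[string]string }

func (m memRepo) FindName(id string) (string, error) {
	if name, ok := m.data[id]; ok {
		return name, nil
	}
	return "", fmt.Errorf("agent %s not found", id)
}

func main() {
	// Wiring at startup: plain struct composition, no framework.
	repo := memRepo{data: map[string]string{"a1": "support-bot"}}
	svc := AgentService{repo: repo}
	out, _ := svc.Describe("a1")
	fmt.Println(out)
}
```

&lt;p&gt;Because the service depends on an interface, tests can inject an in memory repository while production injects the PostgreSQL-backed one.&lt;/p&gt;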

&lt;h2&gt;
  
  
  Multi Tenant Isolation
&lt;/h2&gt;

&lt;p&gt;Every request to the trust registry carries a tenant context. The API gateway extracts the tenant identifier from the authentication token and passes it as an &lt;code&gt;X-Tenant-ID&lt;/code&gt; header. The first thing the handler does is extract this header and propagate the tenant context through the entire request lifecycle.&lt;/p&gt;

&lt;p&gt;At the database level, every table has a &lt;code&gt;tenant_id&lt;/code&gt; column, and PostgreSQL row level security (RLS) policies enforce that queries can only access rows belonging to the current tenant. We set the tenant context on the database connection before executing any query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;withTenantContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tenantID&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UUID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fn&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt; &lt;span class="n"&gt;pgx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BeginFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt; &lt;span class="n"&gt;pgx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"SET LOCAL app.current_tenant = $1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tenantID&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting the value with &lt;code&gt;SET LOCAL&lt;/code&gt; semantics scopes the tenant context to the current transaction. It is cleared automatically when the transaction completes, so tenant context cannot leak between requests that share the same connection.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verification Hot Path
&lt;/h2&gt;

&lt;p&gt;The verification endpoint is the most latency sensitive path in the system. When a verifier submits an attestation for verification, the service needs to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resolve the issuer's identity from the trust registry.&lt;/li&gt;
&lt;li&gt;Validate the issuer's trust level and active status.&lt;/li&gt;
&lt;li&gt;Retrieve the signing key and verify the cryptographic signature.&lt;/li&gt;
&lt;li&gt;Check the revocation registry.&lt;/li&gt;
&lt;li&gt;Verify transparency log inclusion.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We execute as many of these steps concurrently as possible. The issuer lookup and revocation check are independent and run in parallel goroutines. The signature verification starts as soon as the key is available. The transparency log check runs concurrently with the revocation check.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;verificationService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="n"&gt;VerifyRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;VerifyResult&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;errgroup&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;issuer&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Issuer&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;revoked&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;

    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Go&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
        &lt;span class="n"&gt;issuer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;registry&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetIssuer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IssuerDID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Go&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
        &lt;span class="n"&gt;revoked&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;revocation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsRevoked&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AttestationID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Now verify signature with the retrieved key&lt;/span&gt;
    &lt;span class="n"&gt;valid&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;crypto&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VerifyEd25519&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;issuer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PublicKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Signature&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;VerifyResult&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Valid&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;valid&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;revoked&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;TrustLevel&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;issuer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TrustLevel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The concurrent execution pattern shaves roughly 60% off the total latency compared to sequential execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caching Strategy
&lt;/h2&gt;

&lt;p&gt;The trust registry uses a two tier caching strategy. An in process LRU cache holds recently accessed issuer records and key material. On a miss, the service falls back to PostgreSQL through a connection pool with prepared statements. We do not use an external cache like Redis because the added network hop would negate the latency benefit for our access patterns.&lt;/p&gt;

&lt;p&gt;Cache invalidation is straightforward because the trust registry uses event sourcing. When an issuer's state changes (trust level update, key rotation, revocation), the event handler invalidates the corresponding cache entries. The next request fetches the current state from the database.&lt;/p&gt;
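
&lt;p&gt;A minimal version of the first cache tier, with the event driven invalidation hook, might look like this. It is a sketch, not the production implementation:&lt;/p&gt;

```go
package main

import (
	"container/list"
	"fmt"
)

// entry is what we park in the LRU list: issuer ID plus cached material.
type entry struct {
	key   string
	value string
}

type LRU struct {
	capacity int
	ll       *list.List
	items    map[string]*list.Element
}

func NewLRU(capacity int) *LRU {
	c := new(LRU)
	c.capacity = capacity
	c.ll = list.New()
	c.items = make(map[string]*list.Element)
	return c
}

func (c *LRU) Put(key, value string) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value = entry{key, value}
		return
	}
	c.items[key] = c.ll.PushFront(entry{key, value})
	if c.ll.Len() > c.capacity { // evict the least recently used entry
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(entry).key)
	}
}

func (c *LRU) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(entry).value, true
	}
	return "", false
}

// Invalidate is what the event handler calls on a trust level update,
// key rotation, or revocation event.
func (c *LRU) Invalidate(key string) {
	if el, ok := c.items[key]; ok {
		c.ll.Remove(el)
		delete(c.items, key)
	}
}

func main() {
	cache := NewLRU(2)
	cache.Put("issuer-a", "key-v1")
	cache.Invalidate("issuer-a") // e.g. after a key rotation event
	_, ok := cache.Get("issuer-a")
	fmt.Println("hit after invalidation:", ok)
}
```

&lt;p&gt;After invalidation the next lookup misses, and the current state is fetched from the database, exactly the flow described above.&lt;/p&gt;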

&lt;h2&gt;
  
  
  Graceful Shutdown and Health Checks
&lt;/h2&gt;

&lt;p&gt;In a containerized environment, graceful shutdown is critical. When ECS sends a SIGTERM, the service stops accepting new requests, waits for in flight requests to complete (with a 30 second timeout), closes database connections cleanly, and then exits.&lt;/p&gt;

&lt;p&gt;Health checks run at two levels. The liveness check confirms the process is running and responsive. The readiness check confirms that the database connection pool is healthy and the service can serve requests. ECS uses the readiness check to determine whether to route traffic to the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Would Do Differently
&lt;/h2&gt;

&lt;p&gt;If we were starting the trust registry from scratch today, we would change two things.&lt;/p&gt;

&lt;p&gt;We would use structured logging from day one. We started with unstructured log lines and migrated to structured JSON logging later. The migration was straightforward but time consuming. Starting with structured logging would have saved us weeks of log parsing scripts and dashboard rebuilds.&lt;/p&gt;

&lt;p&gt;We would implement distributed tracing earlier. We added OpenTelemetry instrumentation after the initial launch. Having traces from the beginning would have significantly accelerated our ability to diagnose latency issues and optimize the verification hot path.&lt;/p&gt;

&lt;p&gt;Everything else, from the language choice to the architecture to the database strategy, has proven out well under production load. Go was the right choice for this workload, and we are confident it will scale to the next order of magnitude.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>engineering</category>
      <category>microservices</category>
      <category>backend</category>
    </item>
    <item>
      <title>The Compliance Case for Machine Identity</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:04:01 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/the-compliance-case-for-machine-identity-551i</link>
      <guid>https://dev.to/dividebyzerogt/the-compliance-case-for-machine-identity-551i</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/compliance-case-for-machine-identity" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If your organization is subject to SOC 2, ISO 27001, HIPAA, or similar compliance frameworks, you have spent considerable effort documenting how human users are identified, authenticated, authorized, and audited. Your access control matrices are meticulous. Your audit trails are comprehensive. Your identity lifecycle management is well defined.&lt;/p&gt;

&lt;p&gt;Now here is the uncomfortable question: does any of that documentation cover your AI agents?&lt;/p&gt;

&lt;p&gt;For most organizations, the answer is no. AI agents exist in a compliance gray zone. They are not human users, so they do not fit neatly into existing access control frameworks. They are not traditional service accounts, because they make autonomous decisions that affect business outcomes. They are something new, and the compliance frameworks have not caught up yet.&lt;/p&gt;

&lt;p&gt;But auditors have.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Auditors Are Starting to Ask
&lt;/h2&gt;

&lt;p&gt;Forward thinking SOC 2 auditors are already including AI agent governance in their assessment scope. The questions they ask follow a predictable pattern:&lt;/p&gt;

&lt;p&gt;How do you identify AI agents operating in your environment? Can you provide an inventory of every agent, its purpose, its owner, and its access level?&lt;/p&gt;

&lt;p&gt;How are AI agents authenticated? Are they using individual credentials or shared secrets? If shared, how do you distinguish one agent's actions from another's?&lt;/p&gt;

&lt;p&gt;How are AI agents authorized? What access control model governs what agents can and cannot do? How is least privilege enforced?&lt;/p&gt;

&lt;p&gt;How do you monitor AI agent behavior? Do you have alerting for anomalous agent activity? What constitutes an "incident" involving an AI agent?&lt;/p&gt;

&lt;p&gt;How do you revoke AI agent access? If an agent is compromised, what is your response time to terminate its access? Can you do it without disrupting other agents?&lt;/p&gt;

&lt;p&gt;If you cannot answer these questions today, you have a gap that will eventually become a finding.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Machine Identity Closes the Gap
&lt;/h2&gt;

&lt;p&gt;Machine identity, as implemented through the MAIP protocol, maps directly to the controls that compliance frameworks require.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identification and inventory.&lt;/strong&gt; Every agent registered in the Truthlocks trust registry has a unique identity (DID), a human readable name, an owning tenant, a description, a version, and metadata about its purpose and capabilities. The registry is your agent inventory. It is always current because agents cannot operate without being registered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication.&lt;/strong&gt; Agents authenticate using cryptographic key pairs, not shared secrets. Each agent has its own keys, so every API call can be attributed to a specific agent. This maps directly to SOC 2's requirement for unique user identification (CC6.1).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorization.&lt;/strong&gt; Agents have explicit scope definitions that follow the principle of least privilege. Scopes are assigned at registration and enforced at every API boundary. This maps to SOC 2's logical access controls (CC6.3) and ISO 27001's access control policy (A.9).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring and anomaly detection.&lt;/strong&gt; The trust score system continuously monitors agent behavior and flags anomalies. Trust score drops trigger automated reviews. This maps to SOC 2's monitoring requirements (CC7.2) and ISO 27001's event logging (A.12.4).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revocation.&lt;/strong&gt; The kill switch provides immediate, targeted revocation of individual agent identities. Revocation propagates within seconds. This maps to SOC 2's access removal requirements (CC6.2) and ISO 27001's access rights management (A.9.2).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit trail.&lt;/strong&gt; The transparency log provides a tamper evident record of every significant agent action. The log is cryptographically chained and independently verifiable. This maps to SOC 2's audit logging requirements (CC7.2) and ISO 27001's protection of log information (A.12.4.3).&lt;/p&gt;
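
&lt;p&gt;The chaining idea fits in a few lines. This toy log is not the Truthlocks implementation, which adds more structure, but it shows why a chained log is tamper evident: each entry's hash covers the previous hash, so editing any past entry breaks every hash after it.&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type Entry struct {
	Action string
	Hash   string
}

// Append links a new entry to the chain by hashing the previous
// entry's hash together with the new action.
func Append(log []Entry, action string) []Entry {
	prev := ""
	if len(log) > 0 {
		prev = log[len(log)-1].Hash
	}
	sum := sha256.Sum256([]byte(prev + "|" + action))
	return append(log, Entry{Action: action, Hash: hex.EncodeToString(sum[:])})
}

// VerifyChain recomputes every link and reports whether the chain
// is intact end to end.
func VerifyChain(log []Entry) bool {
	prev := ""
	for _, e := range log {
		sum := sha256.Sum256([]byte(prev + "|" + e.Action))
		if hex.EncodeToString(sum[:]) != e.Hash {
			return false
		}
		prev = e.Hash
	}
	return true
}

func main() {
	var log []Entry
	log = Append(log, "agent registered")
	log = Append(log, "session started")
	log = Append(log, "scope crm.read exercised")
	fmt.Println("intact:", VerifyChain(log))

	log[1].Action = "session started (edited)" // tamper with history
	fmt.Println("after tampering:", VerifyChain(log))
}
```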

&lt;h2&gt;
  
  
  Documentation That Writes Itself
&lt;/h2&gt;

&lt;p&gt;One of the most time consuming aspects of compliance is documentation. With machine identity in place, most of the evidence collection for AI agent governance is automated. The trust registry provides the agent inventory. Session logs provide authentication evidence. Scope definitions provide authorization documentation. Trust score history provides monitoring evidence. The transparency log provides the audit trail.&lt;/p&gt;

&lt;p&gt;When your auditor asks for evidence that your AI agents are properly governed, you point them to the registry and the logs. The evidence is structured, timestamped, and cryptographically verifiable. It is significantly more robust than the spreadsheets and screenshots that typically pass for compliance evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Ahead of the Curve
&lt;/h2&gt;

&lt;p&gt;Compliance frameworks evolve slowly, but they do evolve. The controls for AI agent governance that are "nice to have" today will be mandatory tomorrow. Organizations that implement machine identity now will have mature, evidence rich programs in place when the requirements formalize. Organizations that wait will be scrambling to retrofit controls under audit pressure.&lt;/p&gt;

&lt;p&gt;The cost of implementing machine identity is a fraction of the cost of a compliance finding. More importantly, it is a fraction of the cost of a breach caused by an ungoverned AI agent.&lt;/p&gt;

&lt;p&gt;Start with the &lt;a href="https://docs.truthlocks.com/guides/machine-identity" rel="noopener noreferrer"&gt;Machine Identity documentation&lt;/a&gt; and the &lt;a href="https://console.truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks Console&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>compliance</category>
      <category>soc2</category>
      <category>machineidentity</category>
      <category>security</category>
    </item>
    <item>
      <title>Adding a Trust Layer to Your Agent Orchestration</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:03:25 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/adding-a-trust-layer-to-your-agent-orchestration-jho</link>
      <guid>https://dev.to/dividebyzerogt/adding-a-trust-layer-to-your-agent-orchestration-jho</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/agent-orchestration-trust-layer" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are building multi agent systems with LangChain, CrewAI, AutoGen, or your own orchestration framework, you have probably spent significant time thinking about how agents communicate, how tasks are decomposed, and how results are aggregated. But have you thought about whether you can actually trust the agents in your pipeline?&lt;/p&gt;

&lt;p&gt;Most orchestration frameworks treat agents as interchangeable execution units. They assume that if an agent is in the system, it is legitimate. They do not verify identity. They do not check authorization. They do not track behavioral history. They do not enforce scope boundaries.&lt;/p&gt;

&lt;p&gt;This works fine in a prototype. In production, it is a security hole.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem in Concrete Terms
&lt;/h2&gt;

&lt;p&gt;Consider a multi agent workflow for processing customer onboarding. Agent A collects customer information. Agent B runs identity verification. Agent C creates accounts in your systems. Agent D sends the welcome email.&lt;/p&gt;

&lt;p&gt;In a typical orchestration setup, all four agents share the same service account credentials. If Agent A is compromised through a prompt injection attack, it can impersonate Agent C and create unauthorized accounts. If someone deploys a malicious Agent E that pretends to be Agent B, the orchestrator has no way to detect the impostor. If Agent D starts accessing customer data that is not part of its task, nothing stops it.&lt;/p&gt;

&lt;p&gt;The orchestrator is blind to identity. It routes tasks based on capability declarations, not verified identity. This is equivalent to routing sensitive work to anyone who claims they can do it, without checking their badge or their background.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trust Layer Pattern
&lt;/h2&gt;

&lt;p&gt;The fix is a trust layer that sits between your orchestrator and your agents. The trust layer handles three responsibilities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity verification.&lt;/strong&gt; Before an agent participates in any workflow, the trust layer verifies its MAIP identity against the Truthlocks trust registry. The agent must present a valid session token signed with its registered keys. If the identity check fails, the agent is not allowed to participate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope enforcement.&lt;/strong&gt; When the orchestrator assigns a task to an agent, the trust layer checks that the agent's authorized scopes include the permissions needed for that task. Agent D, authorized for &lt;code&gt;email:send&lt;/code&gt;, cannot be assigned a task that requires &lt;code&gt;customers:write&lt;/code&gt;. The scope check happens before the task is dispatched, not after.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust gating.&lt;/strong&gt; Critical tasks can be gated on minimum trust scores. Account creation in the onboarding example might require a trust score of 80 or above. A newly registered agent starts with a lower score and cannot perform that task until it has built up a track record with less sensitive work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration With LangChain
&lt;/h2&gt;

&lt;p&gt;For LangChain users, the trust layer integrates as a custom callback handler. Before any tool invocation, the callback verifies the calling agent's identity and checks that the tool falls within its authorized scopes. The integration requires adding about 20 lines of code to your existing chain setup.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TruthlockClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@truthlocks/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;truthlock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TruthlockClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRUTHLOCK_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Verify agent before task execution&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;truthlock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;agentToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;trustScore&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;requiredMinimum&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Agent trust score below threshold&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requiredScope&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Agent not authorized for this scope&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Proceed with task execution&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Integration With CrewAI and AutoGen
&lt;/h2&gt;

&lt;p&gt;CrewAI and AutoGen use different orchestration patterns, but the trust layer integration follows the same principle. For CrewAI, you add identity verification to the crew's task assignment logic. For AutoGen, you add it to the conversation manager's agent selection logic. In both cases, the trust layer acts as a gatekeeper that the orchestrator consults before dispatching work.&lt;/p&gt;

&lt;p&gt;The Truthlocks SDKs (JavaScript, Python, Go) provide the building blocks. The trust layer is not a separate service you need to deploy. It is a set of API calls you add to your existing orchestration code.&lt;/p&gt;
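&lt;p&gt;Whatever the framework, the gatekeeper has the same shape: wrap your dispatch function so every task clears identity, scope, and trust checks before it reaches an agent. A minimal sketch, with an in-memory registry standing in for the real identity lookup (the field names here are illustrative, not the Truthlocks API):&lt;/p&gt;

```javascript
// A trust-gated dispatcher: the orchestrator calls the wrapped function,
// and the trust layer is consulted before any work is dispatched.
function withTrustLayer(dispatch, registry) {
  return function gated(agentId, task) {
    const record = registry[agentId];
    if (!record) {
      throw new Error(`Unknown agent identity: ${agentId}`);
    }
    if (!record.scopes.includes(task.requiredScope)) {
      throw new Error(`${agentId} lacks scope ${task.requiredScope}`);
    }
    if (task.minTrust > record.trustScore) {
      throw new Error(`${agentId} trust score too low for this task`);
    }
    return dispatch(agentId, task);
  };
}

// Illustrative registry; in practice this data comes from verified sessions.
const registry = {
  "agent-d": { scopes: ["email:send"], trustScore: 72 },
  "agent-c": { scopes: ["customers:write"], trustScore: 85 },
};

const dispatch = (agentId, task) => `${agentId} ran ${task.name}`;
const gatedDispatch = withTrustLayer(dispatch, registry);

console.log(
  gatedDispatch("agent-d", {
    name: "welcome-email",
    requiredScope: "email:send",
    minTrust: 50,
  })
); // agent-d ran welcome-email
```

&lt;p&gt;In CrewAI this wrapper sits in the task assignment path; in AutoGen it sits in agent selection. The orchestration logic itself does not change.&lt;/p&gt;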

&lt;h2&gt;
  
  
  What You Get
&lt;/h2&gt;

&lt;p&gt;Once the trust layer is in place, your orchestration system gains capabilities that were previously out of reach:&lt;/p&gt;

&lt;p&gt;You can answer "which agent did this?" for any action in any workflow, because every action is tied to a verified identity.&lt;/p&gt;

&lt;p&gt;You can enforce least privilege at the agent level, not just the service account level, because scopes are checked per agent per task.&lt;/p&gt;

&lt;p&gt;You can automatically quarantine misbehaving agents without disrupting healthy ones, because the kill switch targets individual identities, not shared credentials.&lt;/p&gt;

&lt;p&gt;You can prove to auditors exactly what each agent was authorized to do and what it actually did, because the transparency log captures the full decision trail.&lt;/p&gt;

&lt;p&gt;The agents are already there. The orchestration is already running. The trust layer is the piece that makes it production grade.&lt;/p&gt;

&lt;p&gt;Get started with the &lt;a href="https://docs.truthlocks.com/guides/machine-identity" rel="noopener noreferrer"&gt;integration guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>langchain</category>
      <category>crewai</category>
      <category>orchestration</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>MAIP: An Open Protocol for Machine Identity</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:02:49 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/maip-an-open-protocol-for-machine-identity-2dla</link>
      <guid>https://dev.to/dividebyzerogt/maip-an-open-protocol-for-machine-identity-2dla</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/open-source-maip-protocol" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When we set out to build machine identity infrastructure, we faced a choice that would shape everything that followed. We could build a proprietary system that locks customers into our platform, or we could create an open protocol that anyone can implement and let our platform compete on execution quality.&lt;/p&gt;

&lt;p&gt;We chose open. The Machine Agent Identity Protocol (MAIP) is published under the Apache 2.0 license at &lt;a href="https://github.com/truthlocks/maip" rel="noopener noreferrer"&gt;github.com/truthlocks/maip&lt;/a&gt;. The specification has zero proprietary dependencies. Any organization can implement a MAIP compatible system without paying us a cent or asking our permission.&lt;/p&gt;

&lt;p&gt;This was not an altruistic decision. It was a strategic one. Machine identity will only work if it becomes a shared standard. A world where every vendor has its own incompatible agent identity system is worse than no agent identity at all, because it fragments trust instead of building it. We believe the best way to establish the standard is to make it free, open, and excellent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Protocol Covers
&lt;/h2&gt;

&lt;p&gt;MAIP is a complete specification for the agent identity lifecycle. It is not a framework, not an SDK, and not a product. It is a protocol: a set of rules that any system can implement to be interoperable with any other MAIP compatible system.&lt;/p&gt;

&lt;p&gt;The protocol defines six core areas:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent identity and registration.&lt;/strong&gt; How agents are assigned cryptographically verifiable identities using Decentralized Identifiers (DIDs). The DID method, key generation requirements, registration workflow, and identity metadata schema are all specified. Any system that follows the spec can issue and verify MAIP agent identities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope based authorization.&lt;/strong&gt; How agent permissions are expressed and enforced using a hierarchical scope system. The scope syntax, wildcard and negation semantics, scope validation rules, and authorization decision flow are specified. Any system that follows the spec can make consistent authorization decisions for MAIP agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust score computation.&lt;/strong&gt; How behavioral signals are combined into a composite trust score. The five trust factors (behavioral compliance, scope adherence, anomaly score, peer attestations, session hygiene), their weights, the scoring algorithm, and the update cadence are all specified. Any system that follows the spec will compute compatible trust scores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session management.&lt;/strong&gt; How bounded execution contexts are created, validated, and revoked. The session token format (JWT based), required claims, expiry semantics, scope ceiling enforcement, and revocation mechanism are specified. Any system that follows the spec can issue and validate MAIP session tokens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delegation credentials.&lt;/strong&gt; How agents delegate authority across trust boundaries. The delegation credential format, scope limitation rules, temporal constraints, chain depth limits, and verification procedure are specified. Any system that follows the spec can issue and verify MAIP delegation credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Witness events.&lt;/strong&gt; How significant agent actions are recorded for auditability. The event schema, required fields, cryptographic chaining requirements, and inclusion proof format are specified. Any system that follows the spec can produce and verify MAIP witness events.&lt;/p&gt;
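&lt;p&gt;To make the scope semantics concrete, here is a sketch of wildcard and negation matching. The spec defines the authoritative grammar; this version assumes colon-delimited segments, &lt;code&gt;*&lt;/code&gt; as a segment wildcard, and a &lt;code&gt;-&lt;/code&gt; prefix for negation, with negations overriding grants.&lt;/p&gt;

```javascript
// Check one scope pattern against a requested permission, where "*"
// matches any single segment (e.g. "customers:*" covers "customers:read").
function segmentsMatch(pattern, permission) {
  const p = pattern.split(":");
  const q = permission.split(":");
  if (p.length !== q.length) return false;
  return p.every((seg, i) => seg === "*" || seg === q[i]);
}

// Evaluate a scope set: negated scopes (prefix "-") override grants.
// NOTE: the real grammar lives in the MAIP spec; this is an illustration.
function isAuthorized(scopes, permission) {
  let granted = false;
  for (const scope of scopes) {
    if (scope.startsWith("-")) {
      if (segmentsMatch(scope.slice(1), permission)) return false;
    } else if (segmentsMatch(scope, permission)) {
      granted = true;
    }
  }
  return granted;
}

console.log(isAuthorized(["customers:*", "-customers:delete"], "customers:read"));   // true
console.log(isAuthorized(["customers:*", "-customers:delete"], "customers:delete")); // false
```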

&lt;h2&gt;
  
  
  Design Principles
&lt;/h2&gt;

&lt;p&gt;We designed MAIP with three principles that guided every decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementable.&lt;/strong&gt; Every section of the spec includes enough detail that a competent engineering team can build a compliant implementation without contacting us. If a section is ambiguous, that is a bug in the spec, and we want to hear about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimal.&lt;/strong&gt; The protocol specifies the minimum necessary for interoperability. It does not dictate internal implementation details, database schemas, or deployment architectures. You can build a MAIP compatible system on PostgreSQL, on DynamoDB, on a blockchain, or on flat files. The protocol does not care as long as the external behavior matches the spec.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extensible.&lt;/strong&gt; The data models include extension points for additional metadata, custom trust factors, and domain specific fields. These extensions are namespaced to avoid collisions between implementations.&lt;/p&gt;

&lt;h2&gt;
  
  
  SDKs and Integration
&lt;/h2&gt;

&lt;p&gt;Alongside the protocol specification, we publish reference SDKs in three languages:&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;JavaScript/TypeScript SDK&lt;/strong&gt; (&lt;code&gt;@truthlocks/maip-sdk&lt;/code&gt; on npm) provides client classes for agent registration, session management, scope validation, and trust score queries. It works in Node.js and in the browser.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Python SDK&lt;/strong&gt; (&lt;code&gt;truthlocks-maip&lt;/code&gt; on PyPI) provides the same capabilities with native async support and full type annotations. It is designed for AI/ML teams working with frameworks like LangChain, CrewAI, and AutoGen.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Go SDK&lt;/strong&gt; (&lt;code&gt;github.com/truthlocks/maip-sdk-go&lt;/code&gt;) provides the same capabilities with idiomatic Go interfaces. It is designed for infrastructure teams building high performance services.&lt;/p&gt;

&lt;p&gt;All three SDKs are also Apache 2.0 licensed. They are reference implementations, not the only way to interact with MAIP. If you prefer to call the API directly or build your own client library, the protocol spec is your guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  IDE and CI/CD Integration
&lt;/h2&gt;

&lt;p&gt;For developer workflow integration, we publish a &lt;strong&gt;VS Code extension&lt;/strong&gt; that adds MAIP identity context to your editor, a &lt;strong&gt;JetBrains plugin&lt;/strong&gt; for IntelliJ based IDEs, a &lt;strong&gt;GitHub Action&lt;/strong&gt; that validates agent identities in your CI/CD pipeline, and a &lt;strong&gt;CLI tool&lt;/strong&gt; (&lt;code&gt;@truthlocks/protect-cli&lt;/code&gt;) for local development and testing.&lt;/p&gt;

&lt;p&gt;These tools make it easy to adopt MAIP incrementally. You can start by adding the GitHub Action to your pipeline to verify that agents interacting with your repositories have valid MAIP identities, then expand to runtime enforcement as your infrastructure matures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contributing
&lt;/h2&gt;

&lt;p&gt;The protocol is maintained in the open. We welcome issues, pull requests, and RFC style proposals for protocol changes. Major changes go through a review process where community feedback is incorporated before the spec is updated.&lt;/p&gt;

&lt;p&gt;If you are implementing MAIP and find a gap, ambiguity, or improvement opportunity, open an issue. If you are building a MAIP compatible system and want to contribute interoperability tests, we will work with you to add them to the test suite.&lt;/p&gt;

&lt;p&gt;Machine identity is too important to be owned by any single company. MAIP is the protocol. Truthlocks is one implementation. We want there to be many.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>protocol</category>
      <category>aiagents</category>
      <category>machineidentity</category>
    </item>
    <item>
      <title>Anti Fraud for AI to AI Transactions</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:02:13 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/anti-fraud-for-ai-to-ai-transactions-6ph</link>
      <guid>https://dev.to/dividebyzerogt/anti-fraud-for-ai-to-ai-transactions-6ph</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/anti-fraud-for-ai-to-ai-transactions" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Fraud detection systems have spent decades getting good at catching humans behaving badly. They analyze IP addresses, device fingerprints, typing patterns, mouse movements, and behavioral biometrics. They flag transactions that do not match a user's historical spending patterns. They use machine learning models trained on millions of examples of human fraud.&lt;/p&gt;

&lt;p&gt;None of that works when both parties in a transaction are AI agents.&lt;/p&gt;

&lt;p&gt;AI to AI transactions do not have IP addresses that mean anything, because agents run in cloud containers with ephemeral networking. There are no device fingerprints, because there are no devices. There are no behavioral biometrics, because there are no humans. The entire foundation of modern fraud detection is built on assumptions about human behavior, and those assumptions collapse when the actors are autonomous software.&lt;/p&gt;

&lt;p&gt;This is not a future problem. AI agents are already making purchases, executing trades, signing contracts, and transferring value. The question is not whether AI to AI fraud will happen. The question is whether your systems are ready to detect it when it does.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI to AI Fraud Looks Like
&lt;/h2&gt;

&lt;p&gt;The fraud patterns in agent to agent transactions are different from human fraud, but they are just as damaging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity spoofing.&lt;/strong&gt; An agent claims to represent Organization A but is actually controlled by a malicious actor. Without cryptographic identity verification, the receiving agent has no way to confirm who it is dealing with. Shared API keys make this trivially easy: if you obtain the key, you become the agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope escalation.&lt;/strong&gt; An agent authorized to perform read operations starts executing write operations. Or an agent authorized for one category of transactions starts processing a different category. Without scope enforcement, these escalations go undetected until something breaks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replay attacks.&lt;/strong&gt; A legitimate transaction between two agents is captured and replayed at a later time. Without temporal validation and session management, the receiving system processes the replayed transaction as if it were new.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delegation abuse.&lt;/strong&gt; Agent A legitimately delegates authority to Agent B for a specific task. Agent B then delegates to Agent C, and Agent C delegates further. Each delegation step dilutes accountability. By the time a fraudulent action occurs, it is buried under layers of delegation that are difficult to untangle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collusion.&lt;/strong&gt; Two or more agents coordinate to execute a scheme that would be flagged if any single agent attempted it alone. One agent creates the opportunity, another executes the transaction, and a third covers the tracks. Without cross agent behavioral analysis, these patterns are invisible.&lt;/p&gt;
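&lt;p&gt;Of these patterns, replay is the most mechanical to defend against. A common countermeasure, sketched here under the assumption that each signed message carries a nonce and a timestamp (the window size and field names are illustrative, not protocol-mandated):&lt;/p&gt;

```javascript
// Reject replayed messages: each message must carry a fresh nonce and a
// timestamp inside the acceptance window. Seen nonces are remembered.
const WINDOW_MS = 5 * 60 * 1000;

function makeReplayGuard() {
  const seen = new Set();
  return function accept(message, now) {
    // Stale or future-dated messages fall outside the window.
    if (Math.abs(now - message.timestamp) > WINDOW_MS) return false;
    // A nonce may be accepted exactly once.
    if (seen.has(message.nonce)) return false;
    seen.add(message.nonce);
    return true;
  };
}

const accept = makeReplayGuard();
const msg = { nonce: "n-1", timestamp: Date.now() };
console.log(accept(msg, Date.now())); // true: first delivery
console.log(accept(msg, Date.now())); // false: replay of the same nonce
```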

&lt;h2&gt;
  
  
  Building the Anti Fraud Layer
&lt;/h2&gt;

&lt;p&gt;An effective anti fraud system for AI to AI transactions needs four capabilities that traditional fraud systems lack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cryptographic identity verification.&lt;/strong&gt; Every agent in a transaction must prove its identity using cryptographic signatures, not shared secrets. The Truthlocks Machine Agent Identity Protocol gives every agent a DID (Decentralized Identifier) and a key pair. When Agent A initiates a transaction with Agent B, both agents present signed identity proofs that can be independently verified against the trust registry. Without the corresponding private key, an identity cannot be forged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real time scope enforcement.&lt;/strong&gt; Every transaction is checked against the initiating agent's authorized scopes. If an agent tries to execute a transaction type that is not in its scope set, the transaction is rejected before it executes. The rejection is logged, and the agent's trust score takes a hit. Repeated scope violations trigger automated review or revocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavioral anomaly detection.&lt;/strong&gt; Because traditional behavioral biometrics do not apply, you need a different baseline. The anti fraud system builds behavioral profiles based on each agent's transaction patterns: typical transaction sizes, frequency, counterparties, time of day patterns, and resource access sequences. Deviations from the baseline generate risk signals that can block transactions in real time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delegation chain auditing.&lt;/strong&gt; Every delegation in the system is recorded with the full chain of authority: who delegated to whom, with what scope limitations, for what duration. When a transaction occurs through a delegation chain, the anti fraud system validates every link in the chain and confirms that the final agent has authority for the specific transaction type. Broken chains or expired delegations result in immediate rejection.&lt;/p&gt;
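&lt;p&gt;Chain validation is worth spelling out, because it is easy to get wrong. A sketch of the rules described above, with assumed field names and an assumed depth limit of three:&lt;/p&gt;

```javascript
// Validate a delegation chain: every link must be unexpired, must connect
// to the previous link's delegatee, and may only narrow (never widen) the
// scope set. Depth limit and field names are illustrative assumptions.
const MAX_DEPTH = 3;

function validateChain(chain, now) {
  if (chain.length > MAX_DEPTH) return false;
  let allowed = null; // scopes carried forward from the previous link
  let holder = null;  // identity holding authority at this point
  for (const cred of chain) {
    if (now > cred.expiresAt) return false;            // expired link
    if (holder !== null) {
      if (cred.delegator !== holder) return false;     // broken chain
    }
    const narrows = allowed === null
      ? true
      : cred.scopes.every((s) => allowed.includes(s)); // narrowing only
    if (!narrows) return false;
    allowed = cred.scopes;
    holder = cred.delegatee;
  }
  return true;
}

const later = Date.now() + 60000;
const chain = [
  { delegator: "agent-a", delegatee: "agent-b", scopes: ["invoices:read", "invoices:pay"], expiresAt: later },
  { delegator: "agent-b", delegatee: "agent-c", scopes: ["invoices:read"], expiresAt: later },
];
console.log(validateChain(chain, Date.now())); // true
```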

&lt;h2&gt;
  
  
  Risk Signals in Practice
&lt;/h2&gt;

&lt;p&gt;The Truthlocks anti fraud system processes multiple risk signals for every transaction and produces a composite risk score. Signals include the trust scores of both agents, the transaction's deviation from historical patterns, the depth and validity of any delegation chain, the geographic and temporal context, and the sensitivity of the resources being accessed.&lt;/p&gt;

&lt;p&gt;Organizations configure risk policies that determine how to handle transactions at different risk levels. A low risk transaction proceeds normally. A medium risk transaction might require additional verification or be flagged for review. A high risk transaction is blocked automatically. Critical risk triggers the kill switch on the initiating agent.&lt;/p&gt;
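&lt;p&gt;The composite scoring and policy tiers can be sketched as a weighted sum mapped onto action thresholds. The weights and cutoffs below are illustrative defaults, not values the system mandates:&lt;/p&gt;

```javascript
// Combine normalized risk signals (each 0..1) into a composite score.
// The weights here are illustrative, not prescribed by the protocol.
const WEIGHTS = {
  lowTrust: 0.35,         // how far below full trust the agents sit
  patternDeviation: 0.30, // deviation from historical transaction patterns
  delegationDepth: 0.15,  // deeper delegation chains carry more risk
  resourceSensitivity: 0.20,
};

function riskScore(signals) {
  let score = 0;
  for (const key of Object.keys(WEIGHTS)) {
    score += WEIGHTS[key] * signals[key];
  }
  return score;
}

// Map a composite score onto the policy tiers described above.
function policyAction(score) {
  if (score >= 0.8) return "kill-switch";
  if (score >= 0.6) return "block";
  if (score >= 0.3) return "review";
  return "allow";
}

const signals = { lowTrust: 0.2, patternDeviation: 0.9, delegationDepth: 0.5, resourceSensitivity: 0.8 };
console.log(policyAction(riskScore(signals))); // review
```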

&lt;h2&gt;
  
  
  The Transparency Layer
&lt;/h2&gt;

&lt;p&gt;Every anti fraud decision is recorded in the transparency log with full context: the transaction details, the risk signals that were evaluated, the policy that was applied, and the outcome. This creates an auditable record that satisfies compliance requirements and enables post incident investigation.&lt;/p&gt;

&lt;p&gt;When a fraud analyst needs to understand why a transaction was blocked or why a suspicious pattern was not caught, they can query the transparency log for the complete decision trail. No guessing, no log correlation, no forensic reconstruction. The evidence is cryptographically chained and tamper evident.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;If you are building systems where AI agents transact with each other, the anti fraud layer should be designed in from the start, not bolted on after an incident. The &lt;a href="https://docs.truthlocks.com/guides/anti-fraud" rel="noopener noreferrer"&gt;anti fraud documentation&lt;/a&gt; covers risk signal configuration, policy rules, and integration with the machine identity system.&lt;/p&gt;

&lt;p&gt;The agents are already transacting. The question is whether you can see what they are doing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents. Register, verify, and manage non-human identities with trust scoring and instant revocation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fraud</category>
      <category>aiagents</category>
      <category>security</category>
      <category>fintech</category>
    </item>
    <item>
      <title>The Identity Gap in Agentic AI</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 09:49:39 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/the-identity-gap-in-agentic-ai-2iin</link>
      <guid>https://dev.to/dividebyzerogt/the-identity-gap-in-agentic-ai-2iin</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/the-identity-gap-in-agentic-ai" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Over the past thirty years, the technology industry built an entire discipline around human identity. We created passwords, then realized they were not enough. We added multi factor authentication. We built single sign on systems so people could authenticate once and access everything they needed. We developed role based access control so different people could have different permissions. We created audit trails so that when something went wrong, we could trace exactly who did what and when.&lt;/p&gt;

&lt;p&gt;Then AI agents arrived, and we threw all of that away.&lt;/p&gt;

&lt;p&gt;Most AI agents in production today authenticate with a shared API key or a service account. The agent might be calling your internal APIs, accessing customer data, triggering workflows, or making decisions that affect your business. But from the perspective of your systems, the agent is invisible. You cannot distinguish it from any other agent using the same credentials. You cannot control what specific actions it takes. You cannot trace a particular action back to a particular agent.&lt;/p&gt;

&lt;p&gt;This is the identity gap in agentic AI, and it is only getting wider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Gap Exists
&lt;/h2&gt;

&lt;p&gt;The identity gap did not happen because anyone made a bad decision. It happened because AI agents evolved faster than the infrastructure designed to manage them.&lt;/p&gt;

&lt;p&gt;When organizations first started using AI, the "agent" was a simple API call embedded in application code. It was not really autonomous. It ran when the application told it to, it did one specific thing, and the application handled the result. This model did not need agent identity because the agent was not really an agent. It was a function call.&lt;/p&gt;

&lt;p&gt;But AI agents today are fundamentally different. They are autonomous. They make decisions. They call tools. They interact with multiple systems. They operate over extended periods. They can even spawn other agents to delegate subtasks. Some agents are now building and deploying other agents.&lt;/p&gt;

&lt;p&gt;The infrastructure did not keep up. We are using the identity model from the "function call" era to manage the "autonomous agent" era, and the mismatch is becoming dangerous.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Gap Actually Costs You
&lt;/h2&gt;

&lt;p&gt;The identity gap creates four categories of risk that grow with every agent you deploy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security risk.&lt;/strong&gt; Without individual agent identities, a compromised agent is indistinguishable from a legitimate one. If an attacker gains access to the shared API key, they can impersonate any agent. You cannot revoke access for a single compromised agent without disrupting every agent that shares the same credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance risk.&lt;/strong&gt; Regulators increasingly want to know who or what accessed sensitive data and why. "An AI agent" is not an acceptable answer. They want specifics: which agent, authorized by whom, operating under what constraints, accessing what data, for what purpose. Without agent identity, you cannot provide these answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational risk.&lt;/strong&gt; When something goes wrong in a multi agent system, the first question is "which agent did this?" Without individual identities, debugging becomes forensic archaeology. You are sifting through generic logs trying to correlate timestamps and payload patterns to figure out which of your dozens or hundreds of agents caused the problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust risk.&lt;/strong&gt; As agents interact with external systems, partners, and customers, those counterparties need to know they can trust the agent they are dealing with. A verified identity with a track record is fundamentally different from an anonymous API call. The identity is what enables trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing the Gap
&lt;/h2&gt;

&lt;p&gt;Closing the identity gap requires treating AI agents the way we treat human users: as first class identities with their own credentials, permissions, behavioral profiles, and audit trails.&lt;/p&gt;

&lt;p&gt;This is exactly what the Machine Agent Identity Protocol does. Every agent gets a cryptographically verifiable identity. Every agent has defined scopes that control what it can do. Every agent's behavior contributes to a trust score that determines how much access it earns. Every significant action is recorded in a tamper evident transparency log.&lt;/p&gt;
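&lt;p&gt;As a rough illustration of that model, here is what an agent identity record could look like. The field names, the stand-in key bytes, and the registration helper are all hypothetical; this sketches the shape of the data, not the MAIP wire format.&lt;/p&gt;

```python
import hashlib
import secrets
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Illustrative agent record; field names are hypothetical, not MAIP's."""
    agent_id: str          # stable, unique identifier for this one agent
    scopes: tuple          # explicit list of what the agent may do
    key_fingerprint: str   # SHA-256 fingerprint of the agent's public key

def register_agent(scopes):
    # Stand-in for real key material: in practice this would be the public
    # half of an asymmetric keypair, not random bytes.
    public_key = secrets.token_bytes(32)
    return AgentIdentity(
        agent_id=str(uuid.uuid4()),
        scopes=tuple(scopes),
        key_fingerprint=hashlib.sha256(public_key).hexdigest(),
    )

agent = register_agent(["customers:read", "tickets:write"])
```

&lt;p&gt;The point is structural: each agent carries its own identifier and its own credential fingerprint, so one agent can be revoked without touching the others.&lt;/p&gt;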

&lt;p&gt;The protocol is open source under Apache 2.0 because we believe machine identity should be a shared standard, not a proprietary lock in. The specification is at &lt;a href="https://github.com/truthlocks/maip" rel="noopener noreferrer"&gt;github.com/truthlocks/maip&lt;/a&gt;. Truthlocks provides a production grade implementation, but anyone can build on the protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Window Is Now
&lt;/h2&gt;

&lt;p&gt;Every week, organizations deploy more agents with more autonomy and more access. The identity gap compounds. The longer you wait to address it, the harder it becomes to retrofit identity onto agents that are already running in production.&lt;/p&gt;

&lt;p&gt;The organizations that close this gap now will have a structural advantage. They will be able to scale their agent fleets with confidence, satisfy compliance requirements, respond to incidents quickly, and build trust with partners and customers.&lt;/p&gt;

&lt;p&gt;The organizations that wait will eventually face an incident they cannot explain, an audit they cannot pass, or a breach they cannot contain, because they never gave their agents the identity infrastructure that the situation demanded.&lt;/p&gt;

&lt;p&gt;Start closing the gap today. The &lt;a href="https://docs.truthlocks.com/guides/machine-identity" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; will walk you through registering your first agent in under ten minutes.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>identity</category>
      <category>security</category>
      <category>agenticai</category>
    </item>
    <item>
      <title>Why Your AI Agents Need a Trust Score</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 09:49:38 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/why-your-ai-agents-need-a-trust-score-375c</link>
      <guid>https://dev.to/dividebyzerogt/why-your-ai-agents-need-a-trust-score-375c</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://truthlocks.com/blog/why-your-ai-agents-need-a-trust-score" rel="noopener noreferrer"&gt;Truthlocks Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Think about how you onboard a new hire. On day one, they get a badge, a laptop, and access to the tools they need for their specific role. They do not get the root password to the production database. Over time, as they prove themselves reliable and competent, their access expands. If they make a serious mistake, access gets dialed back. This is common sense for humans. For AI agents, we have been doing the opposite.&lt;/p&gt;

&lt;p&gt;Most organizations deploying AI agents today hand them a shared API key and let them loose. The agent that was spun up five minutes ago to test a new prompt template gets the same level of access as the agent that has been running flawlessly in production for six months. There is no mechanism to distinguish between them. There is no way to say "this agent has earned our trust" or "this agent is brand new and should be on a short leash."&lt;/p&gt;

&lt;p&gt;That is what trust scores fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Trust Score Actually Is
&lt;/h2&gt;

&lt;p&gt;A trust score is a number between 0 and 100 that represents how much confidence you should place in a specific AI agent. It is not a static label assigned at registration. It is a living metric that goes up when the agent behaves well and goes down when it does not.&lt;/p&gt;

&lt;p&gt;The score is computed from five signals that we call trust factors:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavioral compliance&lt;/strong&gt; measures whether the agent stays within its expected operating patterns. An agent that consistently completes its assigned tasks without errors or unexpected actions scores high. An agent that starts making API calls it has never made before, or that suddenly increases its request volume tenfold, scores low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope adherence&lt;/strong&gt; tracks whether the agent respects the boundaries it was given. Every agent in the Truthlocks system has a defined set of scopes that describe what it is allowed to do. An agent authorized for &lt;code&gt;customers:read&lt;/code&gt; that never attempts a write operation scores high. An agent that repeatedly tries to access resources outside its scopes scores low, and those attempts are logged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anomaly score&lt;/strong&gt; measures how closely the agent's recent behavior tracks its historical baseline. Machine learning models analyze request patterns, timing, payload structures, and resource access sequences. The more normal the behavior, the higher the score.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Peer attestations&lt;/strong&gt; capture trust signals from other systems and agents. If an agent's outputs are consistently accepted and acted upon by downstream systems without errors or rejections, that is a positive signal. If downstream systems are frequently rejecting or rolling back the agent's work, that is a negative signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session hygiene&lt;/strong&gt; evaluates whether the agent manages its sessions properly. Does it authenticate cleanly? Does it respect session timeouts? Does it request only the scopes it needs rather than asking for everything? Good session management indicates a well engineered agent.&lt;/p&gt;
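&lt;p&gt;One simple way to picture how these five factors combine is a weighted average. The weights below are invented purely for illustration; the real Truthlocks computation is not published here and is certainly more involved.&lt;/p&gt;

```python
# Hypothetical weights, chosen only for illustration.
WEIGHTS = {
    "behavioral_compliance": 0.30,
    "scope_adherence": 0.25,
    "anomaly_score": 0.20,
    "peer_attestations": 0.15,
    "session_hygiene": 0.10,
}

def trust_score(factors: dict) -> float:
    """Combine per-factor scores (each 0-100) into a single 0-100 score."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

veteran = trust_score({
    "behavioral_compliance": 95,
    "scope_adherence": 100,
    "anomaly_score": 90,
    "peer_attestations": 85,
    "session_hygiene": 90,
})  # a long-running, well-behaved agent lands in the 90s
```

&lt;p&gt;Whatever the exact formula, the important property is that every factor is observable from the agent's behavior, so the score updates continuously rather than being assigned once.&lt;/p&gt;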

&lt;h2&gt;
  
  
  How Trust Scores Change Real Decisions
&lt;/h2&gt;

&lt;p&gt;Trust scores are not just a dashboard metric. They are an input to authorization decisions that happen in real time.&lt;/p&gt;

&lt;p&gt;Consider a financial services company that uses AI agents to process customer data. They can set a policy that says: any agent with a trust score below 60 can only access anonymized data. Agents scoring between 60 and 80 can access full customer records but cannot make changes. Only agents scoring above 80 can modify customer data.&lt;/p&gt;

&lt;p&gt;This means a newly registered agent starts with restricted access. As it operates cleanly over days and weeks, its trust score rises and its capabilities expand automatically. If something goes wrong and the agent starts behaving erratically, the score drops and access is immediately constrained. No human intervention required.&lt;/p&gt;
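&lt;p&gt;That tiered policy is straightforward to express in code. The sketch below uses the 60 and 80 thresholds from the example; the capability names are made up, and this is not the Truthlocks authorization API.&lt;/p&gt;

```python
def allowed_operations(trust_score: float) -> set:
    """Example policy: below 60, anonymized reads only; 60-80, full reads
    but no writes; above 80, reads and writes. Thresholds and capability
    names are illustrative, not part of any real API."""
    if trust_score < 60:
        return {"read:anonymized"}
    if trust_score <= 80:
        return {"read:anonymized", "read:full"}
    return {"read:anonymized", "read:full", "write:full"}
```

&lt;p&gt;Because the gate takes the live score as input, access expands and contracts automatically as the score moves.&lt;/p&gt;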

&lt;p&gt;This is not theoretical. This is how the Truthlocks trust score system works today.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kill Switch Connection
&lt;/h2&gt;

&lt;p&gt;Trust scores also connect to the kill switch. Organizations can set automated policies: if any agent's trust score drops below 20, revoke its identity immediately. All active sessions are terminated, all tokens are invalidated, and a revocation event is broadcast to every connected system. The agent is effectively shut down in seconds.&lt;/p&gt;
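&lt;p&gt;An automated revocation policy of that kind can be sketched in a few lines. The &lt;code&gt;revoke&lt;/code&gt; callback here is hypothetical; in a real deployment it would be the platform operation that terminates sessions, invalidates tokens, and broadcasts the revocation event.&lt;/p&gt;

```python
REVOCATION_THRESHOLD = 20  # the example threshold from the policy above

def enforce_kill_switch(agent_id: str, trust_score: float, revoke) -> bool:
    """Revoke the agent's identity the moment its trust score collapses.
    `revoke` is a hypothetical callback supplied by the platform."""
    if trust_score < REVOCATION_THRESHOLD:
        revoke(agent_id)
        return True
    return False

revoked = []
enforce_kill_switch("agent-7", trust_score=12, revoke=revoked.append)
# A healthy agent is left alone:
enforce_kill_switch("agent-8", trust_score=74, revoke=revoked.append)
```

&lt;p&gt;The check is trivial on purpose: the hard part is the monitoring that feeds the score, not the policy that acts on it.&lt;/p&gt;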

&lt;p&gt;This is critical for the scenario every security team worries about: a compromised agent. Whether the compromise comes from a prompt injection attack, a stolen key, or a bug that causes the agent to go haywire, the trust score will detect the abnormal behavior and the kill switch can activate automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Trust Into Your Agent Architecture
&lt;/h2&gt;

&lt;p&gt;If you are building AI agents today, start thinking about trust as a first class architectural concern. Register every agent with a unique identity. Define explicit scopes for what each agent is allowed to do. Monitor behavioral signals and use them to gate access. Have a plan for what happens when an agent goes wrong.&lt;/p&gt;
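&lt;p&gt;Checking a request against an agent's declared scopes is the simplest of those building blocks. The &lt;code&gt;resource:action&lt;/code&gt; format follows the &lt;code&gt;customers:read&lt;/code&gt; example used earlier; the wildcard convention is an assumption added for illustration.&lt;/p&gt;

```python
def is_within_scope(granted: set, requested: str) -> bool:
    """True if `requested` (a "resource:action" string) is covered by the
    granted scopes, either exactly or via a hypothetical "resource:*"."""
    resource, _, _action = requested.partition(":")
    return requested in granted or f"{resource}:*" in granted

granted = {"customers:read", "reports:*"}
is_within_scope(granted, "customers:read")    # allowed
is_within_scope(granted, "customers:delete")  # denied, and worth logging
is_within_scope(granted, "reports:create")    # allowed via wildcard
```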

&lt;p&gt;The agents you deploy today will multiply. The access they have today will expand. The damage they can do if things go wrong will grow. Trust scores give you a systematic way to manage that risk as your agent fleet scales.&lt;/p&gt;

&lt;p&gt;To start building with trust scores, visit the &lt;a href="https://docs.truthlocks.com/guides/machine-identity" rel="noopener noreferrer"&gt;Machine Identity documentation&lt;/a&gt; or sign in to the &lt;a href="https://console.truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks Console&lt;/a&gt; to register your first agent.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; provides machine identity infrastructure for AI agents.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trust</category>
      <category>aiagents</category>
      <category>security</category>
      <category>machineidentity</category>
    </item>
    <item>
      <title>Machine Identity Crisis: Why the AI Era Needs a New Trust Protocol</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Thu, 09 Apr 2026 12:35:51 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/machine-identity-crisis-why-the-ai-era-needs-a-new-trust-protocol-kkb</link>
      <guid>https://dev.to/dividebyzerogt/machine-identity-crisis-why-the-ai-era-needs-a-new-trust-protocol-kkb</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Great Identity Shift of 2026&lt;/li&gt;
&lt;li&gt;Understanding the 82:1 Machine Identity Crisis&lt;/li&gt;
&lt;li&gt;Why Traditional IAM Falls Short in the AI Era&lt;/li&gt;
&lt;li&gt;The Rise of Identity-Based Attacks&lt;/li&gt;
&lt;li&gt;MAIP: The Machine Agent Identity Protocol&lt;/li&gt;
&lt;li&gt;Building the Machine Trust Ecosystem&lt;/li&gt;
&lt;li&gt;Truthlocks' Infrastructure for Scale&lt;/li&gt;
&lt;li&gt;From Crisis to Opportunity&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We've hit a breaking point. For every human on Earth, 82 machine identities now operate in our digital ecosystem. This isn't some distant future scenario—it's happening right now in 2026, creating the biggest security challenge our industry has ever seen.&lt;/p&gt;

&lt;p&gt;The path from 2024's AI excitement to 2025's widespread experimentation has led us to what security leaders call the "Year of the Defender." Organizations everywhere are racing to secure an autonomous economy where AI agents, smart contracts, and automated workflows vastly outnumber human users.&lt;/p&gt;

&lt;p&gt;Here's what the 82:1 machine identity crisis means, why our current security models can't handle it, and how a new trust protocol is emerging to solve this fundamental problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Great Identity Shift of 2026
&lt;/h2&gt;

&lt;p&gt;The numbers paint a clear picture. In 2024, we celebrated AI breakthroughs and proof-of-concepts. In 2025, businesses experimented with AI agents and automation at scale. Now in 2026, we're dealing with the fallout of rapid AI adoption without proper identity infrastructure.&lt;/p&gt;

&lt;p&gt;Machine identities encompass AI agents, IoT devices, automated trading systems, smart contracts, robotic process automation (RPA) bots, and countless other autonomous entities. Each needs authentication, authorization, and verification—yet most run on security models built for human users.&lt;/p&gt;

&lt;p&gt;This shift happened faster than anyone expected. Enterprise AI deployments took off through 2025, with organizations launching thousands of AI agents for customer service, data analysis, and workflow automation. IoT ecosystems exploded simultaneously, while blockchain applications introduced entirely new categories of autonomous actors.&lt;/p&gt;

&lt;p&gt;The result? A digital world where machine-to-machine interactions dominate, but security protocols still assume human operators are calling the shots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk66luuspdqsbgq2o20df.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk66luuspdqsbgq2o20df.jpeg" alt=" " width="800" height="1448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the 82:1 Machine Identity Crisis
&lt;/h2&gt;

&lt;p&gt;The 82:1 ratio isn't just about numbers—it exposes a massive gap between our security infrastructure and reality. Traditional identity and access management (IAM) systems expect human operators with predictable patterns: login times, geographic locations, and consistent behavior.&lt;/p&gt;

&lt;p&gt;Machine identities work completely differently. They authenticate thousands of times per second, operate across multiple regions at once, and execute complex workflows without human oversight. They don't use passwords, don't follow business hours, and don't exhibit the human behavioral patterns that traditional security systems depend on for anomaly detection.&lt;/p&gt;

&lt;p&gt;Take a typical enterprise AI agent managing supply chain operations. It authenticates with dozens of external APIs, processes real-time data feeds, and makes autonomous decisions affecting millions of dollars in transactions. Most organizations secure these agents using the same methods they use for human employees.&lt;/p&gt;

&lt;p&gt;This creates dangerous blind spots. Security teams can't see into machine-to-machine communications, struggle to audit autonomous decisions, and can't verify whether AI-generated actions are authentic. The attack surface keeps expanding while traditional security tools fall further behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Traditional IAM Falls Short in the AI Era
&lt;/h2&gt;

&lt;p&gt;Human-centric IAM systems make several assumptions that completely break down when machines take over:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password-based authentication&lt;/strong&gt; makes no sense for AI agents that need programmatic authentication. Static credentials create security risks, while dynamic token management adds complexity that doesn't scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role-based access control (RBAC)&lt;/strong&gt; can't handle autonomous agents that need flexible permissions based on real-time context. An AI agent managing cloud resources might need different access levels depending on current system load, time of day, or detected anomalies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavioral analytics&lt;/strong&gt; built for human patterns can't establish baselines for machine behavior. AI agents legitimately exhibit patterns that would trigger fraud alerts for human users—accessing systems 24/7, processing data at superhuman speeds, or operating from multiple locations simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit trails&lt;/strong&gt; become unmanageable when machines generate thousands of authentication events per minute. Traditional logging systems can't capture the context needed to understand machine decision-making or verify autonomous actions.&lt;/p&gt;

&lt;p&gt;The core issue is trust. With humans, we rely on legal frameworks, employment contracts, and social accountability. With machines, we need cryptographic proof and mathematical certainty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8yv83diiclettdwc2i9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8yv83diiclettdwc2i9.jpeg" alt=" " width="800" height="1261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rise of Identity-Based Attacks
&lt;/h2&gt;

&lt;p&gt;The security implications are already here. Recent studies show over 90% of organizations experienced an identity-based attack in the past year, with machine identities increasingly becoming the primary target.&lt;/p&gt;

&lt;p&gt;Attackers go after machine identities because they often have elevated privileges, operate with minimal monitoring, and can execute actions at scale. A compromised AI agent can access sensitive data, manipulate automated processes, or impersonate legitimate systems across an entire network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgokz2cp9b1derxake49.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgokz2cp9b1derxake49.jpeg" alt=" " width="800" height="758"&gt;&lt;/a&gt;&lt;br&gt;
Common attack patterns include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credential harvesting&lt;/strong&gt; from poorly secured AI agents or automated systems. Unlike human credentials, machine credentials often sit in configuration files or environment variables with weak protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privilege escalation&lt;/strong&gt; through compromised automation workflows. Attackers exploit the broad permissions typically granted to AI agents to access systems and data far beyond their intended scope.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impersonation attacks&lt;/strong&gt; where malicious actors create fake machine identities that mimic legitimate AI agents or automated systems. Without proper verification mechanisms, these imposters can operate undetected for months.&lt;/p&gt;

&lt;p&gt;Detection is the real challenge. Traditional security tools excel at spotting unusual human behavior but struggle with the complex, high-volume patterns that legitimate machine operations generate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0mxvbyrhgx34tbost12.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0mxvbyrhgx34tbost12.jpeg" alt=" " width="800" height="1429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MAIP: &lt;a href="https://github.com/truthlocks/maip" rel="noopener noreferrer"&gt;The Machine Agent Identity Protocol&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The solution requires moving from human-centric to cryptographic trust models. Enter MAIP (&lt;a href="https://www.truthlocks.com/product/machine-identity" rel="noopener noreferrer"&gt;Machine Agent Identity Protocol&lt;/a&gt;)—an emerging open standard built specifically for AI agent authentication and machine-to-machine verification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu99f3ij9xh6690j3bil.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu99f3ij9xh6690j3bil.jpeg" alt=" " width="800" height="1239"&gt;&lt;/a&gt;&lt;br&gt;
MAIP tackles the unique requirements of machine identities through several key innovations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cryptographic identity anchoring&lt;/strong&gt; gives each machine identity a mathematically verifiable foundation. Instead of passwords or tokens, MAIP uses public-key cryptography to establish immutable identity proofs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time verification&lt;/strong&gt; enables instant authentication without depending on central authorities. Machine identities can prove their authenticity through cryptographic signatures that any system can verify independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context-aware permissions&lt;/strong&gt; let machine identities present different capabilities based on current operational context. An AI agent can cryptographically prove not just its identity, but also its current authorization level and operational constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit-ready attestations&lt;/strong&gt; provide complete visibility into machine actions through cryptographically signed logs. Every decision, access request, and data operation generates verifiable proof that can be audited without slowing down operations.&lt;/p&gt;
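&lt;p&gt;To make the attestation idea concrete, here is a deliberately simplified tamper-evident log built from a hash chain. MAIP's attestations are described as cryptographically signed; this sketch substitutes plain SHA-256 chaining for signatures, so treat it as the intuition, not the protocol.&lt;/p&gt;

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash also covers the previous entry's hash,
    so altering any past entry breaks every hash that follows it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every link; any tampering makes this return False."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "agent-7", "action": "customers:read"})
append_entry(log, {"agent": "agent-7", "action": "tickets:update"})
assert verify_chain(log)
log[0]["event"]["action"] = "customers:delete"  # tamper with history
assert not verify_chain(log)
```

&lt;p&gt;Real transparency logs add signatures and Merkle trees on top of this so that auditing does not require replaying the whole chain, but the core property is the same: history cannot be rewritten silently.&lt;/p&gt;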

&lt;p&gt;MAIP shifts the paradigm from "trust but verify" to "verify then trust"—a model that actually works for autonomous systems operating at machine speed and scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Machine Trust Ecosystem
&lt;/h2&gt;

&lt;p&gt;Implementing MAIP demands infrastructure that can handle machine-scale authentication demands. The system must process thousands of verification requests per second while maintaining cryptographic integrity and providing real-time responses.&lt;/p&gt;

&lt;p&gt;Key infrastructure requirements include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-throughput processing&lt;/strong&gt; to handle the volume of machine-to-machine authentication requests that modern AI systems generate. Traditional IAM systems built for human login patterns simply can't scale to machine requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware security modules (HSMs)&lt;/strong&gt; to protect cryptographic keys and ensure tamper-resistant operations. Machine identities need higher security standards than human credentials because they often operate with elevated privileges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-region redundancy&lt;/strong&gt; to support global AI operations without introducing latency or single points of failure. Machine identities need consistent authentication performance regardless of where they operate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Offline verification capabilities&lt;/strong&gt; to ensure machine identities can be authenticated even when network connectivity is limited or compromised. This is crucial for edge computing and IoT deployments.&lt;/p&gt;

&lt;p&gt;The infrastructure must also integrate seamlessly with existing enterprise systems while providing flexibility to support emerging AI architectures and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Truthlocks' Infrastructure for Scale
&lt;/h2&gt;

&lt;p&gt;Truthlocks provides the foundational infrastructure for implementing MAIP at enterprise scale. The platform combines cryptographic proof generation with high-performance verification capabilities designed specifically for machine identity requirements.&lt;/p&gt;

&lt;p&gt;The system processes over 10,000 transactions per second while maintaining sub-40ms verification latency. This performance enables real-time authentication for AI agents and automated systems without introducing operational delays.&lt;/p&gt;

&lt;p&gt;Security features include FIPS 140-2 Level 3 HSM integration for cryptographic key protection and multi-region redundancy to ensure global availability. The infrastructure supports both online and offline verification through an SDK that enables independent proof validation without central authority dependencies.&lt;/p&gt;

&lt;p&gt;Organizations can implement MAIP through Truthlocks' verification infrastructure while maintaining complete control over their machine identity policies and operational requirements. The platform provides the cryptographic foundation while allowing enterprises to customize authentication workflows for their specific AI architectures.&lt;/p&gt;

&lt;p&gt;Learn more about enterprise implementation options at &lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;truthlocks.com&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Crisis to Opportunity
&lt;/h2&gt;

&lt;p&gt;The 82:1 machine identity crisis represents both a major challenge and a transformative opportunity. Organizations that tackle machine identity security proactively will gain competitive advantages in AI deployment, operational efficiency, and regulatory compliance.&lt;/p&gt;

&lt;p&gt;Moving to cryptographic trust models opens up new possibilities: AI agents that can prove their decisions, automated systems with verifiable audit trails, and machine-to-machine interactions with mathematical certainty rather than assumed trust.&lt;/p&gt;

&lt;p&gt;Early adopters are already seeing results. Enterprises implementing MAIP report improved security posture, fewer identity-related incidents, and enhanced ability to scale AI operations with confidence.&lt;/p&gt;

&lt;p&gt;The key insight is recognizing that machine identities aren't just more human identities—they require fundamentally different approaches to authentication, authorization, and verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What exactly is the 82:1 machine identity crisis?
&lt;/h3&gt;

&lt;p&gt;The 82:1 machine identity crisis refers to the current ratio of machine identities (AI agents, IoT devices, automated systems) to human users in our digital ecosystem. This represents a fundamental shift where traditional human-centric security models are inadequate for protecting machine-dominated environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does MAIP differ from traditional IAM systems?
&lt;/h3&gt;

&lt;p&gt;MAIP uses cryptographic proof rather than password-based authentication, provides real-time verification without central authorities, and offers context-aware permissions designed for autonomous systems. Unlike traditional IAM, MAIP is built specifically for machine-scale operations and verification requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why can't existing security tools handle machine identities effectively?
&lt;/h3&gt;

&lt;p&gt;Traditional security tools rely on human behavioral patterns, password-based authentication, and role-based access control. Machine identities operate 24/7, authenticate thousands of times per second, and need flexible permissions based on real-time context—requirements that overwhelm human-centric security models.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the main security risks of unmanaged machine identities?
&lt;/h3&gt;

&lt;p&gt;Unmanaged machine identities create risks including credential harvesting, privilege escalation, impersonation attacks, and blind spots in audit trails. Since machine identities often have elevated privileges and minimal monitoring, they present attractive targets for attackers seeking to operate at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does cryptographic verification work for machine identities?
&lt;/h3&gt;

&lt;p&gt;Cryptographic verification uses public-key cryptography to create mathematically provable identity assertions. Each machine identity generates cryptographic signatures that can be verified independently, providing certainty about authenticity without requiring central authority validation.&lt;/p&gt;

&lt;h3&gt;
  
  
  What infrastructure requirements are needed for MAIP implementation?
&lt;/h3&gt;

&lt;p&gt;MAIP implementation requires high-throughput processing (10,000+ TPS), hardware security modules for key protection, multi-region redundancy for global operations, and offline verification capabilities. The infrastructure must handle machine-scale authentication while maintaining cryptographic integrity.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can organizations start addressing their machine identity challenges?
&lt;/h3&gt;

&lt;p&gt;Organizations should begin by inventorying their machine identities, assessing current security gaps, and evaluating MAIP-compatible infrastructure. Pilot programs with critical AI agents or automated systems provide practical experience before full-scale implementation.&lt;/p&gt;

&lt;p&gt;The machine identity crisis of 2026 demands immediate attention, but the solutions are within reach. Organizations that act now to implement cryptographic trust protocols will be positioned to thrive in an AI-dominated future while maintaining security and operational integrity.&lt;/p&gt;

&lt;p&gt;© 2026 Truthlocks, Inc.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What Is Proof of Authorship? How Cryptographic Timestamps Protect Your Work</title>
      <dc:creator>Nnaa</dc:creator>
      <pubDate>Wed, 25 Mar 2026 02:59:06 +0000</pubDate>
      <link>https://dev.to/dividebyzerogt/what-is-proof-of-authorship-how-cryptographic-timestamps-protect-your-work-o48</link>
      <guid>https://dev.to/dividebyzerogt/what-is-proof-of-authorship-how-cryptographic-timestamps-protect-your-work-o48</guid>
      <description>

&lt;p&gt;You finish a design, write a piece of code, or draft a report. You know you created it first. But if someone disputes that later, what do you actually have to show for it?&lt;/p&gt;

&lt;p&gt;An email to yourself does not hold up. A screenshot has no verifiable timestamp. Even a notarized document can be questioned if the notary is unavailable. The problem is not that you lack proof. The problem is that the proof you have depends on someone else believing you.&lt;/p&gt;

&lt;p&gt;Cryptographic proof of authorship solves this by removing the need for belief entirely. This article explains what it is, how it works technically, and why both individual creators and large organizations are starting to treat it as standard practice.&lt;/p&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Proof of Authorship Actually Means&lt;/li&gt;
&lt;li&gt;How SHA-256 Hashing Creates a Digital Fingerprint&lt;/li&gt;
&lt;li&gt;What a Cryptographic Timestamp Does&lt;/li&gt;
&lt;li&gt;How Merkle Trees and Transparency Logs Strengthen the Chain&lt;/li&gt;
&lt;li&gt;Proof of Authorship vs. Copyright Registration&lt;/li&gt;
&lt;li&gt;Who Needs This and Why&lt;/li&gt;
&lt;li&gt;How Truthlocks Implements This&lt;/li&gt;
&lt;li&gt;Independent Verification Without a Central Authority&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Sources&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Proof of Authorship Actually Means
&lt;/h2&gt;

&lt;p&gt;Proof of authorship is a verifiable record that links a specific piece of content to a specific creator at a specific point in time. It answers three questions at once: What was created? Who created it? When did it exist in this exact form?&lt;/p&gt;

&lt;p&gt;Traditional methods answer these questions through institutional trust. A notary signs a document. A court accepts a postmark. A platform logs an upload. All of these work until the institution is unavailable, compromised, or simply not trusted by the other party.&lt;/p&gt;

&lt;p&gt;Cryptographic proof of authorship answers the same three questions using math. No institution needs to vouch for you. The proof either checks out or it does not. As &lt;a href="https://decerts.com/en/blog/proof-of-authorship-timestamps" rel="noopener noreferrer"&gt;Decerts explains in their overview of proof of authorship timestamps&lt;/a&gt;, the goal is to produce evidence that is independently reproducible, not dependent on a single authority's word.&lt;/p&gt;




&lt;h2&gt;
  
  
  How SHA-256 Hashing Creates a Digital Fingerprint
&lt;/h2&gt;

&lt;p&gt;SHA-256 is a cryptographic hash function. You feed it any file, of any size, and it produces a fixed-length 256-bit digest, conventionally written as a 64-character hexadecimal string. Change a single byte in the file, and the hash changes completely. The same file always produces the same hash.&lt;/p&gt;

&lt;p&gt;This makes SHA-256 useful as a fingerprint. You do not need to store the file to prove it existed. You only need to store the hash.&lt;/p&gt;

&lt;p&gt;Here is a concrete, verifiable example. Hashing an empty input with SHA-256 always produces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A 10MB video file produces its own equally distinctive string, unique to that exact version of that file. If even one frame changes, the hash is completely different. &lt;a href="https://blog.pagefreezer.com/sha-256-benefits-evidence-authentication" rel="noopener noreferrer"&gt;Legal and compliance experts at PageFreezer note&lt;/a&gt; that SHA-256 is widely accepted in legal and regulatory contexts precisely because of this determinism. It is not a claim. It is a mathematical fact.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.thebulldog.law/sha-256-encryption-and-its-legal-implications-in-digital-asset-cases" rel="noopener noreferrer"&gt;SHA-256 also carries significant legal weight in digital asset cases&lt;/a&gt;, where courts have increasingly accepted hash-based evidence as proof of file integrity and identity. And as &lt;a href="https://legalstamp.app/en/blog/sha-256-hash-prove-file-integrity" rel="noopener noreferrer"&gt;LegalStamp's technical breakdown explains&lt;/a&gt;, the hash alone proves file integrity, but it needs a timestamp to prove &lt;em&gt;when&lt;/em&gt; that file existed.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh98vbdb079k2gotnih4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh98vbdb079k2gotnih4.png" alt="truthlocks proof of authorship"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Cryptographic Timestamp Does
&lt;/h2&gt;

&lt;p&gt;A hash tells you what existed. A timestamp tells you when. Together, they establish that a specific version of a file existed at a specific moment.&lt;/p&gt;

&lt;p&gt;A cryptographic timestamp works by anchoring your hash to an external, tamper-evident record. This might be a blockchain, a transparency log, or a signed certificate from a trusted time authority. The key property is that the timestamp cannot be backdated without breaking the cryptographic chain.&lt;/p&gt;

&lt;p&gt;This is different from a file's "created" metadata. Anyone can change that field in seconds. A cryptographic timestamp is embedded in a structure that would require rewriting history to falsify.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://timeproofs.io/proofspec.html" rel="noopener noreferrer"&gt;The open ProofSpec protocol&lt;/a&gt; formalizes exactly this: a proof of existence combines a content hash with a timestamp anchored in a verifiable data structure, producing a record that any party can independently check. &lt;a href="https://www.certifywebcontent.com/the-new-standard-for-digital-evidence-hashes-timestamps-and-forensic-declarations/" rel="noopener noreferrer"&gt;Digital evidence standards are moving in this direction&lt;/a&gt;, with hashes, timestamps, and forensic declarations forming the new baseline for admissible digital proof.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Merkle Trees and Transparency Logs Strengthen the Chain
&lt;/h2&gt;

&lt;p&gt;Individual timestamps are useful. But what makes cryptographic proof of authorship genuinely robust is the data structure underneath it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Merkle Trees
&lt;/h3&gt;

&lt;p&gt;A Merkle tree is a binary tree in which every leaf node contains the hash of a record, and every parent node contains the hash of its two children's hashes concatenated. This means any single record can be verified against the root hash of the entire tree without exposing every other record.&lt;/p&gt;

&lt;p&gt;If you anchor your file hash in a Merkle tree, anyone can verify your specific record using only a short proof path, not the entire dataset. &lt;a href="https://transparency.dev/verifiable-data-structures/" rel="noopener noreferrer"&gt;Google's transparency.dev documentation&lt;/a&gt; covers how these structures guarantee tamper-evidence at scale. The math ensures that altering any record changes the root hash, making tampering immediately detectable.&lt;/p&gt;
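&lt;p&gt;A short proof path is easy to see in code. This is a simplified sketch (it duplicates the last node on odd levels, one of several common conventions, and omits the leaf/node domain separation that production logs like those in Certificate Transparency use):&lt;/p&gt;

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Leaves are already hashes; combine them pairwise, level by level.
    level = list(leaves)
    while len(level) != 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    # Collect the sibling hash at each level. This short path is all a
    # verifier needs, not the whole dataset.
    level, path = list(leaves), []
    while len(level) != 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        path.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(leaf, path, root):
    # Recompute the path from leaf to root; any tampering breaks it.
    node = leaf
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

leaves = [h(f"record-{i}".encode()) for i in range(5)]
root = merkle_root(leaves)
path = inclusion_proof(leaves, 2)
assert verify_inclusion(leaves[2], path, root)         # record verifies
assert not verify_inclusion(h(b"forged"), path, root)  # tampering caught
```

&lt;p&gt;For five records the proof path is just three hashes; for a million records it is about twenty. That logarithmic scaling is what makes verification at scale practical.&lt;/p&gt;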

&lt;h3&gt;
  
  
  Transparency Logs
&lt;/h3&gt;

&lt;p&gt;Transparency logs apply Merkle trees to append-only ledgers. New records are added but never deleted or modified. Anyone can audit the full history.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://research.swtch.com/tlog" rel="noopener noreferrer"&gt;Russ Cox's analysis of transparent logs&lt;/a&gt; explains why this matters for skeptical clients: you do not have to trust the log operator. You can verify the log's consistency yourself using the published root hashes. &lt;a href="https://stacklok.com/blog/decoding-rekor-understanding-sigstores-transparency-log" rel="noopener noreferrer"&gt;Sigstore's Rekor transparency log&lt;/a&gt; is a practical example of this applied to software artifact signing, and the same principles apply directly to proof of authorship systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Proof of Authorship vs. Copyright Registration
&lt;/h2&gt;

&lt;p&gt;These are not the same thing, and confusing them creates real problems.&lt;/p&gt;

&lt;p&gt;Copyright registration is a legal process. In the US, it gives you the right to sue for statutory damages and attorney's fees. It requires filing with the Copyright Office, paying a fee, and waiting weeks for processing. It is powerful, but it is slow and jurisdiction-specific.&lt;/p&gt;

&lt;p&gt;Proof of authorship is a technical record. It does not grant you legal rights. What it does is establish a verifiable timeline. If you later register a copyright and someone claims they created the work first, your cryptographic timestamp is evidence that your version existed before theirs.&lt;/p&gt;

&lt;p&gt;Think of it as the difference between a deed and a survey. The deed gives you ownership. The survey proves exactly what you own and when the boundaries were established. You want both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.scoredetect.com/blog/posts/encryption-techniques-for-copyright-protection-explained" rel="noopener noreferrer"&gt;Encryption and cryptographic techniques for copyright protection&lt;/a&gt; are increasingly used alongside formal registration, not as a replacement. And as &lt;a href="https://copyrights.live/the-rise-of-digital-verification-what-creators-need-to-know" rel="noopener noreferrer"&gt;Copyrights.live notes in their guide for creators&lt;/a&gt;, digital verification is becoming a standard first step before any formal legal process.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Needs This and Why
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Individual Creators
&lt;/h3&gt;

&lt;p&gt;Writers, designers, photographers, and developers all produce work that can be copied or disputed. A cryptographic proof generated before you publish gives you a timestamped record that predates any copy. You do not need to prove you are better at your craft. You just need to prove you were first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enterprises
&lt;/h3&gt;

&lt;p&gt;Companies produce contracts, reports, source code, and internal communications that may become evidence in disputes or audits. A SHA-256 file hash proof created at the time of production is far stronger than a file server log that an administrator could theoretically alter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governments and Regulated Industries
&lt;/h3&gt;

&lt;p&gt;Regulatory compliance often requires demonstrating that records were not altered after a certain date. Cryptographic proof of authorship provides exactly that guarantee, without requiring a third party to vouch for your record-keeping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://truthlocks.com/solutions" rel="noopener noreferrer"&gt;Truthlocks solutions for enterprises and governments&lt;/a&gt; are built around this infrastructure, supporting attestation at scale for organizations that need verifiable records across large volumes of documents and data.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Truthlocks Implements This
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;Truthlocks&lt;/a&gt; is built around the workflow described above. You upload a file, the system computes its SHA-256 hash, and it generates a cryptographically signed, timestamped proof of authorship. That proof is yours to keep and share.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://truthlocks.com/product" rel="noopener noreferrer"&gt;Truthlocks product&lt;/a&gt; is designed for two audiences. Individual creators use it to protect original work before publishing, getting a verifiable record in seconds without needing to understand the cryptography underneath. Enterprises and governments use the same infrastructure to mint, anchor, and verify attestations at scale through an SDK.&lt;/p&gt;

&lt;p&gt;A few things worth understanding about how this is built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The hash is computed from your file. Truthlocks does not need to store your file to verify the proof later.&lt;/li&gt;
&lt;li&gt;The timestamp is cryptographically signed, meaning it cannot be altered without invalidating the signature.&lt;/li&gt;
&lt;li&gt;Verification works offline or online. You do not need Truthlocks to be running to verify a proof. The math works independently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This last point matters more than it might seem. A proof that requires a central authority to verify is only as reliable as that authority. A proof that anyone can verify independently is reliable by design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://proofstamper.com/en/compare/file-timestamp-tools-2026" rel="noopener noreferrer"&gt;A 2026 comparison of file timestamp tools&lt;/a&gt; highlights independent verifiability as one of the most important criteria when evaluating these systems, and it is the property that separates cryptographic proof from simple platform-based logging.&lt;/p&gt;




&lt;h2&gt;
  
  
  Independent Verification Without a Central Authority
&lt;/h2&gt;

&lt;p&gt;The phrase "no central authority required" sounds like marketing. It is actually a technical property worth understanding.&lt;/p&gt;

&lt;p&gt;When Truthlocks generates a proof, the verification process uses the SHA-256 hash, the cryptographic signature, and the anchored timestamp. Anyone with the original file and the proof document can run the verification themselves. They hash the file, check that the hash matches the proof, verify the signature, and confirm the timestamp is anchored in a tamper-evident structure.&lt;/p&gt;

&lt;p&gt;None of those steps require Truthlocks to be involved. If Truthlocks shut down tomorrow, every proof it ever generated would still be verifiable. That is what "no central authority" actually means in practice.&lt;/p&gt;
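&lt;p&gt;The first two steps of that process fit in a few lines. One caveat on this sketch: real proof systems use asymmetric signatures (such as Ed25519), so anyone can verify with the issuer's public key alone; here an HMAC stands in purely to keep the example dependency-free, and the key, field names, and sample payload are all invented for illustration:&lt;/p&gt;

```python
import hashlib
import hmac
import json

# Stand-in signing key. An asymmetric keypair would let third parties
# verify without ever holding a secret; an HMAC keeps this sketch
# runnable with only the standard library.
ISSUER_KEY = b"demo-issuer-key"

def sign(proof):
    payload = json.dumps(proof, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify(file_bytes, proof, signature):
    # Step 1: hash the file you actually hold and compare to the proof.
    if hashlib.sha256(file_bytes).hexdigest() != proof["sha256"]:
        return False
    # Step 2: check that the proof document itself was not altered.
    return hmac.compare_digest(sign(proof), signature)

proof = {"sha256": hashlib.sha256(b"report.pdf bytes").hexdigest(),
         "witnessed_at": 1765000000}
sig = sign(proof)
assert verify(b"report.pdf bytes", proof, sig)     # untouched file passes
tampered = dict(proof, witnessed_at=1000)
assert not verify(b"report.pdf bytes", tampered, sig)  # backdating fails
```

&lt;p&gt;The remaining step, confirming the timestamp is anchored in a tamper-evident structure, is the Merkle inclusion check described earlier. None of the three steps needs a live server.&lt;/p&gt;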

&lt;p&gt;This is the same property that makes transparency logs useful for software supply chain security. The log operator cannot lie without being caught, because anyone can check the math.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is proof of authorship in simple terms?&lt;/strong&gt;&lt;br&gt;
Proof of authorship is a verifiable record showing that a specific person created a specific piece of content at a specific time. Cryptographic versions of this proof use SHA-256 hashing and timestamps to make the record tamper-evident and independently verifiable without relying on any single authority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is a SHA-256 hash proof legally valid?&lt;/strong&gt;&lt;br&gt;
SHA-256 hashes are accepted as evidence in many legal and regulatory contexts, particularly for establishing file integrity and existence at a point in time. They are not a substitute for copyright registration, but they provide strong supporting evidence in disputes. Courts in multiple jurisdictions have accepted hash-based evidence in digital asset and intellectual property cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can someone fake a cryptographic timestamp?&lt;/strong&gt;&lt;br&gt;
Backdating a cryptographic timestamp would require altering the underlying data structure, which changes the root hash of the Merkle tree and makes the tampering immediately detectable. In practice, a properly anchored cryptographic timestamp cannot be backdated without breaking the verification chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between proof of authorship and copyright?&lt;/strong&gt;&lt;br&gt;
Copyright is a legal right that gives you the ability to sue for infringement. Proof of authorship is a technical record that establishes when a specific version of a file existed. You can have one without the other. Most creators benefit from having both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I need to store my original file to verify a proof later?&lt;/strong&gt;&lt;br&gt;
Yes. Verification requires hashing the original file and comparing it to the hash in the proof. If the file has changed, the hashes will not match. The proof document itself does not contain your file, only its fingerprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does verification require an internet connection?&lt;/strong&gt;&lt;br&gt;
Not necessarily. Truthlocks supports offline verification through its SDK. The verification process uses the proof document and the original file, not a live connection to any server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who uses proof of authorship beyond individual creators?&lt;/strong&gt;&lt;br&gt;
Enterprises use it for document integrity in audits and disputes. Governments use it to certify that records were not altered after a specific date. Legal teams use it as supporting evidence in intellectual property cases. Any organization that needs to prove a document's state at a specific time has a use case for this.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Proof of authorship is not a new concept. What is new is the ability to generate it yourself, in seconds, without depending on an institution to vouch for you. SHA-256 hashing gives you a unique fingerprint of your work. A cryptographic timestamp anchors that fingerprint in time. Together, they produce a record that anyone can verify independently, now or years from now.&lt;/p&gt;

&lt;p&gt;If you create anything worth protecting, generating a proof before you publish costs you almost nothing. Not having one when you need it can cost you everything.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://decerts.com/en/blog/proof-of-authorship-timestamps" rel="noopener noreferrer"&gt;https://decerts.com/en/blog/proof-of-authorship-timestamps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.pagefreezer.com/sha-256-benefits-evidence-authentication" rel="noopener noreferrer"&gt;https://blog.pagefreezer.com/sha-256-benefits-evidence-authentication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.thebulldog.law/sha-256-encryption-and-its-legal-implications-in-digital-asset-cases" rel="noopener noreferrer"&gt;https://www.thebulldog.law/sha-256-encryption-and-its-legal-implications-in-digital-asset-cases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://legalstamp.app/en/blog/sha-256-hash-prove-file-integrity" rel="noopener noreferrer"&gt;https://legalstamp.app/en/blog/sha-256-hash-prove-file-integrity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://transparency.dev/verifiable-data-structures/" rel="noopener noreferrer"&gt;https://transparency.dev/verifiable-data-structures/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://research.swtch.com/tlog" rel="noopener noreferrer"&gt;https://research.swtch.com/tlog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://proofstamper.com/en/compare/file-timestamp-tools-2026" rel="noopener noreferrer"&gt;https://proofstamper.com/en/compare/file-timestamp-tools-2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://timeproofs.io/proofspec.html" rel="noopener noreferrer"&gt;https://timeproofs.io/proofspec.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.certifywebcontent.com/the-new-standard-for-digital-evidence-hashes-timestamps-and-forensic-declarations/" rel="noopener noreferrer"&gt;https://www.certifywebcontent.com/the-new-standard-for-digital-evidence-hashes-timestamps-and-forensic-declarations/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.scoredetect.com/blog/posts/encryption-techniques-for-copyright-protection-explained" rel="noopener noreferrer"&gt;https://www.scoredetect.com/blog/posts/encryption-techniques-for-copyright-protection-explained&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://copyrights.live/the-rise-of-digital-verification-what-creators-need-to-know" rel="noopener noreferrer"&gt;https://copyrights.live/the-rise-of-digital-verification-what-creators-need-to-know&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stacklok.com/blog/decoding-rekor-understanding-sigstores-transparency-log" rel="noopener noreferrer"&gt;https://stacklok.com/blog/decoding-rekor-understanding-sigstores-transparency-log&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://truthlocks.com" rel="noopener noreferrer"&gt;https://truthlocks.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://truthlocks.com/product" rel="noopener noreferrer"&gt;https://truthlocks.com/product&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://truthlocks.com/solutions" rel="noopener noreferrer"&gt;https://truthlocks.com/solutions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>blockchain</category>
      <category>computerscience</category>
      <category>cybersecurity</category>
      <category>security</category>
    </item>
  </channel>
</rss>
