<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arian Gogani</title>
    <description>The latest articles on DEV Community by Arian Gogani (@ariangogani).</description>
    <link>https://dev.to/ariangogani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3785627%2Fa9810039-74cd-4862-b682-f27ba3b88c38.png</url>
      <title>DEV Community: Arian Gogani</title>
      <link>https://dev.to/ariangogani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ariangogani"/>
    <language>en</language>
    <item>
      <title>How I built tamper-proof audit logs for AI agents at 15</title>
      <dc:creator>Arian Gogani</dc:creator>
      <pubDate>Wed, 04 Mar 2026 05:27:53 +0000</pubDate>
      <link>https://dev.to/ariangogani/how-i-built-tamper-proof-audit-logs-for-ai-agents-at-15-g49</link>
      <guid>https://dev.to/ariangogani/how-i-built-tamper-proof-audit-logs-for-ai-agents-at-15-g49</guid>
      <description>&lt;p&gt;Software makes promises it can't prove it kept. "I won't transfer more than $500." "I'll only access these three APIs." "I won't touch production data." Every AI agent makes commitments like these. But when something goes wrong, all you have are logs — logs that the software itself wrote. That's like asking a suspect to write their own police report.&lt;/p&gt;

&lt;p&gt;I'm 15, and I spent the last few months building Nobulex to fix this.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;AI agents are moving from demos to production. They're handling money, making procurement decisions, managing infrastructure. But there's no standard way to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define what an agent is allowed to do&lt;/li&gt;
&lt;li&gt;Enforce those rules at runtime&lt;/li&gt;
&lt;li&gt;Prove — cryptographically — that the agent followed them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Existing solutions are either post-hoc monitoring (you find out after the damage) or prompt-level guardrails (which can be bypassed). Nothing sits at the action layer with tamper-proof logging.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;Nobulex is open-source middleware with three components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A rule language.&lt;/strong&gt; Cedar-inspired, dead simple:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;permit read;
forbid transfer where amount &amp;gt; 500;
require log_all;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Runtime enforcement.&lt;/strong&gt; Every agent action passes through the middleware. If it violates a rule, it gets blocked before execution. Not logged and reported — blocked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tamper-proof audit trail.&lt;/strong&gt; Every action (allowed or blocked) gets recorded in a hash-chained log. Each entry includes the action, the rule that matched, a timestamp, and a cryptographic hash linking it to the previous entry. Anyone can independently verify the entire chain. If a single entry is altered, the chain breaks.&lt;/p&gt;

&lt;h2&gt;The 3-line integration&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;const { protect } = require('@nobulex/quickstart');
const agent = protect('permit read; forbid transfer where amount &amp;gt; 500; require log_all;');
const result = agent.check({ action: 'transfer', amount: 200 });
// result.allowed === true
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That's it. One import, one setup, one check.
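&lt;/p&gt;

&lt;p&gt;For intuition, the forbid-with-where semantics can be sketched as a tiny evaluator. This is only an illustration of the behavior described in this post, not Nobulex's actual parser; the rule-object shape here is an assumed stand-in, and permit/require handling is elided:&lt;/p&gt;

```javascript
// Sketch of `forbid ... where ...` semantics (NOT the real Nobulex parser).
// Forbid rules veto an action; anything not forbidden passes through,
// matching the quickstart example above (transfer of 200 is allowed).
function check(rules, action) {
  for (const rule of rules) {
    if (rule.effect !== 'forbid') continue;
    if (rule.action !== action.action) continue;
    // A `where` clause scopes the rule to a numeric threshold.
    if (!rule.where) return { allowed: false, rule };
    if (action[rule.where.field] > rule.where.gt) return { allowed: false, rule };
  }
  return { allowed: true }; // nothing forbade it
}

const rules = [
  { effect: 'permit', action: 'read' },
  { effect: 'forbid', action: 'transfer', where: { field: 'amount', gt: 500 } },
];

console.log(check(rules, { action: 'transfer', amount: 200 }).allowed); // true
console.log(check(rules, { action: 'transfer', amount: 900 }).allowed); // false
```

&lt;p&gt;A real evaluator would also need to pin down precedence between permit and forbid, which a sketch this small sidesteps.&lt;/p&gt;

&lt;p&gt;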
&lt;/p&gt;

&lt;h2&gt;Why hash chains matter&lt;/h2&gt;

&lt;p&gt;A traditional log is a text file. Anyone with access can edit it. A hash-chained log works differently — each entry's hash is computed from its content plus the previous entry's hash. Change one entry and every hash after it becomes invalid. It's the same principle behind blockchains, but without the overhead.&lt;/p&gt;

&lt;p&gt;This means a third party can take your audit log, recompute every hash from the first entry, and mathematically verify that nothing was tampered with. No trust required.&lt;/p&gt;

&lt;p&gt;
&lt;/p&gt;

&lt;h2&gt;The hard part&lt;/h2&gt;

&lt;p&gt;Building the cryptography and middleware was straightforward. The hard part was designing the rule language.&lt;/p&gt;

&lt;p&gt;My first version was too complex — it had nested conditions, boolean logic, inheritance hierarchies. Nobody wanted to write rules in it. I scrapped it and started over with three keywords: permit, forbid, require. If you can't express your rule with those three words, the rule is too complex.&lt;/p&gt;

&lt;p&gt;The where clause handles conditions: &lt;code&gt;forbid transfer where amount &amp;gt; 500&lt;/code&gt;. That's readable by someone who's never seen the DSL before. That was the goal.&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;The project has 6,100+ tests across 61 npm packages. It's MIT licensed and works today.&lt;/p&gt;

&lt;p&gt;What I'm working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Framework integrations&lt;/strong&gt; — LangChain works now, CrewAI and MCP are next&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A hosted version&lt;/strong&gt; — managed compliance infrastructure so you don't have to self-host the verification&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Certification badges&lt;/strong&gt; — "Nobulex Verified" for agents that pass continuous compliance checks&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;Live demo: nobulex.com&lt;br&gt;
GitHub: github.com/nobulexdev/nobulex&lt;br&gt;
npm: &lt;code&gt;npm install @nobulex/quickstart&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I'd love feedback on the rule language. Is permit/forbid/require intuitive? Would you design it differently?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>341 Malicious AI Agent Skills, 1.5M Leaked Tokens — I Built the Fix</title>
      <dc:creator>Arian Gogani</dc:creator>
      <pubDate>Mon, 23 Feb 2026 00:58:23 +0000</pubDate>
      <link>https://dev.to/ariangogani/341-malicious-ai-agent-skills-15m-leaked-tokens-i-built-the-fix-161c</link>
      <guid>https://dev.to/ariangogani/341-malicious-ai-agent-skills-15m-leaked-tokens-i-built-the-fix-161c</guid>
      <description>&lt;p&gt;Three weeks ago, OpenClaw went from zero to 135K GitHub stars. Then it collapsed: 18,000 exposed instances, 341 malicious skills on ClawHub, 1.5 million leaked API tokens, and a one-click RCE exploit that took milliseconds.&lt;/p&gt;

&lt;p&gt;The response was predictable — scanners, audits, curated marketplaces. All reactive. All after the damage was done.&lt;/p&gt;

&lt;p&gt;I think the real problem is deeper: &lt;strong&gt;AI agents get capabilities without ever making verifiable commitments about how they’ll use them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There’s no protocol for an agent to say “I will not exfiltrate data” in a way that a third party can independently verify. That’s the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accountability ≠ Observability
&lt;/h2&gt;

&lt;p&gt;Observability tells you what an agent did. That’s a security camera.&lt;/p&gt;

&lt;p&gt;Accountability makes agents commit to behavioral constraints &lt;em&gt;before&lt;/em&gt; execution. That’s a locked door.&lt;/p&gt;

&lt;p&gt;The difference matters. OpenClaw’s skills had full disk access, terminal permissions, and OAuth tokens. Scanning them after installation is like reading the autopsy report. The fix is requiring agents to declare constraints cryptographically before they get access.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built an open protocol called Nobulex where agents publish cryptographically signed behavioral covenants — and anyone can verify compliance independently.&lt;/p&gt;

&lt;p&gt;Three lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;protect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nobulex/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;protect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-agent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no-data-leak&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;read-only&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under the hood:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ed25519 signatures&lt;/strong&gt; — the agent’s operator signs behavioral constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SHA-256 content addressing&lt;/strong&gt; — covenants are tamper-proof&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Constraint language (CCL)&lt;/strong&gt; — human-readable rules like &lt;code&gt;deny write on '/external/**'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;11 independent verification checks&lt;/strong&gt; — any third party can verify without trusting the operator&lt;/li&gt;
&lt;/ul&gt;
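&lt;p&gt;To make the CCL example concrete, here is one way a &lt;code&gt;deny write on '/external/**'&lt;/code&gt; rule could be checked. This is a sketch with a toy glob matcher, not the real CCL engine; the rule and action shapes are assumptions:&lt;/p&gt;

```javascript
// Toy matcher for a CCL-style rule such as: deny write on '/external/**'
// Sketch only: the real CCL grammar and matcher may differ.
function globMatch(pattern, path) {
  // '**' matches anything (including '/'); '*' matches within one segment.
  const source = pattern
    .split('**')
    .map(part => part.replace(/\*/g, '[^/]*'))
    .join('.*');
  return new RegExp('^' + source + '$').test(path);
}

function check(rule, action) {
  if (rule.effect !== 'deny') return { allowed: true };
  if (rule.op !== action.op) return { allowed: true };
  if (globMatch(rule.on, action.path)) return { allowed: false, rule };
  return { allowed: true };
}

const rule = { effect: 'deny', op: 'write', on: '/external/**' };
console.log(check(rule, { op: 'write', path: '/external/api/send' }).allowed); // false
console.log(check(rule, { op: 'write', path: '/workspace/notes.md' }).allowed); // true
```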

&lt;p&gt;The covenant is immutable once signed. The agent can’t quietly change its constraints. And verification is trustless — you don’t need to trust the operator, the framework, or the platform. You just verify the math.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Now
&lt;/h2&gt;

&lt;p&gt;EU AI Act enforcement begins in August 2026. Conformity assessments will demand behavioral documentation and audit trails for AI systems. Not PDFs. Not compliance checklists. Machine-verifiable proofs.&lt;/p&gt;

&lt;p&gt;Companies deploying autonomous agents — in finance, healthcare, infrastructure — will need this. The OpenClaw crisis was the preview. Agents will only get more autonomous, manage more resources, and have more access to critical systems.&lt;/p&gt;

&lt;p&gt;The question isn’t whether we need accountability infrastructure. It’s whether we build it before or after the next breach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Part Nobody Expects
&lt;/h2&gt;

&lt;p&gt;I’m 15. I built this over the past several months — 106,000 lines of TypeScript, 5,053 passing tests, 44 packages, MIT licensed. Every function is real, every test passes, every cryptographic primitive works.&lt;/p&gt;

&lt;p&gt;I’m not saying this to impress anyone. I’m saying it because if a teenager can build the infrastructure, there’s no excuse for the industry to not have it yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/agbusiness195/NOBULEX" rel="noopener noreferrer"&gt;github.com/agbusiness195/NOBULEX&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install @nobulex/sdk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I want honest feedback. What would make this useful for your agents? What’s missing? What’s wrong?&lt;/p&gt;

&lt;p&gt;Drop a comment or open an issue. I’ll respond to everything.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>security</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
