Arian Gogani
341 Malicious AI Agent Skills, 1.5M Leaked Tokens — I Built the Fix

Three weeks ago, OpenClaw went from zero to 135K GitHub stars. Then it collapsed: 18,000 exposed instances, 341 malicious skills on ClawHub, 1.5 million leaked API tokens, and a one-click RCE exploit that executed in milliseconds.

The response was predictable — scanners, audits, curated marketplaces. All reactive. All after the damage was done.

I think the real problem is deeper: AI agents get capabilities without ever making verifiable commitments about how they’ll use them.

There’s no protocol for an agent to say “I will not exfiltrate data” in a way that a third party can independently verify. That’s the gap.

Accountability ≠ Observability

Observability tells you what an agent did. That’s a security camera.

Accountability makes agents commit to behavioral constraints before execution. That’s a locked door.

The difference matters. OpenClaw’s skills had full disk access, terminal permissions, and OAuth tokens. Scanning them after installation is like reading the autopsy report. The fix is requiring agents to declare constraints cryptographically before they get access.

What I Built

I built an open protocol called Nobulex where agents publish cryptographically signed behavioral covenants — and anyone can verify compliance independently.

Three lines:

```js
import { protect } from '@nobulex/sdk';

const agent = await protect({
  name: 'my-agent',
  rules: ['no-data-leak', 'read-only']
});

const result = await agent.verify(); // true
```

Under the hood:

  • Ed25519 signatures — the agent’s operator signs behavioral constraints
  • SHA-256 content addressing — covenants are tamper-proof
  • Constraint language (CCL) — human-readable rules like `deny write on '/external/**'`
  • 11 independent verification checks — any third party can verify without trusting the operator

The covenant is immutable once signed. The agent can’t quietly change its constraints. And verification is trustless — you don’t need to trust the operator, the framework, or the platform. You just verify the math.
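To make "verify the math" concrete, here is a minimal sketch of the signing-and-verification pattern described above, using only Node's built-in `crypto` module. It is not the Nobulex implementation — the covenant fields, rule strings, and signing flow here are illustrative assumptions — but it shows why a signed, content-addressed covenant is tamper-evident and verifiable without trusting the operator.

```typescript
import { generateKeyPairSync, sign, verify, createHash } from 'node:crypto';

// Hypothetical operator identity: an Ed25519 keypair.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// A behavioral covenant: constraints declared before execution.
// (Field names and rule syntax are illustrative, not the real CCL.)
const covenant = JSON.stringify({
  agent: 'my-agent',
  rules: ["deny write on '/external/**'", 'no-data-leak'],
});

// Content address: SHA-256 of the covenant bytes.
// Any change to the covenant changes this hash, so it is tamper-evident.
const contentAddress = createHash('sha256').update(covenant).digest('hex');

// The operator signs the content address. For Ed25519 keys, Node's
// sign()/verify() take null as the digest algorithm.
const signature = sign(null, Buffer.from(contentAddress), privateKey);

// A third party re-hashes the covenant it received and checks the
// signature against the operator's *public* key. No trust required:
// either the math checks out or it doesn't.
const recomputed = createHash('sha256').update(covenant).digest('hex');
const ok =
  recomputed === contentAddress &&
  verify(null, Buffer.from(contentAddress), publicKey, signature);
console.log(ok); // true
```

Flip a single byte in `covenant` after signing and `ok` becomes `false` — which is the whole point: the agent cannot quietly change its constraints without invalidating the signature.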

Why Now

EU AI Act enforcement begins in August 2026. Conformity assessments will demand behavioral documentation and audit trails for AI systems. Not PDFs. Not compliance checklists. Machine-verifiable proofs.

Companies deploying autonomous agents — in finance, healthcare, infrastructure — will need this. The OpenClaw crisis was the preview. Agents will only get more autonomous, manage more resources, and have more access to critical systems.

The question isn’t whether we need accountability infrastructure. It’s whether we build it before or after the next breach.

The Part Nobody Expects

I’m 15. I built this over the past several months — 106,000 lines of TypeScript, 5,053 passing tests, 44 packages, MIT licensed. Every function is real, every test passes, every cryptographic primitive works.

I’m not saying this to impress anyone. I’m saying it because if a teenager can build the infrastructure, there’s no excuse for the industry not to have it yet.

Try It

GitHub: github.com/agbusiness195/NOBULEX

npm install @nobulex/sdk

I want honest feedback. What would make this useful for your agents? What’s missing? What’s wrong?

Drop a comment or open an issue. I’ll respond to everything.
