DEV Community

Arjun Nayak

Posted on • Originally published at zosma.ai

Why We Built Dhara — An Open Protocol Standard for AI Agents, Not Another Product

Coding agents today are products, not platforms. Every single one ships as a bloated, all-in-one package that forces you into its way of working. Claude Code wants you to use their plugin system. Codex wants you to use their workspace model. Pi wants you to write TypeScript extensions. Opencode wants you to configure a dozen YAML files before you can do anything useful.

We built Dhara because we got tired of adapting to someone else's opinionated harness. The agent loop itself — LLM calls tools, observes results, plans next step, repeats — is a simple state machine. It shouldn't require a platform. It shouldn't lock you into a language, a plugin API, or a vendor's release cycle.

Dhara is a protocol standard for how agents talk to tools. Like HTTP standardized how web servers talk to browsers, like LSP standardized how editors talk to language servers — Dhara standardizes how coding agents talk to extensions. The spec is the product. Anyone can implement it.

The Frustration That Started This

Every coding agent harness available in 2026 has the same structural problems:

Claude Code / Codex

They're the most polished products in the space. But they're also the most locked down. Extensions require their proprietary plugin APIs. Customization happens within their walls. The system prompt changes between releases, sometimes breaking behavior you relied on. Hidden context injection means you never know exactly what the model sees.

And they ask you to trust their code. You can't read it. You can't modify it. If something goes wrong, you file a ticket and wait.

Opencode

Open source is great. But Opencode ships with a client/server architecture, MCP servers, custom agents, themes, keybindings, config files, built-in tools, LSP support, and a plugin ecosystem — before you write a single line of your own code. It's a full platform. You spend more time configuring than coding.

The AI adapts to their platform, not to your workflow.

Pi

Pi's extension system is simpler, but it's TypeScript-only. If you want to write a tool in Python, Rust, or Go — tough luck. In-process extensions mean one crash kills your whole session. And the "security model" is a single sentence in the docs: "extensions execute arbitrary code." There's no sandbox. No capability model. No isolation.

These aren't bad products. They're good products that are solving the wrong problem at the wrong layer. They're building agent platforms when what the ecosystem needs is an agent protocol.

What Makes Dhara Different

Dhara is not a product you install and configure. Dhara is a specification — a JSON-RPC 2.0 protocol that defines how agents and tools communicate. The reference implementation is MIT-licensed. The spec is CC-BY-4.0. Anyone can implement the protocol in any language, for any use case.

1. Protocol Over API

Not a TypeScript API you import. A wire protocol. Extensions communicate via JSON-RPC 2.0 over stdin/stdout, WASM, or TCP sockets.

# Your tool in Python — the harness doesn't know or care
import subprocess

def handle_request(method, params):
    if method == "tools/execute":
        # capture_output=True/text=True so result.stdout is a string
        result = subprocess.run(
            ["grep", "-rn", params["pattern"], "."],
            capture_output=True, text=True,
        )
        return {"content": [{"type": "text", "text": result.stdout}]}

Write extensions in any language. Python, Rust, Go, TypeScript, Zig, whatever. The protocol is language-agnostic. This is not a feature request. It's a fundamental architectural decision that every other harness gets wrong.
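To make the wire protocol concrete, here is a minimal sketch of how an extension might frame one JSON-RPC 2.0 exchange over stdio. The newline-delimited framing and the `serve` helper are illustrative assumptions, not taken from the Dhara spec:

```python
import json
import sys

def handle_message(raw, handle_request):
    """Decode one JSON-RPC 2.0 request, dispatch it, encode the response.

    Framing (newline-delimited JSON) is an assumption for illustration —
    the actual transport details are defined by the spec.
    """
    req = json.loads(raw)
    result = handle_request(req["method"], req.get("params", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

def serve(handle_request):
    """Read requests from stdin, write responses to stdout, forever."""
    for line in sys.stdin:
        sys.stdout.write(handle_message(line, handle_request) + "\n")
        sys.stdout.flush()
```

Nothing here is language-specific: the same loop is a dozen lines in Rust, Go, or Zig, which is the point of putting the contract on the wire instead of in an API.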

2. Security by Design — Not a Disclaimer

Remember Pi's security model: "extensions execute arbitrary code — review the source before installing." That's not a security model. That's a disclaimer.

Dhara implements capability-based security. Every extension declares what it needs at install time:

  ✓ filesystem:read    → can read files
  ✗ filesystem:write   → NOT granted
  ✗ network:outbound   → NOT granted
  ✗ process:spawn      → NOT granted

The sandbox enforces these at runtime. Like Android app permissions. Like Deno. Unlike every other coding agent.

The sandbox is 245 lines of code. You can read every line. You can audit every line. You can modify every line.
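The core idea is small enough to sketch. This toy `Sandbox` (class and capability names here are illustrative, not the actual sandbox.ts implementation) shows the shape of capability enforcement — a set lookup before every privileged operation:

```python
class CapabilityError(PermissionError):
    """Raised when an extension attempts an operation it never declared."""

class Sandbox:
    """Toy capability-based sandbox: an extension may only perform
    operations whose capability was granted at install time."""

    def __init__(self, granted):
        self.granted = set(granted)

    def require(self, capability):
        # Enforcement is a runtime check, not a README warning.
        if capability not in self.granted:
            raise CapabilityError(f"capability not granted: {capability}")

    def read_file(self, path):
        self.require("filesystem:read")
        with open(path) as f:
            return f.read()

    def write_file(self, path, data):
        self.require("filesystem:write")
        with open(path, "w") as f:
            f.write(data)
```

An extension installed with only `filesystem:read` gets a `CapabilityError` the moment it tries to write or open a socket, and the denial can be logged for audit.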

3. Lossless Memory

Every other coding agent compacts your session history by throwing away older messages. "Lossy compaction" is the industry standard. Once a message is compacted away, it's gone forever. You can't go back.

Dhara's compaction produces structured summaries but never deletes the original conversation:

Full transcript     → preserves everything
Compaction summary  → what the LLM sees (with backlinks to originals)
On-demand recall    → request any range of full entries

Tiered memory. Like virtual memory for agents. Your full history is always available, always searchable, always recoverable.
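A toy model of that tiering (field names and structure here are made up for illustration — the real open session format is defined by its JSON Schema):

```python
class SessionMemory:
    """Toy tiered memory: compaction produces a summary with backlinks
    to the original entries, and never deletes the transcript."""

    def __init__(self):
        self.transcript = []   # full history — never deleted
        self.summaries = []    # the compacted view the LLM sees

    def append(self, message):
        self.transcript.append(message)

    def compact(self, start, end, summary_text):
        # The summary carries a backlink (index range) to the originals.
        self.summaries.append({"summary": summary_text,
                               "backlink": (start, end)})

    def recall(self, start, end):
        # On-demand recall: the originals are always still there.
        return self.transcript[start:end]
```

Compacting with lossy deletion is a one-way door; compacting with backlinks is just a cache eviction that can always be refilled.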

4. Minimal Core, Rich Ecosystem

The core implementation is under 2,000 lines — just the agent loop, protocol, session format, event bus, and sandbox. You can read the entire core in an afternoon.

Module         Lines   What it does
agent-loop.ts  ~180    LLM → tool → LLM state machine
protocol.ts    ~125    JSON-RPC 2.0 communication
sandbox.ts     ~245    Capability enforcement & audit
session.ts     ~305    Open session format (JSON Schema)
config.ts      ~355    Configuration management

Everything else is an extension. LLM providers. File tools. The TUI. Compaction strategies. The core doesn't care what LLM you use, what files you edit, or how you render output. That's the extension layer's job.
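The state machine that agent-loop.ts implements is simple enough to sketch in a few lines. This Python version is a hedged approximation, with `llm` and `tools` as stand-in callables for whatever provider and tool extensions are plugged in:

```python
def agent_loop(llm, tools, user_message, max_turns=10):
    """Minimal agent loop: call the LLM, run any tool it requests,
    feed the result back, repeat until it answers in plain text.

    `llm` and `tools` are stand-ins — any provider or tool extension
    speaking the protocol would slot in here.
    """
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = llm(history)
        history.append(reply)
        if reply.get("tool_call") is None:
            return reply["content"]               # final answer
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("max turns exceeded")
```

The core owns only this loop; which model answers and which tools run are decided entirely by what extensions are attached.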

The Architecture

┌─────────────────────────────────────────────────────┐
│                    ECOSYSTEM LAYER                   │
│   Packages  ·  Themes  ·  Skills  ·  Prompts        │
├─────────────────────────────────────────────────────┤
│                   EXTENSION LAYER                    │
│   Tools  ·  Providers  ·  Renderers  ·  Hooks       │
│   (any language, sandboxed, capability-declared)     │
│          ↕  JSON-RPC 2.0 protocol ↕                 │
├─────────────────────────────────────────────────────┤
│                     CORE LAYER                       │
│   Agent Loop  ·  Protocol  ·  Session Format        │
│   Event Bus  ·  Sandbox                             │
│   (< 2,000 lines — pure agent machinery)           │
└─────────────────────────────────────────────────────┘

The ecosystem layer is where the community builds value — packages, themes, skills, prompts. The extension layer provides the runtime for those to execute safely. The core just runs the loop.

[Screenshot: Dhara TUI running in the terminal — agent loop with tool calls, streaming responses, and session view]

Contrast this with Claude Code, where the "ecosystem" is whatever Anthropic decides to expose through their plugin API. Or Pi, where the "ecosystem" is npm keyword search and the "security model" is a README warning.

The TUI renderer is an extension. So are the file tools. So are the LLM providers. The core doesn't know or care about any of them.

This Is the Unsexy Work

Protocols are not exciting. HTTP isn't exciting. LSP isn't exciting. But they're the foundation that lets an entire ecosystem grow without permission from a single vendor.

Every AI company in 2026 is rushing to build the best agent product. They're competing on features, UX, and lock-in. We're building a protocol because we believe the agent harness should be a standard, not a product.

Like HTTP standardized how web servers talk to browsers, Dhara standardizes how agents talk to tools. Like LSP standardized language support across editors, Dhara standardizes extension support across agent harnesses.

The moat is the spec. Not the implementation. Not the community. Not the GitHub stars. The spec.

Anyone can build a Dhara-compatible agent. Anyone can write a Dhara-compatible extension. The protocol is open. The session format is an open JSON Schema. There's no vendor to ask for permission.

The India Angle

Dhara is built out of India. Not outsourced. Not funded by Silicon Valley. Built by a team that lives and works in India, making architectural decisions from here.

There's a persistent narrative that Indian tech companies build the cheap version. The "good enough" alternative. That's not what this is.

Dhara's architecture — protocol-native extensions, capability-based security, lossless memory, minimal core — is not a budget version of Claude Code. It's what we believe is the right way to build agent infrastructure. JSON-RPC over custom APIs. Sandboxing over disclaimers. Open standard over proprietary platform.

Some of the best infrastructure software in the world is built outside the valley. The next wave won't come from one geography. We're building here.

What's Next

Dhara launched four days ago. The reference implementation is on GitHub under MIT. The spec is CC-BY-4.0. You can read the entire core in an afternoon.

Right now, Dhara has:

  • A working agent loop with streaming, tool calls, and multi-turn sessions
  • A JSON-RPC 2.0 extension protocol with capability-based sandboxing
  • A TUI renderer (experimental)
  • Standard file tools (read, write, edit, grep, ls, bash)
  • Provider support for Anthropic and OpenAI
  • CI/CD, tests, and an open spec

What we need next:

  • Extension authors — write a Dhara-compatible tool in your language of choice
  • Implementors — build a Dhara-compatible agent in Rust, Go, Python
  • Feedback — try the protocol, tell us what's wrong, open issues, submit PRs

Star the repo on GitHub · Read the spec · Join the community


Dhara (धारा, dhārā) is a Sanskrit word meaning *flow* or *stream* — the continuous, seamless stream between LLM and tools that defines every agent interaction.
