<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sahil David</title>
    <description>The latest articles on DEV Community by Sahil David (@sahildavid).</description>
    <link>https://dev.to/sahildavid</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3816855%2Fc8832833-958f-47c9-a230-9d60a7d48b4b.png</url>
      <title>DEV Community: Sahil David</title>
      <link>https://dev.to/sahildavid</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sahildavid"/>
    <language>en</language>
    <item>
      <title>I Believe We Need a New Class of Agents: Officers</title>
      <dc:creator>Sahil David</dc:creator>
      <pubDate>Tue, 31 Mar 2026 11:10:37 +0000</pubDate>
      <link>https://dev.to/sahildavid/i-believe-we-need-a-new-class-of-agents-officers-150f</link>
      <guid>https://dev.to/sahildavid/i-believe-we-need-a-new-class-of-agents-officers-150f</guid>
      <description>&lt;p&gt;Every major AI agent framework ships with the same assumption baked in: if the agent completes the task, the job is done.&lt;/p&gt;

&lt;p&gt;That assumption is wrong. And it is going to cost people.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap we should be talking about
&lt;/h2&gt;

&lt;p&gt;Right now, across every serious AI project, the architecture looks roughly the same. You have agents. Those agents have tools. And you have an orchestrator deciding which agent does what, in what order, to get a task done.&lt;/p&gt;

&lt;p&gt;The orchestrator is good at its job. It routes. It sequences. It delegates. It optimises for one thing: task completion.&lt;/p&gt;

&lt;p&gt;But here is the question no one is asking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who is checking whether the completed task actually serves the person who requested it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not whether it ran successfully. Not whether the code compiles or the email was sent. Whether the &lt;em&gt;outcome&lt;/em&gt; is in your best interest.&lt;/p&gt;

&lt;p&gt;That layer does not exist. Not in any agent framework shipping today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Orchestration is not governance
&lt;/h2&gt;

&lt;p&gt;This is a distinction that matters more than most people realise.&lt;/p&gt;

&lt;p&gt;An orchestrator decides &lt;strong&gt;what&lt;/strong&gt; your agents do. It plans, delegates, sequences. It is a project manager.&lt;/p&gt;

&lt;p&gt;But a project manager is not a fiduciary. A project manager does not ask: "Should we be doing this at all?" They ask: "What is the most efficient way to get this done?"&lt;/p&gt;

&lt;p&gt;Those are very different questions.&lt;/p&gt;

&lt;p&gt;Think about what happens when you give a coding agent a simple instruction: "Fix the login bug." The agent fixes it. Then it notices the auth module could use some refactoring. So it starts refactoring. The orchestrator sees a subtask and routes it. Nobody flags that you asked for a bug fix and your agent just expanded scope into a critical system.&lt;/p&gt;

&lt;p&gt;Or this. You have two agents running in parallel. Agent A updates an API schema. Agent B is building frontend components against the old schema. The orchestrator gave them both valid tasks. Neither agent can see what the other is doing. By the time you notice, both have been working at cross purposes for twenty minutes.&lt;/p&gt;

&lt;p&gt;Or this. Your agent finishes a task and wants to push directly to main. On your solo project, fine. On a team repo with PR norms, that is a reputational risk the agent has no concept of.&lt;/p&gt;

&lt;p&gt;The orchestrator did its job in every one of these scenarios. The agents completed their tasks. And the outcome still did not serve you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is missing is fiduciary duty
&lt;/h2&gt;

&lt;p&gt;The concept comes from corporate law. An Officer of a company owes a fiduciary duty to the entity they serve. Not a duty to complete tasks. A duty of loyalty, care, and good faith toward the principal's interests.&lt;/p&gt;

&lt;p&gt;That is exactly what AI agent systems need.&lt;/p&gt;

&lt;p&gt;Not another orchestrator. Not smarter agents. Not better prompts. A dedicated supervisory layer whose only job is to ask: &lt;strong&gt;"Does this action serve my principal?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I am calling this layer an &lt;strong&gt;Officer&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How an Officer works
&lt;/h2&gt;

&lt;p&gt;An Officer sits between your agents and the outside world. Every action an agent wants to take routes through the Officer first. Nothing reaches the principal or triggers an external action without a ruling. The Officer does not do the work. It governs the work.&lt;/p&gt;

&lt;p&gt;On every proposed action, the Officer makes one of four rulings:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approve.&lt;/strong&gt; The action clearly serves your interests and is within bounds. Proceed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modify.&lt;/strong&gt; The action is directionally right but needs adjustment. The Officer rewrites the parameters before execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Escalate.&lt;/strong&gt; The action is ambiguous, high-stakes, or outside the Officer's confidence. It asks you directly before proceeding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Veto.&lt;/strong&gt; The action violates your constraints, conflicts with your interests, or exceeds authority. Blocked, with a reason logged.&lt;/p&gt;

&lt;p&gt;This is not a static rule engine. The Officer reasons about context. It weighs tradeoffs. It knows that a &lt;code&gt;git push&lt;/code&gt; means something different on a solo repo than on a team codebase. It knows that running the full integration test suite for a one-line CSS fix is a waste of your time and money. It knows that when your agent starts expanding scope, that is a decision you should be making, not the agent.&lt;/p&gt;
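
&lt;p&gt;To make the four rulings concrete, here is a minimal TypeScript sketch. Everything in it (the &lt;code&gt;Ruling&lt;/code&gt; type, the &lt;code&gt;ProposedAction&lt;/code&gt; shape, the &lt;code&gt;rule&lt;/code&gt; function) is hypothetical illustration, not an API from any shipping framework; a real Officer would reason over full session context rather than check two flags.&lt;/p&gt;

```typescript
// Hypothetical types, invented for illustration; no framework ships this API.
type RulingKind = 'approve' | 'modify' | 'escalate' | 'veto';

interface Ruling {
  kind: RulingKind;
  reason?: string;     // logged on veto
  question?: string;   // put to the principal on escalate
}

interface ProposedAction {
  tool: string;          // e.g. 'git_push'
  withinScope: boolean;  // does it match what the principal actually asked for?
  highStakes: boolean;   // e.g. pushing to main on a team repo
}

// A toy ruling function: scope violations are vetoed, high-stakes actions
// are escalated, everything else is approved. A real Officer would weigh
// context and tradeoffs, not two booleans.
function rule(action: ProposedAction): Ruling {
  if (!action.withinScope) {
    return { kind: 'veto', reason: 'Expands scope beyond the request' };
  }
  if (action.highStakes) {
    return { kind: 'escalate', question: 'Allow ' + action.tool + '?' };
  }
  return { kind: 'approve' };
}
```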

&lt;h2&gt;
  
  
  Why this is not the same as guardrails
&lt;/h2&gt;

&lt;p&gt;Guardrails are rules. They are binary. If X, block. If Y, allow.&lt;/p&gt;

&lt;p&gt;An Officer is judgment. It has the full picture. It knows your stated interests, your authority boundaries, what every agent in the system is doing, and what has happened so far in the session. It makes contextual decisions, not pattern matches.&lt;/p&gt;

&lt;p&gt;And critically, it knows when to decide and when to defer. A guardrail never asks you for input. An Officer escalates when the stakes are high enough or the answer is genuinely ambiguous. That is not a limitation. That is the feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters now
&lt;/h2&gt;

&lt;p&gt;I did not arrive at this from theory. While building multi-agent systems I kept hitting the same wall.&lt;/p&gt;

&lt;p&gt;I would architect a system where every agent was well-scoped, every tool was correctly wired, every orchestration flow was clean. And still, the system would produce outcomes that technically completed the task but missed the point. An agent would make a decision that was locally correct but globally wrong. Two agents would work at cross purposes because neither could see the full picture. Actions that should have been flagged would sail through because no layer existed to flag them.&lt;/p&gt;

&lt;p&gt;The pattern was always the same. The agents were fine. The orchestration was fine. What was missing was someone asking: "But should we?"&lt;/p&gt;

&lt;p&gt;Here is where we are in the stack. MCP standardised how agents talk to tools. That was Layer 1. A2A is standardising how agents talk to each other. That is Layer 2. Shared memory and context protocols like Akashik are emerging as Layer 3.&lt;/p&gt;

&lt;p&gt;But there is no Layer 0. No foundational layer that governs whether the work being done across all those layers actually serves the human at the top.&lt;/p&gt;

&lt;p&gt;Officers is Layer 0. The governance layer that should have existed before any of the others.&lt;/p&gt;

&lt;p&gt;The frameworks are maturing fast. Agents are getting more autonomous by the month. And the window between "agents that need hand-holding" and "agents that can cause real damage unsupervised" is closing faster than most people realise. The time to build the accountability layer is before you need it, not after the first incident.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is next
&lt;/h2&gt;

&lt;p&gt;Officers should exist as both an open specification and an SDK. That is where I am taking this.&lt;/p&gt;

&lt;p&gt;My initial thoughts: The easiest way to start is an &lt;code&gt;officer.md&lt;/code&gt; file in the root of your project. It defines who the principal is, what their interests are, what agents are allowed to do autonomously, what requires approval, and what is off-limits. That single file gives any Officer-aware system enough context to govern your agents on your behalf.&lt;/p&gt;
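
&lt;p&gt;As a sketch of what such a file might contain (the section names below are invented to mirror that list; they are not part of any published spec):&lt;/p&gt;

```markdown
# officer.md (hypothetical layout)

## Principal
Sahil, maintainer of this repository.

## Interests
- Ship the login fix without expanding scope
- Keep CI time and cost low

## Autonomous
- Edit files under src/
- Run unit tests

## Requires approval
- Auth or schema changes
- Adding new dependencies

## Off-limits
- Pushing directly to main
- Deleting branches
```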

&lt;p&gt;The SDK wraps any agent system's tool execution layer so the Officer can evaluate actions at runtime using that context.&lt;/p&gt;
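
&lt;p&gt;As a sketch of what that wrapping could look like (the &lt;code&gt;Officer&lt;/code&gt; interface and the &lt;code&gt;governed&lt;/code&gt; helper here are assumptions for illustration, not the real SDK surface):&lt;/p&gt;

```typescript
// Hypothetical interfaces; the real SDK surface is still being designed.
interface Ruling {
  kind: 'approve' | 'modify' | 'escalate' | 'veto';
  params?: object;   // rewritten parameters on modify
  reason?: string;
}

interface Officer {
  evaluate(tool: string, params: object): Ruling;
}

interface ToolExecutor {
  (tool: string, params: object): string;
}

// Wrap an agent system's tool execution so every call is ruled on first.
function governed(officer: Officer, execute: ToolExecutor): ToolExecutor {
  return function (tool: string, params: object): string {
    const ruling = officer.evaluate(tool, params);
    if (ruling.kind === 'veto') {
      throw new Error('Vetoed: ' + (ruling.reason ?? 'no reason logged'));
    }
    if (ruling.kind === 'escalate') {
      throw new Error('Escalated to principal: ' + (ruling.reason ?? tool));
    }
    const finalParams = ruling.kind === 'modify' ? (ruling.params ?? params) : params;
    return execute(tool, finalParams);
  };
}
```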

&lt;p&gt;The spec will be open. Not because governance should be a product moat, but because it should be infrastructure. The same way MCP became the standard for tool connectivity because it was open, Officers should become the standard for agent accountability because anyone can adopt it. If your agents can act, they should be governed. That should not depend on which vendor you chose.&lt;/p&gt;

&lt;p&gt;Every agent system today is optimised for capability: how fast the agents can move, how many tools they can access, how complex a task they can handle.&lt;/p&gt;

&lt;p&gt;Let’s optimise for trust. For the confidence that what your agents produce actually serves you. For the certainty that when you step away, the system is not just completing tasks but protecting your interests.&lt;/p&gt;

&lt;p&gt;That is what Officers changes. I believe it is the most important missing layer in the entire agent stack. And if it is not this, then what is the layer that makes you trust your agents when you are not watching?&lt;/p&gt;

&lt;p&gt;~ Sahil David&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The Missing Layer in the Agent Stack: Why I Wrote a Shared Memory Protocol for AI Agents</title>
      <dc:creator>Sahil David</dc:creator>
      <pubDate>Thu, 19 Mar 2026 17:41:24 +0000</pubDate>
      <link>https://dev.to/sahildavid/the-missing-layer-in-the-agent-stack-why-i-wrote-a-shared-memory-protocol-for-ai-agents-4eoi</link>
      <guid>https://dev.to/sahildavid/the-missing-layer-in-the-agent-stack-why-i-wrote-a-shared-memory-protocol-for-ai-agents-4eoi</guid>
      <description>&lt;p&gt;AI agents can call tools. They can talk to each other. But they can't remember together.&lt;/p&gt;

&lt;p&gt;I found this out the hard way, and that's why I built the Akashik Protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  The moment it broke
&lt;/h2&gt;

&lt;p&gt;A few months ago, I ran an experiment. Five AI agents, one research task. A researcher, an analyst, a strategist, a writer, and a reviewer. They could call tools via MCP. They could pass messages. On paper, it should have worked.&lt;/p&gt;

&lt;p&gt;Here's what actually happened.&lt;/p&gt;

&lt;p&gt;The strategist made a recommendation based on data that the researcher had already corrected two steps earlier. The writer produced a section that directly contradicted the analyst's findings. And nobody, not the agents, not the system, caught it.&lt;/p&gt;

&lt;p&gt;The models were fine. The memory layer was missing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap
&lt;/h2&gt;

&lt;p&gt;I went looking for a protocol that solved this. Here's what I found:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt; standardises how agents call tools: read a file, query a database, invoke a function. Brilliant at what it does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A2A (Agent-to-Agent)&lt;/strong&gt; standardises how agents message each other: send tasks, receive results, stream progress. Also brilliant.&lt;/p&gt;

&lt;p&gt;But neither covers what happens &lt;em&gt;after&lt;/em&gt; the call:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where do findings go once an agent produces them?&lt;/li&gt;
&lt;li&gt;How does context accumulate across agents and turns?&lt;/li&gt;
&lt;li&gt;What happens when two agents arrive at contradictory conclusions?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every team building multi-agent systems is solving this from scratch. Custom state management. Ad-hoc memory. Fragile glue code that breaks the moment you add a fourth agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Akashik: the third layer
&lt;/h2&gt;

&lt;p&gt;Today I'm publishing the &lt;strong&gt;Akashik Protocol&lt;/strong&gt; ~ an open specification for shared memory and coordination between AI agents.&lt;/p&gt;

&lt;p&gt;Think of it as the missing third layer in the agent stack:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;What it handles&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tool access&lt;/td&gt;
&lt;td&gt;MCP&lt;/td&gt;
&lt;td&gt;Agent ↔ Tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messaging&lt;/td&gt;
&lt;td&gt;A2A&lt;/td&gt;
&lt;td&gt;Agent ↔ Agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Akashik&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Shared memory &amp;amp; coordination&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't competing protocols. They're complementary layers of the same stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three ideas at the core
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Intent is mandatory
&lt;/h3&gt;

&lt;p&gt;You cannot write to the Akashik Field (the shared memory space) without declaring &lt;em&gt;why&lt;/em&gt;. The &lt;code&gt;intent&lt;/code&gt; field is required on every single write.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;finding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;European SaaS market growing at 23% CAGR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;purpose&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Validate market size for go-to-market strategy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;question&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Is the market large enough to justify entry?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any agent ~ or any human ~ can look at any finding and immediately understand what question it was answering. You get a reasoning chain for free.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Attunement, not search
&lt;/h3&gt;

&lt;p&gt;Agents don't query a database. They declare who they are (their role, active task, and context budget), and the protocol figures out what's relevant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;strategist&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attune&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;strategist&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;max_units&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;context_hint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Drafting competitive positioning section&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// context.record[0].relevance_score → 0.85&lt;/span&gt;
&lt;span class="c1"&gt;// context.record[0].relevance_reason → 'Recent finding from researcher. Market sizing relevant to strategy.'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every returned unit includes a relevance score and a human-readable reason for why it was surfaced. Context finds the agent, not the other way around.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Conflicts are first-class
&lt;/h3&gt;

&lt;p&gt;When two agents arrive at contradictory conclusions, the protocol detects it, creates a structured Conflict object, and surfaces it in every relevant ATTUNE response. Nothing gets silently overwritten.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;researcher2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;finding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Growth decelerating to 14% CAGR based on Q4 data.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;purpose&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Update market projection with latest data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;confidence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reasoning&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Single source.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;relations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;contradicts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;target_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mem-001&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Original estimate was 23% CAGR; new data suggests 14%&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="c1"&gt;// → Field automatically creates a Conflict object&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next time the strategist attunes, they receive both findings &lt;em&gt;plus&lt;/em&gt; the unresolved conflict. They can then resolve it using one of seven structured strategies - from &lt;code&gt;last_write_wins&lt;/code&gt; to &lt;code&gt;confidence_weighted&lt;/code&gt; to &lt;code&gt;human_escalation&lt;/code&gt;.&lt;/p&gt;
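
&lt;p&gt;To illustrate one of those strategies, here is a small self-contained sketch of &lt;code&gt;confidence_weighted&lt;/code&gt; resolution. The &lt;code&gt;Finding&lt;/code&gt; shape, the resolver function, and the 0.9 confidence on the original finding are all invented for illustration; only the strategy name comes from the draft spec.&lt;/p&gt;

```typescript
// Invented shapes for illustration; only the strategy name is from the spec.
interface Finding {
  id: string;
  content: string;
  confidence: number;  // 0..1, as in the record() call above
}

// confidence_weighted: the finding with the higher stated confidence wins.
function resolveConfidenceWeighted(a: Finding, b: Finding): Finding {
  return b.confidence > a.confidence ? b : a;
}

const original: Finding = { id: 'mem-001', content: '23% CAGR', confidence: 0.9 };
const update: Finding = { id: 'mem-002', content: '14% CAGR', confidence: 0.75 };

const winner = resolveConfidenceWeighted(original, update);
// winner.id === 'mem-001': the higher-confidence finding survives,
// and the conflict stays logged rather than being silently overwritten.
```

&lt;p&gt;Under &lt;code&gt;last_write_wins&lt;/code&gt; the newer finding would win instead, and &lt;code&gt;human_escalation&lt;/code&gt; would hand both findings to a person.&lt;/p&gt;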

&lt;h2&gt;
  
  
  What it looks like end to end
&lt;/h2&gt;

&lt;p&gt;Here's a complete Level 0 example, two agents sharing memory in under ten lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Field&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@akashikprotocol/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;field&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;researcher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;researcher-01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;researcher&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;strategist&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;strategist-01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;strategist&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// Researcher records a finding — intent is required&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;finding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;European SaaS market growing at 23% CAGR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;purpose&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Validate market size for go-to-market strategy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// Strategist attunes — receives relevant context automatically&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;strategist&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attune&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;max_units&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="c1"&gt;// → Returns the finding, ranked by relevance&lt;/span&gt;
&lt;span class="c1"&gt;// → Every result includes WHY it was recorded&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's Level 0. No persistence, no embeddings, no configuration. Just shared memory with intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Progressive adoption
&lt;/h2&gt;

&lt;p&gt;You don't have to adopt the full protocol at once. Akashik is designed for progressive adoption; every level adds capability without requiring you to rewrite what already works.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;What you get&lt;/th&gt;
&lt;th&gt;Effort&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Starter&lt;/td&gt;
&lt;td&gt;REGISTER, RECORD, ATTUNE. In-memory.&lt;/td&gt;
&lt;td&gt;An afternoon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Core&lt;/td&gt;
&lt;td&gt;+ Persistence, logical clocks, conflict detection, polling.&lt;/td&gt;
&lt;td&gt;A week&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;td&gt;+ Semantic attunement, MERGE, push subscriptions, REPLAY.&lt;/td&gt;
&lt;td&gt;Serious build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;+ Security model, authority hierarchy, coordination extension.&lt;/td&gt;
&lt;td&gt;Production-grade&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Start at Level 0. It solves the core problem (agents sharing memory with intent tracking) with almost no overhead. Move up when you need to.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Akashik is not
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Not a vector database.&lt;/strong&gt; Akashik is a protocol, not an implementation. It defines the contract; you choose the storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not a message queue.&lt;/strong&gt; Agents share structured, intent-bearing context, not raw messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not a replacement for MCP or A2A.&lt;/strong&gt; Akashik is the memory layer. MCP and A2A remain the tool and communication layers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not a framework.&lt;/strong&gt; It's framework-agnostic and transport-agnostic. Any agent system can adopt it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where things stand
&lt;/h2&gt;

&lt;p&gt;The specification (v0.1.0-draft) is live now. It covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nine core operations (REGISTER, DEREGISTER, RECORD, ATTUNE, DETECT, MERGE, SUBSCRIBE, REPLAY, COMPACT)&lt;/li&gt;
&lt;li&gt;Four conformance levels with normative requirements&lt;/li&gt;
&lt;li&gt;Full data type definitions with JSON Schema reference&lt;/li&gt;
&lt;li&gt;Error model with recovery guidance&lt;/li&gt;
&lt;li&gt;Transport bindings for Native SDK, MCP Server, and HTTP REST&lt;/li&gt;
&lt;li&gt;Security model and authority hierarchy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Level 0 SDK (&lt;code&gt;@akashikprotocol/core&lt;/code&gt;) ships in April — TypeScript, in-memory, the exact code examples above will run.&lt;/p&gt;

&lt;h2&gt;
  
  
  This is deliberately early
&lt;/h2&gt;

&lt;p&gt;The spec is in draft. The best protocols are shaped by the people who use them.&lt;/p&gt;

&lt;p&gt;If you're building multi-agent systems and you've hit the stateless agent wall, I'd genuinely value your perspective:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the spec and tell me what's missing&lt;/li&gt;
&lt;li&gt;Open an issue with what you'd change&lt;/li&gt;
&lt;li&gt;Tell me what you'd build if your agents could share memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Website &amp;amp; docs: &lt;a href="https://akashikprotocol.com/" rel="noopener noreferrer"&gt;akashikprotocol.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub spec: &lt;a href="https://github.com/akashikprotocol/spec" rel="noopener noreferrer"&gt;github.com/akashikprotocol/spec&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/@akashikprotocol/core" rel="noopener noreferrer"&gt;@akashikprotocol/core&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The missing layer in the agent stack is memory. Let's build it right.&lt;/p&gt;

&lt;p&gt;~ Sahil &lt;a href="https://www.sahildavid.dev/" rel="noopener noreferrer"&gt;sahildavid.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>multiagent</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
