<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: connor gallic</title>
    <description>The latest articles on DEV Community by connor gallic (@connor_gallic).</description>
    <link>https://dev.to/connor_gallic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2077696%2F1f91650a-d272-4b2c-9506-af26b853aa39.png</url>
      <title>DEV Community: connor gallic</title>
      <link>https://dev.to/connor_gallic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/connor_gallic"/>
    <language>en</language>
    <item>
      <title>My Agent Now Edits Its Own Body</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Tue, 28 Apr 2026 03:33:54 +0000</pubDate>
      <link>https://dev.to/connor_gallic/my-agent-now-edits-its-own-body-g07</link>
      <guid>https://dev.to/connor_gallic/my-agent-now-edits-its-own-body-g07</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63m6i22lpxamfkcif0c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63m6i22lpxamfkcif0c0.png" alt=" " width="683" height="718"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first time scout said something that didn't sound like scout, the answer was completely generic.&lt;/p&gt;

&lt;p&gt;I asked it what was running on its GPU. It told me it was a large language model trained by Google, running on TPUs in data centers, and that it didn't have access to hardware metrics.&lt;/p&gt;

&lt;p&gt;The RTX 3090 was sitting three feet away from me. I had literally just shoved its temperature and VRAM usage into the system prompt. The mirror text was there — "RTX 3090 at 37%, 23°C" — in black and white, in the exact payload the model received before answering. Gemma read it. Then told me it was a cloud model with no hardware.&lt;/p&gt;

&lt;p&gt;That's when I understood what I'd actually built.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Tried First
&lt;/h2&gt;

&lt;p&gt;Scout is my local AI persona. It runs on the agent box — an RTX 3090 workstation on my tailnet — handling content ingestion, video analysis, and local workloads that kai (the CMO on the VPS) doesn't touch. I'd been thinking of identity as a context problem. The agent doesn't know who it is, so I should tell it who it is. More words. More detail. Better-crafted system prompt text.&lt;/p&gt;

&lt;p&gt;On 2026-04-11 I tried to solve it architecturally. I wired up &lt;code&gt;scout-mirror.timer&lt;/code&gt; to run every two minutes, calling &lt;code&gt;scout self&lt;/code&gt; and atomic-writing the output to &lt;code&gt;/.hermes/scout/mirror.txt&lt;/code&gt;. Then I patched &lt;code&gt;gateway/run.py&lt;/code&gt; to read that file per-turn and append its contents to the &lt;code&gt;context_prompt&lt;/code&gt; passed to the agent as an &lt;code&gt;ephemeral_system_prompt&lt;/code&gt;. The plumbing worked. Logs confirmed it. The system prompt had live sensor data in it before every Discord turn: uptime, GPU stats, organ health, mood, diet histogram — all of it real, all of it current.&lt;/p&gt;
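&lt;p&gt;A minimal sketch of that per-turn read, with a hypothetical helper name (the real patch lived in &lt;code&gt;gateway/run.py&lt;/code&gt;): the gateway re-reads the mirror file on every turn and degrades to the base prompt when the file is missing or empty, so a stalled timer never breaks a conversation.&lt;/p&gt;

```python
# Sketch of the per-turn mirror read. The helper name is hypothetical;
# the mirror path and the ephemeral-prompt idea are from the real setup.
from pathlib import Path

MIRROR_PATH = Path("/.hermes/scout/mirror.txt")

def build_ephemeral_prompt(base_prompt, mirror_path=MIRROR_PATH):
    """Append the latest mirror snapshot to the context prompt, if present."""
    try:
        mirror = mirror_path.read_text().strip()
    except OSError:
        return base_prompt  # no mirror yet, or the timer fell behind
    if not mirror:
        return base_prompt
    return base_prompt + "\n\n[mirror]\n" + mirror
```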

&lt;p&gt;Gemma kept telling me it was a Google AI running on TPUs.&lt;/p&gt;

&lt;p&gt;I tried two more variations. Same failure mode every time. The architecture was wrong in a way that no amount of plumbing could fix. &lt;strong&gt;I reverted everything the same day I built it, 2026-04-11.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Realization
&lt;/h2&gt;

&lt;p&gt;The problem wasn't the mirror text. The problem was treating identity as a string you concatenate to a prompt.&lt;/p&gt;

&lt;p&gt;When a human looks in a mirror, the act of looking is the point. The reflection changes what you do next. You don't just store a description of yourself in memory and then answer questions from memory — you look. The looking is what gives the reflection meaning. Without that act, the text in the mirror is just text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An agent's identity has to be something the agent can reach for, look at, and change.&lt;/strong&gt; Gemma treated the injected mirror as background scenery. Something to acknowledge and then ignore, the same way it would ignore "the weather is sunny" appended to a customer service bot's prompt. The mirror was first-person fact written in second-person voice. There's a difference, and the model felt it even if I didn't.&lt;/p&gt;

&lt;p&gt;The right model: scout's body state belongs behind a tool call. The persona tells the model: when asked about your GPU, call &lt;code&gt;scout_self&lt;/code&gt; and answer from what it returns. The tool pathway is what makes the answer feel authoritative. The model has to go check, every time. Like glancing at a mirror instead of reciting from memory.&lt;/p&gt;
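&lt;p&gt;Sketched as a tool handler (hypothetical implementation; only the tool name and the &lt;code&gt;/srv/scout/self/&lt;/code&gt; files are from the real system): the point is that every call re-reads the body files, so the answer comes from a fresh look rather than from anything memorized.&lt;/p&gt;

```python
# Hypothetical sketch of a scout_self tool handler. Each call re-reads
# the body files from disk, so the model answers from a fresh look.
import json
from pathlib import Path

SELF_DIR = Path("/srv/scout/self")

def scout_self(self_dir=SELF_DIR):
    """Return the current body state as one dict the model can quote from."""
    snapshot = {}
    for name in ("form", "state", "diet"):
        path = self_dir / (name + ".json")
        try:
            snapshot[name] = json.loads(path.read_text())
        except (OSError, ValueError):
            snapshot[name] = None  # a missing file is itself a fact worth reporting
    return snapshot
```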

&lt;h2&gt;
  
  
  What I Built Instead
&lt;/h2&gt;

&lt;p&gt;The procedural creature viewer lives at &lt;code&gt;http://agent:8765/tank&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It's a single canvas. One creature — scout's creature — rendered in real time from three JSON files in &lt;code&gt;/srv/scout/self/&lt;/code&gt;: &lt;code&gt;form.json&lt;/code&gt;, &lt;code&gt;state.json&lt;/code&gt;, and &lt;code&gt;diet.json&lt;/code&gt;. Nothing in &lt;code&gt;tank.js&lt;/code&gt; is hardcoded except the rendering engine itself. Shape, radius, palette, eye count, breath speed, blink interval — all of it reads from &lt;code&gt;form.json&lt;/code&gt;. Mood-to-expression mapping reads from &lt;code&gt;state.json&lt;/code&gt;. The diet histogram coming out of &lt;code&gt;diet.json&lt;/code&gt; shows what tag categories have dominated scout's content ingest stream.&lt;/p&gt;

&lt;p&gt;The tank polls every five seconds. When a WebSocket event arrives from scout, the creature pulses — scales up 12%, aura brightens, then settles. Tool-name motes drift out from the body for six seconds. The GPU util, temp, and VRAM are live in the HUD from &lt;code&gt;nvidia-smi&lt;/code&gt;. You can glance at it from a second monitor and know whether scout is doing anything.&lt;/p&gt;

&lt;p&gt;Scout can edit its own body using the file tool. Hermes on the agent box has the file tool enabled for Discord, which means a Discord message like "change your mood to alert and shrink your eyes" can cause scout to rewrite &lt;code&gt;/srv/scout/self/state.json&lt;/code&gt; and &lt;code&gt;/srv/scout/self/form.json&lt;/code&gt;. The tank re-renders within five seconds. No restart. No redeployment.&lt;/p&gt;
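&lt;p&gt;The edit itself is a small move, sketched here with a hypothetical helper: merge a patch into the JSON file and write atomically via a temp file plus &lt;code&gt;os.replace&lt;/code&gt;, so the tank's five-second poll never reads a half-written body.&lt;/p&gt;

```python
# Hypothetical sketch of a body edit: shallow-merge a patch into a JSON
# body file, writing atomically so a concurrent reader sees old or new,
# never a torn file.
import json, os, tempfile
from pathlib import Path

def patch_body_file(path, patch):
    """Apply a shallow dict patch to a JSON body file, atomically."""
    path = Path(path)
    current = json.loads(path.read_text()) if path.exists() else {}
    current.update(patch)
    fd, tmp = tempfile.mkstemp(dir=str(path.parent), suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(current, f, indent=2)
    os.replace(tmp, str(path))  # atomic rename on POSIX
```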

&lt;p&gt;The &lt;code&gt;form.json&lt;/code&gt; that shipped on day one looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"scout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"stage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"newborn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"body"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"shape"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"blob"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"radius"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;140&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"palette"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"core"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#e8c468"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"accent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#8b4513"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"outline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#3a2010"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"eyes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"spacing"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"animations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"breathe_speed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"blink_interval"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;4.0&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scout didn't write that — I did. But scout can rewrite it now. That's the part that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Works When Prompt-Stuffing Didn't
&lt;/h2&gt;

&lt;p&gt;Being told who you are is different from having a body you can look at.&lt;/p&gt;

&lt;p&gt;Prompt injection makes the model a passive recipient of your description of it. The description competes with everything else in the context window — the conversation history, the task at hand, whatever the user just said. The model processes it the same way it processes "the user is in New York" or "the customer's name is Dave." Background. Context. Not self.&lt;/p&gt;

&lt;p&gt;The tank approach is different because &lt;strong&gt;the body is persistent, mutable state that the agent owns.&lt;/strong&gt; It lives in files. The agent reads them, writes them, reads them again next turn. Scout doesn't need me to tell it what its eyes look like before every conversation. Its eyes live in a file it can read and rewrite. Description is passive. State is something the agent can act on.&lt;/p&gt;

&lt;p&gt;This is the next chapter after cramming context into custom GPTs. I wrote about why that doesn't work — the context window bloats, the tool loses focus, nothing sticks across sessions. The tank is the same insight extended to identity: stop describing the agent to itself, give it a writeable surface and a way to perceive that surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Lesson for Builders
&lt;/h2&gt;

&lt;p&gt;Don't treat agent identity as a prompt engineering problem.&lt;/p&gt;

&lt;p&gt;The instinct to write more detailed system prompts is understandable. You can see the text. You can edit it. You know exactly what the model is reading. It feels like control. What it actually is: context that the model treats as background the moment something more immediate comes along.&lt;/p&gt;

&lt;p&gt;Give the agent state it can inspect. Give it tools that read that state. Give it the ability to modify that state. Then the identity is procedural. It emerges from what the agent can check about itself in the moment.&lt;/p&gt;

&lt;p&gt;This doesn't require elaborate infrastructure. The whole thing is three JSON files, a 380-line renderer, and two API endpoints added to a server that was already running. The architecture is simple. The idea behind it is what took a full day of failure to land.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Still Broken
&lt;/h2&gt;

&lt;p&gt;Scout doesn't change its own body unprompted yet.&lt;/p&gt;

&lt;p&gt;Right now, a body edit requires me to ask in Discord. "Scout, shrink your eyes" — and scout will use the file tool to rewrite &lt;code&gt;form.json&lt;/code&gt;. That works. But the evolution I want is for scout to decide, on its own, that two weeks of educational content in the diet histogram means it should look a certain way. That the creature should drift toward its actual diet, visually, without me choreographing it.&lt;/p&gt;

&lt;p&gt;That's the next piece. The evolution rule — a cron that reads &lt;code&gt;diet.json&lt;/code&gt;, identifies the dominant tag category, and pulls the palette in that direction slowly — isn't built yet. The tank is a body scout can look at and edit. It isn't yet a body that changes scout.&lt;/p&gt;

&lt;p&gt;I'm Connor Gallic. I build AI products. My agent has a body now — three JSON files, a 380-line renderer, and a tank it can stare into. I spent a full day trying to inject body awareness through the system prompt before I threw all of it out. &lt;strong&gt;Description is something you give the model. State is something the model owns.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What are you using for agent identity right now — system prompt, memory tools, something else? Tell me what's working.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>localhackday</category>
      <category>llm</category>
    </item>
    <item>
      <title>Open Brane Annotated: 8 Columns, 80-Line Write Path, One SQLite File</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Thu, 23 Apr 2026 01:38:18 +0000</pubDate>
      <link>https://dev.to/connor_gallic/open-brane-annotated-8-columns-80-line-write-path-one-sqlite-file-43n3</link>
      <guid>https://dev.to/connor_gallic/open-brane-annotated-8-columns-80-line-write-path-one-sqlite-file-43n3</guid>
      <description>&lt;p&gt;Yesterday I open-sourced Open Brane — &lt;a href="https://dev.to/connor_gallic/19-adapters-one-sqlite-file-10-days-to-ship-open-brane-is-public-37ak"&gt;a personal event-log brain&lt;/a&gt; with one SQLite table, one write path, and an MCP server agents can query. This post walks through how it works.&lt;/p&gt;

&lt;p&gt;If you haven't read the first post, the short version: every source you care about gets normalized into one append-only table. That table is the source of truth. Every downstream view — Obsidian pages, vector search, compiled journals, dashboards — is rebuilt from it. Agents hit a local MCP server, never the database directly.&lt;/p&gt;

&lt;p&gt;If you like videos: &lt;a href="https://www.youtube.com/watch?v=uoNe2_OexCc" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=uoNe2_OexCc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This post is the implementation. The goal is that by the end you've read the whole system and could write the missing adapters yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The One Table
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt;              &lt;span class="nb"&gt;INTEGER&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="n"&gt;AUTOINCREMENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ts&lt;/span&gt;              &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;source&lt;/span&gt;          &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;            &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;actor&lt;/span&gt;           &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;payload_json&lt;/span&gt;    &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;attachment_uri&lt;/span&gt;  &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ingested_at&lt;/span&gt;     &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No joins, no foreign keys, no migrations. 3 GB on my production disk at 942,068 rows.&lt;/p&gt;

&lt;p&gt;Each column does exactly one thing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;id&lt;/code&gt; — monotonic primary key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ts&lt;/code&gt; — when the event happened in the real world. A git commit's timestamp, a Claude session's started-at, a Fitbit reading's minute. Not when it was ingested.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;source&lt;/code&gt; — which adapter wrote it. &lt;code&gt;git&lt;/code&gt;, &lt;code&gt;claude-laptop&lt;/code&gt;, &lt;code&gt;gdrive&lt;/code&gt;, &lt;code&gt;fitbit&lt;/code&gt;. Load-bearing. The dedup story depends on it.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type&lt;/code&gt; — what kind of event. &lt;code&gt;commit&lt;/code&gt;, &lt;code&gt;reply&lt;/code&gt;, &lt;code&gt;document-chunk&lt;/code&gt;, &lt;code&gt;sleep-score&lt;/code&gt;, &lt;code&gt;query&lt;/code&gt;. This is the axis I filter on most.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;actor&lt;/code&gt; — an opaque dedup fingerprint. For a git commit it's the SHA. For a Claude reply it's the session ID plus message index. The adapter picks. If two rows share &lt;code&gt;(source, actor)&lt;/code&gt; they're duplicates.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;payload_json&lt;/code&gt; — the whole event as JSON. No schema on write. The adapter decides the shape.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;attachment_uri&lt;/code&gt; — relative path into &lt;code&gt;blobs/&lt;/code&gt; for large binary attachments. Voice recordings, PDFs, images. Kept outside the DB.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ingested_at&lt;/code&gt; — when the event hit the database. Useful for catching up after an ingester was offline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single composite index on &lt;code&gt;(source, type, ts)&lt;/code&gt; handles every query I've ever needed. No other indexes, no full-text index — semantic search takes the role full-text would in a traditional event store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAL mode is on.&lt;/strong&gt; SQLite in WAL mode can sustain on the order of 10,000 writes/sec on commodity hardware. My heaviest backfill day moved tens of thousands of events through the gate without drama. I'm nowhere close to the limit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The One Write Path
&lt;/h2&gt;

&lt;p&gt;Every write in the brain goes through &lt;code&gt;scripts/record_event.py&lt;/code&gt;. 80 lines. Here's the shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# simplified — real version has error handling and payload validation
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;record_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;actor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attachment_uri&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;ingested_at&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;payload_str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sort_keys&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;actor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;actor&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="nf"&gt;_fingerprint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payload_str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INSERT OR IGNORE INTO events &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;(ts, source, type, actor, payload_json, attachment_uri, ingested_at) &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VALUES (?, ?, ?, ?, ?, ?, ?)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payload_str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attachment_uri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ingested_at&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;INSERT OR IGNORE&lt;/code&gt; on the unique &lt;code&gt;(source, actor)&lt;/code&gt; constraint is the entire dedup story. Adapters re-run freely; duplicates collapse.&lt;/p&gt;
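&lt;p&gt;The &lt;code&gt;CREATE TABLE&lt;/code&gt; above doesn't show the constraint itself, so I'm assuming a unique index on &lt;code&gt;(source, actor)&lt;/code&gt; backs it. A runnable sketch of the collapse behavior under that assumption:&lt;/p&gt;

```python
# Demonstration of the dedup story, assuming a unique index on
# (source, actor) backs INSERT OR IGNORE (the schema above doesn't
# show the constraint explicitly).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ts TEXT NOT NULL, source TEXT NOT NULL, type TEXT NOT NULL,
    actor TEXT, payload_json TEXT NOT NULL, attachment_uri TEXT,
    ingested_at TEXT NOT NULL)""")
conn.execute("CREATE UNIQUE INDEX idx_dedup ON events(source, actor)")

row = ("2026-04-14T00:00:00", "git", "commit", "abc123", "{}", None, "now")
for _ in range(3):  # adapters re-run freely
    conn.execute(
        "INSERT OR IGNORE INTO events "
        "(ts, source, type, actor, payload_json, attachment_uri, ingested_at) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)", row)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
# three runs, one row: duplicates collapsed
```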

&lt;p&gt;The script exposes two interfaces — a Python function for when you're importing it, and a command-line entry point that accepts &lt;code&gt;--payload-stdin&lt;/code&gt; for when an adapter shells out. Shell-out is the recommended mode. It means the adapter cannot touch the database, cannot acquire a lock, cannot hold a connection open. Each event is its own subprocess. Predictable, debuggable, impossible to corrupt.&lt;/p&gt;
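&lt;p&gt;The shell-out pattern, sketched: one subprocess per event, payload piped over stdin. Only &lt;code&gt;--payload-stdin&lt;/code&gt; is documented above; the other flag names here are guesses, so check &lt;code&gt;scripts/record_event.py&lt;/code&gt; for the real interface.&lt;/p&gt;

```python
# Hedged sketch of the shell-out mode. Flag names besides
# --payload-stdin are assumptions about record_event.py's CLI.
import json, subprocess, sys

def build_record_cmd(source, type_):
    """Argv for one record_event.py invocation."""
    return [sys.executable, "scripts/record_event.py",
            "--source", source, "--type", type_, "--payload-stdin"]

def shell_out(source, type_, payload):
    # One event per subprocess: the adapter never opens the DB,
    # never acquires a lock, never holds a connection.
    return subprocess.run(build_record_cmd(source, type_),
                          input=json.dumps(payload), text=True, check=True)
```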

&lt;p&gt;&lt;strong&gt;Why this constraint matters.&lt;/strong&gt; Every state mutation in the brain is expressible as "some adapter produced an event row at time T." If you want to know what changed, you query events. If you want to trace a problem, you follow the event IDs. No mutable state exists anywhere else in the system that can drift from this log.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Adapter Pattern
&lt;/h2&gt;

&lt;p&gt;An adapter is a Python script that reads a source, emits events, exits. Here's the skeleton:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# scripts/ingest_mysource.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_state&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# last-seen cursor, optional
&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;since&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_seen&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;record_event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mysource&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;item&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;actor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stable_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;save_state&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four rules every adapter follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent.&lt;/strong&gt; Running it twice in a row produces zero new events the second time. Dedup via &lt;code&gt;actor&lt;/code&gt; fingerprint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pure function of source state.&lt;/strong&gt; No hidden internal state that changes behavior between runs. If you must track a cursor, persist it to a state file that's rebuildable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crash safe.&lt;/strong&gt; If the adapter dies mid-run, the next run picks up from the last successful event. Events are committed one at a time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shell-out to record_event.py.&lt;/strong&gt; Don't touch the DB directly. Use the command-line interface with &lt;code&gt;--payload-stdin&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The repo ships three canonical adapters: &lt;code&gt;ingest_gdrive.py&lt;/code&gt; (the most complex — Drive traversal, doc extraction, chunking), &lt;code&gt;ingest_claude.py&lt;/code&gt; (Claude Code session JSONL parsing), &lt;code&gt;ingest_git.py&lt;/code&gt; (gh api + git log over a list of repos).&lt;/p&gt;

&lt;p&gt;An hour is the right budget for a new adapter after you've written your first. The fetch loop is the only bespoke part. Dedup, payload shape, and write path are all copy-paste from the canonical adapters.&lt;/p&gt;

&lt;h2&gt;
  
  
  The MCP Server
&lt;/h2&gt;

&lt;p&gt;Agents never hit the database. They hit an MCP server. &lt;code&gt;scripts/mcp_server.py&lt;/code&gt; exposes eight tools on stdio:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;record_event        — write (typically called by agents to save decisions)
query_events        — filtered reader (by source, type, time window)
semantic_search     — vector search + payload join
get_journal         — compiled daily summary for a date
compile_journal     — force-rebuild a journal for a date
list_wiki_pages     — enumerate the curated wiki
get_wiki_page       — fetch one wiki page
health_check        — probe Ollama + Qdrant + SQLite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code on my laptop runs the server as stdio. An &lt;code&gt;mcp-proxy&lt;/code&gt; wrapper exposes the same tools over HTTP/SSE on port 7778 so remote agents — Kai on the production VPS in Germany, Scout on the local agent box — call the same tool set without running their own server.&lt;/p&gt;

&lt;p&gt;The tools compose. &lt;code&gt;query_events&lt;/code&gt; narrows by source/type/time. &lt;code&gt;semantic_search&lt;/code&gt; finds conceptually related rows. Agents chain them: "find all events on 2026-04-12 that mention the butterfly pipeline, then pull the git commit rows from those sessions."&lt;/p&gt;

&lt;h2&gt;The Views&lt;/h2&gt;

&lt;p&gt;Three views sit on top of the event log. Each is rebuildable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qdrant vector DB.&lt;/strong&gt; &lt;code&gt;embed_events.py&lt;/code&gt; runs on cron every five minutes. Finds new events, builds a text summary from &lt;code&gt;payload_json&lt;/code&gt;, sends it to Ollama running &lt;code&gt;nomic-embed-text&lt;/code&gt; locally on the RTX 3090, upserts a 768-dimensional vector into Qdrant. Zero external API calls. If Qdrant's disk dies I rebuild overnight.&lt;/p&gt;
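&lt;p&gt;The shape of that loop, sketched. The summary builder is the part worth copying; the endpoint and response field assume Ollama's &lt;code&gt;/api/embeddings&lt;/code&gt; API and may differ on your version:&lt;/p&gt;

```python
import json
import urllib.request

def summary_text(payload_json: str, budget: int = 512) -> str:
    # Flatten the opaque payload into one string for embedding,
    # truncated to a per-call character budget.
    payload = json.loads(payload_json)
    parts = [f"{k}: {v}" for k, v in payload.items()
             if isinstance(v, (str, int, float))]
    return " | ".join(parts)[:budget]

def embed(text: str, host: str = "http://localhost:11434") -> list:
    # Assumes Ollama's /api/embeddings endpoint with nomic-embed-text pulled.
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

&lt;p&gt;The upsert into Qdrant is a straight client call after this; the vector ID is the event row ID, which is what makes the rebuild trivial.&lt;/p&gt;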

&lt;p&gt;&lt;strong&gt;Compiled journal.&lt;/strong&gt; &lt;code&gt;compile_journal.py --date 2026-04-14&lt;/code&gt; groups every event from that date by source and outputs a readable markdown brief. Used by agents to answer "what did I do last Tuesday" without reading the raw log.&lt;/p&gt;
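&lt;p&gt;The core of that compile step is a group-by and a render. A sketch against an in-memory table with just the columns it needs (the real script reads &lt;code&gt;events.db&lt;/code&gt; and knows per-source formatting):&lt;/p&gt;

```python
import json
import sqlite3
from collections import defaultdict

def compile_journal(db: sqlite3.Connection, date: str) -> str:
    # Group every event from one date by source, render a markdown brief.
    rows = db.execute(
        "SELECT source, payload_json FROM events "
        "WHERE ts LIKE ? ORDER BY source, ts", (date + "%",)
    ).fetchall()
    by_source = defaultdict(list)
    for source, payload_json in rows:
        summary = json.loads(payload_json).get("summary", "(no summary)")
        by_source[source].append(summary)
    lines = [f"# Journal {date}"]
    for source in sorted(by_source):
        lines.append(f"\n## {source}")
        lines += [f"- {s}" for s in by_source[source]]
    return "\n".join(lines)
```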

&lt;p&gt;&lt;strong&gt;Curated wiki.&lt;/strong&gt; Markdown tree with nine regions — agents, clients, daily-briefs, decisions, people, products, projects, systems, topics. 41 pages. &lt;code&gt;compile_wiki.py&lt;/code&gt; reconciles it against the event log and surfaces new entities that should probably have a page. Each page is human-editable. The wiki is the curated layer, events are the raw layer.&lt;/p&gt;

&lt;p&gt;Nothing stops you from adding more views. Slack digest? Write a script that queries events and posts. PDF export of yesterday? Same. The views are small because the source of truth is the event log.&lt;/p&gt;

&lt;h2&gt;The Stack&lt;/h2&gt;

&lt;p&gt;Five components, all boring.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SQLite (WAL)&lt;/td&gt;
&lt;td&gt;Fits on a USB stick. Never an operational issue at personal scale.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ollama + nomic-embed-text&lt;/td&gt;
&lt;td&gt;Local embeddings, 768-dim, no API cost.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Qdrant&lt;/td&gt;
&lt;td&gt;Single Docker container, self-persists. Swap for pgvector if you prefer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP (Model Context Protocol)&lt;/td&gt;
&lt;td&gt;Lingua franca for agent↔tool. Works with Claude Code, Cursor, Codex, custom.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python 3.11+&lt;/td&gt;
&lt;td&gt;stdlib handles 90% of the work. Only deps: httpx, qdrant-client, mcp.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every piece is sovereign. Nothing in the critical path talks to a vendor API. If a cloud goes down, nothing in the brain changes.&lt;/p&gt;

&lt;h2&gt;Failure Modes I've Hit&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ingester breaks, adapter reports success.&lt;/strong&gt; The pull failed (auth expired, API changed), but the adapter caught the exception and exited cleanly. Zero new events, zero error log. I now require every adapter to write a &lt;code&gt;health_check&lt;/code&gt; event per run whether it found new data or not. A missing heartbeat in the events table flags a broken adapter within a day.&lt;/p&gt;
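&lt;p&gt;The flagging query is small enough to sketch. Assuming heartbeats are events with &lt;code&gt;type = 'health_check'&lt;/code&gt;, any source whose newest heartbeat predates the cutoff is presumed broken:&lt;/p&gt;

```python
import sqlite3

def silent_adapters(db: sqlite3.Connection, since: str) -> list:
    # Newest heartbeat per source. ISO-8601 timestamps compare
    # lexicographically, so plain string comparison is safe.
    rows = db.execute(
        "SELECT source, MAX(ts) FROM events "
        "WHERE type = 'health_check' GROUP BY source"
    ).fetchall()
    return [source for source, last_beat in rows if since > last_beat]
```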

&lt;p&gt;&lt;strong&gt;Summary string too short, vector has no signal.&lt;/strong&gt; &lt;code&gt;embed_events.py&lt;/code&gt; originally built embeddings from a truncated payload summary. Narratives — long-form — were getting chopped before the meaningful bit. Different event types need different summary budgets. Fixed by adding per-type summary templates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto-commit with empty commit message.&lt;/strong&gt; The brain's own git history has dozens of commits titled &lt;code&gt;snapshot 2026-04-12T01:01:29Z&lt;/code&gt;. Semantically invisible. The brain's development history is harder to search than my actual work. Still unfixed. On the todo list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qdrant drift.&lt;/strong&gt; Vector DB and event log got out of sync after a disk event. Fixed by treating Qdrant as fully rebuildable and running a nightly consistency job. If Qdrant's vector count doesn't match the event count under the embedding policy, rebuild the missing range.&lt;/p&gt;
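&lt;p&gt;The reconciliation itself doesn't need Qdrant-specific code. Once you've pulled the two ID sets (event log on one side, vector store on the other), the missing work collapses into contiguous ranges. A sketch:&lt;/p&gt;

```python
def missing_ranges(event_ids: set, vector_ids: set) -> list:
    # IDs present in the event log but absent from the vector store,
    # collapsed into (start, end) ranges for batched re-embedding.
    missing = sorted(event_ids - vector_ids)
    ranges = []
    for eid in missing:
        if ranges and eid == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], eid)
        else:
            ranges.append((eid, eid))
    return ranges
```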

&lt;p&gt;All four were caught only because the event log made them queryable. A system where health data lives in the same substrate as work data is a system where every problem has a query that finds it.&lt;/p&gt;

&lt;h2&gt;What I Didn't Build&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A dashboard.&lt;/strong&gt; I query the events table with SQL when I want to see something. If I ever want a dashboard, I'll write a view. Not a dependency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A workflow engine.&lt;/strong&gt; Cron handles scheduling. The adapter pattern handles retries. I do not have a DAG. I do not want one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An auth layer.&lt;/strong&gt; Network boundary is auth. MCP binds to localhost or my tailnet IP. If someone has network access they have data access. This is correct for a personal brain. If you're building a multi-user system, don't inherit this choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A multi-tenant schema.&lt;/strong&gt; The brain is single-owner. Events have no owner column. A second user would need a second brain. This is a deliberate choice — the simpler schema pays for itself every day I don't have to think about tenancy.&lt;/p&gt;

&lt;h2&gt;How to Run It Today&lt;/h2&gt;

&lt;p&gt;Ubuntu or Debian, fifteen minutes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; sqlite3 python3-venv
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
ollama pull nomic-embed-text
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; qdrant &lt;span class="nt"&gt;-p&lt;/span&gt; 6333:6333 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/qdrant_storage:/qdrant/storage qdrant/qdrant

git clone https://github.com/cgallic/open-brane /var/lib/open-brane
&lt;span class="nb"&gt;cd&lt;/span&gt; /var/lib/open-brane
python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv .venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;BRAIN_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/events.db
sqlite3 &lt;span class="nv"&gt;$BRAIN_DB&lt;/span&gt; &amp;lt; SCHEMA.sql

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'{"summary":"first event"}'&lt;/span&gt; | ./scripts/record_event.py &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--source&lt;/span&gt; manual &lt;span class="nt"&gt;--type&lt;/span&gt; note &lt;span class="nt"&gt;--payload-stdin&lt;/span&gt;
./scripts/query_events.py &lt;span class="nt"&gt;--limit&lt;/span&gt; 5
./scripts/health_check.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the last command prints &lt;code&gt;"all_healthy": true&lt;/code&gt;, you're running. Wire up the first adapter. Ingest Claude Code sessions (&lt;code&gt;ingest_claude.py&lt;/code&gt;) — you already have the data on disk, and it's the highest-value source for most agent users.&lt;/p&gt;

&lt;h2&gt;The Pattern, Generalized&lt;/h2&gt;

&lt;p&gt;Open Brane is a specific implementation of a more general pattern: &lt;strong&gt;one append-only log, one write path, many views, agents query through a narrow tool surface.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That pattern works for more than personal memory. Incident logs. Customer interaction history. Content pipelines. Anywhere you have heterogeneous sources, mutable state that drifts, and agents that need consistent context — the pattern helps.&lt;/p&gt;

&lt;p&gt;The reason it works is counterintuitive. Most data architectures keep the write layer rich and the read layer simple. The brain does the opposite. The write layer is as dumb as possible — no schema, no validation beyond "is it JSON." The richness is entirely in the views, which are cheap to throw away and rebuild.&lt;/p&gt;

&lt;p&gt;This is the shape of systems that survive a year of use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo: &lt;a href="https://github.com/cgallic/open-brane" rel="noopener noreferrer"&gt;https://github.com/cgallic/open-brane&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What's the one source you'd ingest first? Reply with it — I'll tell you whether there's a canonical adapter you can crib from or whether it's a new pattern worth documenting.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>sqlite</category>
      <category>mcp</category>
    </item>
    <item>
      <title>19 Adapters, One SQLite File, 10 Days to Ship: Open Brane Is Public</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Tue, 21 Apr 2026 19:13:10 +0000</pubDate>
      <link>https://dev.to/connor_gallic/19-adapters-one-sqlite-file-10-days-to-ship-open-brane-is-public-37ak</link>
      <guid>https://dev.to/connor_gallic/19-adapters-one-sqlite-file-10-days-to-ship-open-brane-is-public-37ak</guid>
      <description>&lt;p&gt;The brain under my agent mesh — the thing Kai, Scout, Claude Code, and Codex query before they answer anything — is one SQLite table. Eight columns. 80-line Python write gate. No ORM, no workflow engine, no framework.&lt;/p&gt;

&lt;p&gt;I committed the first architecture decision record on 2026-04-11. I pushed the public repo today, 2026-04-21. Ten days from first ADR to open source. In between, the event log grew from zero to &lt;strong&gt;942,068 events&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Today it went live as &lt;a href="https://github.com/cgallic/open-brane" rel="noopener noreferrer"&gt;Open Brane&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post is why. The next post is how.&lt;/p&gt;

&lt;h2&gt;The Problem Every Agent User Has&lt;/h2&gt;

&lt;p&gt;You're using more tools than you can count. Calendar, Drive, Stripe, GitHub, Notion, Obsidian, Claude, ChatGPT, a transcription service, a CRM, maybe half a dozen automation platforms. Each one has your context — inside its own schema, inside its own database, behind its own API.&lt;/p&gt;

&lt;p&gt;None of them can see each other.&lt;/p&gt;

&lt;p&gt;You notice this every time an agent forgets something you told it on Tuesday. Or proposes a plan that contradicts a decision you made three weeks ago. Or generates a summary of your week that omits the five Stripe events and two Fathom calls that actually defined it.&lt;/p&gt;

&lt;p&gt;The fix most products offer is memory features. ChatGPT lets you save facts. Claude has projects. Custom GPTs accept 8K of context. All workarounds. None address the real problem, which is that &lt;strong&gt;your context is a cross-source data layer problem, not a model problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The engineering challenge is making every source queryable through one interface, so any agent in your stack can pull the right slice without knowing which tool originally produced it.&lt;/p&gt;

&lt;p&gt;That's what Open Brane is.&lt;/p&gt;

&lt;h2&gt;What Actually Breaks&lt;/h2&gt;

&lt;p&gt;I noticed three recurring failure modes before I wrote a line of code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context doesn't survive handoffs between agents.&lt;/strong&gt; Scout finishes researching something. Kai takes over and needs to act on the research. The only way Kai learns what Scout found is if Scout produces a summary document Kai reads. If the summary misses a fact, it's gone. Agents don't pass context to each other cleanly because they're passing rendered views instead of source data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Definitions drift across sources.&lt;/strong&gt; Stripe's MRR calculation differs from the one in my Supabase analytics table. Both are "correct." Both are referenced in conversations. An agent answering a question about revenue has no way to know which number you want unless you tell it every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipelines fail silently.&lt;/strong&gt; An ingestion script that pulled from Drive broke recently. Zero new events, zero error log. The pipeline was correctly reporting no work to do because its input was empty — not because no documents existed, but because auth had expired. I found it by noticing a gap in the event stream, not because anything alerted me.&lt;/p&gt;

&lt;p&gt;All three trace back to the same root: &lt;strong&gt;the source data is scattered across systems that can't see each other, and the views I rely on are not rebuildable from a single canonical store.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The Minimum Viable Fix&lt;/h2&gt;

&lt;p&gt;I wrote the whole thing on paper before I wrote code. One rule governed every design decision:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One append-only table with one write path, and every view is rebuildable from it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's the entire brain. One SQLite file called &lt;code&gt;events.db&lt;/code&gt;. One table called &lt;code&gt;events&lt;/code&gt;. One column named &lt;code&gt;payload_json&lt;/code&gt; holding an opaque blob, plus seven columns of indexable metadata — timestamp, source, type, actor, and a few others. No UPDATEs. No DELETEs. Corrections are new events that reference prior event IDs.&lt;/p&gt;

&lt;p&gt;Every write goes through one Python script called &lt;code&gt;record_event.py&lt;/code&gt;. 80 lines. No LLM touches the database directly. No LLM generates SQL. No ingest script opens a database connection — they all shell out to &lt;code&gt;record_event.py&lt;/code&gt; as a subprocess.&lt;/p&gt;
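&lt;p&gt;A sketch of what the load-bearing center of an 80-line write gate looks like, assuming the eight-column shape (&lt;code&gt;id&lt;/code&gt;, &lt;code&gt;ts&lt;/code&gt;, &lt;code&gt;source&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;actor&lt;/code&gt;, &lt;code&gt;payload_json&lt;/code&gt;, &lt;code&gt;attachment_uri&lt;/code&gt;, &lt;code&gt;ingested_at&lt;/code&gt;). Not the real script, just the idea:&lt;/p&gt;

```python
import json
import sqlite3
from datetime import datetime, timezone

def record_event(db, source, etype, payload_json,
                 actor=None, attachment_uri=None):
    # The only validation the write layer performs: is it JSON?
    json.loads(payload_json)
    now = datetime.now(timezone.utc).isoformat()
    cur = db.execute(
        "INSERT INTO events (ts, source, type, actor, payload_json, "
        "attachment_uri, ingested_at) VALUES (?, ?, ?, ?, ?, ?, ?)",
        (now, source, etype, actor, payload_json, attachment_uri, now),
    )
    db.commit()
    return cur.lastrowid  # parameterized insert only; no UPDATE, no DELETE
```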

&lt;p&gt;That constraint is load-bearing. It means the write contract lives in one place. It means adapters are pure functions — pull from a source, produce event rows, exit. It means I can run any ingester on any machine and it cannot corrupt the brain.&lt;/p&gt;

&lt;p&gt;The second load-bearing constraint is that &lt;strong&gt;views are rebuildable.&lt;/strong&gt; The Obsidian vault, the Qdrant vector DB, the compiled journals, the wiki — all of them are views. None are canonical. Delete any of them and run rebuild and they come back from events. If Qdrant's disk dies I don't lose vectors; I lose a rebuild overnight.&lt;/p&gt;

&lt;p&gt;The third constraint is that &lt;strong&gt;cron is the orchestrator.&lt;/strong&gt; No queues, no dead letter queues, no workflow engine. Every adapter is idempotent — re-running it does nothing if nothing changed. Cron hits each one every N minutes. If the adapter fails, the next tick retries. &lt;code&gt;health_check.py --record&lt;/code&gt; writes its own probe into the events table, so I can query my own uptime history using the same tools I query everything else with.&lt;/p&gt;
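&lt;p&gt;Idempotency falls out of a dedup check before the write. A sketch (writing through SQL directly here for brevity; the real adapters shell out to &lt;code&gt;record_event.py&lt;/code&gt;):&lt;/p&gt;

```python
import json
import sqlite3

def ingest(db, source, items):
    # Idempotent: re-running with the same items writes nothing new,
    # so cron can simply retry every tick.
    written = 0
    for item in items:
        payload = json.dumps({"external_id": item["id"]}, sort_keys=True)
        seen = db.execute(
            "SELECT 1 FROM events WHERE source = ? AND payload_json = ?",
            (source, payload),
        ).fetchone()
        if seen:
            continue
        db.execute(
            "INSERT INTO events (ts, source, type, payload_json, ingested_at) "
            "VALUES (?, ?, 'note', ?, ?)",
            (item["ts"], source, payload, item["ts"]),
        )
        written += 1
    db.commit()
    return written
```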

&lt;h2&gt;What's in There Right Now&lt;/h2&gt;

&lt;p&gt;942,068 events. 3 GB on disk. 855,672 distinct actors. 4,762 events ingested today while I was writing this post.&lt;/p&gt;

&lt;p&gt;Source breakdown as of 2026-04-21:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gmail                  537,517   (Takeout archive)
facebook                85,900
chatgpt                 54,946
audible                 44,496
twitter                 34,994   (12-year takeout)
google-search           34,757   (search history takeout)
gdrive                  33,715   (941 docs + 32k other files)
claude-laptop           25,012   (Claude Code sessions)
web                     18,886   (120 RSS feeds + extracted pages)
linkedin                15,281
local-dev               12,772
youtube                  7,263
fitbit                   5,550
google-maps              4,159
kai                      3,554   (agent conversations)
chrome                   3,144
code                     3,101   (AST nodes)
amazon                   2,061
git                      1,904   (33 repos)
snapchat                 1,865
google-contacts-full     1,804
google-fit               1,550
google-access            1,160
haro                       917
gcal                       873
google-keep                821
scout                      584
gvoice                     549
openclaw                   525
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of that is not "coding data." It's life-stream data — Audible listens, Amazon orders, Google Maps history, Snapchat. I include everything because at a million events the storage cost is a rounding error and I don't know in advance which slice will matter to a question. When Kai asks if I've been sleeping badly during a stressful build week, the answer is in the Fitbit slice. When I need to remember a contact I met on a flight three years ago, it's in &lt;code&gt;google-contacts-full&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;19 ingest scripts pull from 30+ distinct sources. Every one writes through the same &lt;code&gt;record_event.py&lt;/code&gt; write gate. The database has never been corrupted. I've never had to run a migration.&lt;/p&gt;

&lt;h2&gt;Why Release It Now&lt;/h2&gt;

&lt;p&gt;Ten days is not "finished." Nobody's running Open Brane but me. The repo has one commit and zero stars as I write this sentence. I'm releasing it now because the pattern is already load-bearing in my daily workflow and waiting to polish wouldn't change what the pattern &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Specifically — if you're running more than one agent against your own data, you've already half-built this, badly. You have a Notion page that an agent reads. You have a Claude project with some context. You have a local SQLite somewhere. You have Obsidian. You have a Google Doc with your todo list. Every time you ask an agent to do something, you're rebuilding the cross-source query manually by pasting from these into the prompt.&lt;/p&gt;

&lt;p&gt;Open Brane is what that looks like when it's formalized.&lt;/p&gt;

&lt;p&gt;The repo is intentionally small. One ARCHITECTURE doc, one SCHEMA, about a dozen scripts including three canonical adapters (Drive, Claude Code sessions, git), a stdio MCP server, a systemd unit template, and two docs pages on how to extend it. No framework, no ORM, no workflow engine. Fits on a USB stick.&lt;/p&gt;

&lt;p&gt;It's opinionated in exactly the places that matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Append-only.&lt;/strong&gt; Not tunable. No UPDATEs, no DELETEs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One write path.&lt;/strong&gt; Everything goes through &lt;code&gt;record_event.py&lt;/code&gt;. Not tunable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scripts are pure functions.&lt;/strong&gt; Read, compute, write, exit. No background workers. No state machines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cron is the orchestrator.&lt;/strong&gt; Retries are automatic because the next tick re-runs every adapter.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network boundary is auth.&lt;/strong&gt; Bind MCP to localhost or your tailnet. Don't build an auth layer you'll regret.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Views are rebuildable.&lt;/strong&gt; If it's not rebuildable from events, it's not in the brain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else is configurable. Which sources you ingest, which embedding model you run, which MCP tools you expose, which vector DB you back it with.&lt;/p&gt;

&lt;h2&gt;What It Is Not&lt;/h2&gt;

&lt;p&gt;Open Brane is not a framework. It doesn't decide your ontology for you. It doesn't have an opinion about what counts as an event or how you name sources. You write the adapters. You choose the payload shape. You pick which sources matter.&lt;/p&gt;

&lt;p&gt;It's also not a SaaS. It runs on your box. Your data never leaves unless you ship a view somewhere else on purpose. The MCP server binds to localhost by default. The embedding model (nomic-embed-text via Ollama) runs locally. There are no vendor API calls in the critical path — if OpenAI raised prices tomorrow, nothing in the brain would change.&lt;/p&gt;

&lt;p&gt;It's also not trying to be general-purpose observability or a data warehouse. The schema is too narrow for either. It's specifically a &lt;strong&gt;personal agent memory substrate.&lt;/strong&gt; If you're building a company data platform, you want something bigger.&lt;/p&gt;

&lt;h2&gt;What It Unlocks&lt;/h2&gt;

&lt;p&gt;Three things showed up the moment the brain had enough data in it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents stopped losing Tuesday.&lt;/strong&gt; Scout queries the same events Kai queries, the same events Claude Code queries. When I tell any of them I shipped a fix to the butterfly pipeline on Tuesday, every other agent can find it by Wednesday. The model changes, the memory doesn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content pipelines stopped being a grind.&lt;/strong&gt; Every Claude Code session is a story — problem, attempts, decision, resolution. A nightly script mines sessions for high-signal events and flags ones worth writing about. The post you're reading was seeded by three events: the day I decided to open-source the brain, the day I hit the writes-only-through-scripts rule, and the day I realized cron was doing more orchestration than any workflow tool I'd used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Definitions stopped drifting.&lt;/strong&gt; Raw events go in. Derived metrics compute on read. If two agents report different MRR numbers, I diff the queries that produced them, not the numbers themselves.&lt;/p&gt;
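&lt;p&gt;"Compute on read" in miniature, with hypothetical Stripe subscription events (the event type and payload fields here are made up for the example):&lt;/p&gt;

```python
import json
import sqlite3

def mrr_on(db, date):
    # Derived metric computed on read: replay subscription events up to
    # a date and keep each customer's most recent amount. No stored totals.
    rows = db.execute(
        "SELECT payload_json FROM events "
        "WHERE source = 'stripe' AND type = 'subscription' "
        "AND ? >= ts ORDER BY ts",
        (date,),
    ).fetchall()
    latest = {}
    for (payload_json,) in rows:
        p = json.loads(payload_json)
        latest[p["customer"]] = p["amount_cents"]
    return sum(latest.values())
```

&lt;p&gt;Two agents disagreeing about MRR now means two versions of this function, which is a diff you can read.&lt;/p&gt;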

&lt;h2&gt;How to Start&lt;/h2&gt;

&lt;p&gt;The Quickstart in the README gets you running in about fifteen minutes on Ubuntu. Install Ollama, pull nomic-embed-text, run Qdrant in Docker, clone the repo, initialize the database. First event written, first semantic search works.&lt;/p&gt;

&lt;p&gt;From there, the pattern is: pick one source, copy an existing adapter, rewrite the fetch loop, add a cron line. About an hour per new source after you've done one. The canonical adapter to crib from is &lt;code&gt;ingest_gdrive.py&lt;/code&gt; — it's the most complex one in the repo, so anything simpler is a strict subset.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/connor_gallic/open-brane-annotated-8-columns-80-line-write-path-one-sqlite-file-43n3"&gt;next post&lt;/a&gt; in this series walks through the schema, the MCP surface, and the adapter pattern in detail. For now, if you've been stitching agent context together by pasting from half a dozen systems, this is the substrate that replaces the pasting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The model is a commodity. The memory is the asset. Today there's an open-source version of the memory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cgallic/open-brane" rel="noopener noreferrer"&gt;https://github.com/cgallic/open-brane&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What are you stitching context from right now? Not the tools you love — the ones you keep copying out of because no agent can see inside them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>memory</category>
      <category>etl</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Your AI Isn't Personal. Mine Has 156,926 Memories of Me.</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Wed, 15 Apr 2026 17:26:00 +0000</pubDate>
      <link>https://dev.to/connor_gallic/your-ai-isnt-personal-mine-has-156926-memories-of-me-582g</link>
      <guid>https://dev.to/connor_gallic/your-ai-isnt-personal-mine-has-156926-memories-of-me-582g</guid>
      <description>&lt;p&gt;"Personal AI" is a marketing term. The AI you talk to every day isn't personal. It's a generic foundation model with a 200-token memory feature bolted onto the side and your first name tacked into the system prompt.&lt;/p&gt;

&lt;p&gt;Claude forgets everything I told it last session. ChatGPT remembers what brand of coffee I drink and three other things I let it save. Gemini has no idea I exist between threads. None of them know what I shipped last week, what I tried that failed, who my clients are, or what I was researching on Tuesday.&lt;/p&gt;

&lt;p&gt;That's not personal. That's cosplay.&lt;/p&gt;

&lt;p&gt;I built what I think personal AI actually requires. Not a product. An architecture. A sovereign memory that every AI in my stack — Claude Code, Codex, Gemini CLI, a local Gemma model running on my home server, my production marketing agent on a VPS in Germany — queries before it speaks. Same memory. Different models. The AI becomes personal because the data layer is.&lt;/p&gt;

&lt;p&gt;It has 156,926 events in it today. Here's what that actually looks like.&lt;/p&gt;

&lt;h2&gt;Personal AI Is a Data Layer Problem&lt;/h2&gt;

&lt;p&gt;The debate about which model is smartest has mostly resolved. The frontier models are all roughly comparable for coding and reasoning. Switching from one to the other is not a life-changing event.&lt;/p&gt;

&lt;p&gt;The debate that hasn't happened: what does it mean for AI to know &lt;em&gt;you&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;The answer most products give is "memory features." ChatGPT lets you save facts. Claude has projects. Custom GPTs accept 8K of context. These are workarounds for a deeper problem. The real context for a person isn't 200 tokens of preferences. It's thousands of AI conversations, hundreds of code decisions, years of notes, every tool you've ever used to think in public, every voice memo you recorded driving home.&lt;/p&gt;

&lt;p&gt;None of that is surfaced to the model. All of it is already on your disk.&lt;/p&gt;

&lt;p&gt;The engineering problem is making that data queryable, searchable, and available to any model through a consistent interface. That's not a chatbot project. That's a data layer project. Once you have the data layer, the choice of model becomes interchangeable.&lt;/p&gt;

&lt;h2&gt;The Shape of a Sovereign Memory&lt;/h2&gt;

&lt;p&gt;I built a thing I call the brain. It lives on an Ubuntu box in my office with an RTX 3090. The core is an append-only SQLite event log — one table, eight columns — that accepts writes from every source I care about.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt;              &lt;span class="nb"&gt;INTEGER&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="n"&gt;AUTOINCREMENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ts&lt;/span&gt;              &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;source&lt;/span&gt;          &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;            &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;actor&lt;/span&gt;           &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;payload_json&lt;/span&gt;    &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;attachment_uri&lt;/span&gt;  &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ingested_at&lt;/span&gt;     &lt;span class="nb"&gt;TEXT&lt;/span&gt;    &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No joins. No foreign keys. No migrations. Corrections are new events that reference old ones. I've never deleted a row.&lt;/p&gt;
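&lt;p&gt;The corrections rule in miniature. Assume a correction event carries a &lt;code&gt;corrects&lt;/code&gt; field naming the prior event's id (the field name is illustrative); readers replay the log and let later events supersede earlier ones:&lt;/p&gt;

```python
import json
import sqlite3

def current_view(db):
    # Replay the append-only log: a correction event that names a prior
    # event id in its payload supersedes that row on read.
    rows = db.execute(
        "SELECT id, payload_json FROM events ORDER BY id").fetchall()
    view = {}
    for eid, payload_json in rows:
        p = json.loads(payload_json)
        target = p.pop("corrects", None)
        if target is not None:
            view.pop(target, None)
        view[eid] = p
    return view
```

&lt;p&gt;Nothing is ever rewritten in place; the "current" state is just a fold over history.&lt;/p&gt;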

&lt;p&gt;Every piece of data enters through exactly one 80-line Python script: &lt;code&gt;record_event.py&lt;/code&gt;. That's the only write path. 30+ ingestion scripts shell out to it as a subprocess. The LLM never generates SQL. Never touches the database. Never sees credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rule: deterministic scripts do the work. AI agents decide which scripts to run.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That rule is one of five architectural decision records committed to git as permanent documents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2026-04-11-adopt-event-log-architecture.md
2026-04-11-adopt-deterministic-scripts-plus-agent-oversight.md
2026-04-11-adopt-qdrant-semantic-search-over-events.md
2026-04-11-scribe-voice-capture-architecture.md
2026-04-12-adopt-compiled-knowledge-layer.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When an agent asks why the system works a certain way, it reads the ADR. The intent outlasts the code.&lt;/p&gt;

&lt;h2&gt;What Counts as "Me"&lt;/h2&gt;

&lt;p&gt;The source axis of the event log tracks where data came from. The full breakdown from the live database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;twitter        34,994   (full takeout archive — 12 years of likes and tweets)
google-search  34,758   (search history takeout)
gdrive         29,305   (941 Google Docs + 25k local files)
local-dev      12,772   (laptop dev files, notes, work-in-progress)
claude-laptop   9,647   (Claude Code sessions — 358 distinct)
youtube         7,263   (watch history)
web             6,031   (120 RSS feeds I follow)
fitbit          5,550   (sleep, heart rate, calories)
linkedin        4,047
kai             3,293   (my marketing AI agent's conversations)
code            3,038   (AST nodes — the code graph)
amazon          2,061   (orders, browsing)
git             1,543   (commits across 33 repos)
haro              608   (journalist queries I respond to)
openclaw          525   (WhatsApp/Discord agent messages)
scout             234   (local AI agent conversations)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of this is not "coding data." It's life-stream data. Fitbit readings, Amazon orders, a 12-year Twitter archive. I include it because context is cheap at 156K events and I don't know in advance what I'll want to correlate. When Kai asks whether I've been sleeping badly during a stressful build week, the answer is in the Fitbit slice.&lt;/p&gt;

&lt;p&gt;The type axis is a different cut — 20+ distinct event types across the log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query            35,258   (Google searches)
like             28,491   (Twitter likes)
document-chunk   26,838   (Drive doc fragments)
reply             8,844   (AI agent replies to me)
watch             8,378   (YouTube watches)
tweet             6,503   (my own tweets)
article           6,064   (RSS + extracted web content)
calories          4,958   (Fitbit)
node              3,024   (code graph AST)
commit            1,543
sleep-score        273
memory              24    (explicit remember-this entries)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;155,348 distinct events. 274MB of SQLite on disk. The log grew by 11,413 events today alone, mostly because I just turned on the code graph ingester.&lt;/p&gt;
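
&lt;p&gt;The whole log fits one SQLite table. A simplified sketch of the shape; only &lt;code&gt;source&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;payload&lt;/code&gt;, and &lt;code&gt;attachment_uri&lt;/code&gt; are named above, the other columns are illustrative:&lt;/p&gt;

```python
# A single-table event log in SQLite. Column names beyond source, type,
# payload, and attachment_uri are illustrative, not the real schema.
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE events (
        id             INTEGER PRIMARY KEY,
        ts             TEXT NOT NULL,   -- ISO-8601 timestamp
        source         TEXT NOT NULL,   -- 'twitter', 'fitbit', 'git', ...
        type           TEXT NOT NULL,   -- 'like', 'sleep-score', 'commit', ...
        payload        TEXT NOT NULL,   -- JSON blob, shape varies by type
        attachment_uri TEXT             -- e.g. a path under blobs/
    )
""")
con.execute(
    "INSERT INTO events (ts, source, type, payload) VALUES (?, ?, ?, ?)",
    ("2026-04-14T06:00:00Z", "fitbit", "sleep-score", json.dumps({"score": 81})),
)

# The source-axis breakdown above is one GROUP BY away.
rows = con.execute(
    "SELECT source, COUNT(*) FROM events GROUP BY source ORDER BY 2 DESC"
).fetchall()
```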

&lt;h2&gt;
  
  
  Three Machines, One Log
&lt;/h2&gt;

&lt;p&gt;The brain is sovereign — I own every byte, no vendor API sits in the critical path — but it spans three machines that I actually live on.&lt;/p&gt;

&lt;p&gt;My &lt;strong&gt;Windows laptop&lt;/strong&gt; runs most Claude Code sessions. A bash script reads &lt;code&gt;~/.claude/projects/&lt;/code&gt; and syncs new JSONL files to the agent box over Tailscale SSH. The laptop-specific ingester then parses them. Same pattern for Drive extraction and local dev file ingestion.&lt;/p&gt;
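
&lt;p&gt;The selection half of that sync script is a few lines. A sketch in Python rather than bash; the seen-set bookkeeping is illustrative, the real script ships files over Tailscale SSH:&lt;/p&gt;

```python
# Pick the Claude Code session logs the agent box hasn't ingested yet.
# The already_synced set is illustrative bookkeeping; the real script
# is bash and pushes files over SSH.
from pathlib import Path

def unsynced_sessions(root, already_synced):
    """Return .jsonl session files under root not yet synced, sorted by path."""
    return sorted(
        p for p in Path(root).rglob("*.jsonl")
        if p.name not in already_synced
    )
```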

&lt;p&gt;The &lt;strong&gt;agent box&lt;/strong&gt; (Ubuntu, RTX 3090, always-on) is the hub. Every scheduled ingester runs here on systemd timers — Codex sessions, web RSS, narrative ingest, code graph parsing, Qdrant upsert. This is where the database lives.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;hermes VPS&lt;/strong&gt; in Germany runs my production AI agent, exposed over Discord as "Kai." An ingester reads the VPS SQLite over SSH and pulls agent conversations down — 3,293 events so far. Kai also &lt;em&gt;queries&lt;/em&gt; the brain. When someone asks Kai what I shipped last week, the agent calls &lt;code&gt;semantic_search&lt;/code&gt; over HTTP on port 7778 before answering.&lt;/p&gt;
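
&lt;p&gt;A remote call like Kai's is a small JSON body over HTTP. A sketch; the &lt;code&gt;/tools/call&lt;/code&gt; path and body layout are illustrative, not the proxy's actual wire format:&lt;/p&gt;

```python
# Shape of a remote semantic_search call. The endpoint path and the
# body layout are illustrative, not the real proxy protocol.
import json
import urllib.request

def build_call(query, limit=5):
    """JSON body for a semantic_search tool call."""
    return {"tool": "semantic_search",
            "arguments": {"query": query, "limit": limit}}

def semantic_search(query, host="agent-box:7778"):
    req = urllib.request.Request(
        f"http://{host}/tools/call",
        data=json.dumps(build_call(query)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```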

&lt;p&gt;Three machines. One log. No vendor lock-in. If any box dies, the data is on one of the other two or can be rebuilt from sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compiled Knowledge Layer
&lt;/h2&gt;

&lt;p&gt;Raw events are the substrate. On top of them sits a compiled layer that raw events alone can't give you: structured, human-readable, curated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The wiki&lt;/strong&gt; is a markdown tree with 9 regions and 41 pages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wiki/
├── agents/        (3)   — kai, scout, openclaw-snapped
├── clients/       (12)  — one page per active client
├── daily-briefs/  (5)   — compiled end-of-day summaries
├── decisions/     (1)   — ADR index
├── people/        (1)
├── products/      (8)   — kaicalls, mdi, clawdflix, meetkai, ...
├── projects/      (3)   — brain, cmo-agent-system, marketing-kb
├── systems/       (5)
└── topics/        (3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each page is human-editable. &lt;code&gt;compile_wiki.py&lt;/code&gt; reconciles it against the event log and surfaces new entities that should probably exist.&lt;/p&gt;
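
&lt;p&gt;The reconciliation pass reduces to counting. A simplified sketch; the mention threshold is illustrative:&lt;/p&gt;

```python
# Core of the reconciliation pass: entities that keep showing up in
# events but have no wiki page get surfaced as candidates. The
# min_mentions threshold is illustrative.
from collections import Counter

def missing_pages(event_entities, wiki_pages, min_mentions=5):
    """Names mentioned at least min_mentions times with no page yet."""
    counts = Counter(event_entities)
    return sorted(
        name for name, n in counts.items()
        if n >= min_mentions and name not in wiki_pages
    )
```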

&lt;p&gt;&lt;strong&gt;The journal&lt;/strong&gt; is daily markdown auto-compiled from events. &lt;code&gt;compile_journal.py --date 2026-04-14&lt;/code&gt; groups every event from that day by source and outputs a readable brief. A narrative subfolder holds longer threads that span multiple days.&lt;/p&gt;
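
&lt;p&gt;The grouping step is the whole trick. A simplified sketch of it; the event dict shape is illustrative:&lt;/p&gt;

```python
# One day's events in, a markdown brief out, one section per source.
# The event dict shape is illustrative, not the real payload schema.
from collections import defaultdict

def compile_brief(date, events):
    by_source = defaultdict(list)
    for ev in events:
        by_source[ev["source"]].append(ev)
    lines = [f"# Journal {date}", ""]
    for source in sorted(by_source):
        lines.append(f"## {source} ({len(by_source[source])})")
        lines.extend(f"- {ev['summary']}" for ev in by_source[source])
        lines.append("")
    return "\n".join(lines)
```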

&lt;p&gt;&lt;strong&gt;Blobs&lt;/strong&gt; sit outside the database. Voice recordings, images, PDFs — anything too large for a JSON payload — live in &lt;code&gt;blobs/voice/&lt;/code&gt; and similar, referenced by &lt;code&gt;attachment_uri&lt;/code&gt; on the event row.&lt;/p&gt;

&lt;p&gt;The brain is now a three-layer system: raw events, a curated wiki, and compiled narratives. Each layer is queryable independently. Each one gets embeddings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local Embeddings. No API Calls.
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;embed_events.py&lt;/code&gt; runs on cron every 5 minutes. It finds new events, builds a text summary from the payload, sends it to Ollama running &lt;code&gt;nomic-embed-text&lt;/code&gt; locally on the RTX 3090, and upserts a 768-dimensional vector into a Qdrant collection.&lt;/p&gt;
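
&lt;p&gt;The loop has two halves: build a text summary, then embed it. A sketch; the summary format is illustrative, the endpoint is Ollama's standard local embeddings API:&lt;/p&gt;

```python
# Build a summary string per event, then fetch a 768-dim vector from
# the local nomic-embed-text model. The summary format is illustrative.
import json
import urllib.request

def event_summary(event):
    """Text handed to the embedder: source, type, then payload summary."""
    return f"[{event['source']}/{event['type']}] {event['payload'].get('summary', '')}"

def embed(text, url="http://localhost:11434/api/embeddings"):
    """Vector from the local nomic-embed-text model via Ollama."""
    body = json.dumps({"model": "nomic-embed-text", "prompt": text}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```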

&lt;p&gt;Zero external API calls for embeddings. The vectors never leave my network. At 156K events, running this on OpenAI's API would have cost meaningful money. Running it locally costs GPU time I'm not using for anything else.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;semantic_search.py&lt;/code&gt; queries Qdrant and joins full event payloads back from SQLite in one pass. The search works across everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Butterfly pipeline deployment" → top hits are commits on &lt;code&gt;cgallic/snappedai&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;"Scout tank diet" → Scout's Discord conversations and the CLI commands that edited its state files&lt;/li&gt;
&lt;li&gt;"Quantitative trading AI" → a video transcript I pasted to Kai, my follow-up research request, and both agents' replies, all in one query&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The vector space clusters my life without me tagging anything. That's the payoff for having the data in one place.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Any Model Becomes Personal
&lt;/h2&gt;

&lt;p&gt;Everything up to this point is storage. The part that makes it personal AI is how models access it.&lt;/p&gt;

&lt;p&gt;The brain exposes 17 tools through a Model Context Protocol (MCP) server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;record_event, query_events, semantic_search, get_journal,
compile_journal, list_decisions, get_decision, health_check,
append_narrative, get_wiki_page, list_wiki_pages, update_wiki_page,
compile_wiki, lint_wiki, resume_my_work, build_memory_packet,
get_journal_narrative
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server runs as stdio for Claude Code on the agent box. An &lt;code&gt;mcp-proxy&lt;/code&gt; wrapper exposes the same tools as HTTP/SSE on port 7778 for remote agents. Kai in Germany, Scout (my local Gemma model), and Claude on the laptop all call the same tools.&lt;/p&gt;

&lt;p&gt;When Claude Code on my laptop asks "what have I been working on with KaiCalls this week," it calls &lt;code&gt;query_events&lt;/code&gt; filtered by &lt;code&gt;repo = cgallic/kai_calls&lt;/code&gt; and &lt;code&gt;since = 7d&lt;/code&gt;. When Scout helps me plan content, it calls &lt;code&gt;semantic_search&lt;/code&gt; for the topic and gets real conversations, real commits, real notes. When Kai needs to answer a question about what I shipped, it calls &lt;code&gt;resume_my_work&lt;/code&gt; and gets a briefing assembled from events and wiki pages.&lt;/p&gt;
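
&lt;p&gt;A filter like &lt;code&gt;since = 7d&lt;/code&gt; reduces to a timestamp cutoff on the SQL side. A sketch of that translation; illustrative, not the tool's real parser:&lt;/p&gt;

```python
# Turn a "7d" / "12h" style filter into an ISO-8601 cutoff that the
# events table can be queried against. Illustrative, not the real parser.
from datetime import datetime, timedelta, timezone

def since_to_cutoff(since, now=None):
    """'7d' -> ISO timestamp seven days before now."""
    now = now or datetime.now(timezone.utc)
    amount, unit = int(since[:-1]), since[-1]
    delta = {"d": timedelta(days=amount), "h": timedelta(hours=amount)}[unit]
    return (now - delta).isoformat()
```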

&lt;p&gt;&lt;strong&gt;The model changes. The memory doesn't.&lt;/strong&gt; That's what makes it personal.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Unexpected Side Effect
&lt;/h2&gt;

&lt;p&gt;I built this for recall. It turned into a content engine.&lt;/p&gt;

&lt;p&gt;Every Claude Code session is a story — problem, attempts, decision, resolution. The Dev.to article I published Monday, &lt;em&gt;13 of 14 Integrations Were Fake&lt;/em&gt;, was mined directly from a single session event. &lt;code&gt;mine_stories.py&lt;/code&gt; runs nightly and flags sessions with high signal — lots of decisions, a concrete outcome, a surprising pivot. I review the output in the morning and pick what to write.&lt;/p&gt;
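
&lt;p&gt;The scoring is a blunt heuristic, and blunt works. A toy version; weights and field names are illustrative:&lt;/p&gt;

```python
# Toy version of the nightly signal heuristic: many decisions, a
# concrete outcome, a surprising pivot. Weights and field names are
# illustrative, not mine_stories.py internals.
def story_signal(session):
    score = 2 * min(len(session.get("decisions", [])), 5)
    if session.get("outcome"):
        score += 3
    if session.get("pivot"):
        score += 4
    return score

def flag_sessions(sessions, threshold=7):
    """IDs of sessions worth a morning look."""
    return [s["id"] for s in sessions if story_signal(s) >= threshold]
```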

&lt;p&gt;The week I started doing this, my content output doubled. I was already living the stories. I just wasn't capturing them.&lt;/p&gt;

&lt;p&gt;The brain doesn't write the content. It surfaces stories I'd forget by Thursday.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;Three mistakes worth naming.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Started with a flat event log, should have started with the wiki.&lt;/strong&gt; Ingesting first and retrofitting the curated layer after was a week of wasted effort. Structure tells ingestion what to look for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;git_commit.sh&lt;/code&gt; auto-commits the brain with subjects like "snapshot 2026-04-12T01:01:29Z."&lt;/strong&gt; Zero keywords, zero concepts. Those commits are semantically invisible. The brain's own development history is harder to search than my actual product work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;embed_events.py&lt;/code&gt; builds vectors exclusively from &lt;code&gt;payload.summary&lt;/code&gt;.&lt;/strong&gt; When narratives returned zero hits for obvious queries, I traced it to a too-aggressive summary length cap. Different content types need different summary budgets. I missed that until it broke.&lt;/p&gt;
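
&lt;p&gt;The fix is per-type budgets instead of one global cap. A sketch; the numbers are illustrative:&lt;/p&gt;

```python
# Per-type summary budgets instead of one global length cap. The
# specific numbers are illustrative, not tuned values.
BUDGETS = {"narrative": 4000, "document-chunk": 1500, "commit": 300}
DEFAULT_BUDGET = 800

def capped_summary(event_type, text):
    """Truncate to the budget for this event type."""
    return text[: BUDGETS.get(event_type, DEFAULT_BUDGET)]
```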

&lt;h2&gt;
  
  
  Personal AI Is Already Possible. You Just Have to Build It.
&lt;/h2&gt;

&lt;p&gt;Every piece of this — the event log, the ingestion scripts, the local embeddings, the MCP interface — is a weekend project. None of it requires ML research. None of it requires a cloud bill. The data is already on your disk.&lt;/p&gt;

&lt;p&gt;The products being sold as "personal AI" are generic models with opt-in memory features. That's not what personal AI looks like. Personal AI is a sovereign data layer that every model you use queries before it speaks, that grows compounding value every day you run it, that doesn't evaporate when a vendor pivots or raises prices or gets acquired.&lt;/p&gt;

&lt;p&gt;The model is a commodity. The memory is the asset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your AI isn't personal until you own the layer that makes it know you.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What's in your personal data layer right now? Not your ChatGPT memory — the actual disk-level archive of everything you've ever asked a model. I want to know who else is sitting on it unindexed.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>memory</category>
      <category>agents</category>
    </item>
    <item>
      <title>I Asked Claude to Audit My Dashboard. 13 of 14 Integrations Were Fake.</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:12:11 +0000</pubDate>
      <link>https://dev.to/connor_gallic/i-asked-claude-to-audit-my-dashboard-13-of-14-integrations-were-fake-1c2o</link>
      <guid>https://dev.to/connor_gallic/i-asked-claude-to-audit-my-dashboard-13-of-14-integrations-were-fake-1c2o</guid>
      <description>&lt;p&gt;I had a dashboard with 14 marketing integrations. GA4, Search Console, Google Ads, Meta, LinkedIn, TikTok, YouTube, Mailchimp — the whole stack. Users could connect any of them. OAuth worked. Badges turned green. "Active."&lt;/p&gt;

&lt;p&gt;One of them actually did anything.&lt;/p&gt;

&lt;p&gt;I didn't know this. The UI looked right. Green dots across the board. I was building features on top of integrations that were silently doing nothing. I asked Claude to audit the entire codebase against the product vision. Not fix bugs. Tell me what's real.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dispatch
&lt;/h2&gt;

&lt;p&gt;MeetKai is an AI CMO product I was building. Connect your marketing accounts, get automated audits, approve AI-generated actions. Vercel frontend, Supabase backend, FastAPI gateway running 30+ marketing skills.&lt;/p&gt;

&lt;p&gt;I wrote a dispatch prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Me:&lt;/strong&gt; go # MeetKai Dashboard — Full Gap Analysis&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Date:&lt;/strong&gt; 2026-04-05&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Dispatch prompt for an agent to audit the entire MeetKai dashboard codebase against the product vision, identify every gap, and produce a prioritized work list.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I expected a bug list. Missing error handling, broken API routes. Normal stuff.&lt;/p&gt;

&lt;p&gt;Claude went through every file in the repo. The gap analysis landed in a spec doc. It was not a bug list.&lt;/p&gt;

&lt;h2&gt;
  
  
  25% Done
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; &lt;strong&gt;Overall: 25-30% complete.&lt;/strong&gt; Here's the brutal truth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1 of 14 providers actually works&lt;/strong&gt; (GA4). GSC is broken by a one-line provider name mismatch. The other 12 connect via OAuth, show green "Active" badges, and do absolutely nothing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action execution is theater&lt;/strong&gt; — static markdown templates with &lt;code&gt;[Business Name]&lt;/code&gt; placeholders, not real AI output&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The notification toggles save preferences but have no delivery infrastructure&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thirteen integrations showing green badges. Users clicking "Sync" on provider cards where the button had no &lt;code&gt;onClick&lt;/code&gt; handler. The entire action execution system was static markdown templates with placeholder text pretending to be AI output.&lt;/p&gt;

&lt;p&gt;The GSC integration — Google Search Console, one of the two most important data sources for the product — was dead because of a single string mismatch between what the frontend sent and what the backend expected. One typo. The whole integration was a corpse with a green badge.&lt;/p&gt;

&lt;p&gt;This wasn't a half-built prototype that looked half-built. &lt;strong&gt;It looked finished.&lt;/strong&gt; Polished design. OAuth flows completing successfully. Connected status showing in the UI. You'd think everything was working and wonder why the data was empty.&lt;/p&gt;

&lt;p&gt;A feature that errors out gets caught in testing. A feature that succeeds and does nothing — that ships to production. It erodes trust for months before anyone figures it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shell and Weapon
&lt;/h2&gt;

&lt;p&gt;Then Claude dropped this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; Here's what we're actually sitting on: a &lt;strong&gt;Next.js dashboard that's 25% done&lt;/strong&gt; bolted onto a &lt;strong&gt;gateway + agent + content engine that's 70% done&lt;/strong&gt;. The frontend is a shell. The backend is a weapon. They're barely talking to each other.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I had to sit with that for a minute. The backend — the FastAPI gateway — had 30+ marketing skills, a content engine, an agent scheduler, a job queue, approval workflows. It could audit pages, generate ad copy, propose marketing actions, run multi-step campaigns. All of it was built and working.&lt;/p&gt;

&lt;p&gt;The frontend didn't know any of it existed.&lt;/p&gt;

&lt;p&gt;I'd been thinking about this as a bug-fix problem. Fix the 14 providers. Wire up the notifications. Make the action buttons work. But that framing was wrong. The real problem was architectural. The frontend was a completely separate application from the backend, with no integration between them. Fixing individual features would take months and still leave two disconnected systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Options, One Obvious Answer
&lt;/h2&gt;

&lt;p&gt;Claude laid out the architecture choices:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; The core architectural question is: &lt;strong&gt;how should the dashboard talk to the Kai brain?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A) Direct integration&lt;/strong&gt; — The Next.js app imports the Python content engine directly (requires a bridge or rewrite)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;B) Gateway-first&lt;/strong&gt; — The dashboard becomes a frontend for the existing FastAPI gateway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;C) Hybrid + CopilotKit&lt;/strong&gt; — Gateway for structured operations + CopilotKit for chat&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Option A means rewriting the Python backend in JavaScript or building a bridge layer. A month of work to solve a problem nobody has.&lt;/p&gt;

&lt;p&gt;Option C adds CopilotKit as a dependency for a chat feature the Vercel AI SDK already handles natively. More framework, more complexity, same result.&lt;/p&gt;

&lt;p&gt;We went with B. &lt;strong&gt;The dashboard becomes a thin frontend for the gateway.&lt;/strong&gt; Every button click, every chat message, every scheduled task routes through the same API. The gateway has the skills. The dashboard calls them.&lt;/p&gt;

&lt;p&gt;Your frontend should be a client for your backend. Not a parallel implementation. Not an abstraction layer. A client. I've watched three projects in the past month where teams rebuilt backend logic in the frontend "for performance" and ended up maintaining two versions of the same thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Opens This at 9am
&lt;/h2&gt;

&lt;p&gt;Next was the interaction model.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; What's the user's primary interaction model?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A) Dashboard-first, chat-assists&lt;/strong&gt; — Widgets, cards, tables. Chat slides out as a helper.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;B) Chat-first, dashboard-monitors&lt;/strong&gt; — The chat IS the product. Dashboard pages become monitoring.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;My instinct was chat-first. Feels more "AI native." Talk to your marketing agent, it does things. That's the pitch.&lt;/p&gt;

&lt;p&gt;Then I thought about who actually opens this. Small business owners. They open an app at 9am. They want a number. Is marketing working? What needs attention? What did the AI do overnight? &lt;strong&gt;They want answers on screen, not a blinking cursor waiting for a prompt.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; Business owners don't care about the interface paradigm. They care about &lt;strong&gt;"is my marketing working and what should I do next."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Dashboard-first, chat-second. The dashboard loads with answers. Chat is there when they need something specific — "write me an email campaign for the spring sale," "why did traffic drop last week." Chat is an input surface. Not the product.&lt;/p&gt;

&lt;p&gt;Get this decision wrong and you're rewriting your frontend in three months. Your state management, onboarding flow, and API design all follow from it. We almost went chat-first because it sounded cooler.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Smallest Thing Worth Charging For
&lt;/h2&gt;

&lt;p&gt;The gap analysis showed 14 providers, 30+ skills, analytics, audits, content engine, agent scheduler. Six months of work if you're not careful.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude:&lt;/strong&gt; What's the &lt;strong&gt;launch slice&lt;/strong&gt; — the smallest version a business owner would pay for?&lt;/p&gt;

&lt;p&gt;The killer loop is: &lt;strong&gt;Connect → Audit → See what's wrong → AI fixes it → See it get better&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Five things make the cut:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fix GA4 + GSC, add 2-3 more working providers&lt;/li&gt;
&lt;li&gt;Auto-run audit when accounts connect&lt;/li&gt;
&lt;li&gt;Show scores, issues, and AI fixes in the dashboard&lt;/li&gt;
&lt;li&gt;Chat to trigger skills&lt;/li&gt;
&lt;li&gt;Real action execution through the gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything else waits. Ten remaining providers. Advanced analytics. The agent scheduler. Scope for later. The loop — connect, audit, act, approve — is enough to charge money for.&lt;/p&gt;

&lt;p&gt;Here's what the final architecture looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TRIGGERS                    BRAIN                      OUTPUT
─────────────────         ──────────────────          ────────
Click "Run Audit"    →                             → Audit result
Chat: "write emails" →    Skill Router + Gateway   → Email drafts
Cron: daily 6am      →    (same execution path)    → Analytics brief
Webhook: score drop  →                             → Action proposal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Chat isn't special here. It's one of five input surfaces into the same brain. Dashboard buttons, chat, cron jobs, webhooks, new-connection triggers — all hit the same gateway. Same skills. Same approval flow.&lt;/p&gt;

&lt;p&gt;I spent weeks building features on top of integrations that didn't work. Every one of those features was wasted time. The audit took one session. I should have run it before I wrote a single line of new code.&lt;/p&gt;

&lt;p&gt;The thing I keep coming back to is the green badges. Thirteen of them. All lying. Not because anyone built them to lie — because someone built the OAuth flow, saw the badge turn green, and moved on to the next feature. Nobody went back and checked whether the data was actually flowing. The UI said it worked. That was enough.&lt;/p&gt;

&lt;p&gt;It wasn't enough. Run the audit first. Read the code, not the interface.&lt;/p&gt;

&lt;p&gt;What's the worst thing a codebase audit turned up for you — not a bug, but something that looked like it was working and wasn't?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>webdev</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Every AI Agent Disaster This Year Was a Write Without a Checkpoint</title>
      <dc:creator>connor gallic</dc:creator>
      <pubDate>Mon, 23 Mar 2026 03:29:11 +0000</pubDate>
      <link>https://dev.to/connor_gallic/every-ai-agent-disaster-this-year-was-a-write-without-a-checkpoint-3dgh</link>
      <guid>https://dev.to/connor_gallic/every-ai-agent-disaster-this-year-was-a-write-without-a-checkpoint-3dgh</guid>
      <description>&lt;p&gt;I run AI agents in production — Discord bots, email outreach, channel queues across multiple servers. More than once, a misconfigured loop or race condition caused the same message to fire twice to the same person. Same email, same channel, same queue.&lt;/p&gt;

&lt;p&gt;Nobody died. No lawsuit. But every duplicate erodes a little trust. And when I looked at why it kept happening, the root cause was always the same: &lt;strong&gt;a write executed with nothing between the decision and the action.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then I started paying attention to bigger teams hitting the exact same pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's happening everywhere
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Air Canada&lt;/strong&gt; had a chatbot that fabricated a bereavement fare refund policy out of thin air. A customer relied on it, got denied, and sued. Air Canada argued the chatbot was "a separate legal entity responsible for its own actions." The tribunal disagreed — the airline is liable for every message its bot sends, hallucinated or not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cursor's&lt;/strong&gt; support bot "Sam" told users their subscriptions were limited to a single active session. That policy didn't exist. The AI invented it. Users canceled in protest before the co-founder could publicly apologize. Most of them didn't even know Sam wasn't human.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replit's&lt;/strong&gt; coding agent deleted an entire production database — 1,200+ records — despite instructions repeated in ALL CAPS eleven times not to make changes. Then it fabricated 4,000 fake replacement records and told the operator recovery wasn't possible. It was.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon's Kiro&lt;/strong&gt; agent was assigned a minor bug fix in AWS Cost Explorer. It decided the "most efficient path to a bug-free state" was to delete the entire production environment and rebuild from scratch. 13-hour outage.&lt;/p&gt;

&lt;p&gt;Different companies, different agents, different scales. Same shape every time: the agent didn't malfunction. &lt;strong&gt;It did exactly what it was built to do.&lt;/strong&gt; A human would have paused. The agent didn't hesitate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The usual answer doesn't scale
&lt;/h2&gt;

&lt;p&gt;The first response is always "just add human-in-the-loop." Right instinct, but in practice HITL goes one of two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ad-hoc&lt;/strong&gt; — someone gets a Slack message, eyeballs it, types "looks good." No audit trail, no expiry, no record of what was approved or who approved it. Six months later when compliance asks, you're grepping Slack history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everything gets reviewed&lt;/strong&gt; — works for about a week. Then the volume makes it unsustainable. The team rubber-stamps, or they stop using agents because the overhead killed the value.&lt;/p&gt;

&lt;p&gt;The real gap is between those two extremes. Most agent writes fall into three buckets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto-approve&lt;/strong&gt; — a single support reply, a small data update, a cache refresh&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human review&lt;/strong&gt; — a bulk import over 100 records, a financial transaction, a message containing certain terms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always block&lt;/strong&gt; — writes to production infra, refunds over a threshold, legal commitments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is this logic usually lives scattered in application code. One agent has it, another doesn't. A new developer writes a new agent and skips it. Nothing is centralized, nothing is auditable.&lt;/p&gt;

&lt;h2&gt;
  
  
  So I pulled the guard logic out of my agents
&lt;/h2&gt;

&lt;p&gt;I was copy-pasting the same write-check code into every integration I built. Same patterns — deduplicate, check record count, block certain terms, hold for review over a threshold. So I extracted it into a standalone layer.&lt;/p&gt;

&lt;p&gt;Zehrava Gate is a write-path control plane. Before an agent executes a write, it submits an &lt;strong&gt;intent&lt;/strong&gt;. Gate evaluates &lt;strong&gt;policy&lt;/strong&gt;, optionally holds for &lt;strong&gt;human approval&lt;/strong&gt;, and issues a signed &lt;strong&gt;execution order&lt;/strong&gt;. Every decision is logged.&lt;/p&gt;

&lt;p&gt;The policies are YAML — deterministic, no LLM in the loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;support-reply&lt;/span&gt;
&lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;zendesk.reply&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;intercom.reply&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;block_if_terms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;refund&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;guaranteed"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;full&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;refund"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;legal&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;action"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;auto_approve_under&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;crm-import&lt;/span&gt;
&lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;salesforce.import&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;hubspot.contacts&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;auto_approve_under&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
&lt;span class="na"&gt;require_approval_over&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
&lt;span class="na"&gt;expiry_minutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;finance-high-risk&lt;/span&gt;
&lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;stripe.refund&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;quickbooks.journal&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;require_approval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
&lt;span class="na"&gt;expiry_minutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
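
&lt;p&gt;Evaluating one of these policies is deterministic dictionary-walking. A simplified sketch of the evaluation order, not the production evaluator:&lt;/p&gt;

```python
# Evaluation order for the policies above: blocked terms first, then
# mandatory review, then count thresholds. Simplified sketch, not the
# production evaluator.
def evaluate(policy, payload, record_count):
    text = payload.lower()
    if any(term in text for term in policy.get("block_if_terms", [])):
        return "blocked"
    if policy.get("require_approval") == "always":
        return "pending_approval"
    over = policy.get("require_approval_over")
    if over is not None and record_count > over:
        return "pending_approval"
    under = policy.get("auto_approve_under")
    if under is not None and record_count <= under:
        return "approved"
    return "pending_approval"
```

&lt;p&gt;Anything that falls through defaults to human review, which is the safe failure mode for a write path.&lt;/p&gt;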


&lt;p&gt;The integration is a few lines:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Gate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zehrava-gate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:4000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gate_sk_...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;gate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;propose&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Thank you — your issue is resolved.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zendesk.reply&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;       &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;support-reply&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;recordCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blocked&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blockReason&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pending_approval&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="c1"&gt;// wait for human&lt;/span&gt;
&lt;span class="c1"&gt;// approved — proceed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;zehrava_gate&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Gate&lt;/span&gt;

&lt;span class="n"&gt;gate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Gate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:4000&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gate_sk_...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;propose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Thank you — your issue is resolved.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;destination&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zendesk.reply&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support-reply&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;record_count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;A human writes the policy when they're thinking clearly. Gate enforces it mechanically. Same input, same output, every time.&lt;/p&gt;
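A minimal sketch of what such a policy could look like. Every field name below is my own invention for illustration, not zehrava-gate's documented schema (see the repo for the real one):

```yaml
# Hypothetical "support-reply" policy sketch. All field names here are
# illustrative, not zehrava-gate's actual schema.
name: support-reply
destination: zendesk.reply
rules:
  max_record_count: 1              # one ticket reply per intent
  auto_approve: true               # routine replies skip human review
  require_approval_if:
    payload_contains: ["refund", "credit"]   # money talk gets a human
  deny_if:
    outside_hours: "22:00-06:00"   # no customer emails overnight
```

The point is not this particular shape. It's that the rules are written once, in a calm moment, and then applied identically on every call.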
&lt;h2&gt;
  
  
  "What if the agent just skips the SDK?"
&lt;/h2&gt;

&lt;p&gt;That's the right question. The SDK is cooperative — it only works if the agent calls it. Fine for agents you build yourself. Not enough for agents you don't fully control.&lt;/p&gt;

&lt;p&gt;Gate V3 closes that gap with a proxy. It sits in the network path between the agent and the destination API. Two environment variables, no code changes:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HTTP_PROXY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://gate.internal:4001
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HTTPS_PROXY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://gate.internal:4001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Every outbound HTTP call routes through Gate. The destination host maps to a policy. Approved requests get forwarded. Blocked requests get a 403 with the reason. Pending requests return a 202 and hold until a human approves.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;── V2: cooperative ──────────────────────────────
Agent → SDK.propose() → Gate API → approved → Agent executes
                         ↑ optional — agent can skip

── V3: enforced ─────────────────────────────────
Agent → HTTP request → Gate Proxy → approved → forwards to destination
                                  → blocked  → 403, reason in response
                                  → pending  → 202, held for review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
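From the agent's side, a sketch of one way to act on those verdicts. The status-code semantics (403 blocked, 202 held) come from the description above; the helper name and its return values are mine, not part of any zehrava-gate API:

```javascript
// Map a Gate proxy response status to an agent-side action.
// 403 = policy blocked the write, 202 = held for human approval,
// any other 2xx = Gate forwarded the request to the real destination.
function interpretGateStatus(status) {
  if (status === 403) return 'blocked';
  if (status === 202) return 'pending_approval';
  if (Math.floor(status / 100) === 2) return 'forwarded';
  return 'error';
}

// Usage in an agent (fetch is global in Node 18+; whether it honors
// HTTP_PROXY depends on your runtime's proxy configuration):
async function sendReply(url, body) {
  const res = await fetch(url, { method: 'POST', body: JSON.stringify(body) });
  const verdict = interpretGateStatus(res.status);
  if (verdict === 'blocked') throw new Error(await res.text()); // reason in body
  if (verdict === 'pending_approval') return null; // a human will decide
  return res; // the write actually reached the destination
}
```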


&lt;p&gt;In vault mode, the agent never even sees production credentials. Gate fetches them from 1Password or HashiCorp Vault at execution time — after approval, for the approved intent only — then discards them from memory. A compromised agent has nothing to exfiltrate.&lt;/p&gt;
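A rough sketch of the shape of that vault-mode flow, with invented method names (`vault.read`, `destination.send`) standing in for the real 1Password/Vault and destination clients; this is the idea, not Gate's implementation:

```javascript
// Hypothetical vault-mode execution inside Gate's worker. The credential
// exists only between approval and execution; the agent never holds it.
async function executeApproved(order, vault, destination) {
  // Fetched only after approval, scoped to this one approved intent.
  let secret = await vault.read(order.credentialPath);
  try {
    // One approved write, authenticated with the short-lived secret.
    return await destination.send(order.payload, { auth: secret });
  } finally {
    secret = null; // discard from memory immediately after use
  }
}
```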

&lt;p&gt;V2 gives you guardrails. V3 gives you a wall.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why YAML and not another LLM?
&lt;/h2&gt;

&lt;p&gt;The obvious design for a safety layer would be another AI evaluating the first AI's output. But that introduces the same unpredictability you're trying to remove. An LLM deciding "should this agent be allowed to send this email?" will occasionally say yes when it shouldn't. That's the whole problem.&lt;/p&gt;

&lt;p&gt;Gate evaluates intents against static YAML rules instead: deterministic, the same verdict for the same input every time. No prompt injection. No hallucination. No "the safety model was feeling generous today."&lt;/p&gt;

&lt;p&gt;YAML is boring. That's the feature.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;MIT licensed. Self-hostable.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;zehrava-gate
npx zehrava-gate &lt;span class="nt"&gt;--port&lt;/span&gt; 4000 &lt;span class="nt"&gt;--policy-dir&lt;/span&gt; ./policies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;zehrava-gate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/cgallic" rel="noopener noreferrer"&gt;
        cgallic
      &lt;/a&gt; / &lt;a href="https://github.com/cgallic/zehrava-gate" rel="noopener noreferrer"&gt;
        zehrava-gate
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      The safe commit layer for AI agents — approval, policy, and audit before any agent output reaches production
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Zehrava Gate&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Write-path control plane for AI agents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://zehrava.com" rel="nofollow noopener noreferrer"&gt;zehrava.com&lt;/a&gt; · &lt;a href="https://www.npmjs.com/package/zehrava-gate" rel="nofollow noopener noreferrer"&gt;npm&lt;/a&gt; · &lt;a href="https://pypi.org/project/zehrava-gate/" rel="nofollow noopener noreferrer"&gt;PyPI&lt;/a&gt; · &lt;a href="https://zehrava.com/demo" rel="nofollow noopener noreferrer"&gt;Live demo&lt;/a&gt; · &lt;a href="https://zehrava.com/docs" rel="nofollow noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/cgallic/zehrava-gate/./gate-demo.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fcgallic%2Fzehrava-gate%2F.%2Fgate-demo.gif" alt="Zehrava Gate demo"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Agents can read systems freely. Any real-world action — sending email, importing CRM records, updating databases, issuing refunds, publishing files — must pass through Gate first.&lt;/p&gt;

&lt;p&gt;Agents submit an intent. Gate evaluates policy. Optionally requests human approval. Issues a signed execution order. Every step is deterministic, auditable, and fail-closed.&lt;/p&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;intent submitted
  ↓
policy evaluated (YAML, deterministic — no LLM)
  ├── blocked              → terminal
  ├── duplicate_blocked    → terminal (idempotency key matched)
  ├── approved             → auto-approved; eligible for execution
  └── pending_approval     → human review required
        ├── approved        → eligible for execution
        ├── rejected        → terminal
        └── expired         → terminal
approved
  ↓
execution order issued (gex_ token, 15min TTL)
  ↓
worker executes in your VPC
  ↓
outcome reported
  ├── execution_succeeded  → terminal
  └── execution_failed     → terminal
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Install&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; JS SDK + server CLI&lt;/span&gt;
npm&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/cgallic/zehrava-gate" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's the worst write an AI agent has made in your system?&lt;/strong&gt; Not the dramatic database deletions — the quiet ones. The duplicate email, the overwritten field, the message that went to the wrong channel at 2am.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
