<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dylan Brown</title>
    <description>The latest articles on DEV Community by Dylan Brown (@dylan_brown_4c803aefcfe51).</description>
    <link>https://dev.to/dylan_brown_4c803aefcfe51</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3845674%2F558fcc6f-3390-42e3-9ed6-37646f67d30d.jpg</url>
      <title>DEV Community: Dylan Brown</title>
      <link>https://dev.to/dylan_brown_4c803aefcfe51</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dylan_brown_4c803aefcfe51"/>
    <language>en</language>
    <item>
      <title>Building an Autonomous AI Agent That Writes Novels — Architecture of a 10-Agent Pipeline</title>
      <dc:creator>Dylan Brown</dc:creator>
      <pubDate>Fri, 27 Mar 2026 16:00:03 +0000</pubDate>
      <link>https://dev.to/dylan_brown_4c803aefcfe51/building-an-autonomous-ai-agent-that-writes-novels-architecture-of-a-10-agent-pipeline-59pf</link>
      <guid>https://dev.to/dylan_brown_4c803aefcfe51/building-an-autonomous-ai-agent-that-writes-novels-architecture-of-a-10-agent-pipeline-59pf</guid>
      <description>&lt;p&gt;AI-generated fiction has a consistency problem. Ask any LLM to write chapter 1 of a novel and it'll do a decent job. Ask it to write chapter 30 and it has no idea what happened in the first 29.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/Narcooo/inkos" rel="noopener noreferrer"&gt;InkOS&lt;/a&gt; to solve this. It's an open-source CLI AI agent that writes, audits, and revises novels autonomously — using a pipeline of 10 specialized AI agents with persistent state tracking across the entire book.&lt;/p&gt;

&lt;p&gt;This post walks through the architecture and the specific engineering problems it solves.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Most AI writing tools work like this: you give the model a prompt, it generates text, you copy it, repeat. There's no memory between chapters. After 20+ chapters, you run into:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Continuity breaks&lt;/strong&gt; — characters remember things they never witnessed, weapons reappear after being lost, relationships reset&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context bloat&lt;/strong&gt; — injecting all previous state into each prompt hits token limits, causes 400 errors, costs $200/chapter in API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hook accumulation&lt;/strong&gt; — the model plants plot hooks but never resolves them. After 30 chapters you have 40+ dangling threads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI voice&lt;/strong&gt; — every paragraph uses the same words ("delve", "tapestry", "testament", "intricate"), sentence structure is monotonous, and there's excessive summarization&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The Architecture: 10 Agents in Sequence&lt;/h2&gt;

&lt;p&gt;Instead of one model doing everything, InkOS splits the work across 10 specialized agents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Radar → Planner → Composer → Architect → Writer → Observer → Reflector → Normalizer → Auditor → Reviser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent has exactly one job:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Radar&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scans platform trends and reader preferences (pluggable, skippable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Planner&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Reads author intent + current focus + memory retrieval, produces chapter intent with must-keep/must-avoid lists&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Composer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Selects context from truth files by relevance, compiles the rule stack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architect&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plans chapter structure: outline, scene beats, pacing targets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Produces prose from composed context (length-governed, dialogue-driven)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Observer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Over-extracts 9 categories of facts from the chapter text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reflector&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Outputs Zod-validated JSON deltas for state updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Normalizer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single-pass compress/expand to hit the target word count band&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Auditor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Validates draft against 7 truth files across 33 dimensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reviser&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Auto-fixes critical issues, flags others for human review&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If the audit fails, the pipeline loops back: revise → re-audit until all critical issues are resolved.&lt;/p&gt;
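&lt;p&gt;The loop is easy to picture in code. Here's a minimal sketch of the idea; the names (&lt;code&gt;finalizeChapter&lt;/code&gt;, &lt;code&gt;AuditReport&lt;/code&gt;) are invented for this post, not InkOS's actual internals:&lt;/p&gt;

```typescript
// Illustrative sketch of the revise/re-audit loop. The names here
// (finalizeChapter, AuditReport) are invented for illustration and
// are not InkOS's real API.
type Issue = { severity: "critical" | "minor"; note: string };
type AuditReport = { issues: Issue[] };

function hasCriticalIssues(report: AuditReport): boolean {
  return report.issues.some(function (i) { return i.severity === "critical"; });
}

// audit() and revise() stand in for the Auditor and Reviser agents.
function finalizeChapter(
  draft: string,
  audit: (text: string) => AuditReport,
  revise: (text: string, report: AuditReport) => string,
  maxRounds: number = 3
): string {
  let text = draft;
  for (let round = 0; round !== maxRounds; round += 1) {
    const report = audit(text);
    // Done once no critical issues remain; minor ones go to human review.
    if (!hasCriticalIssues(report)) return text;
    text = revise(text, report); // loop back: revise, then re-audit
  }
  return text; // cap the loop so a stubborn draft can't spin forever
}
```

The round cap matters in practice: without one, an Auditor and Reviser that disagree can ping-pong indefinitely.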

&lt;h2&gt;State Management: 7 Truth Files&lt;/h2&gt;

&lt;p&gt;Every book maintains 7 canonical truth files as the single source of truth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;current_state.md&lt;/code&gt; — character locations, relationships, knowledge, emotional arcs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;particle_ledger.md&lt;/code&gt; — resource accounting: items, money, stats with quantities&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pending_hooks.md&lt;/code&gt; — open plot threads, foreshadowing, unresolved conflicts&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chapter_summaries.md&lt;/code&gt; — per-chapter summaries with state changes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;subplot_board.md&lt;/code&gt; — A/B/C subplot line status tracking&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;emotional_arcs.md&lt;/code&gt; — per-character emotion tracking and growth&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;character_matrix.md&lt;/code&gt; — interaction matrix, encounter records, information boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Auditor checks every draft against these files. If a character "remembers" something they never witnessed, or pulls a weapon they lost two chapters ago — the auditor catches it.&lt;/p&gt;

&lt;p&gt;Since v0.6, truth files are stored as Zod-validated JSON (&lt;code&gt;story/state/*.json&lt;/code&gt;). The Reflector outputs JSON deltas — not full markdown rewrites. Corrupted data is rejected, not propagated.&lt;/p&gt;
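&lt;p&gt;To make the delta idea concrete, here's a dependency-free sketch. The production code validates with Zod; the guard below hand-rolls the same "reject before merge" behavior, and the field names (&lt;code&gt;file&lt;/code&gt;, &lt;code&gt;path&lt;/code&gt;, &lt;code&gt;value&lt;/code&gt;) are simplified:&lt;/p&gt;

```typescript
// Simplified shape of a state delta; the real schemas carry more fields.
type StateDelta = {
  file: string;   // which truth file to patch, e.g. "current_state"
  path: string;   // dot-path into that file's JSON
  value: unknown; // new value at that path
};

// Hand-rolled guard standing in for a Zod schema: malformed deltas
// are rejected instead of being merged into canonical state.
function isStateDelta(x: unknown): x is StateDelta {
  if (typeof x !== "object" || x === null) return false;
  const d = x as { [k: string]: unknown };
  if (typeof d.file !== "string") return false;
  if (typeof d.path !== "string") return false;
  return "value" in d;
}

// Apply one validated delta by walking the dot-path.
function applyDelta(state: { [k: string]: unknown }, delta: StateDelta): void {
  const keys = delta.path.split(".");
  let node = state;
  for (const key of keys.slice(0, -1)) {
    if (typeof node[key] !== "object" || node[key] === null) node[key] = {};
    node = node[key] as { [k: string]: unknown };
  }
  node[keys[keys.length - 1]] = delta.value;
}
```

The payoff of deltas over full-file rewrites: each update is small enough to validate strictly, and a bad one fails loudly instead of silently overwriting canon.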

&lt;h2&gt;Solving Context Bloat: SQLite Temporal Memory&lt;/h2&gt;

&lt;p&gt;On Node 22+, InkOS uses a SQLite temporal memory database (&lt;code&gt;story/memory.db&lt;/code&gt;). Instead of injecting all 7 truth files into every prompt (which blows up after 20 chapters), the Composer agent does relevance-based retrieval — pulling only the facts, hooks, and summaries that matter for the current chapter.&lt;/p&gt;

&lt;p&gt;This was the single biggest improvement in v0.6. Before: context bloat caused 400 errors and made each chapter cost $200+ in API calls. After: selective retrieval keeps context lean regardless of book length.&lt;/p&gt;
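&lt;p&gt;The core of relevance-based retrieval is small: score each stored fact against the chapter intent and keep the top few. A toy keyword-overlap version (the real Composer's scoring is richer):&lt;/p&gt;

```typescript
// Toy relevance retrieval: keyword overlap between the chapter intent
// and each stored fact. This is a simplified illustration; it just
// shows why selective retrieval beats "inject everything".
type Fact = { text: string; chapter: number };

function tokenize(s: string): string[] {
  return s.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
}

function retrieve(facts: Fact[], intent: string, k: number): Fact[] {
  const wanted = new Set(tokenize(intent));
  return facts
    .map(function (f) {
      const hits = tokenize(f.text).filter(function (w) { return wanted.has(w); });
      return { fact: f, score: hits.length };
    })
    .filter(function (s) { return s.score > 0; })
    .sort(function (a, b) { return b.score - a.score; })
    .slice(0, k)
    .map(function (s) { return s.fact; });
}
```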

&lt;h2&gt;Hook Governance&lt;/h2&gt;

&lt;p&gt;One of the hardest problems in long-form AI fiction: the model loves planting hooks but never pays them off. After 30 chapters you'd have 40+ open threads, none resolving.&lt;/p&gt;

&lt;p&gt;The Planner agent now generates a &lt;code&gt;hookAgenda&lt;/code&gt; — scheduling which hooks to advance and which to resolve in each chapter. &lt;code&gt;analyzeHookHealth&lt;/code&gt; audits hook debt, &lt;code&gt;evaluateHookAdmission&lt;/code&gt; blocks duplicate hooks, and new &lt;code&gt;mention&lt;/code&gt; semantics prevent fake advancement (where the model references a hook without actually progressing it).&lt;/p&gt;
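&lt;p&gt;In spirit, the hook-debt audit is simple bookkeeping. A stripped-down version (the thresholds and field names here are simplified):&lt;/p&gt;

```typescript
// Stripped-down hook-debt check: count open hooks and flag ones that
// have gone stale. Thresholds and fields are simplified for this post.
type Hook = { id: string; openedAt: number; resolvedAt?: number };

function hookDebt(hooks: Hook[], currentChapter: number, staleAfter: number = 10) {
  const open = hooks.filter(function (h) { return h.resolvedAt === undefined; });
  const stale = open.filter(function (h) {
    return currentChapter - h.openedAt >= staleAfter;
  });
  return {
    openCount: open.length,                                // total unresolved threads
    staleIds: stale.map(function (h) { return h.id; }),    // threads overdue for payoff
  };
}
```

The Planner can then schedule the stale IDs for resolution before admitting any new hooks.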

&lt;h2&gt;De-AI-ification&lt;/h2&gt;

&lt;p&gt;Every genre profile includes a fatigue word list. For LitRPG: "delve", "tapestry", "testament", "intricate", "pivotal". The Auditor flags these automatically.&lt;/p&gt;
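&lt;p&gt;The detection side is the easy part. A stripped-down version of the fatigue-word check:&lt;/p&gt;

```typescript
// Simplified fatigue-word pass: count hits from the genre profile's
// word list so the Auditor can flag them. The real check covers more
// than a flat word list.
const FATIGUE_WORDS = ["delve", "tapestry", "testament", "intricate", "pivotal"];

function flagFatigueWords(prose: string): { [word: string]: number } {
  const counts: { [word: string]: number } = {};
  for (const w of prose.toLowerCase().split(/[^a-z]+/)) {
    if (FATIGUE_WORDS.includes(w)) counts[w] = (counts[w] ?? 0) + 1;
  }
  return counts;
}
```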

&lt;p&gt;But detection alone isn't enough — InkOS bakes de-AI-ification into the Writer agent's prompts at the source: banned sentence patterns, style fingerprint injection, dialogue-driven scene guidance. &lt;code&gt;revise --mode anti-detect&lt;/code&gt; runs dedicated anti-detection rewriting on existing chapters.&lt;/p&gt;

&lt;p&gt;You can also clone any author's style: &lt;code&gt;inkos style analyze reference.txt&lt;/code&gt; extracts a statistical fingerprint (sentence length distribution, word frequency, rhythm profiles), and &lt;code&gt;inkos style import&lt;/code&gt; injects it into all future chapters.&lt;/p&gt;
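&lt;p&gt;One ingredient of that fingerprint, the sentence-length distribution, fits in a few lines (simplified; the full analyzer tracks more signals):&lt;/p&gt;

```typescript
// One signal in a style fingerprint: sentence-length distribution.
// Simplified illustration; word frequency and rhythm profiling are
// separate passes.
function sentenceLengths(text: string): number[] {
  return text
    .split(/[.!?]+/)
    .map(function (s) { return s.trim(); })
    .filter(Boolean)
    .map(function (s) { return s.split(/\s+/).length; });
}

function meanSentenceLength(text: string): number {
  const lens = sentenceLengths(text);
  if (lens.length === 0) return 0;
  return lens.reduce(function (a, b) { return a + b; }, 0) / lens.length;
}
```

Injecting a target distribution into the Writer's prompt pushes generated prose away from the monotonous cadence that makes AI text easy to spot.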

&lt;h2&gt;Genre Support&lt;/h2&gt;

&lt;p&gt;10 English-native genre profiles, each with dedicated pacing rules, audit dimensions, and fatigue word lists:&lt;/p&gt;

&lt;p&gt;LitRPG, Progression Fantasy, Isekai, Cultivation, System Apocalypse, Dungeon Core, Romantasy, Sci-Fi, Tower Climber, Cozy Fantasy — plus 5 Chinese web novel genres.&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; @actalk/inkos
inkos book create &lt;span class="nt"&gt;--title&lt;/span&gt; &lt;span class="s2"&gt;"The Last Delver"&lt;/span&gt; &lt;span class="nt"&gt;--genre&lt;/span&gt; litrpg
inkos write next
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One command writes a full chapter: draft → audit → auto-revise. Run &lt;code&gt;inkos up&lt;/code&gt; for daemon mode that writes chapters on a schedule.&lt;/p&gt;

&lt;p&gt;Works with Claude, GPT-4, or any OpenAI-compatible API, including local models. Multi-model routing lets you put Claude on the Writer and GPT-4o on the Auditor.&lt;/p&gt;

&lt;p&gt;InkOS is also published as an &lt;a href="https://clawhub.ai" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; skill — install with &lt;code&gt;clawhub install inkos&lt;/code&gt; and any compatible agent can invoke it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/Narcooo/inkos" rel="noopener noreferrer"&gt;github.com/Narcooo/inkos&lt;/a&gt; (2.4k stars, MIT license)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;npm&lt;/strong&gt;: &lt;code&gt;npm i -g @actalk/inkos&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;Would love feedback from anyone working on multi-agent systems, long-context state management, or creative AI. What continuity problems have you run into with long-form AI generation?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>typescript</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
