<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Comad.J</title>
    <description>The latest articles on DEV Community by Comad.J (@_c4b82d2458240eece0292).</description>
    <link>https://dev.to/_c4b82d2458240eece0292</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3878602%2F55141970-d8b2-4c58-b218-0f2bac39ddc4.png</url>
      <title>DEV Community: Comad.J</title>
      <link>https://dev.to/_c4b82d2458240eece0292</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_c4b82d2458240eece0292"/>
    <language>en</language>
    <item>
      <title>Why I Reversed My Own Architecture After 27 AI Luminaries Reviewed It</title>
      <dc:creator>Comad.J</dc:creator>
      <pubDate>Tue, 14 Apr 2026 13:10:11 +0000</pubDate>
      <link>https://dev.to/_c4b82d2458240eece0292/why-i-reversed-my-own-architecture-after-27-ai-luminaries-reviewed-it-gai</link>
      <guid>https://dev.to/_c4b82d2458240eece0292/why-i-reversed-my-own-architecture-after-27-ai-luminaries-reviewed-it-gai</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TL;DR — I built a personal knowledge system where the act of reading continuously reshapes the tools you read with. Six agents on Claude Code, MCP, Neo4j, $0/day runtime. Today I simulated 27 software luminaries reviewing it, shipped four response packs, and reversed my own repository-strategy ADR from two weeks ago. This post is the honest tour.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The frustration that started it
&lt;/h2&gt;

&lt;p&gt;Most mornings I read arXiv. By Friday, I can't remember what Tuesday's paper argued. I have Notion pages, highlighted PDFs, bookmarked threads — and yet when someone asks me "so, what did you learn this month?", I hesitate.&lt;/p&gt;

&lt;p&gt;The ritual scales. The accumulation doesn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;comad-world&lt;/strong&gt; began at that asymmetry. What if reading a paper mutated the system I use to read the &lt;em&gt;next&lt;/em&gt; paper? Not as a prompt I remember to invoke, but as a trajectory the graph silently integrates every day.&lt;/p&gt;

&lt;p&gt;That one idea is the whole product.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ear (listen) → brain (think) → eye (predict)
                  ↑
photo (edit)    sleep (remember)    voice (automate)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six agents, one config file (&lt;code&gt;comad.config.yaml&lt;/code&gt;). Swap the config and the whole system reconfigures for &lt;code&gt;ai-ml&lt;/code&gt; or &lt;code&gt;finance&lt;/code&gt; or &lt;code&gt;biotech&lt;/code&gt;. I shipped v0.2.0 two weeks ago; it picked up 15 GitHub stars off an HN post and runs a decent 1,336-test CI.&lt;/p&gt;
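&lt;p&gt;The post doesn't show the config schema, but the domain-swap idea is concrete enough to sketch. A hypothetical &lt;code&gt;comad.config.yaml&lt;/code&gt; (every key below is illustrative, not the project's actual schema):&lt;/p&gt;

```yaml
# Hypothetical sketch of comad.config.yaml. Field names are illustrative,
# not the real schema; only the file name and the domain values come from the post.
domain: ai-ml            # swap to finance or biotech to reconfigure all six agents
agents:
  ear:
    feeds: [arxiv-cs-ai, hn-front-page]   # domain-specific sources
  brain:
    graph: neo4j://localhost:7687
  eye:
    lenses: [trend, prediction]
```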

&lt;p&gt;It was fine. It needed a mirror.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 27-angle luminary review
&lt;/h2&gt;

&lt;p&gt;I couldn't afford to wait for real user feedback to catch structural problems — at 15 stars, the feedback loop is too thin. So I simulated reviewers. Not cosplay; disciplined role-prompting across 27 distinct angles, each with its own decision rubric.&lt;/p&gt;

&lt;p&gt;Here's the short version of how they scored it:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Angle&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;What they flagged&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ousterhout (design philosophy)&lt;/td&gt;
&lt;td&gt;9/10&lt;/td&gt;
&lt;td&gt;Deep modules, shallow interfaces — ADRs pay off.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stallman (freedom, local-first)&lt;/td&gt;
&lt;td&gt;9/10&lt;/td&gt;
&lt;td&gt;Local Ollama, Claude Max OAuth, no telemetry.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Karpathy (simplicity)&lt;/td&gt;
&lt;td&gt;8.5/10&lt;/td&gt;
&lt;td&gt;6 modules + 4 MCP + 2 Neo4j is a lot for a solo project.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kleppmann (reliability)&lt;/td&gt;
&lt;td&gt;8/10&lt;/td&gt;
&lt;td&gt;Trust boundaries clear, but no graph backup drill.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schneier (security)&lt;/td&gt;
&lt;td&gt;8/10&lt;/td&gt;
&lt;td&gt;28 MCP tools = attack surface; threat model missing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LeCun (world models)&lt;/td&gt;
&lt;td&gt;7.5/10&lt;/td&gt;
&lt;td&gt;"Prediction accuracy" tracked but not calibrated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Norman (first-run UX)&lt;/td&gt;
&lt;td&gt;7/10&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;./install.sh&lt;/code&gt; then... now what?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dijkstra (rigor)&lt;/td&gt;
&lt;td&gt;7.5/10&lt;/td&gt;
&lt;td&gt;1,336 tests, but mostly example-based.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Popper (falsifiability)&lt;/td&gt;
&lt;td&gt;6.5/10&lt;/td&gt;
&lt;td&gt;How do wrong predictions decay a lens?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O'Neil (algorithmic bias)&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;31 RSS feeds = big-tech echo chamber risk.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pearl (causal inference)&lt;/td&gt;
&lt;td&gt;6.5/10&lt;/td&gt;
&lt;td&gt;Graph edges are associative; where's "intervention"?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Moore (crossing the chasm)&lt;/td&gt;
&lt;td&gt;6.5/10&lt;/td&gt;
&lt;td&gt;Beachhead too wide. Pick one persona.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Harari (narrative)&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;Engineering 9, storytelling 6. README leads with features.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thiel (moat)&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;What's the unreplicable secret, honestly?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wilson (ecosystems)&lt;/td&gt;
&lt;td&gt;6.5/10&lt;/td&gt;
&lt;td&gt;Pipeline is a food chain, not mutualism.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Jepsen (chaos)&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;No partial-failure playbook.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Average: 7.7/10.&lt;/strong&gt; Strong in the bones, weak in the places that decide whether it grows.&lt;/p&gt;

&lt;p&gt;The real insight wasn't any single score. It was the pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Engineering maturity 9/10. Narrative maturity 6/10. Observability maturity 6-7/10. Epistemic hygiene 6-7/10.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A system that's measured by its code looks healthy. A system that's measured by whether it would &lt;strong&gt;survive a bias audit, a chaos day, or a new visitor's first 90 seconds&lt;/strong&gt; looks much less healthy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four response packs
&lt;/h2&gt;

&lt;p&gt;I grouped the gaps into four independent packs and shipped them in v0.3.0. Small enough that each is a single commit; independent enough to parallelize.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pack A — Narrative (Harari · Moore · Thiel · McLuhan)
&lt;/h3&gt;

&lt;p&gt;The README's old hero: &lt;em&gt;"A self-evolving personal knowledge system — what you read automatically improves your tools."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Features. Nouns. Forgettable.&lt;/p&gt;

&lt;p&gt;New hero:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You read arXiv every morning. By Friday, you can't remember what Tuesday's paper argued.&lt;br&gt;
Comad World turns each paper you read into a graph edge, a sharpened retrieval lens, a calibrated prediction.&lt;br&gt;
Your reading stops evaporating — it compounds into a system that thinks alongside you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then two new documents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;STORY.md&lt;/code&gt; — origin, why six modules, two real failure stories (the Neo4j single-instance bottleneck, where p95 only dropped from 20.7s to 13.8s once I split the graph across two instances; the 17K-line cleanup that had to come &lt;em&gt;before&lt;/em&gt; v0.2.0 because over-design had been accumulating silently).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docs/moat.md&lt;/code&gt; — a Thiel-shaped answer. The moat isn't any single thing. It's a &lt;strong&gt;multiplicative&lt;/strong&gt; combination: self-evolving loop × Claude Max $0/day × local-first. With one axis missing the moat collapses. With all three, time widens the gap.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pack B — Observability (Jepsen · Dean · Vogels)
&lt;/h3&gt;

&lt;p&gt;Upgrade/rollback/lock DX was 8/10. SLO/SLI was 0/10 because it didn't exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docs/slo.md&lt;/code&gt;: three SLIs (brain query p95 latency target ≤15s; crawl success rate ≥95%/24h; MCP server uptime ≥99%/month).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docs/chaos-drill.md&lt;/code&gt;: partial-failure playbook for brain, eye, and each Neo4j instance. Predicted behavior, manual reproduction, recovery commands, verification.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;brain/docs/query-plan.md&lt;/code&gt;: how to capture Neo4j Cypher EXPLAIN/PROFILE output, with one worked example.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scripts/comad status&lt;/code&gt; now prints &lt;code&gt;SLI summary&lt;/code&gt; — stubbed at first, then genuinely wired two commits later when I added p95 sample tracking to &lt;code&gt;brain/packages/core/src/perf.ts&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// perf.ts — sample ring buffer (cap 1000) → p95 calculation&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;MAX_SAMPLES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;writeSnapshot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;overallSamples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flatMap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;samples&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;p95_ms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;percentile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;overallSamples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;snapshot&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;comad_brain_perf&lt;/code&gt; MCP tool now writes that snapshot after each call, so the shell status command reads real numbers without a live MCP roundtrip.&lt;/p&gt;
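&lt;p&gt;The &lt;code&gt;perf.ts&lt;/code&gt; snippet above calls a &lt;code&gt;percentile&lt;/code&gt; helper and mentions a 1000-sample ring buffer without showing either. Here is a minimal sketch consistent with that code (nearest-rank percentile on a pre-sorted array; the &lt;code&gt;recordSample&lt;/code&gt; name and &lt;code&gt;Timing&lt;/code&gt; shape are my assumptions, not the project's actual API):&lt;/p&gt;

```typescript
// Sketch of the helpers implied by perf.ts. recordSample and Timing are
// assumed names; only MAX_SAMPLES, timings, and percentile appear in the post.
const MAX_SAMPLES = 1000;

interface Timing { samples: number[]; }
const timings: Record<string, Timing> = {};

export function recordSample(op: string, ms: number): void {
  const t = (timings[op] ??= { samples: [] });
  t.samples.push(ms);
  if (t.samples.length > MAX_SAMPLES) t.samples.shift(); // drop oldest: ring-buffer cap
}

// Nearest-rank percentile; expects the caller to sort, as writeSnapshot does.
export function percentile(sorted: number[], p: number): number {
  if (sorted.length === 0) return 0;
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}
```

&lt;p&gt;The snapshot-to-file handoff is the design choice worth noting: the status command reads a JSON file, so it never blocks on (or wakes up) the MCP server just to print numbers.&lt;/p&gt;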

&lt;h3&gt;
  
  
  Pack C — Epistemic hygiene (Popper · O'Neil · Gebru · Pearl · Korzybski)
&lt;/h3&gt;

&lt;p&gt;This pack is the one I'm proudest of. A self-evolving loop that never asks "am I converging on truth or on myself?" is a bias amplifier with good PR.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;eye/docs/falsification.md&lt;/code&gt;&lt;/strong&gt; (Popper). When an &lt;code&gt;eye&lt;/code&gt; lens predicts wrong, its weight decays: &lt;code&gt;w_new = w_old × 0.9^n&lt;/code&gt;. Predictions that can't even be falsified (no verifiable outcome) are excluded from the log entirely. A lens that can't be wrong can't earn trust.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ear/docs/source-diversity.md&lt;/code&gt;&lt;/strong&gt; (O'Neil). The 31 RSS feeds skew hard toward big tech and English-speaking academia. Three monitoring metrics: &lt;code&gt;BigTechRatio&lt;/code&gt;, &lt;code&gt;RegionDiversity&lt;/code&gt;, &lt;code&gt;PerspectiveSpread&lt;/code&gt;. When any reaches "severe," the weekly digest gets a manual-supplement flag before it goes out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;brain/docs/model-cards/&lt;/code&gt;&lt;/strong&gt; (Gebru). Google-format model cards for &lt;code&gt;synth-classifier&lt;/code&gt; and &lt;code&gt;eye-lens&lt;/code&gt;: intended use, training data, known failure modes, ethical considerations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;brain/docs/causal-edges.md&lt;/code&gt;&lt;/strong&gt; (Pearl + Korzybski). Edge typology (&lt;code&gt;assoc&lt;/code&gt;/&lt;code&gt;corr&lt;/code&gt;/&lt;code&gt;causal&lt;/code&gt;), intervention-evidence requirement before promotion, temporal decay rules (Korzybski: the map is not the territory; old nodes should &lt;em&gt;visibly&lt;/em&gt; fade).&lt;/li&gt;
&lt;/ul&gt;
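&lt;p&gt;The decay rule in the falsification doc is simple enough to state as code. A sketch of how I read it: the &lt;code&gt;Prediction&lt;/code&gt; type and both function names are illustrative guesses at an implementation; only the &lt;code&gt;w_new = w_old × 0.9^n&lt;/code&gt; formula and the exclusion rule come from the doc.&lt;/p&gt;

```typescript
// w_new = w_old * 0.9^n, where n counts falsified predictions for the lens.
// The Prediction type and both function names are illustrative, not the
// project's actual API.
interface Prediction { lensId: string; verifiable: boolean; correct?: boolean; }

export function decayLensWeight(weight: number, wrongCount: number): number {
  return weight * Math.pow(0.9, wrongCount);
}

// Unverifiable predictions are excluded from the log entirely:
// a lens that can't be wrong can't earn (or lose) trust.
export function countWrong(log: Prediction[], lensId: string): number {
  return log.filter(p => p.lensId === lensId && p.verifiable && p.correct === false).length;
}
```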

&lt;h3&gt;
  
  
  Pack D — Ecosystem (Wilson · Wolfram)
&lt;/h3&gt;

&lt;p&gt;The pipeline &lt;code&gt;ear → brain → eye&lt;/code&gt; is a food chain, not an ecosystem. Wilson's mutualism is missing; Wolfram's emergent explainability is missing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docs/feedback-loops.md&lt;/code&gt;&lt;/strong&gt;: reverse edges. &lt;code&gt;eye → ear&lt;/code&gt; (high-accuracy lenses boost source priority). &lt;code&gt;brain → ear&lt;/code&gt; (hub topics nominate new RSS feeds). &lt;code&gt;sleep → brain&lt;/code&gt; (session patterns warm the query cache). &lt;code&gt;photo → voice&lt;/code&gt; (processing events trigger workflows).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;brain/scripts/graph-archaeology.ts&lt;/code&gt;&lt;/strong&gt; (432 lines, &lt;code&gt;tsc --noEmit&lt;/code&gt; clean): &lt;code&gt;whyHub(nodeId)&lt;/code&gt; and &lt;code&gt;timeline(nodeId)&lt;/code&gt;. When a node becomes a surprise hub, the script replays &lt;em&gt;how&lt;/em&gt; it got there — degree over time, first three inbound edges, peak week. Post-hoc forensics as a first-class tool.&lt;/li&gt;
&lt;/ul&gt;
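&lt;p&gt;To make the reverse-edge idea concrete, here is one way the &lt;code&gt;eye → ear&lt;/code&gt; loop could look. Everything below is a hypothetical sketch: the post names the loop but not its mechanics, so the names and the blend rule are mine.&lt;/p&gt;

```typescript
// Hypothetical eye -> ear feedback: a lens's prediction accuracy nudges the
// crawl priority of the source that fed it. The post only states that
// high-accuracy lenses boost source priority; this rule is an invention.
interface Source { id: string; priority: number; } // priority kept in [0, 1]

export function boostSourcePriority(src: Source, lensAccuracy: number, rate = 0.1): Source {
  const a = Math.min(1, Math.max(0, lensAccuracy));
  // move priority toward 1 when accuracy > 0.5, toward 0 when below
  const next = src.priority + rate * (a - 0.5);
  return { ...src, priority: Math.min(1, Math.max(0, next)) };
}
```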

&lt;h3&gt;
  
  
  The parallelization trick
&lt;/h3&gt;

&lt;p&gt;I didn't write Pack B/C/D alone. Once I'd chosen what needed to exist, I ran a &lt;code&gt;pumasi&lt;/code&gt;-style workflow — Codex CLI as parallel sub-developer. Four workers, ~15 minutes, 24/24 gates passed. Pack A I wrote myself; it required the review context Codex didn't have.&lt;/p&gt;

&lt;p&gt;That boundary — &lt;em&gt;delegate structure; don't delegate judgment&lt;/em&gt; — turned out to be the single most useful thing the review taught me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The reversal (ADR 0001 → ADR 0011)
&lt;/h2&gt;

&lt;p&gt;Two weeks ago I wrote &lt;a href="https://github.com/kinkos1234/comad-world/blob/main/docs/adr/0001-repository-strategy.md" rel="noopener noreferrer"&gt;ADR 0001&lt;/a&gt;: "umbrella repo + six nested &lt;code&gt;.git&lt;/code&gt; repos, one per module." Clean separation. Each module ships independently someday.&lt;/p&gt;

&lt;p&gt;Auditing v0.3.0, I found three facts I hadn't wanted to see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The umbrella was &lt;strong&gt;already tracking&lt;/strong&gt; every module's source code. A user cloning &lt;code&gt;comad-world&lt;/code&gt; got a working system. The nested &lt;code&gt;.git&lt;/code&gt; was a dev-only artifact nobody but me ever interacted with.&lt;/li&gt;
&lt;li&gt;The nested &lt;code&gt;.git&lt;/code&gt; remotes pointed at &lt;code&gt;github.com/kinkos1234/comad-{brain,ear,eye,...}.git&lt;/code&gt; — &lt;strong&gt;all 404&lt;/strong&gt;. The per-module pull logic in &lt;code&gt;scripts/upgrade.sh&lt;/code&gt; had literally never worked.&lt;/li&gt;
&lt;li&gt;In today's session, I accidentally committed the same file to both the nested git and the umbrella, because the dual-tracking made it unclear which git I was in.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So I ran the luminary review again, this time as a six-reviewer adoption check aimed squarely at the repo strategy. The question: &lt;strong&gt;at 15 stars and 1 maintainer, what decision maximizes adoption?&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Axis&lt;/th&gt;
&lt;th&gt;A. Status quo (7 repos)&lt;/th&gt;
&lt;th&gt;B. Mono-repo&lt;/th&gt;
&lt;th&gt;C. Submodules&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Norman (first-run)&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linus (pragmatic)&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DHH (convention)&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Collison (DX)&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Moore (chasm)&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Evan You (OSS)&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Unanimous: B.&lt;/strong&gt; Nobody picked submodules (C) because submodules are a known war crime against Dependabot, changesets, release-please, and first-time contributors.&lt;/p&gt;

&lt;p&gt;I moved the six &lt;code&gt;.git&lt;/code&gt; directories to &lt;code&gt;/tmp/comad-nested-git-archive/&lt;/code&gt; (recoverable for 7 days), absorbed the module source directly into the umbrella, and wrote &lt;strong&gt;ADR 0011 — Mono-repo Reversal, Supersedes ADR 0001.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The sentence at the top of the new ADR is the one I want to remember:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;YAGNI. The case for module-level independent release doesn't exist yet. When it does, ADR 0012 can re-split.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Writing an ADR whose entire content is "I was wrong" was the most honest thing I shipped this sprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Self-review compounds faster than real reviews for early-stage projects.&lt;/strong&gt; At 15 stars, user feedback is too thin to expose structural issues in under a month. Role-prompting 27 distinct perspectives — each with a rubric, not just a voice — surfaces in two hours what would take two quarters of community growth. The value isn't "AI reviews your code"; it's &lt;strong&gt;forcing yourself to argue from a perspective you haven't chosen&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Engineering scores lie about product health.&lt;/strong&gt; ADRs, CI, deep module boundaries, cleanup commits — all great. None of them would've moved my star count from 15 to 50. What moves it is: a hero sentence a stranger can parse in 3 seconds, a &lt;code&gt;comad hello&lt;/code&gt; that works in one terminal, a README that passes the "would I clone this on my phone while waiting for coffee?" test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The most dangerous architecture is the one you wrote while smart.&lt;/strong&gt; Two weeks ago I was deeply thoughtful about repo strategy. I wrote an ADR. I earned the right to revisit by writing a superseding ADR out loud, not by deleting the old one. That asymmetry — &lt;strong&gt;decisions are cheap; reversals must be expensive&lt;/strong&gt; — is the thing that keeps future-me honest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. YAGNI scales better than premature pluralism.&lt;/strong&gt; Nested repos for independent release. Submodules for "flexibility." Multi-tier caching before traffic. All of these felt smart; all of them cost me adoption. At every level I'm learning to ask: &lt;em&gt;which user, right now, will thank me for this complexity?&lt;/em&gt; If the answer is "a theoretical future user," delete it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kinkos1234/comad-world
&lt;span class="nb"&gt;cd &lt;/span&gt;comad-world
./install.sh
comad hello    &lt;span class="c"&gt;# 5-minute quickstart&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;v0.3.0 is live: &lt;a href="https://github.com/kinkos1234/comad-world/releases/tag/v0.3.0" rel="noopener noreferrer"&gt;https://github.com/kinkos1234/comad-world/releases/tag/v0.3.0&lt;/a&gt;&lt;br&gt;
GitHub repo: &lt;a href="https://github.com/kinkos1234/comad-world" rel="noopener noreferrer"&gt;https://github.com/kinkos1234/comad-world&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you use it, I want to hear what breaks. Not the polite feedback — the "this felt wrong and here's why" feedback. That's what the review pattern above will happily automate for &lt;em&gt;you&lt;/em&gt; on whatever you're building next.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Comments and pull requests welcome. The falsification log is already counting my predictions wrong. Eventually, if the loop works, it'll count yours too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>architecture</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
