<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Raj Navakoti</title>
    <description>The latest articles on DEV Community by Raj Navakoti (@raj_navakoti).</description>
    <link>https://dev.to/raj_navakoti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841403%2F06720b17-ea88-4846-8cff-2643a38e694e.png</url>
      <title>DEV Community: Raj Navakoti</title>
      <link>https://dev.to/raj_navakoti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/raj_navakoti"/>
    <language>en</language>
    <item>
      <title>Work on the Circuit Board, Don't Box It Yet</title>
      <dc:creator>Raj Navakoti</dc:creator>
      <pubDate>Tue, 14 Apr 2026 14:57:06 +0000</pubDate>
      <link>https://dev.to/raj_navakoti/work-on-the-circuit-board-dont-box-it-yet-2i3n</link>
      <guid>https://dev.to/raj_navakoti/work-on-the-circuit-board-dont-box-it-yet-2i3n</guid>
      <description>&lt;p&gt;Your multi-agent system isn't ready for a UI. And that's fine.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Temptation
&lt;/h2&gt;

&lt;p&gt;Every enterprise I've seen in the last 18 months does the same thing: they build an agent, it works in a prototype, and immediately someone says "let's make this an app." A nice UI, a button, maybe some charts. Box it up, hand it to users, move on.&lt;/p&gt;

&lt;p&gt;I get it. There's a product manager somewhere who just watched the demo and their eyes lit up. "Can we put this in front of customers by Q3?" And the engineer who built it feels the pull too — a polished app feels like real work, a terminal feels like hacking.&lt;/p&gt;

&lt;p&gt;But multi-agent orchestration is still in the circuit board stage. And boxing a circuit board is how you kill trust in AI across your entire organisation.&lt;/p&gt;




&lt;h2&gt;
  
  
  What a Circuit Board Looks Like
&lt;/h2&gt;

&lt;p&gt;I run 17 projects with AI agents. Not through apps. Through terminal sessions and tmux panes.&lt;/p&gt;

&lt;p&gt;Here's what a typical multi-agent session looks like on my screen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────┬──────────────────────────┐
│ Agent: Architect        │ Agent: Code Reviewer     │
│                         │                          │
│ &amp;gt; Reading CLAUDE.md...  │ &amp;gt; Waiting for PR...      │
│ &amp;gt; Found 3 context files │ &amp;gt;                        │
│ &amp;gt; Reasoning: "The API   │ &amp;gt;                        │
│   contract suggests     │ &amp;gt;                        │
│   this is a bounded     │ &amp;gt;                        │
│   context for orders,   │ &amp;gt;                        │
│   not fulfillment"      │ &amp;gt;                        │
│ &amp;gt; Tool call: Read       │ &amp;gt;                        │
│   /models/order.yaml    │ &amp;gt;                        │
│ &amp;gt; ...                   │ &amp;gt;                        │
├─────────────────────────┴──────────────────────────┤
│ Orchestrator Log                                    │
│ 14:23:01 architect → code-reviewer: handoff         │
│ 14:23:01 context: 3 files, 2847 tokens              │
│ 14:23:02 code-reviewer: starting review...          │
└─────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's ugly. It's not something you'd demo to a VP. But I can see everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What context the agent loaded&lt;/li&gt;
&lt;li&gt;How it reasoned about the problem&lt;/li&gt;
&lt;li&gt;When it handed off to another agent&lt;/li&gt;
&lt;li&gt;What got passed in the handoff&lt;/li&gt;
&lt;li&gt;Where it's stuck&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That visibility is the entire point. Because agents fail — and right now, they fail in ways you need to see to fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Failure Modes You Can't See Through a UI
&lt;/h2&gt;

&lt;p&gt;Here's what actually goes wrong in multi-agent systems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The wrong tool call.&lt;/strong&gt; Agent picks &lt;code&gt;search_confluence&lt;/code&gt; when it should have picked &lt;code&gt;read_api_contract&lt;/code&gt;. Through a UI, you see a bad answer. Through the circuit board, you see exactly which tool was selected and why — and you fix the tool selection logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The handoff fumble.&lt;/strong&gt; Agent A passes context to Agent B, but drops a critical piece. The user sees a weird response. You see... nothing, because the UI doesn't show inter-agent communication. On the circuit board, the handoff is logged line by line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The infinite loop.&lt;/strong&gt; Agent asks for clarification, gets a response, asks for clarification again, gets the same response, asks again. Through a UI, the spinner just keeps spinning. Through tmux, you see the loop happening in real time and kill it at iteration 3, not iteration 47.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The confidence problem.&lt;/strong&gt; Agent is 30% confident in its answer but presents it with 100% certainty. The UI shows a clean response. The circuit board shows the reasoning chain that led there — and you see the hedging, the contradictions, the "I'm not sure about this but..."&lt;/p&gt;

&lt;p&gt;Every one of these is a real failure I've hit in the last six months. Every one of them was caught because I could see the circuit board. None of them would have been visible through a dashboard.&lt;/p&gt;
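&lt;p&gt;The loop is the one failure mode you can cheaply automate a guard for. Here's a minimal sketch (illustrative, not lifted from my actual orchestrator) that flags an agent as stuck once it emits the same message several turns in a row:&lt;/p&gt;

```python
from collections import deque

class LoopGuard:
    """Flag an agent as looping when its last N messages are identical."""

    def __init__(self, window=3):
        # window: how many identical consecutive messages count as a loop
        self.recent = deque(maxlen=window)

    def check(self, message):
        """Record one agent turn; return True if a loop is detected."""
        self.recent.append(message)
        full = len(self.recent) == self.recent.maxlen
        return full and len(set(self.recent)) == 1
```

&lt;p&gt;Drop the check into whatever drives your agent turns: when it fires, halt the run and dump the transcript, instead of letting the spinner spin to iteration 47.&lt;/p&gt;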




&lt;h2&gt;
  
  
  "But Our Users Need an App"
&lt;/h2&gt;

&lt;p&gt;I hear this constantly. And the answer is: which users?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Power users and engineers&lt;/strong&gt; should be on the circuit board. They're the ones who can spot failures, provide feedback, and help the system improve. Give them terminal access, tmux sessions, or at minimum a verbose logging view. They'll love it — it's like having X-ray vision into the AI's brain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business users&lt;/strong&gt; are a different story. They need something simpler. But "simpler" doesn't mean "a polished app." It means a carefully constrained interface for a narrow, proven workflow.&lt;/p&gt;

&lt;p&gt;The mistake is jumping from "prototype works in terminal" straight to "let's build a full app for everyone." There's a middle ground:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STAGE 1: Circuit Board
  Who: Engineers, power users
  Interface: Terminal / tmux
  Goal: Find failure patterns, refine agents
  Duration: Weeks to months

STAGE 2: Guided Circuit Board
  Who: Technical users, early adopters
  Interface: Simple web UI with visible reasoning
  Goal: Validate with real workflows, broader feedback
  Duration: Weeks

STAGE 3: Protective Case
  Who: Business users, general audience
  Interface: Polished app with "inspect reasoning" option
  Goal: Production use
  Prerequisite: Reliability metrics from Stage 1-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most enterprises try to skip to Stage 3. They end up back at Stage 1 anyway — just with more sunk cost and more disappointed users.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Maturity Test
&lt;/h2&gt;

&lt;p&gt;How do you know when an agent workflow is ready to graduate from circuit board to boxed app? Here's the checklist I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ ] Agent succeeds on 90%+ of cases in its target domain
[ ] Failure modes are known and documented (not "it sometimes breaks")
[ ] Recovery from failure is automated or gracefully handled
[ ] Handoffs between agents are consistent and auditable
[ ] A non-engineer has used it successfully for 2+ weeks
[ ] You can explain every tool call the agent makes, not just the output
[ ] You've watched it fail and know exactly why each time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you can't check all of these, you're not ready to box it. And that's fine. The circuit board isn't a limitation — it's where the learning happens.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for Your Enterprise
&lt;/h2&gt;

&lt;p&gt;If you're building multi-agent systems right now, here's the practical takeaway:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't invest in agent UIs yet.&lt;/strong&gt; Invest in observability. Build logging, tracing, and inspection tools. Make the circuit board visible and navigable. That's your competitive advantage — not a pretty dashboard.&lt;/p&gt;
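&lt;p&gt;The observability layer can start embarrassingly small. A hedged sketch of the kind of logging I mean, one structured line per handoff so you can grep it straight from a tmux pane (the field names are my own convention, not a standard):&lt;/p&gt;

```python
import json
import time

def handoff_line(src_agent, dst_agent, context_files, context_tokens):
    """Build one JSON log line describing an inter-agent handoff."""
    record = {
        "ts": time.strftime("%H:%M:%S"),
        "event": "handoff",
        "from": src_agent,
        "to": dst_agent,
        "context_files": context_files,
        "context_tokens": context_tokens,
    }
    return json.dumps(record)

# In the orchestrator, print one line at every handoff:
# print(handoff_line("architect", "code-reviewer", 3, 2847))
```

&lt;p&gt;That one line is what catches the handoff fumble: if &lt;code&gt;context_files&lt;/code&gt; says 2 when the first agent loaded 3, you know exactly what got dropped.&lt;/p&gt;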

&lt;p&gt;&lt;strong&gt;Let engineers live in the terminal.&lt;/strong&gt; The feedback loop from "I saw the agent fail" to "I fixed the prompt" is minutes in a terminal. It's days through a bug report from a UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resist the demo pressure.&lt;/strong&gt; When leadership asks for a demo, show them the tmux session. Explain what they're seeing. The honest demo — "here's the agent thinking, here's where it struggled, here's how we fixed it" — builds more trust than a polished UI that hides the mess.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan for the box, but don't build it yet.&lt;/strong&gt; Know what the eventual app looks like. Design the API. Sketch the UI. But don't build it until the circuit board tells you it's ready.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Takeaway
&lt;/h2&gt;

&lt;p&gt;We're in the circuit board era of multi-agent AI. The agents work — sometimes brilliantly — but they fail in ways that require human eyes on the wiring. The enterprises that win won't be the ones that boxed it fastest. They'll be the ones that stayed on the circuit board longest, learned the failure patterns, and only boxed it when it was genuinely reliable.&lt;/p&gt;

&lt;p&gt;Ship the circuit board. Let people see it work. The box can wait.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Are you boxing agents too early? Or have you found the right moment to graduate from terminal to app? I'm running 17 agent projects from tmux panes and I'm curious what's working for others at scale.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>enterprise</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Contained Chaos: A Prototyping Operating Model for the AI Era</title>
      <dc:creator>Raj Navakoti</dc:creator>
      <pubDate>Wed, 01 Apr 2026 16:13:17 +0000</pubDate>
      <link>https://dev.to/raj_navakoti/contained-chaos-a-prototyping-operating-model-for-the-ai-era-2n9b</link>
      <guid>https://dev.to/raj_navakoti/contained-chaos-a-prototyping-operating-model-for-the-ai-era-2n9b</guid>
      <description>&lt;p&gt;Your enterprise has a prototyping problem. Not too little. Too much. Here's how to channel it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fire Just Got More Fuel
&lt;/h2&gt;

&lt;p&gt;Anthropic just announced a 1-million-token context window for Claude. OpenAI keeps raising its limits. Every month, the barrier to building something drops further. And the fire this fuels isn't "AI adoption." It's prototyping.&lt;/p&gt;

&lt;p&gt;I'm deliberately not calling this vibe-coding. Vibe-coding is one leg of a much bigger thing. Prototyping is what happens when AI tools get good enough that anyone — not just engineers — can translate an idea into something that works. Product managers are building internal tools. Designers are generating functional front-ends. Business analysts are wiring up data pipelines. The ability to prototype is no longer a technical skill. It's a social right.&lt;/p&gt;

&lt;p&gt;And right now, people in your enterprise are prototyping like rats on cocaine.&lt;/p&gt;

&lt;p&gt;I've seen it first-hand. Across a large enterprise with hundreds of engineering teams, I watched the prototyping volume go from "a few side projects" to "we genuinely don't know how many AI-powered tools people have built" in under a year. Not because anyone approved it. Because the tools got good enough and the problems were obvious enough that people just... started building.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Two Choices (Both Wrong)
&lt;/h2&gt;

&lt;p&gt;As an enterprise, you're staring at this and you have two instincts. Both are bad.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choice 1: "If I don't look, it's not there."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Ostrich. Head in the sand. Let people do whatever they want. Meanwhile, someone in a corner of your organisation has already solved a million-dollar problem and can't ship it because the PO told them the Jira ticket is the priority. Someone else, knowingly or unknowingly, just published confidential customer data to a public API because the prototype needed "real data to feel real."&lt;/p&gt;

&lt;p&gt;You didn't see it. So it didn't happen. Until it did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choice 2: "I see you. All of you. All the time."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Empire. Lock it down. Approved tools lists. Usage policies. Mandatory reviews. Kill every unsanctioned experiment before the ideas even seed. Congratulations — you've eliminated the security risk and the innovation at the same time. The Force is strong with governance, but the Rebellion has better ideas.&lt;/p&gt;

&lt;p&gt;I watched an organisation do exactly this. They sent a company-wide email banning all "unapproved AI tool usage." Within a week, the same engineers were using the same tools — they'd just stopped talking about it. The prototyping didn't stop. The visibility did.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Third Way
&lt;/h2&gt;

&lt;p&gt;Is there a path where you channel this energy instead of ignoring it or crushing it? Where you turn contained chaos into enterprise value?&lt;/p&gt;

&lt;p&gt;Yes. But before you say "you're a genius" — let's make sure you actually need this. Let's look at what you've probably already tried, and why it didn't work.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You've Already Done (And Why It Didn't Work)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hackathons.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every enterprise's first move. Put people in a room for a day, high adrenaline, end-of-day presentations, a few standouts, investment promises from leadership, applause.&lt;/p&gt;

&lt;p&gt;Next morning: new dawn, new day, back to the same work.&lt;/p&gt;

&lt;p&gt;Hackathons are top-down events dressed up as bottom-up innovation. Most organisations run them for AI adoption and awareness, not for sustained value creation. They reward speed over substance. They produce demos, not products. And there's no path from "you won the hackathon" to "here's a team and a budget." Winning means a trophy, not traction.&lt;/p&gt;

&lt;p&gt;The Death Star looked impressive too. Didn't survive contact with a thermal exhaust port.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Announcements and policies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Please be careful using unauthorised AI tools. Here are our data protection policies." Sent via email. Nobody reads it. The people who need it most are the ones who've already found their own tools and aren't looking at their inbox for permission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI awareness events.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expensive. Low retention. The people who attend are already interested. The people who need it don't show up. I sat through one that cost more than a junior developer's annual salary. Three months later, I asked ten people what they remembered from it. Nobody could name a single takeaway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI governance and adoption teams.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They exist somewhere in your org chart. Most people don't know where. These teams do important work — tooling evaluation, risk assessment, vendor management — but they operate in the background. They're not connected to the people actually building things. I once asked a developer if they knew their company had an AI governance team. "We have a what?"&lt;/p&gt;

&lt;p&gt;None of these are wrong. They're just incomplete. They address pieces of the problem but none of them answer the real question: how do you systematically absorb the prototyping energy and turn it into value?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Prototyping Operating Model
&lt;/h2&gt;

&lt;p&gt;Here's the model. Three parts: the system, the team, and the infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 1: The System
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A rolling submission model with quarterly selection.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a one-day event. Not "submit your ideas by Friday." A continuous, open pipeline where people prototype at their own pace and submit when they're ready.&lt;/p&gt;

&lt;p&gt;Why rolling? Because good ideas don't arrive on schedule. The engineer who cracks a problem at 2am on a Tuesday shouldn't have to wait for the next hackathon to show anyone. And people should be able to submit as many ideas as they want — the evaluation process filters, not the submission process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The submission rule: prototypes only.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the critical filter. And it's inspired by something Jeff Bezos understood early: the best way to evaluate an idea isn't to read a pitch deck about it — it's to use it.&lt;/p&gt;

&lt;p&gt;Submissions are not ideas. If it's just a sketch on a napkin, it's too early — go build it first. Submissions are also not deployed projects. If it's already running in production, you've skipped the process entirely and we need a different conversation.&lt;/p&gt;

&lt;p&gt;A submission is a prototype: an idea translated to a minimum working level. It runs. It demonstrates the concept. It's rough. That's fine. But it exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What a submission looks like:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every submission is a pull request. The PR description follows a standardised template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Prototype Submission&lt;/span&gt;
&lt;span class="gt"&gt;
&amp;gt; **Submission Title:** AI-Powered Inventory Alert System&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; **Submitter:** Alex Chen / @achen&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; **Date:** 2026-02-14&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; **Category:** ai-ml&lt;/span&gt;

&lt;span class="gu"&gt;## Problem Statement&lt;/span&gt;
Our regional warehouse network (12 locations) relies on manual stock checks
and static reorder thresholds to manage inventory across 8,400 SKUs.
Warehouse coordinators spend 2-3 hours daily reviewing spreadsheets and ERP
dashboards to identify items approaching stockout.

&lt;span class="gu"&gt;## Dollar Value Framing&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Value type:**&lt;/span&gt; Cost saving
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Estimate:**&lt;/span&gt; $680,000/year
&lt;span class="p"&gt;  -&lt;/span&gt; Stockout reduction: 340 events/quarter x 60% reduction x $320 avg
    emergency freight premium = $261,120/year
&lt;span class="p"&gt;  -&lt;/span&gt; Labor savings: 12 coordinators x 1.5 hrs/day saved x 250 days x
    $48/hr = $216,000/year
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Confidence level:**&lt;/span&gt; Medium

&lt;span class="gu"&gt;## Working Prototype&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Link:**&lt;/span&gt; github.com/achen/inventory-alert
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Tech stack:**&lt;/span&gt; Python, scikit-learn, PostgreSQL, Slack SDK, Streamlit
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**How to run it:**&lt;/span&gt; &lt;span class="sb"&gt;`docker compose up`&lt;/span&gt; → load data → train → dashboard

&lt;span class="gu"&gt;## Effort So Far&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Time invested:**&lt;/span&gt; ~60 hours over 5 weekends
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**People involved:**&lt;/span&gt; Alex Chen (ML), Dana Torres (warehouse ops, domain)
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**What was hardest:**&lt;/span&gt; Demand signal for promotional periods — model
  treated promo spikes as anomalies, not patterns.

&lt;span class="gu"&gt;## What's Needed to Graduate&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Team size:**&lt;/span&gt; 2 engineers + 1 ops lead for 3 months
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Budget:**&lt;/span&gt; ~$85,000
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Key risks:**&lt;/span&gt; ERP API access may require 6-8 week security review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the structure. Dollar value framing forces people to think about whether the problem is worth solving before they submit. Effort transparency shows skin in the game. Graduation requirements keep expectations realistic. This isn't a wish list — it's an engineering proposal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quarterly evaluation and selection.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every quarter, the organising team reviews all submissions from that cycle. Each submission gets scored against five criteria:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criterion&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;What It Measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Business Value&lt;/td&gt;
&lt;td&gt;/5&lt;/td&gt;
&lt;td&gt;Real problem? Credible dollar framing?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Technical Feasibility&lt;/td&gt;
&lt;td&gt;/5&lt;/td&gt;
&lt;td&gt;Can this actually reach production?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Efficiency&lt;/td&gt;
&lt;td&gt;/5&lt;/td&gt;
&lt;td&gt;Lean ask relative to value?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Strategic Alignment&lt;/td&gt;
&lt;td&gt;/5&lt;/td&gt;
&lt;td&gt;Fits org priorities?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Innovation Factor&lt;/td&gt;
&lt;td&gt;/5&lt;/td&gt;
&lt;td&gt;Novel approach or new-to-us?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;/25&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The scoring isn't a beauty contest. It's a structured way to compare apples to oranges — an ML inventory system against a meeting decision recorder against a compliance automation tool. Every evaluator fills in the same template. Comments are mandatory. The numbers create the ranking; the comments create accountability.&lt;/p&gt;

&lt;p&gt;From the scores, they select:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top 5 to reward&lt;/strong&gt; — public recognition, small prizes, visibility to leadership&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top 15 to invest&lt;/strong&gt; — dedicated time, resources, or team allocation for the next cycle&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rest get documented feedback. Every submission gets a response. No idea goes into a void.&lt;/p&gt;
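&lt;p&gt;The selection mechanics themselves are deliberately boring. A sketch in a few lines of Python, assuming each submission's scores have already been collected into a dict (the criterion keys here are my shorthand for the five criteria above; rename them to match your evaluation template):&lt;/p&gt;

```python
CRITERIA = ("business_value", "feasibility", "efficiency", "alignment", "innovation")

def total(submission):
    """Sum the five 0-5 criterion scores into the /25 total."""
    return sum(submission[c] for c in CRITERIA)

def select(submissions):
    """Rank by total score and cut the reward and invest tiers."""
    ranked = sorted(submissions, key=total, reverse=True)
    return ranked[:5], ranked[:15]  # top 5 rewarded, top 15 invested
```

&lt;p&gt;Ties and judgment calls still go to the organising team; the script only orders the queue.&lt;/p&gt;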

&lt;p&gt;&lt;strong&gt;Graduation paths:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every selected prototype gets one of four outcomes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Graduate&lt;/strong&gt; — fund it, staff it, put it on a product roadmap. It's real now.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate&lt;/strong&gt; — promising but not ready. Gets another cycle with more support.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shelve&lt;/strong&gt; — valuable concept, wrong timing. Archived with documentation. Can be resurrected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kill&lt;/strong&gt; — learned what we needed. Thank the team, archive the code, close the loop.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every decision gets a graduation record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Graduation Decision&lt;/span&gt;
&lt;span class="gt"&gt;
&amp;gt; **Submission:** AI-Powered Inventory Alert System&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; **Cycle:** Q1-2026&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; **Evaluation Score:** 21/25 (ranked 3rd of 42 submissions)&lt;/span&gt;

&lt;span class="gu"&gt;## Outcome: GRADUATE&lt;/span&gt;
Fund it, staff it, integrate into product roadmap.

&lt;span class="gu"&gt;## Reasoning&lt;/span&gt;
Strong dollar value framing backed by real backtesting data (78% of
stockout events flagged with 5-day lead time). Low resource ask relative
to projected $477K/year savings. Warehouse ops champion already lined up.
Primary risk (ERP API access) is a known 6-week process, not a blocker.

&lt;span class="gu"&gt;## Next Steps&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Assign product owner: Maria Gonzalez
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Allocate team: 2 engineers + 1 ops lead, 3 months
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Schedule kickoff by 2026-04-01
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Define MVP scope: pilot at 2 locations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key: every path has closure. People know what happened and why. No prototype enters a zombie state — alive enough to haunt you, dead enough to not ship.&lt;/p&gt;




&lt;h3&gt;
  
  
  Part 2: The Operating Team
&lt;/h3&gt;

&lt;p&gt;Two groups make this work: a &lt;strong&gt;leadership team&lt;/strong&gt; and an &lt;strong&gt;organising team&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The leadership team&lt;/strong&gt; has one job: investment authority. They review the organising team's recommendations, make funding decisions, and connect graduated prototypes to product roadmaps. They don't evaluate submissions — that's the organising team's job. They make it possible for good ideas to actually become real things.&lt;/p&gt;

&lt;p&gt;What they should do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take time to understand what people are trying to solve, not just what they built&lt;/li&gt;
&lt;li&gt;Attend demo sessions, ask questions, show genuine interest&lt;/li&gt;
&lt;li&gt;Move fast on graduation decisions — momentum dies if you wait too long&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What they should not do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Treat this as a checkbox exercise — people can tell when leadership is performing interest versus showing it&lt;/li&gt;
&lt;li&gt;Sit on graduation decisions for weeks — the engineer who built the prototype on their weekends deserves a timely answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The organising team&lt;/strong&gt; is the engine. These aren't managers. These are your rockstars — the people who are already respected across the organisation. A cross-functional mix: product, UX, engineering, data, security. They understand both the technical and the business side.&lt;/p&gt;

&lt;p&gt;Think of them as the Jedi Council — except one that actually listens to people instead of dismissing Anakin's concerns. The job is curation, not gatekeeping.&lt;/p&gt;

&lt;p&gt;What they should do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review submissions with open minds — the best ideas often look weird at first&lt;/li&gt;
&lt;li&gt;Group similar submissions together — if five people independently prototyped something similar, that's a signal about a real problem, not a coincidence. Pay attention to that.&lt;/li&gt;
&lt;li&gt;Connect people working on adjacent problems — facilitate the conversations that wouldn't happen organically&lt;/li&gt;
&lt;li&gt;Look for patterns in what's being built — the aggregate tells you more than any individual submission&lt;/li&gt;
&lt;li&gt;Build tooling that helps people prototype faster — shared components, design systems, API templates, even AI-powered extensions for common patterns&lt;/li&gt;
&lt;li&gt;Maintain transparency — everyone can see what's been submitted, what's been selected, and why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What they should not do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gate-keep based on role or seniority — a business analyst's prototype is as valid as a staff engineer's&lt;/li&gt;
&lt;li&gt;Reject ideas for being "too simple" — simple solutions to real problems are the most valuable kind&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Part 3: The Infrastructure (It's a GitHub Repo)
&lt;/h3&gt;

&lt;p&gt;This is where I might sound opinionated, but hear me out.&lt;/p&gt;

&lt;p&gt;You don't need a platform. You don't need a multi-tenant app with versioning, workflows, approval chains, and a React dashboard. You need a GitHub repository.&lt;/p&gt;

&lt;p&gt;I know what you're thinking: "GitHub? For managing enterprise innovation?" Yes. And here's why.&lt;/p&gt;

&lt;p&gt;The instinct is to build a tool first. A submissions portal. A review dashboard. An analytics layer. Three months and $200K later, you have a beautiful platform and zero submissions because nobody wants to learn another internal tool. I've seen this happen twice. Both times, the platform outlived the programme.&lt;/p&gt;

&lt;p&gt;Git + GitHub gives you almost everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prototype-ops/
├── templates/
│   ├── submission-template.md      # What submitters fill in (the PR)
│   ├── evaluation-template.md      # How reviewers score (per submission)
│   └── graduation-template.md      # Decision record (graduate/iterate/shelve/kill)
├── submissions/                    # One branch per submission, merged PRs = archive
├── examples/
│   ├── example-submission-1.md     # AI-Powered Inventory Alert System
│   ├── example-submission-2.md     # Meeting Decision Recorder
│   └── example-submission-3.md     # Compliance Doc Generator
├── docs/
│   ├── guidelines.md               # Do's and don'ts, approved tools
│   ├── ai-tools-guide.md           # What AI tools the company offers + how to use
│   ├── problem-framing.md          # How to think about dollar value and effort
│   └── reference-problems.md       # Org-level problems leadership wants solved
└── dashboard/                      # Simple UI to visualise submissions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Submissions are pull requests.&lt;/strong&gt; The prototype code lives in the PR. The description follows the submission template. Discussion happens in comments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decisions are recorded.&lt;/strong&gt; Approved, rejected, needs iteration — it's all in the PR history. Reviewers' reasoning is captured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything is in one place.&lt;/strong&gt; No context-switching between a submissions portal, a Slack channel, an email thread, and a spreadsheet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency is built in.&lt;/strong&gt; Anyone can browse open PRs to see what's been submitted. No duplicate ideas because you can search before you submit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governance is native.&lt;/strong&gt; Branch protection, required reviewers, templates — GitHub already has the workflow primitives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;docs/&lt;/code&gt; directory does the heavy lifting that policy emails never could. &lt;code&gt;guidelines.md&lt;/code&gt; tells people what tools they can and can't use — which naturally educates about shadow AI without a single compliance slide. &lt;code&gt;ai-tools-guide.md&lt;/code&gt; shows people the power of tools the company already pays for — reducing the incentive to go rogue. &lt;code&gt;problem-framing.md&lt;/code&gt; teaches people to think about dollar value and effort before they build — the best filter is self-filtering. &lt;code&gt;reference-problems.md&lt;/code&gt; gives direction without being prescriptive: "here are problems leadership wants solved, if you're looking for inspiration."&lt;/p&gt;

&lt;p&gt;And yes — you can add AI-powered extensions to the repo that automate submission review, evaluation grouping, status tracking, and reporting. The organising team doesn't spend their time on mechanics. They spend it on judgment.&lt;/p&gt;

&lt;p&gt;I've built a starter repo that any organisation can fork to bootstrap their prototyping operating model. It includes the submission templates, evaluation workflows, team manifest, and example submissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/ea-toolkit/prototype-ops" rel="noopener noreferrer"&gt;prototype-ops on GitHub&lt;/a&gt;&lt;/strong&gt; — fork it, adapt the templates to your org, and you're running by next quarter.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Works (When Everything Else Didn't)
&lt;/h2&gt;

&lt;p&gt;The hackathon failed because it was an event. This is a system.&lt;/p&gt;

&lt;p&gt;The governance policies failed because they said "don't." This says "do — but here."&lt;/p&gt;

&lt;p&gt;The awareness events failed because they were broadcasts. This is a conversation.&lt;/p&gt;

&lt;p&gt;The AI adoption teams failed because they were invisible. This is visible by design — every submission, every decision, every outcome is public within the organisation.&lt;/p&gt;

&lt;p&gt;And shadow AI? It doesn't disappear. But it surfaces. When people have a legitimate channel for their prototyping energy — with clear guidelines, approved tools, and a path to recognition — the incentive to operate in the shadows drops dramatically. Not because you banned it. Because you made the alternative better.&lt;/p&gt;

&lt;p&gt;Like the Rebel Alliance, you don't win by building a bigger Death Star. You win by giving people something worth fighting for. A system that recognises their ideas, resources the good ones, and doesn't waste their time with the rest. That's the Prototyping Operating Model.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Takeaway
&lt;/h2&gt;

&lt;p&gt;Your people are already prototyping. They're doing it on lunch breaks, on weekends, in the gaps between sprint tickets. The question isn't whether to allow it. The question is whether you channel it into a system that creates value — or ignore it until someone publishes your customer data to a public repo.&lt;/p&gt;

&lt;p&gt;Build the operating model. Stand up the team. Fork the repo. Start this quarter.&lt;/p&gt;

&lt;p&gt;The dinosaurs are already out. The fence is already down. The only question left is whether you build Jurassic World with better containment — or keep pretending the park is still under control.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If your enterprise is drowning in unsanctioned prototypes — or worse, pretending they don't exist — I'd like to hear how you're handling it. What's working? What spectacularly isn't? And if you've tried hackathons and they didn't stick, you already know why.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>enterprise</category>
      <category>innovation</category>
      <category>opensource</category>
    </item>
    <item>
      <title>What if your context maps, event flows, and dependency graphs just... generated themselves from Markdown?</title>
      <dc:creator>Raj Navakoti</dc:creator>
      <pubDate>Tue, 24 Mar 2026 10:18:47 +0000</pubDate>
      <link>https://dev.to/raj_navakoti/what-if-your-context-maps-event-flows-and-dependency-graphs-just-generated-themselves-from-5d57</link>
      <guid>https://dev.to/raj_navakoti/what-if-your-context-maps-event-flows-and-dependency-graphs-just-generated-themselves-from-5d57</guid>
      <description>&lt;p&gt;What if your context maps, event flows, and dependency graphs just... generated themselves from Markdown?&lt;/p&gt;

&lt;p&gt;Your architecture diagrams are lying to you. Not intentionally — they were accurate the day someone drew them. But that was six months ago, and since then three services got renamed, two teams reorganised, and the person who maintained the draw.io file left the company. The model still lives in a desktop app that nobody opens, on a Confluence page nobody finds, in someone's head that is now at a different employer.&lt;/p&gt;

&lt;p&gt;The tooling is either too heavy (paid enterprise tools that require a two-day training course) or too manual (Markdown ADRs that are great for decisions but tell you nothing about how 40 systems relate to each other). There is a gap between "I have docs" and "I have a living architecture model."&lt;/p&gt;

&lt;h2&gt;
  
  
  The idea
&lt;/h2&gt;

&lt;p&gt;What if architecture elements were just Markdown files in Git? And what if the relationships you declared in those files were enough for the diagrams to draw themselves?&lt;/p&gt;

&lt;p&gt;No diagram tool. No proprietary format. No model-to-code sync problem — the Markdown &lt;em&gt;is&lt;/em&gt; the model. You declare what exists and how things connect. A static site build reads those files, resolves the graph, and generates every view automatically.&lt;/p&gt;

&lt;p&gt;That is what Architecture Catalog does.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Three steps. That is the whole thing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Step 1: Define your schema (one YAML file)
         ↓
Step 2: Add elements (Markdown files with YAML frontmatter)
         ↓
Step 3: Build (npm run build → static site)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The schema file (&lt;code&gt;registry-mapping.yaml&lt;/code&gt;) defines your layers, element types, relationships, and branding. The Markdown files are your elements — one file per system, service, domain, or whatever your vocabulary calls them. The build reads both, resolves the graph, and generates an interactive site.&lt;/p&gt;
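
&lt;p&gt;For a feel of what an element file might look like, here is an illustrative sketch. The frontmatter keys below are assumptions of mine; your &lt;code&gt;registry-mapping.yaml&lt;/code&gt; defines the real vocabulary:&lt;/p&gt;

```markdown
---
id: billing-service
type: service
layer: application
domain: billing
relationships:
  - type: publishes
    target: invoice-created-event
  - type: depends_on
    target: customer-service
---

# Billing Service

Generates invoices and reconciles payments against the ledger.
```

&lt;p&gt;Everything below the frontmatter is free-form Markdown; the build only needs the declared fields to place the element in the graph.&lt;/p&gt;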

&lt;p&gt;No server. No database. No runtime dependencies.&lt;/p&gt;
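
&lt;p&gt;The core of that build step, turning declared relationships into a graph and catching dangling targets, fits in a few lines. This is a minimal sketch, not the catalog's actual implementation; plain dicts stand in for parsed frontmatter:&lt;/p&gt;

```python
# Minimal sketch: resolve declared relationships into an adjacency map.
# A real build would parse these dicts out of Markdown frontmatter first.

def resolve_graph(elements):
    """Return ({source_id: [(relation, target_id), ...]}, dangling_edges)."""
    ids = {e["id"] for e in elements}
    graph, dangling = {}, []
    for e in elements:
        edges = []
        for rel in e.get("relationships", []):
            if rel["target"] not in ids:
                # Declared target has no element file: surface it at build time.
                dangling.append((e["id"], rel["target"]))
            edges.append((rel["type"], rel["target"]))
        graph[e["id"]] = edges
    return graph, dangling

elements = [
    {"id": "billing-service",
     "relationships": [{"type": "depends_on", "target": "customer-service"}]},
    {"id": "customer-service", "relationships": []},
]

graph, dangling = resolve_graph(elements)
print(graph["billing-service"])  # [('depends_on', 'customer-service')]
print(dangling)                  # []
```

&lt;p&gt;The dangling-edge check is what keeps a model like this honest: a typo in a target id fails the build instead of silently dropping an arrow from the diagram.&lt;/p&gt;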

&lt;h2&gt;
  
  
  What you get
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard with domain cards&lt;/strong&gt; — top-level view across all domains with health indicators, element counts, and quick navigation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive context maps&lt;/strong&gt; — search, filter, and focus mode. Click any element to see its first-degree and second-degree relationships. The graph is built from the relationship declarations in your Markdown files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Animated event flow diagrams&lt;/strong&gt; — shows which systems publish events, which systems consume them, and how data flows across domain boundaries. Designed for teams that have moved to event-driven architectures and lost track of who owns what.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PlantUML, BPMN, and draw.io viewer&lt;/strong&gt; — if you have existing diagrams, they render inside the catalog next to the registry elements. You are not forced to throw away what you have.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Element detail pages&lt;/strong&gt; — every element gets its own page: description, layer, domain, all declared relationships, and links to related diagrams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dark mode by default&lt;/strong&gt; — because architects apparently live at night.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;White-label and deploy anywhere&lt;/strong&gt; — it is a static site. Firebase, S3, GitHub Pages, Netlify — your choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  See it live
&lt;/h2&gt;

&lt;p&gt;The live demo is at &lt;a href="https://architecture-catalog.web.app" rel="noopener noreferrer"&gt;architecture-catalog.web.app&lt;/a&gt;. It has 6 domains, 180+ elements, and is fully interactive — context maps, event flows, element drill-down, the lot.&lt;/p&gt;

&lt;p&gt;The documentation site is at &lt;a href="https://docs-architecture-catalog.web.app" rel="noopener noreferrer"&gt;docs-architecture-catalog.web.app&lt;/a&gt; if you want to understand the schema before you start.&lt;/p&gt;

&lt;p&gt;Open either one before reading the rest of this post. The 30 seconds you spend clicking around will make the next part more concrete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmnq8pyedhziscjtya63.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmnq8pyedhziscjtya63.gif" alt="Architecture Catalog dashboard showing domain cards with dark&amp;lt;br&amp;gt;
  mode" width="600" height="351"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The schema-driven part
&lt;/h2&gt;

&lt;p&gt;This is the piece that makes it maintainable at scale.&lt;/p&gt;

&lt;p&gt;Everything flows from a single YAML file. Here is the minimal version — a site configuration and one element type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;site&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Architecture&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Catalog"&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Architecture&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Acme&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Corp"&lt;/span&gt;
  &lt;span class="na"&gt;accent_color&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#4A90D9"&lt;/span&gt;

&lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
    &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#1E3A5F"&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
        &lt;span class="na"&gt;icon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;⬡"&lt;/span&gt;
        &lt;span class="na"&gt;graph_rank&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deployable&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;service&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;or&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;microservice"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is it. With this in place, you create Markdown files in the &lt;code&gt;registry-v2/application/services/&lt;/code&gt; folder and they automatically appear in the dashboard, get their own detail pages, and participate in context maps.&lt;/p&gt;

&lt;p&gt;Adding a new element type means adding one entry to this YAML file and creating a &lt;code&gt;_template.md&lt;/code&gt;. Zero code changes. The UI derives everything — page structure, graph layout, relationship rendering, sidebar navigation — from the schema.&lt;/p&gt;

&lt;p&gt;It is also vocabulary-agnostic. You can use ArchiMate, TOGAF, C4, or whatever your organisation invented. The catalog does not care what you call your elements. Rename every type and layer in the YAML and the site still builds and renders correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validated at scale
&lt;/h2&gt;

&lt;p&gt;I have been running a version of this internally across 30 domains with over 6,000 registered elements. The build still takes under 15 seconds. The output is pure static HTML — no server, no database, no runtime dependencies.&lt;/p&gt;

&lt;p&gt;The architecture team stopped manually maintaining diagrams. Context maps generate from what teams declare in their Markdown files. New elements show up in the catalog as soon as the PR merges.&lt;/p&gt;
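
&lt;p&gt;That merge-to-live loop is plain CI. Here is a hypothetical GitHub Actions workflow; the job layout and the &lt;code&gt;dist&lt;/code&gt; output path are assumptions of mine, so adapt them to wherever you host the static output (Firebase, S3, GitHub Pages, Netlify):&lt;/p&gt;

```yaml
name: build-catalog
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
        working-directory: catalog-ui
      - run: npm run build
        working-directory: catalog-ui
      # Deploy step depends on your host; for GitHub Pages, for example:
      - uses: actions/upload-pages-artifact@v3
        with:
          path: catalog-ui/dist
```
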

&lt;p&gt;That is the real test — not whether it works on a demo dataset, but whether it holds at enterprise scale without becoming a maintenance burden.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Markdown and Git
&lt;/h2&gt;

&lt;p&gt;This is not a new argument, but it is worth making clearly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Git history is your architecture changelog.&lt;/strong&gt; Every structural change is a commit. You can diff the architecture between quarters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PRs are architecture reviews.&lt;/strong&gt; When a team adds a new service or declares a new dependency, it goes through the same review process as code. No separate approval workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI can read it without instruction.&lt;/strong&gt; Plain Markdown with structured frontmatter is natively parseable by any LLM. Ask your AI assistant about a domain — it can read the actual model, not a summary someone wrote last year.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No vendor lock-in.&lt;/strong&gt; Markdown files in a Git repo will be readable in 20 years. The same cannot be said for every architecture tool on the market today.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/ea-toolkit/architecture-catalog.git
&lt;span class="nb"&gt;cd &lt;/span&gt;architecture-catalog/catalog-ui
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;localhost:4321&lt;/code&gt; and you will see the dashboard. The demo data is a fictional B2B SaaS CRM — 3 domains, 71 elements — built to show off the features without being distracting.&lt;/p&gt;

&lt;p&gt;The README has instructions for pointing it at your own registry. The schema documentation on the docs site walks through every field in &lt;code&gt;registry-mapping.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source, MIT licensed
&lt;/h2&gt;

&lt;p&gt;The project is open source under the MIT license. Contributions are welcome — issues, PRs, feature requests, and feedback on the schema design are all useful.&lt;/p&gt;

&lt;p&gt;I am presenting this at DDD Europe 2026, where the talk covers the broader problem: how do you make architecture observable at enterprise scale without creating a second job for the architecture team? The catalog is part of that answer.&lt;/p&gt;

&lt;p&gt;If you have opinions on architecture tooling — what is broken, what you have tried, what you wish existed — I would genuinely like to hear them. How does your team track architecture decisions and element relationships today?&lt;/p&gt;

&lt;h2&gt;
  
  
  One Takeaway
&lt;/h2&gt;

&lt;p&gt;Architecture tooling fails because it separates the model from the thing being modelled. Plain Markdown in Git closes that gap — the model lives where the work lives, changes when the work changes, and reads the same way code does.&lt;/p&gt;

&lt;p&gt;That the diagrams draw themselves is a side effect of the model living where the work lives. And that is exactly the point.&lt;/p&gt;

</description>
      <category>ddd</category>
      <category>opensource</category>
      <category>architecture</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
