You don't need a perfect design upfront. You need a direction and the willingness to keep refining until things converge.
This is what I learned after building the same app a dozen times, burning it down, and rebuilding it, until the system taught me what it wanted to be.
The Myth of the Perfect Upfront Design
Every software project starts the same way: someone draws boxes on a whiteboard and declares, "This is the architecture." There's usually a lot of nodding. Sometimes there's a second whiteboard.
Then you actually start building, and reality shows up uninvited.
The boxes don't fit. The data flows the wrong way. That service you thought would be simple has seventeen edge cases. By week three, you're writing code that quietly apologizes to the whiteboard.
The hard truth is: you cannot know the full shape of your system before you've built parts of it. Requirements lie. Users surprise you. Scale reveals things that no diagram predicted.
And that's fine. The problem isn't that your design was wrong. The problem is treating the original design as sacred.
Eventual Consistency, But For Your Codebase
If you've worked with distributed systems, you know the concept of eventual consistency: different nodes in a system might have different views of the data at any given moment, but given enough time and communication, they all converge to the same state.
Your software design works the same way.
You start with incomplete information. Your architecture has gaps. Your understanding of the domain is partial. But if you keep iterating, each cycle making the design a little more honest, a little more aligned with reality, you converge toward something that actually fits.
The goal isn't to get it right immediately. The goal is to keep getting less wrong.
Call it Figure-It-Out-As-You-Go (FIAYG) Development. Not as catchy as TDD or DDD, but considerably more honest about how software actually gets built.
The key is: every time you learn something new, from a bug, from user feedback, from hitting a scaling wall, you encode that learning into the architecture. You don't just fix the symptom. You fix the system so the symptom can't recur.
How We Built Almadar (Or: A Story of Productive Chaos)
Almadar is a declarative framework that compiles state-machine schemas into full-stack apps. A single .orb file describes your entities, behaviors, and UI, and the compiler generates a working React frontend, Express backend, and database models.
That's what it is now. Here's how we got there.
Stage 1: The Single Project
It started simple. We had one project, a client that needed a CRUD-heavy business application. We hand-coded it the normal way: React components, Express routes, TypeORM entities. Boilerplate everywhere.
We noticed we were writing the same patterns over and over: a table to list records, a modal to create them, a form to edit them, and a confirmation to delete them. Copy. Paste. Rename. Repeat.
There must be a better way.
So we tried something: what if we described the data and behavior in a JSON schema, and generated the boilerplate from that?
It was rough. The schema was barely structured. The generator was a string template with fs.writeFileSync calls. But it worked. The client was happy. We shipped faster.
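To make the idea concrete, here's a minimal sketch of what such a first-pass generator looks like. The schema shape and helper names are invented for illustration (the real one wrote files with fs.writeFileSync; this version just returns strings):

```typescript
// Hypothetical mini-schema: one entity with typed fields.
type FieldType = "string" | "number" | "boolean";
interface EntitySchema {
  name: string;
  fields: Record<string, FieldType>;
}

// Generate a TypeScript interface for the entity's data shape.
function generateModel(schema: EntitySchema): string {
  const fields = Object.entries(schema.fields)
    .map(([name, type]) => `  ${name}: ${type};`)
    .join("\n");
  return `export interface ${schema.name} {\n${fields}\n}`;
}

// Generate the four CRUD route registrations as source text.
function generateRoutes(schema: EntitySchema): string {
  const base = `/${schema.name.toLowerCase()}s`;
  return [
    `router.get("${base}", list${schema.name});`,
    `router.post("${base}", create${schema.name});`,
    `router.put("${base}/:id", update${schema.name});`,
    `router.delete("${base}/:id", delete${schema.name});`,
  ].join("\n");
}

const task: EntitySchema = { name: "Task", fields: { title: "string", done: "boolean" } };
console.log(generateModel(task));
console.log(generateRoutes(task));
```

Crude as it is, this already captures the core move: the schema is data, and the boilerplate is a pure function of that data.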
What we learned: The 80% of every app that is boilerplate can be described declaratively. The 20% that's unique is what you actually want to spend time on.
Stage 2: The Second Client Breaks Everything
The second client had different requirements. Our generator was too tightly coupled to the first client's patterns. We'd hardcoded assumptions everywhere: naming conventions, file structure, component hierarchy.
We could have hacked it. Instead, we stopped and asked: What would a general version of this look like?
This is where the architecture started to crystallize. We introduced the concept of Orbitals, self-contained feature modules that combine an entity (data shape), a trait (behavior as a finite-state machine), and a page (route). We separated the schema definition from the code generation.
What we learned: The moment you have a second use case, your abstractions reveal themselves. If it doesn't generalize, it wasn't an abstraction. It was a coincidence.
Stage 3: The Monorepo and the Constraints
As the project count grew, so did the chaos. Each client project had its own design system, its own component variations, its own idea of what a "button" looked like. We were maintaining ten flavors of the same component.
So we restructured everything into a monorepo with explicit rules:
- Core components live in @almadar/ui, shared, versioned, never duplicated
- Client design systems live in projects/{name}/design-system/, extending core, not replacing it
- The compiler is in orbital-rust/, sacred, never modified without discussion
- Generated code goes in projects/{name}/app/, never edited directly
More importantly, we added constraints. Hard rules the team encodes in a CLAUDE.md file that every developer (human and AI) must follow:
- Never use raw HTML elements, always use design system components
- Schema first, edit the .orb file, recompile, never touch generated code
- Compiler is sacred, adding a rule requires explicit approval
And we enforced those constraints with a custom ESLint plugin (@almadar/eslint). Not optional guidelines — actual errors that block the pre-commit hook. Zero warnings allowed. Some of the rules we built:
| Rule | What it catches |
|---|---|
| `no-raw-dom-elements` | Blocks `<div>`, `<span>`, `<button>`, etc. Use `VStack`, `Typography`, `Button` instead |
| `no-as-any` | Blocks `as any` type casts. Fix the actual type issue |
| `no-import-generated` | Blocks importing from `projects/*/app/`. Generated code is not an API |
| `require-closed-circuit-props` | Every component must accept `className`, `isLoading`, `error`, `entity` |
| `require-event-bus` | Interactions must use `useEventBus()`, not raw `onClick` |
| `require-display-name` | Every component must set `displayName` for debugging |
| `require-stories` | Every new component must ship with a `.stories.tsx` file |
| `event-naming-convention` | UI events must follow the `UI:EVENT_NAME` pattern |
| `template-no-hooks` | Template components are pure. No `useState`, no `useEffect` |
| `template-no-iteration` | Templates don't loop. The organism does the iteration |
| `organism-no-callback-props` | Organisms communicate through the event bus, not prop callbacks |
The pre-commit hook runs ESLint on every staged .ts/.tsx file. If any rule fires, the commit is blocked. No exceptions. Every rule is severity 2 (error), not a warning. Warnings are aspirational. Errors are architecture.
The result: a new developer, human or AI, can't accidentally violate the design principles. The linter says no before the PR is even opened.
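For a flavor of what one of these rules checks, here's a simplified, standalone stand-in for no-raw-dom-elements. The real @almadar/eslint rule walks the AST through ESLint's API; this sketch just scans JSX tags, exploiting the fact that React treats lowercase tags as raw DOM elements:

```typescript
// Simplified stand-in for no-raw-dom-elements: flag lowercase JSX tags.
// The real ESLint rule operates on the AST, not on regexes.
function findRawDomElements(source: string): string[] {
  const violations: string[] = [];
  // Match opening JSX tags like <div>, <span className="x">, <button />.
  const tagPattern = /<([a-z][a-zA-Z0-9]*)[\s\/>]/g;
  let match: RegExpExecArray | null;
  while ((match = tagPattern.exec(source)) !== null) {
    violations.push(match[1]);
  }
  return violations;
}

// A component using raw elements: two violations.
const bad = `<div><Button label="Save" /><span>hint</span></div>`;
// The same component on design-system primitives: clean.
const good = `<VStack><Button label="Save" /><Typography>hint</Typography></VStack>`;

console.log(findRawDomElements(bad));  // ["div", "span"]
console.log(findRawDomElements(good)); // []
```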
Constraints sound limiting. They're actually liberating. When the rules are clear, decisions get faster, inconsistencies get caught earlier, and new team members can contribute without breaking things.
What we learned: A monorepo without constraints is just a polite monolith. Constraints are the architecture, and linting is how you make them real.
Stage 4: AI Changes the Rules
Here's where it gets interesting.
When AI coding assistants arrived, our constraint-based architecture proved to be exactly what they needed to work well. An LLM with a clear schema to edit, a pattern registry to validate against, a linter that blocks bad patterns, and a compiler that says "no" to broken output? That's a productive AI agent.
An LLM working in a free-form codebase with no rules? That's a very confident agent who confidently does the wrong thing.
We formalized this: the AI gets the constraints in CLAUDE.md, checks the pattern registry before adding components, runs ESLint on every file it changes, and validates the schema with the Orbital compiler before declaring anything done. The compiler is the AI's code reviewer, and it never gets tired, never misses a case, and runs in under 50ms.
The architecture we stumbled into through iteration turned out to be the right architecture for AI-assisted development. We didn't plan that. We converged on it.
What we learned: AI amplifies whatever structure (or chaos) already exists in your codebase. Constraints aren't bureaucracy. They're the difference between an AI that helps and an AI that confidently breaks things.
The FIAYG Playbook
If you're a beginner, here's how to apply this without building a framework from scratch:
1. Start simple. Embarrassingly simple.
Your first version should do one thing. Not the thing, plus error handling, plus caching, plus a configuration system. Just the thing. You will learn more from shipping something small than from designing something large.
2. Notice repetition, it's a message
When you write the same code for the third time, that's the codebase telling you something wants to be abstracted. Don't abstract on the first repetition (premature). Don't ignore the third (negligence). The second time is the signal, the third is the confirmation.
3. Encode your learnings as constraints
When you find a bug pattern — like a modal that can get stuck open — don't just fix this one modal. Figure out what rule would prevent that class of bug entirely. Write it down. Enforce it. In Almadar's case, we turned "every modal must have a CANCEL transition" into a compiler validation rule, and "never use raw HTML elements" into a linting error. In your project, it might be a custom ESLint rule, a pre-commit hook, or a code review checklist. The medium doesn't matter. What matters is that the rule is automatic. If it requires a human to remember it, it will eventually be forgotten.
4. Your architecture should get smaller over time
Every time you spot a pattern and generalize it, you should be writing less code to accomplish the same thing. If your codebase keeps growing at the same rate forever, you're accumulating complexity, not managing it. The goal is more functionality per line, not more lines.
5. Don't fear the pivot
We renamed things. We deleted entire packages. We moved the design system twice. Every time we did, the system got cleaner, and the rules got clearer. A pivot isn't a failure. It's the evidence that your understanding has improved.
What Convergence Looks Like
After several years and many client projects, here's what we converged on:
- One schema file per project, the single source of truth
- One compiler, deterministic, validates 50+ rules, generates all boilerplate
- One design system with per-project extensions
- One custom ESLint plugin, 17 rules enforced on every commit
- One standard library of reusable behaviors (CRUD, wizards, master-detail, pagination)
- One set of constraints that every developer and AI agent follows
The Takeaway
The best software architecture isn't the one you design before you start. It's the one that emerges from listening to what your system is trying to tell you, through bugs, through repetition, through the friction you feel when something is wrong.
Figure it out as you go. Encode what you learn. Keep getting less wrong.
Consistency is eventual. Ship anyway.
Comments
Really enjoyed this. The “eventual consistency for architecture” analogy is spot on.
I agree with the schema-as-SoT direction, and especially with the idea that projections (UI, backend, DB) can all be derived from a single source of truth. That’s powerful.
The only nuance I’d add is about separation of concerns at the semantic level.
I think there's a subtle but important distinction here:

- Projections absolutely should be generated from the schema.
- But ideally, the semantics themselves shouldn't be defined in terms of projection concerns.
In other words: the semantic model should be stable and projection-agnostic, while UI/BE/DB are just views over that meaning.
Your compiler + constraints approach already moves strongly in that direction. I’m just curious how far you see the semantic layer being independent from its projections over time.
Thanks for sharing this — it’s refreshing to see architecture framed as convergence rather than upfront perfection.
Really appreciate this framing — you're pointing at something we think about constantly.
You're right that there's a meaningful distinction between the semantic execution layer and the projections derived from it. And yes, ideally the semantics should be stable and projection-agnostic. That's actually the core architectural bet we've made.
How We Structure the Separation
In Almadar, the schema (the .orb file) defines three things: entities (data shapes), traits (state machines with guards, transitions, and effects), and pages (route bindings). The first two are purely semantic — they describe what exists and what can happen, not how it renders or where it persists.

The compiler translates this schema into an intermediate representation we call OIR (Orbital Intermediate Representation) — a language-agnostic, serializable structure that captures the full semantic model.
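We won't reproduce the actual OIR format here, but as a rough illustration of its shape — field names below are invented for this sketch, not Almadar's real format:

```typescript
// Rough, illustrative shape of an OIR-like structure.
// Field names are invented for this sketch, not the actual format.
interface OirEntity {
  name: string;
  fields: { name: string; type: string; required: boolean }[];
}
interface OirTransition {
  from: string;
  event: string;
  to: string;
  guard?: string;    // expression validated against entity definitions
  effects: string[]; // effect names, e.g. "persist", "render-ui"
}
interface Oir {
  entities: OirEntity[];
  traits: { name: string; initial: string; transitions: OirTransition[] }[];
}

// A tiny semantic model: a Task entity with one state machine.
const oir: Oir = {
  entities: [{
    name: "Task",
    fields: [{ name: "title", type: "string", required: true }],
  }],
  traits: [{
    name: "TaskCrud",
    initial: "list",
    transitions: [
      { from: "list", event: "CREATE", to: "editing", effects: ["render-ui"] },
      { from: "editing", event: "SAVE", to: "list", effects: ["persist", "emit"] },
    ],
  }],
};
// Serializable, so any compiler backend or tool can consume it.
console.log(JSON.stringify(oir).length > 0);
```

Nothing in this structure mentions React, Express, or a database — which is the whole point.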
The same semantic model compiles to multiple projection targets. The state machine logic, guard evaluation, and effect sequencing are identical across all of them. That's the proof that the semantic layer is genuinely independent — same .orb file, different shells, same behavior.

Where the Tension Lives
The interesting case is render-ui. It's an effect that lives in the semantic model but clearly has projection concerns. Semantically, a render-ui effect says: "present the user with a table of Tasks, with an action that emits EDIT." It doesn't say "use a React DataTable component" or "render with Tailwind CSS" — that mapping happens in the pattern registry, which is projection-specific. The semantic model names a pattern type (entity-table) and declares its event contract (what events it can emit). The shell's backend maps that pattern to a concrete component.
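As an illustrative sketch (not real .orb syntax, just the information such an effect carries), a render-ui declaration and the projection-specific registry it resolves against might look like:

```typescript
// Illustrative shape of a render-ui effect declaration -- not real .orb
// syntax, just the information such an effect carries semantically.
const renderTaskTable = {
  effect: "render-ui",
  pattern: "entity-table", // pattern type, resolved by the shell's registry
  entity: "Task",          // what to present
  emits: ["UI:EDIT"],      // event contract: what the affordance can emit
};

// A projection-specific pattern registry maps the pattern type to a
// concrete component; the semantic model never names the component.
const reactRegistry: Record<string, string> = {
  "entity-table": "DataTable",
};
console.log(reactRegistry[renderTaskTable.pattern]); // "DataTable"
```

Swap in a different registry (a CLI shell, say) and the same declaration resolves to a different realization without touching the schema.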
render-ui is semantic in the sense that it declares what affordances the user has — but the visual realization is always a projection concern, resolved at compile time by the backend.

Effects Are Semantic, Execution Is Projected
The same principle applies to all effects. The semantic model declares:
- set — mutate state (runs on both client and server)
- persist — write to storage (server projection)
- fetch — read from storage (server projection)
- call-service — invoke external system (server projection)
- render-ui — present UI affordance (client projection)
- navigate — change route (client projection)
- emit — publish event (both)

The effect executor routes based on RuntimeEnvironment (client vs server), not based on the platform language. The semantics are: "this transition persists data and shows a confirmation." Where that persist goes (Postgres, DynamoDB, SQLite) and what that confirmation looks like (toast, modal, CLI message) are projection decisions made by the shell, not by the schema.
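A minimal sketch of that routing idea, with invented names (the real executor handles many more concerns, like async sequencing and error effects):

```typescript
type RuntimeEnvironment = "client" | "server";
type EffectKind =
  | "set" | "persist" | "fetch" | "call-service"
  | "render-ui" | "navigate" | "emit";

// Which environment each effect kind runs in; "both" runs everywhere.
const routing: Record<EffectKind, RuntimeEnvironment | "both"> = {
  set: "both",
  persist: "server",
  fetch: "server",
  "call-service": "server",
  "render-ui": "client",
  navigate: "client",
  emit: "both",
};

// Given a transition's effects, return the ones this environment executes.
function effectsFor(env: RuntimeEnvironment, effects: EffectKind[]): EffectKind[] {
  return effects.filter((e) => routing[e] === env || routing[e] === "both");
}

// "This transition persists data and shows a confirmation":
const transition: EffectKind[] = ["set", "persist", "render-ui"];
console.log(effectsFor("server", transition)); // ["set", "persist"]
console.log(effectsFor("client", transition)); // ["set", "render-ui"]
```

The routing table is pure data, so the same transition declaration executes correctly in either environment without the schema knowing which one it's in.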
To your question about how far we see the semantic layer being independent over time — we think it can go very far, and the OIR is the key.
Today, the OIR already contains everything needed to generate a working application on any target platform. The Rust compiler validates all guard expressions against entity definitions, detects circular event chains, and can simulate all reachable states via BFS — all at the OIR level, before any projection is involved.
We also run parity tests that execute the same schema through both the Rust and TypeScript runtimes and compare the resulting state transitions and effects. If they diverge, the semantic model has a bug. This is only possible because the semantics are truly projection-independent.
The direction we're heading is making the OIR itself an exchangeable artifact — something you could hand to a different compiler, a different runtime, or even a formal verification tool. The schema defines meaning. Everything else is a view.
Thanks for the sharp question — "separation of concerns at the semantic level" is exactly the right frame for what this architecture is trying to achieve.