DEV Community

Joseph Fagan
I don't code. I just built a $600K production platform in 3 months with a team of AIs. Here's how...

Full methodology + templates (MIT licensed): github.com/recon007jf/JFMAD


There's a moment - and I think anyone who's built something real with AI has had it - where you realize your ad hoc approach has silently become a system.

For me, it happened around month two of building an AI-powered sales intelligence platform. I was staring at a Notion workspace full of constitutional documents, sprint plans, role assignments, and behavioral calibration notes, and I thought: when did I start doing this?

I'm a 3D artist and creative director by background. Mercedes, Rivian, Meta, Audi. I don't code. I didn't set out to create a software development methodology. I set out to build a product for a client whose name and reputation were on the line, and the methodology is what emerged when things kept breaking.

The Two Failure Modes

Most people using AI for development are in one of two places, and I've been in both:

Vibe coding. You open ChatGPT, describe what you want, iterate on the output until it looks right. This works beautifully for scripts, prototypes, one-off tools. It breaks down the moment your system has enough moving parts that a fix in one area silently breaks something in another. There's no one checking. There's no memory. You're rebuilding context every session.

Single-model copilot. You pick your favorite model and use it for everything - architecture, code, review, planning. Better than vibe coding, but you're asking one intelligence to be good at everything. You're also re-explaining your entire project every time the context window fills up. And nobody is challenging the model's assumptions, because the model is the only one in the room.

Both modes hit a wall around the same point: when the system gets complex enough that decisions have consequences across boundaries, and when the stakes are high enough that "it looks right" isn't good enough.

What I Ended Up Doing

I started assigning different AI models to different roles. Not as a gimmick - because I genuinely noticed they were better at different things.

Claude became my systems architect. It has a way of thinking about how pieces connect that the other models don't. When I describe a problem, Claude tends to see the system around it - the second and third-order effects.

ChatGPT became the product manager. It's excellent at process enforcement, scope management, and asking "but does this actually serve the user?" It keeps things grounded in practical requirements.

Gemini became the technical lead. Particularly good at catching risk - "this approach won't scale," "this introduces a dependency you haven't accounted for," "here's the edge case that breaks this."

A developer AI ships code from plain text directives I write. No documents, no files passed around - just clear text saying exactly what to build.

I set direction, make final decisions, and hold everyone accountable. I can't write code, but I can tell you if the architecture serves the client, if the behavior is right, and if the output earns the trust of the person whose name is on it.

The Six Principles

As the approach matured, I identified six principles that made it work:

1. Model-Native Role Assignment. Don't role-play - cast. Each model is chosen for a role because it's genuinely better at that function. This isn't one model pretending to be five people.

2. Adversarial Consensus. I force models to challenge each other's work at every level - not to be contrarian, but to find the best solution through consensus. The architect proposes, the PM pushes back on user impact, the tech lead flags feasibility risk. They debate until the strongest answer emerges. The friction is intentional and goal-directed.

3. Constitutional Governance. The system is governed by a living constitution. Not a PRD. A behavioral contract that defines what the product does, how the AI behaves on behalf of real people, what design principles are non-negotiable, and what decisions are locked.

4. Client as Co-Creator. The end user isn't a passive stakeholder submitting feature requests. Their feedback is treated as constitutional amendments. Their exact phrasing matters. Their discomfort with a draft isn't a bug report - it's a calibration signal.

5. Institutional Memory via Documentation. No single AI session holds the full picture. Your documentation workspace is the team's shared brain. If it's not documented, it didn't happen.

6. Plain Text Directives. All instructions to the development layer are plain text in chat. Never documents or files. This prevents misinterpretation and keeps the implementation layer focused on exactly what was specified.
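To make principle 6 concrete, here's the shape of a directive as I'd send it in chat. The specifics below are invented for illustration, not taken from the actual project, but the structure is the point: scope, behavior, a definition of done, and explicit out-of-scope boundaries, all in plain text.

```text
DIRECTIVE: Add a safety gate to the drafter output path.

Scope: The email drafting module only. Do not touch the scoring engine.

Behavior: Before any draft is surfaced, check it against the banned-phrase
list defined in the constitution. If a phrase matches, block the draft and
log which phrase triggered the block.

Done means: A blocked draft never reaches the user, and every block is
logged with its trigger.

Out of scope: Changing the banned-phrase list itself. That is a
constitutional decision, not an implementation decision.
```

Because the directive lives in chat rather than in an attached document, the development layer can't pull in stale context from an old file version; it builds exactly what the text says, nothing more.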

What It Built

Using this approach, I built in about three months what I've been told would typically require 9-12 months and $400-600K with a traditional team:

  • A proprietary data pipeline combining DOL filings, BenefitFlow, and PDL enrichment
  • A deterministic scoring engine with narrative intelligence
  • An AI drafter calibrated to a specific person's voice and market context
  • A morning plan workflow across 8 states
  • A live production application with safety gates
  • 97+ contract tests with zero regressions

I want to be honest: this was one project. I don't know yet how well the methodology generalizes. I think it will, but I haven't proven that yet.

The Discovery That Changed How I Saw This

Just before writing up the methodology in February 2026, I found BMAD - the Breakthrough Method of Agile AI-Driven Development, published in April 2025. The name JFMAD was inspired by BMAD - that's not a coincidence. But the methodology itself was developed completely independently.

The structural overlap is significant: role-based AI assignment, governance documents, structured handoffs, institutional memory as a core pillar. But the approaches diverge in interesting ways:

| | BMAD | JFMAD |
|---|---|---|
| Role assignment | Roles via system prompts to a single LLM | Different AI models per cognitive strength |
| Orchestration | Automated orchestrator agent | Human orchestrator with explicit chain of command |
| Governance | PRDs and architecture docs | Living constitutional documents |
| Review process | Role-based review stages | Adversarial consensus at each level |
| Origin | Theory-first (April 2025) | Practice-first (November 2025) |

I don't think either approach is better. They're complementary. Two people arriving at the same core structural conclusions independently - one from theory, one from practice - suggests this is a real pattern, not just one person's workflow preference.

Getting Started

If any of this resonates, the full methodology and starter templates are here:

github.com/recon007jf/JFMAD

Templates for your constitution, roadmap, and project principles are included. MIT licensed - use it, adapt it, make it yours.

The thing I'd suggest starting with: write your constitution before you write any code. Define what the product is, who it serves, and what's non-negotiable. Everything else flows from there.
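If it helps to see what that first constitution can look like, here's an illustrative skeleton. This is my sketch of the sections described above, not the actual JFMAD template from the repo - the real one has more detail.

```text
# Constitution - [Product Name]

## What this product is
One paragraph, in plain language, that every AI role can be held to.

## Who it serves
The named client or user - and their exact phrasing for what "good"
looks like, because their wording is a calibration signal.

## Non-negotiable design principles
1. ...
2. ...

## Behavioral contract
How the AI behaves on behalf of real people: tone, boundaries, what it
will never say or do.

## Locked decisions
Decisions that are settled. Reopening one requires an explicit amendment,
not a casual suggestion in a new session.
```

A page of this is enough to start. The sections fill in as the project teaches you what's actually non-negotiable.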

I'd genuinely love to hear from anyone who's tried something similar. What worked? What didn't? What am I missing?
