DEV Community

Ryo Suwito

Code and Coding is Dead: Function Driven Development or Extinct

You're still reading code reviews line by line, aren't you?

Let me show you why everything you learned about software engineering is about to become... optional.


ACT 1: The Stairs Nobody Reads

Here's a test.

You're in a burning building. Second floor. A fireman needs to get up there.

Quick: What are the stairs made of?

Bamboo? Steel? Carbon fiber? Telescopic aluminum? Folded wood?

You don't know. You don't care.

You care about one thing: Can the stairs hold the fireman?

If yes: Good stairs.

If no: Bad stairs.

That's it. That's the whole fucking review process.

Nobody's checking if the stairs follow SOLID principles. Nobody's asking if the joints are DRY enough. Nobody gives a shit if the architect used bamboo in a non-standard way.

Did it work? Ship it.

Welcome to Function Driven Development.


ACT 2: The SQL Moment

You run a query:

SELECT * FROM users WHERE email = 'bob@example.com'

Result comes back in 3ms.

Do you care how PostgreSQL achieved this?

Do you care about the B-tree structure? The query planner's decisions? The index implementation? The page layout? The WAL buffer?

No. You care that it returned correct data fast.

That's Function Driven Development.

State the outcome you want. If it delivers, the implementation is irrelevant.

You don't review PostgreSQL's source code before running queries.

You don't debate whether the query planner made "clean" decisions.

You don't care if it follows SOLID principles internally.

It works. That's the contract.
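The same contract can be felt in a few lines of code. A minimal sketch using Python's stdlib SQLite driver as a stand-in for the PostgreSQL example above: you get the row back, and the plan exists if you ever want it, but nothing about running the query requires reading it.

```python
import sqlite3

# In-memory SQLite as a stand-in for the PostgreSQL example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")
conn.execute("INSERT INTO users (email) VALUES ('bob@example.com')")

# The contract: correct row, fast. How the engine gets there is its business.
row = conn.execute(
    "SELECT * FROM users WHERE email = ?", ("bob@example.com",)
).fetchone()
print(row)  # (1, 'bob@example.com')

# The plan is inspectable if you ever need it --
# but running queries never requires reading it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("bob@example.com",)
).fetchall()
print(plan)
```

You asked for an outcome; the engine chose the index. That division of labor is the whole point.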


ACT 3: Your E-commerce Site is Not Special

You're building an online store.

Old way (Code Driven Development):

Step 1: Architecture review
Step 2: Technology selection committee
Step 3: Design patterns discussion
Step 4: Code standards document
Step 5: Implementation begins (3 months later)
Step 6-47: Code reviews, refactoring, "best practices"
Step 48: Ship (6 months total)

New way (Function Driven Development):

Requirement: E-commerce site
Prompt: "Build e-commerce with products, cart, checkout, Stripe"
AI: *generates 50,000 lines*
You: Run tests

✓ Can see products?
✓ Can add to cart?
✓ Can checkout?
✓ Stripe integration works?
✓ Webhook receives payment?

SHIP IT. (3 days)

But the code is a mess!

Who. The fuck. Cares.

Your users aren't running code reviews. They're running transactions.

Metrics that matter:

  • Conversion rate: 3.2% ✓
  • Page load: 1.2s ✓
  • Payment success: 99.7% ✓
  • Security audit: Passed ✓

Metrics that don't matter:

  • Cyclomatic complexity: 47
  • Code coverage: 63%
  • SOLID violations: 127
  • "Clean Code" score: D-

Your customers can't see your code.

Your bank account can see your metrics.

Choose.


ACT 4: The SPMS Revolution

Here's the actual architecture that makes FDD work:

Single Purpose Microservice (SPMS)

The Supreme Law:

If your product requirements + changelog + known issues can't fit in an AI context window, your service is too big. Split it.

Not:

  • Lines of code (could be 50, could be 5,000)
  • Number of endpoints (could be 1, could be 10)
  • "Does it feel cohesive?" (subjective bullshit)

Just:

  • Can AI understand the complete context in one shot?
  • Can you explain everything this service does + why it does it that way?

If no: Too big. Split it.

The context window becomes your natural unit of decomposition.
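The "supreme law" above can be sketched as a single check. This is an illustrative sketch, not a real tokenizer: the ~4 characters-per-token estimate and the 50% headroom are assumptions you'd tune for your model.

```python
def fits_in_context(requirements: str, changelog: str, known_issues: str,
                    context_tokens: int = 128_000) -> bool:
    """Rough check of the SPMS 'supreme law': do the three artifacts fit
    in one AI prompt? Uses a crude ~4 chars/token estimate (an assumption;
    real tokenizers vary by model)."""
    total_chars = len(requirements) + len(changelog) + len(known_issues)
    estimated_tokens = total_chars // 4
    # Leave headroom for the instruction itself and the generated reply.
    return estimated_tokens < context_tokens // 2

# A small service fits; a sprawling one signals "split it".
print(fits_in_context("Cart service: add/remove items.", "v1.0 initial", ""))  # True
print(fits_in_context("x" * 2_000_000, "", ""))  # False
```

When this returns False, you don't trim the changelog. You split the service.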


The Three Artifacts

Each SPMS has exactly three things:

1. Product Requirements

What does this service do?

2. Changelog

Every edge case. Every quirk. Every fix. Every "we tried X but it broke because Y."

This is your tribal knowledge. This is how AI regenerates correctly.

3. Playwright Tests

The behavioral contract. What must remain true.

That's it. That's the whole memory.

The implementation? Disposable.
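The three artifacts can be written down as a data shape. A hypothetical sketch, just to make the point visible in code: note what the structure does NOT contain.

```python
from dataclasses import dataclass, field

@dataclass
class SPMS:
    """The three durable artifacts of a Single Purpose Microservice.
    Note what is NOT here: the implementation. It is disposable output,
    regenerated from these inputs on demand. (Illustrative sketch only.)"""
    requirements: str                                    # what the service does
    changelog: list[str] = field(default_factory=list)   # every quirk, fix, lesson
    test_files: list[str] = field(default_factory=list)  # the behavioral contract

checkout = SPMS(
    requirements="Checkout: cart -> Stripe payment -> order record.",
    changelog=["v2.3.1: SEPA webhooks arrive out of order; added retry"],
    test_files=["tests/checkout.spec.ts"],
)
print(checkout.requirements)
```

Everything in that dataclass survives a regeneration. Nothing outside it needs to.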


The Regeneration Cycle

When you need to change something:

Input: Requirements + Changelog + "Add feature X"
AI: *generates new implementation*
Validation: Run Playwright tests
Result: Pass → Ship / Fail → Iterate

When something breaks:

Input: Requirements + Changelog + "Fix bug Y"
AI: *generates new implementation*
Validation: Run Playwright tests
Result: Pass → Ship / Fail → Iterate

When the code gets messy:

Input: Requirements + Changelog (unchanged)
AI: *generates completely fresh implementation*
Validation: Run Playwright tests
Result: Pass → Ship / Fail → Iterate

Notice something?

The flow is identical. Implementation is always disposable.

Like asking PostgreSQL to re-plan your query. You don't care. It either returns correct results or it doesn't.
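All three flows above are literally the same loop. A sketch of that single loop, with `generate` and `run_tests` as hypothetical stand-ins for an AI codegen call and a Playwright suite; the stubs below only exist to make the loop runnable.

```python
def regenerate(requirements, changelog, change_request, generate, run_tests,
               max_attempts=3):
    """The one FDD loop: identical for new features, bug fixes, and cleanups.
    `generate` and `run_tests` are stand-ins for an AI codegen call and a
    Playwright suite (hypothetical hooks, not a real API)."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        impl = generate(requirements, changelog, change_request, feedback)
        passed, feedback = run_tests(impl)
        if passed:
            return impl  # Pass -> ship
        # Fail -> iterate, feeding the test failures back into the prompt
    raise RuntimeError(f"Still failing after {max_attempts} attempts")

# Toy stubs: "generation" succeeds once the failure feedback is fed back in.
def fake_generate(req, log, change, feedback):
    return "fixed" if "missing" in feedback else "draft"

def fake_run_tests(impl):
    return (True, "") if impl == "fixed" else (False, "missing checkout test")

print(regenerate("cart", [], "add feature X", fake_generate, fake_run_tests))
# -> "fixed" (first draft fails, second attempt incorporates the feedback)
```

Feature, bug fix, cleanup: only `change_request` differs. The loop never does.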


ACT 5: The Maintenance Economics

Traditional thinking:

"Rewriting is expensive. Debugging is cheaper. Avoid rewrites."

This was true when rewrites took months.

It's false when rewrites take hours.

The New Economics:

Patch (AI edits existing code):

  • Time: 30 minutes
  • Risk: Might break subtle things
  • Cost: Low

Overhaul (regenerate from scratch):

  • Time: 45 minutes
  • Risk: Playwright catches breaks
  • Cost: Also low

Decision tree:

Is patching clearly simpler?
├─ Yes → Patch
└─ No → Overhaul

Did patch work?
├─ Yes → Ship
└─ No → Overhaul

When costs are similar, choose fresh over fix.

Debugging crusty AI code vs regenerating clean code?

Just regenerate.

The stairs don't need to be durable if rebuilding is faster than repairing.
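The decision tree collapses to one function. A sketch under an assumed threshold: "clearly simpler" here means the patch costs under half the overhaul, which is my cutoff, not the article's.

```python
def choose_strategy(patch_minutes: float, overhaul_minutes: float) -> str:
    """Patch only when it is *clearly* simpler; when costs are similar,
    prefer regenerating fresh. The 50% threshold is an assumption --
    tune it to your own regeneration costs."""
    if patch_minutes < overhaul_minutes / 2:
        return "patch"
    return "overhaul"

print(choose_strategy(10, 45))  # patch: clearly simpler
print(choose_strategy(30, 45))  # overhaul: costs are similar, go fresh
```

The interesting part is the default: ties go to the overhaul, not the patch.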


ACT 6: The Self-Regulating Architecture

Old Problem: Systems grow complex until nobody understands them.

New Solution: Systems can't grow complex beyond context window limit.

The cycle:

Service starts simple
↓
Requirements added
↓
Changelog grows
↓
Context window filling up
↓
"Can't fit in context anymore"
↓
SPLIT THE SERVICE
↓
Two simple services

The architecture self-regulates.

Can't fit in context → too complex → split → stays regeneratable.

Just like:

  • Unix philosophy: "Do one thing well"
  • Microservices: "Bounded contexts"
  • SPMS: "Fits in AI context"

The context window isn't a limitation.

It's a design principle.
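The self-regulating cycle can be simulated in a few lines. An illustrative sketch with assumed numbers (2,000 characters per changelog entry, a 100,000-character budget standing in for tokens): the changelog grows release by release until the budget is exceeded, and the only move left is a split.

```python
def grow_and_split(changelog_entries, entry_chars=2_000, budget_chars=100_000):
    """Simulate the cycle above: a service's changelog grows until its
    context budget overflows, then the service splits into two halves,
    each inheriting half the history. Re-checks halves until all fit."""
    services = [list(changelog_entries)]
    result = []
    while services:
        svc = services.pop()
        if len(svc) * entry_chars <= budget_chars:
            result.append(svc)              # fits in context: leave it alone
        else:
            mid = len(svc) // 2             # too big: split, re-check halves
            services += [svc[:mid], svc[mid:]]
    return result

# 80 releases x 2k chars = 160k chars: one split yields two fitting services.
print(len(grow_and_split(range(80))))  # 2
print(len(grow_and_split(range(10))))  # 1
```

No architect decides when to decompose. The budget does.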


ACT 7: But What About...

"What about security?"

Companies already pay external security auditors. Whether your code was hand-crafted or AI-generated doesn't change the audit.

Security scan passes → Ship it.

Security scan fails → Fix and re-scan.

Same process. Different implementation source.

"What about tech debt?"

Tech debt only matters if you're maintaining the code.

If regeneration costs < maintenance costs, there is no debt.

The implementation is disposable.

"What about losing context?"

That's what the changelog is for.

Traditional codebases lose context too:

  • Original dev leaves
  • Comments get outdated
  • Git blame is archeology

FDD forces documentation discipline that should have existed anyway.

If you can't explain the requirement clearly enough for AI to implement it, your requirement wasn't clear enough for humans either.

"What about integration breaking?"

That's what Playwright tests are for.

If regeneration breaks other SPMS, tests fail. You know immediately.

The system tells you when you fucked up.

"What about edge cases?"

That's what the changelog captures.

## v2.3.1 - Bug Fix
- Fixed: German SEPA payments failed due to race condition
- Solution: Added 500ms delay for webhook processing
- Context: Stripe webhooks arrive out of order for SEPA

AI regenerates with that context.

"What if tests don't catch everything?"

Then your tests were insufficient. You learn. You add that test.

Outcome validation, not process validation.

"What about [insert concern]?"

Does it pass tests?

Do metrics look good?

Is production stable?

Yes? Then it's good.

No? Then fix it.

Stop asking "what if" questions. The system tells you when things break.


ACT 8: The Two Types of Stairs

Not all code is created equal.

There's hello.cpp and there's hftbidding.cpp.

hello.cpp mindset:

  • Does it work? ✓
  • Does it meet basic requirements? ✓
  • Ship it.
  • Never think about it again.

hftbidding.cpp mindset:

  • Does it work? ✓
  • Performance requirements met? 🔍
  • Security implications understood? 🔍
  • Edge cases handled? 🔍
  • Failure modes documented? 🔍
  • Can we debug this at 3 AM? 🔍

Most code is hello.cpp.

Your internal Slack bot? hello.cpp. Ship the AI slop.

Your content management system? hello.cpp. Regenerate when needed.

Your recommendation engine? hello.cpp. Outcomes matter, implementation doesn't.

Some code is hftbidding.cpp.

Your payment processing core? hftbidding.cpp. Maybe actually understand it.

Your high-frequency trading algorithm? hftbidding.cpp. Probably hand-code this.

The mistake developers make:

Treating all code like hftbidding.cpp.

The skill isn't writing perfect code for everything.

The skill is knowing which category you're in.

And here's the thing: 90% of your codebase is hello.cpp.


ACT 9: The Framework

90% of code: Function Driven Development

1. Write requirements clearly
2. Maintain changelog
3. Build Playwright tests
4. Generate implementation (AI)
5. Validate outcomes

✓ Tests pass?
✓ Metrics green?
✓ Other SPMS work?

SHIP IT.
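The whole review process above fits in one function. A hedged sketch: the three booleans are stand-ins for your real Playwright run, metrics dashboard, and cross-service checks.

```python
def ship_decision(tests_pass: bool, metrics_green: bool,
                  dependent_services_ok: bool) -> str:
    """The FDD review process as a function: outcome checks in,
    ship/iterate out. Nobody reads the diff on the happy path.
    (Inputs are stand-ins for real test, metric, and SPMS checks.)"""
    if tests_pass and metrics_green and dependent_services_ok:
        return "SHIP IT"
    return "iterate (regenerate with the failures as feedback)"

print(ship_decision(True, True, True))   # SHIP IT
print(ship_decision(True, False, True))  # iterate ...
```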

Don't read the code unless:

  • 🔴 Security audit fails
  • 🔴 Performance unacceptable
  • 🔴 Tests pass but production breaks
  • 🔴 You're in hftbidding.cpp territory

When code gets messy:

Don't refactor. Regenerate.

When requirements change:

Don't edit. Regenerate.

When bugs appear:

Add to changelog. Regenerate.

The implementation is always disposable.

Like PostgreSQL query plans. You don't maintain them. The database regenerates them.

Your job is to maintain:

  • Clear requirements
  • Comprehensive tests
  • Detailed changelogs

Not the implementation.


ACT 10: Code is Dead. Long Live Outcomes.

The thesis:

Code is dead as a primary artifact.

Code is alive as compiler output.

The practice:

Ship functions, not implementations.

Monitor outcomes, not code quality.

Maintain requirements, not codebases.

The architecture:

SPMS: Services bounded by context windows.

Three artifacts: Requirements, changelogs, tests.

Regeneration: Always cheaper than you think.

The validation:

Does it pass tests?

Do metrics look good?

Is production stable?

Then it's good.

The mindset shift:

Stop asking: "Is this good code?"

Start asking: "Does this work?"

Stop asking: "How should we implement this?"

Start asking: "What outcomes do we need?"

Stop asking: "Should we refactor?"

Start asking: "Should we regenerate?"


The Supreme Law

State the outcome you want.

If it delivers, the implementation is irrelevant.


Code is dead. Long live outcomes.

The stairs can be bamboo, steel, or carbon fiber. As long as they hold the fireman.

And when they break, you build new ones. Because rebuilding is faster than repairing. And outcomes matter more than implementation.

Welcome to Function Driven Development. Where code is compiler output. And your job is to maintain requirements, not implementations.

The teenager already shipped. While you were reading this.

Choose accordingly.

The fire is real.

Build your stairs accordingly.

Top comments (5)

Danny Engelman • Edited

Just a two-word reply is enough: Ford Pinto. youtube.com/watch?v=EKnfEEsDkP4

Ryo Suwito

That's... according to FDD, the F part...
It doesn't help if you start with a broken first principle: the goal statement!

Requirements Document:

  • Transport passengers: ✓
  • Cost under $2000: ✓
  • Survive rear collision: ❌ FAIL

Changelog:
"Known issue: 20mph rear collision ruptures fuel tank
Reason: $11 part omitted to meet cost target
Status: SHIPPING ANYWAY"

Playwright Test:
Test: Rear collision safety
Expected: Passengers survive
Actual: PASSENGERS DIE
Result: ❌ CRITICAL FAILURE

Traditional development didn't save the Pinto. Code reviews didn't save it. Engineering rigor didn't save it.
What would've saved it? Validating the actual FUNCTION against reality.
Which is... exactly what FDD proposes. Start with the right function, validate relentlessly, don't ship if it fails.

Danny Engelman • Edited

Luckily that was 50 years ago, my Tesla is 100% safe

That teenager in his garage is now designing the new skyscraper downtown.
I am cool, he has the right qualifications... he knows how to prompt.

It's not about Function, it's about Experience; without it you cannot write a prompt.

Ryo Suwito

Just saying...
Those Teslas aren't 100% safe. And their safety doesn't come from "artisanal hand-coded perfection."
Those crash test dummies getting crushed? That's Playwright in physical form. Outcome validation, not code review.
Elon's entire playbook: "Can it fly? Send it to Mars and find out."
Anyway, read slowly next time. The "teenagers building skyscrapers" panic? Article explicitly covers that in the hello.cpp vs hftbidding.cpp section.
And honestly? Nobody cares if Tesla's LiDAR is coded by Grok AI... as long as it stops before hitting the pedestrian.
That's the whole point.

And, uh, by the way: this comment is AI-assisted/generated. Wonderful, right? You understand it like it's handwritten by a sage... Anyway, I revised it several times because it didn't pass my savage-but-polite metric... an FDD comment. What a day to be alive.

Danny Engelman

If the function is to deliver A site, they succeeded.
They got $16M in funding and now their site does nothing.

NASA lost a spacecraft in 1999 because one American team decided NOT to use the standard SI units and calculated in their familiar US units.
en.wikipedia.org/wiki/Mars_Climate...