Jono Herrington

Originally published at jonoherrington.com

AI Needs Curriculum, Not Better Prompts

You've been there.

The prompt that should work. The rephrase that feels clever. The context you're sure will fix it this time. And the AI just keeps barreling forward ... confident, eager, completely lost.

I was in that spiral over a year and a half ago, trying to get a module refactored. Each exchange felt like I was getting dumber. The AI wasn't being difficult. It was being helpful in exactly the wrong way.

That's when I wanted to throw my monitor out the window.

Not because the AI was wrong. Because I was. I was treating a systematic problem like a communication problem.

I've thought about that moment a lot since. The frustration wasn't the AI's fault. It was mine. I was asking for output without building the system that produces good output.

The Vending Machine Fallacy

Here's how most engineers use AI.

Prompt in. Code out. When it fails, prompt harder. Different words. More context. Hoping the next output works.

We treat it like a vending machine. If the first dollar doesn't work, we try another dollar. Same machine. Same problem. Different input.

That's nonsensical.

When my junior breaks something ... and they will ... I don't rephrase the question. I don't keep feeding dollars into a broken machine. I trace their reasoning. Find the gap. Train the pattern. Encode the why so they learn.

The junior who keeps going down the wrong track isn't being difficult. They're being human. They've lost sight of what they're actually trying to do. They're trying to please the person asking for output. It's a quintessential systems problem. The context is missing. The constraints are unclear. The definition of "good" hasn't been established.

AI has the same problem. It just doesn't know it.

Non-Determinism Is a People Problem

Engineers love to complain that AI is non-deterministic. Same prompt, different outputs. Unpredictable. Unreliable.

I've got news. Humans are non-deterministic too.

Ask an engineer the same question three times ... morning, afternoon, after a bad deploy ... and you'll get different answers based on stress, sleep, what they ate. We've always managed variability with standards and accountability. Same solution applies to AI.

The problem isn't that the tool is unpredictable. It's that we haven't built the system to make predictions useful.

The Operating System

I built something on my team. Not because AI joined the codebase. Because good engineering requires it.

Lint rules as feedback loops. Not nitpicks in pull requests ... encoded patterns that teach before the mistake ships. Architectural tests as training data. Constraints that define what "good" looks like before the AI starts guessing.
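A minimal sketch of what an encoded pattern can look like, assuming an ESLint flat config. The `@app/*/internal/*` path and the message are illustrative, not my team's actual rules:

```typescript
// eslint.config.ts (flat config). One boundary, encoded once, with
// the "why" in the message so the lesson lands before review.
export default [
  {
    files: ["src/**/*.ts"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              // Feature internals are private. Anyone reaching past
              // the public index gets told why, immediately.
              group: ["@app/*/internal/*"],
              message:
                "Internal modules are private to their feature. " +
                "Import from the feature's public index instead.",
            },
          ],
        },
      ],
    },
  },
];
```

The rule fires in the editor and in CI, so the feedback is the same every time, for every author ... human or AI.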

Here's the thing engineers miss. These aren't AI-specific ideas. We didn't create lint rules because AI is in the codebase. Good engineers already have lint rules to avoid nitpicks in pull requests. We don't have architectural tests because AI is in there. We have them because we want to scale without breaking things.

The AI doesn't get special treatment. It operates inside the constraints that were already there.

Now when my AI breaks, the system catches it. Teaches it. Tightens the pattern. The feedback is immediate, consistent, and encoded. Not "try again with better words." Try again inside boundaries that define what right looks like.
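What "the system catches it" can look like in practice: a minimal architectural test, assuming a Vitest setup and a `src/domain` / `src/infra` split (both names illustrative, run from the project root):

```typescript
// arch.test.ts: a sketch of an architectural test. The constraint is
// encoded once; any file that crosses the boundary fails the build.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { describe, expect, it } from "vitest";

// Recursively collect the .ts files under a directory.
function tsFilesUnder(dir: string): string[] {
  const files: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) files.push(...tsFilesUnder(full));
    else if (entry.name.endsWith(".ts")) files.push(full);
  }
  return files;
}

describe("architecture", () => {
  it("domain code never imports infrastructure", () => {
    // Flag any domain file whose imports reach into /infra/.
    const offenders = tsFilesUnder("src/domain").filter((file) =>
      /from\s+["'][^"']*\/infra\//.test(readFileSync(file, "utf8")),
    );
    expect(offenders).toEqual([]);
  });
});
```

When the AI wires infrastructure straight into domain logic, the build fails with the offending file named. Immediate, consistent, encoded.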

Garbage In, Systems Out

Real leaders create systems where people thrive. This isn't a new idea. It's the foundation of management.

Give somebody a task without context, without training, without a definition of success ... even smart people produce garbage. Give them clear constraints, feedback loops, and guardrails ... suddenly they're producing at the level you expect.

AI is no different. "Garbage in, garbage out" applies no matter how smart the tool is.

The engineers who will win this transition aren't the ones who prompt better. They're the ones who think systematically. Who build infrastructure before they ask for output. Who understand that managing AI isn't different from managing people ... both need context, constraints, and curriculum to produce consistently.

People leaders who understand technology are going to dominate the next decade. Not because they can write the best prompts. Because they know how to build systems that produce good work.

Accountability Is Architecture

Accountability isn't "I prompted better."

It's "I built the system that trains."

When I was in that prompting spiral over a year and a half ago, I was trying to solve a people problem with better vocabulary. The AI wasn't failing to understand me. It was failing to operate inside a system that could teach it what I wanted.

That's the shift. The question isn't how do I get better output from my AI. The question is how do I build the infrastructure that produces good output reliably.

My junior's mistakes teach because I built the system that turns mistakes into learning. My AI's breaks teach because I built the system that catches breaks before they ship.

Build the operating system first. Then accelerate.

The vending machine doesn't need better dollars. It needs better design.


One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. Subscribe for free.