I started with 252 lines of AI coding principles. After four rounds of review, 86 lines survived. But first — a question.
Is your team measuring AI coding productivity by any of these?
- Lines of code generated
- Number of prompts per session
- Response speed
- Commit count
- Number of AI tools adopted
If so, you might be optimizing for the wrong things. Lines-of-code targets reward bloat. Prompt-count targets punish thinking. Speed targets skip verification. When these metrics become goals, quality pays the price.
I wrote an 86-line document called The AI Coding Way that explains why — and what to measure instead. It tries to capture what stays true about human-AI coding collaboration, regardless of which tool or model you use.
What's in it
Three principles, in order of priority:
Keep things reversible (prerequisite) — Linters, tests, CI, version control. Safety enables boldness. You can't ask AI to refactor a module if you can't undo it.
Make your intent explicit (starting point) — Every context you give AI has three elements: purpose, constraints, and knowledge. Missing any one of them degrades output quality. Project-level intent (types, tests, naming conventions) compounds across every session.
Verify the output (non-negotiable) — The bottleneck has shifted from generation to verification. Code that was generated fast should be reviewed slow. The most expensive decision in AI coding: "it works, ship it."
One practice section covering the collaboration cycle (communicate → generate → verify → improve) and two habits: ask questions instead of giving instructions, and turn repeated instructions into project-level conventions.
One measurement section listing six metrics that are tempting to track but dangerous if used as goals: lines generated, prompt count, response speed, commit count, session count, tool count. These aren't useless — but optimizing for them alone leads you away from what matters. Measure density instead: acceptance rate, rework frequency, final code quality.
What's not in it
No tool names. No model names. No programming languages. No prompt templates. No opinions on whether AI is your boss, your colleague, your subordinate, or your tool — that's your call. The three principles apply regardless.
Why I wrote it
There are plenty of AI coding guides out there. Anthropic, OpenAI, and countless blog posts tell you how to write better prompts. But most of them have a shelf life of about six months — the next model update makes half the advice obsolete.
I wanted something different: a set of principles that hold true even as models get smarter, context windows get larger, and tools come and go.
So I set a rule: don't write anything that a better model would invalidate. "AI hallucinates" is a current fact, not a lasting principle. "AI output is probabilistic" is a lasting principle. "Context windows are small" will age poorly. "Humans are responsible for verifying output" won't.
How I wrote it
This document is itself a product of AI coding. AI agents debated the structure. AI generated the text. I made every decision on what to keep, what to cut, and how to frame it — exactly the cycle the document describes.
It started at 252 lines. Four AI agents reviewed it. One of them said: "A blog post would have been enough." That forced me to ask what actually survives if you strip everything away. For example, a full section called "AI's Amplification Effect" was cut as a standalone section — but its core insight ("AI amplifies both good and bad design") survived as two lines in the principles preamble. That's the kind of compression that happened across the board. The answer was 86 lines.
The full document
Here it is — the entire thing. 86 lines.
The AI Coding Way
Principles for AI Coding — v0.1, March 2026
If you write code with AI and want to get better at it, these principles are for you. Being a good engineer is the best AI strategy.
This document is meant to live in your project repository — not read once and forgotten, but referenced daily as shared understanding across your team. It will be revised based on real-world feedback.
Three Principles
AI output is probabilistic. The same instruction can produce different code. AI has knowledge gaps and states incorrect things with confidence. And AI amplifies — good intent produces good code at scale; sloppy instructions produce plausible but fragile code at scale. Given these properties, humans bear three responsibilities: intent, context, and verification.
These three principles are non-negotiable requirements for AI coding. The numbers indicate the order you should address them.
1. Keep things reversible (prerequisite)
The foundation for everything else. Without this, nothing is safe to try.
Prevention: Type checking, linters, test suites, CI. If AI-generated code doesn't meet the bar, it gets rejected automatically. The stronger your prevention, the bolder you can delegate.
Recovery: Version control, branching strategies, snapshots. The last line of defense when prevention fails.
Safety mechanisms are not constraints. They are the foundation that enables bold delegation.
2. Make your intent explicit (starting point)
Generating and verifying without clear intent is running without a map.
Context has three elements: purpose (what you want to achieve), constraints (what must not happen), and knowledge (background information needed for decisions). Without purpose, AI wanders. Without constraints, you get unwanted output. Without knowledge, AI guesses.
Intent operates at two levels. At the task level, you communicate purpose, constraints, and knowledge in your instructions. At the project level, type definitions, tests, naming conventions, and directory structure express intent — set these up once and they improve every session.
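As a minimal sketch of project-level intent: a type definition and a test can encode purpose and constraints once, so no prompt has to repeat them. The domain here (invoices, discounts) and all names are illustrative, not from the document.

```python
from dataclasses import dataclass

# Hypothetical domain type. Fields and names are illustrative only.
@dataclass(frozen=True)
class Invoice:
    amount_cents: int  # integer cents: a "no float rounding" constraint, stated in the type
    currency: str

def apply_discount(invoice: Invoice, percent: int) -> Invoice:
    """Return a new Invoice with the discount applied; never mutates the input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    discounted = invoice.amount_cents * (100 - percent) // 100
    return Invoice(amount_cents=discounted, currency=invoice.currency)

# A test expresses intent a reviewer (human or AI) can check automatically:
assert apply_discount(Invoice(1000, "USD"), 10) == Invoice(900, "USD")
```

The frozen dataclass, the `int` cents, and the range check are all constraints an AI sees on every session without being told, which is the compounding effect described above.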
3. Verify the output (non-negotiable)
The bottleneck has shifted from generation to verification. Generation takes seconds. Verification takes human time. That's why the quality of your verification determines the quality of your outcomes.
The faster code was generated, the more carefully you should read it.
A common failure: AI generates 200 lines that appear to work. Tests pass. But half is unnecessary abstraction from trying to be too clean, and the rest silently swallows errors and breaks on edge cases. "It works, merge it" is the most expensive decision in AI coding.
Practice
AI coding follows this cycle:
1. Communicate — Pass purpose, constraints, and necessary knowledge to AI
2. Generate — AI produces code
3. Verify — Human judges the output
4. Improve — Revise instructions and context based on results
Skipping step 4 and jumping back to step 1 is the primary cause of spinning your wheels.
Ask questions, not just instructions. "Is there a problem with this design?" draws out more of AI's capability than "Write this function." When AI offers suggestions, dismissing them as "not what I asked for" is a missed opportunity.
If you're writing the same instructions every time, turn them into conventions. Put them in a rules file. Automate with hooks. What you systematize compounds across every future session.
What to Measure
The only measure of progress is working software.
The following metrics are tempting to track but dangerous when used as goals. They aren't useless as signals — but optimizing for them leads you away from what matters.
- Lines generated. Measuring quantity incentivizes quantity.
- Prompt count. Many exchanges signal low instruction quality, not productivity.
- Response speed. Penalizes people who think before they instruct.
- Commit count. Split commits and the number goes up. Measuring quantity invites inflation.
- Session count. More isn't better. Context loss from session breaks can reduce efficiency.
- Number of tools used. Using a tool and mastering a tool are different things.
The common problem: measuring quantity and speed incentivizes quantity and speed at the expense of quality.
Measure density instead. Acceptance rate. Rework frequency. Final code quality.
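A minimal sketch of how two of these density metrics could be computed, assuming you log each AI suggestion with whether it was accepted and whether the accepted code later needed rework. The record shape is illustrative, not any real tool's schema.

```python
# Hypothetical log of AI suggestions; field names are illustrative only.
suggestions = [
    {"accepted": True,  "reworked": False},
    {"accepted": True,  "reworked": True},
    {"accepted": False, "reworked": False},
    {"accepted": True,  "reworked": False},
]

accepted = [s for s in suggestions if s["accepted"]]

# Share of AI output you actually kept.
acceptance_rate = len(accepted) / len(suggestions)

# Share of kept code that later needed fixing.
rework_rate = sum(s["reworked"] for s in accepted) / len(accepted)

print(f"acceptance rate: {acceptance_rate:.0%}")  # 75%
print(f"rework rate: {rework_rate:.0%}")          # 33%
```

Both numbers reward quality over volume: generating more lines or more commits moves neither metric up on its own.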
This document is a product of AI coding. AI agents debated, AI generated text, and a human made the decisions, verified the output, and revised it.
Whether you see AI as a subordinate, a collaborator, a supervisor, or a tool is up to you. The three principles apply regardless of what you expect from AI.
v0.1
This is v0.1, not a finished product. If you have feedback, I want to hear it. The only way this earns the right to be called a "guide" someday is through revision.