I've been using AI assistants for pair programming almost daily for the past few months. After roughly 100 sessions, I've converged on three rules that consistently produce better code and fewer rewrites.
These aren't theoretical. They come from tracking what worked, what failed, and what cost me hours of debugging.
## Rule 1: Never Accept the First Draft
The biggest mistake I made early on was treating the first response as the answer. It almost never is.
AI assistants optimize for plausible-looking code. That means the first draft will usually compile, might pass a quick eyeball test, and will subtly break in ways you won't notice until production.
What I do instead:
After getting the first draft, I always ask:
```
Review your implementation for:
1. Edge cases you didn't handle
2. Assumptions you made about the input
3. Places where error handling is missing
List each issue, then rewrite.
```
This consistently catches two or three real issues. Not theoretical "what ifs" — actual bugs that would have shipped.
The key insight: the assistant already "knows" about these problems. It just doesn't surface them unless you ask.
## Rule 2: Give the Scope Before the Task
When I say "add user search to the API," the assistant starts writing immediately. It picks a search library, chooses a database pattern, decides on pagination — all without knowing my constraints.
Then I spend 20 minutes explaining why half those choices are wrong for my project.
What I do instead:
I give scope constraints first, task second:
```
Scope:
- This is a REST API using Express + PostgreSQL
- We use raw SQL (no ORM)
- Search should use pg_trgm for fuzzy matching
- Max 50 results, cursor-based pagination
- No new dependencies

Task:
Add a GET /api/users/search endpoint that accepts
a `q` parameter and returns matching users.
```
This takes 60 extra seconds to write. It saves 20 minutes of back-and-forth.
The pattern: constraints narrow the solution space. A good assistant with tight constraints produces better code than a great assistant with no constraints.
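As a rough sketch of what those constraints pin down, here's the shape of the query-building code a scoped prompt tends to produce. This is illustrative, not output from a real session: `buildSearchQuery` is an invented helper, the schema is assumed, and `%` is PostgreSQL's pg_trgm similarity operator:

```javascript
const MAX_RESULTS = 50; // constraint from the scope block

// Builds a parameterized pg_trgm search query with cursor-based
// pagination, returning the { text, values } shape node-postgres expects.
function buildSearchQuery(q, cursor) {
  const values = [q];
  let text = "SELECT id, name FROM users WHERE name % $1";
  if (cursor) {
    values.push(cursor);
    text += ` AND id > $${values.length}`; // cursor on the id column
  }
  text += ` ORDER BY id ASC LIMIT ${MAX_RESULTS}`;
  return { text, values };
}
```

Without the scope block, the assistant might reach for an ORM, offset pagination, or a new search dependency; with it, every one of those decisions is already made.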
## Rule 3: Make It Write Tests, Then Code
This one took me the longest to adopt because it feels backwards. But it's the single biggest quality improvement in my workflow.
Instead of asking for the implementation and then adding tests, I ask for the tests first:
```
Before writing any implementation, write the test file for
a user search endpoint with these test cases:
1. Returns matching users for a valid query
2. Returns empty array for no matches
3. Handles special characters in query safely
4. Respects the 50-result limit
5. Returns 400 for missing query parameter
Use Vitest. Follow our existing test patterns in __tests__/.
```
Once the tests exist, the implementation prompt becomes trivial:
```
Now write the implementation that makes all 5 tests pass.
Don't modify the tests.
```
Why this works: The tests act as a specification. The assistant can't drift, add unnecessary features, or make silent assumptions — because the tests will fail.
I've found that test-first sessions produce code that needs roughly 60% fewer revisions than code-first sessions.
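To illustrate how the test cases pin down behavior, here's a minimal sketch of the parameter-validation slice of such an endpoint, written so that test cases 4 and 5 would pass. `validateSearchParams` is an invented name, and the 50-result cap is taken from the scope example above:

```javascript
// Validates the query-string params for the search endpoint.
// The test cases dictate the behavior: a missing or blank `q`
// must yield a 400 (case 5), and the limit is capped at 50 (case 4).
function validateSearchParams(query) {
  if (typeof query.q !== "string" || query.q.trim() === "") {
    return { status: 400, error: "Missing query parameter" };
  }
  // Number(undefined) is NaN, so an absent limit falls back to 50.
  const limit = Math.min(Number(query.limit) || 50, 50);
  return { status: 200, q: query.q.trim(), limit };
}
```

Because the tests were written first, there's no room to "helpfully" accept an unbounded limit or treat an empty string as a wildcard; any drift fails a named test case.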
## The Meta-Rule: Treat the Assistant Like a Junior Developer
All three rules boil down to one principle: AI assistants are skilled but careless.
They can write any pattern you need. They know every library. They'll happily produce 200 lines of code in 10 seconds.
But they won't:
- Question their own assumptions
- Ask for clarification on ambiguous requirements
- Write tests unless you insist
- Consider your specific project constraints unless reminded
A good tech lead doesn't let a junior developer push code without review, scope, or tests. Don't let your AI assistant do it either.
## Quick Reference
| Rule | One-liner | Time cost | Time saved |
|---|---|---|---|
| Never accept first draft | Ask for self-review before accepting | +30 sec | ~15 min debugging |
| Scope before task | Write constraints, then the ask | +60 sec | ~20 min rework |
| Tests first, then code | Specify test cases before implementation | +2 min | ~30 min revisions |
Total investment: under 4 minutes per session.
Total savings: roughly an hour of pain.
After 100 sessions, I don't skip any of them.