I have a take that is going to annoy two groups of people at the same time:
- The “real engineers don’t use AI” crowd
- The “AI wrote my whole app” crowd
Here it is:
If AI is in your workflow, your codebase is now a human factors problem.
Not a model problem.
Not a prompt problem.
A human problem.
Because the hardest part is no longer generating code.
The hardest part is knowing what to trust, what to delete, what to keep, and what you are willing to be responsible for at 2:00 AM when prod is on fire and the person who “helped” is a chat bubble with no pager.
The new sin is not bad code. It’s unowned code.
AI makes it easy to produce code that looks plausible.
That’s the trap.
Plausible is not correct. Plausible is not maintainable. Plausible is not secure. Plausible is not even consistent with your repo.
Plausible just means your brain gets a quick dopamine hit and says: “ship it.”
So here’s the controversial thing I think we should start saying out loud:
If you did not read it, you did not write it.
If you did not write it, you do not own it.
If you do not own it, it does not belong in main.
That’s not anti-AI. That’s pro-software.
“But I can read it later”
No, you won’t.
You will merge it while it’s fresh. Then a week later you will forget you even asked for it. Then three months later it will fail in a weird edge case and you will be in a code archaeology session, scrolling through a file full of polite variable names and zero intent.
AI code has a smell.
Not because it is always bad.
Because it often has no story.
Human-written code usually has fingerprints:
- slightly annoying but consistent naming
- weird shortcuts taken for a specific reason
- comments that reflect real pain
- a mental model that shows up across files
AI code often looks clean but detached, like it was written by someone who will never have to maintain it.
Which is true.
The real cost is not bugs. It’s ambiguity.
Bugs are normal. We have tests. We have monitoring. We have rollbacks.
Ambiguity is poison.
Ambiguity is when you can’t tell:
- what the function is supposed to guarantee
- what failure looks like
- what the invariants are
- why a decision was made
- what tradeoff was chosen
AI generates code faster than it generates intent.
So if you are using AI and you are not also increasing clarity, you are building a repo that will eventually punish you.
The “AI pair programmer” fantasy is incomplete
Most devs use AI like a hyperactive junior.
“Write me a thing.”
It writes a thing.
You merge the thing.
That is not pairing.
Pairing is reasoning out loud about constraints and tradeoffs, and building a shared model of the system.
So the only way AI becomes a legitimate pair is if you force it to act like one.
Which means you need to change what you ask for.
Instead of: “write the code”
Ask:
- “Before you write anything, tell me what you think I’m trying to do.”
- “List assumptions you are making about the system.”
- “Propose 2 approaches and argue for one.”
- “Tell me how this fails.”
- “Write tests first.”
- “Show me the minimal diff that gets us there.”
If the tool cannot explain itself, it is not helping. It is performing.
A rule that saved me from shipping garbage
I started doing something that feels almost too simple:
Every AI-generated change must come with a receipt.
Not a comment block of fluff.
A receipt like:
- What problem is this solving, in one sentence?
- What are the inputs and outputs, explicitly?
- What are the invariants?
- What are the failure modes?
- What tests prove it?
- What did we choose not to do, and why?
If I cannot answer those, I do not merge.
Because I know what happens otherwise.
I get fast today and slow forever.
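The receipt above doesn’t have to live in your head. A minimal sketch of it as structured data, so an empty answer blocks the merge (the field names and `mergeable` helper are illustrative, not a real tool):

```python
from dataclasses import dataclass, fields

@dataclass
class Receipt:
    """One receipt per AI-generated change. Every field must be filled before merge."""
    problem: str        # What problem is this solving, in one sentence?
    inputs_outputs: str # What are the inputs and outputs, explicitly?
    invariants: str     # What are the invariants?
    failure_modes: str  # What are the failure modes?
    tests: str          # What tests prove it?
    rejected: str       # What did we choose not to do, and why?

def mergeable(r: Receipt) -> bool:
    # If any answer is blank, you cannot own the change yet: do not merge.
    return all(getattr(r, f.name).strip() for f in fields(r))
```

Whether this lives in a PR template, a commit-message convention, or an actual check is secondary; the point is that a missing answer is visible instead of implicit.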
“This is just good engineering, nothing new”
Exactly.
That’s the point.
AI did not change what good engineering is.
It changed how easy it is to accidentally do bad engineering.
It lowered the effort required to create complexity.
So we need friction in the right places.
Not bureaucracy.
Friction that forces ownership.
Practical patterns (non-hype, actually usable)
Here are a few patterns that make AI helpful without letting it rot your repo:
Use it for diffs, not features
Ask for the smallest change that moves you forward, then iterate.
Make it write tests and edge cases
Not because it’s perfect, but because it will often suggest failure modes you forgot to consider.
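As an illustration of that pattern: here’s a hypothetical helper of the kind an assistant might produce, where the real value is the edge cases you make it enumerate, not the happy path (the `chunk` function is invented for this example):

```python
def chunk(items: list, size: int) -> list:
    """Split a list into consecutive sublists of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Edge cases an assistant is good at surfacing, and you should demand:
assert chunk([], 3) == []                           # empty input
assert chunk([1, 2], 5) == [[1, 2]]                 # size larger than input
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]   # exact multiple
assert chunk([1, 2, 3], 2) == [[1, 2], [3]]         # ragged tail
```

If the generated tests only cover the third case, that’s your cue that neither the tool nor you has thought about the function’s contract yet.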
Make it explain the code to you like you are tired
If it can’t do that, it’s too complex or too hand-wavy to merge.
Keep a “kill switch” mindset
Prefer designs you can remove in one commit if they turn out to be wrong.
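One way to sketch that mindset, assuming a new AI-generated code path is replacing one you already own (the function names and the `USE_GENERATED_PARSE` flag are made up for the example):

```python
import os

def legacy_parse(data: str) -> list:
    """The path you already own and understand."""
    return data.split(",")

def generated_parse(data: str) -> list:
    """Hypothetical AI-written replacement, still on probation."""
    return [part.strip() for part in data.split(",")]

def parse(data: str) -> list:
    # One flag guards the entire new path. Flip it off, or delete the new
    # function and this branch, and the old behavior is back in one commit.
    if os.environ.get("USE_GENERATED_PARSE", "0") == "1":
        return generated_parse(data)
    return legacy_parse(data)
```

The shape matters more than the mechanism: a branch, a flag, a single module boundary, anything that keeps the blast radius of an unproven change to one revert.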
Treat generated code as untrusted input
Same posture as copy-pasting from Stack Overflow, but faster and more frequent.
The part people avoid: responsibility
This is the emotional part for me.
A lot of us got into software because it felt like a clean meritocracy: you ship, it works, you win.
AI blurs the line between “I built this” and “I assembled this.”
That can mess with your identity.
So some devs swing into denial: “I don’t use it, I’m pure.”
Other devs swing into cosplay: “AI built everything, I’m 10x.”
Both are insecurity.
The mature posture is boring:
Use it. Verify it. Own it.
Your future self will thank you.
A question I want to ask the Dev.to crowd
What is your “AI code ownership” rule right now?
Do you have a hard line like “no generated code without tests” or “no generated code without a design note”?
Or are you just vibing and hoping future you figures it out?