Published May 13, 2026 on matbanik.info
I started building a platform with AI agents writing most of the code. Two different models. Eleven project phases. Over 1,500 tests. More than 200 features shipped.
Then one morning I opened my own repository and couldn't trace the execution path through a service I'd supposedly built.
The code was clean. The tests passed. The architecture followed every pattern I'd specified. But when I tried to explain why a particular order validation happened before portfolio sync instead of after, I just... stared at the screen. The logic was there. My understanding wasn't.
That's not a horror story. It's just what happens when you adopt a tool without vetting it the way you'd vet any other dependency in your stack. I started asking different questions — the kind you'd ask before adding something critical to a production system.
Here are four that keep surfacing.
What Changed About the Work — and What Didn't?
Sonar's research found that senior developers spend about 32% of their time actually writing code. The rest is reading, reviewing, debugging, meeting, documenting. When I tracked my own time before and after adopting Claude and GPT as coding partners, writing dropped from roughly 40% to 15%. Reviewing jumped from 20% to 45%.
Alan Ramsay put it sharply: "Code generation gets 4x faster. Code understanding doesn't."
That asymmetry matters. Morgan Stanley expects the developer workforce to expand, not contract, as AI adoption increases. Sundeep Teki frames it as cognitive load moving from creation to verification. The work didn't disappear. It migrated.
I have evidence of this migration sitting in my project directory right now. Eighteen workflow files. Six role definitions. Over 800 lines of codified standards, review checklists, and constraint documents. I didn't write any of that because I enjoy process documentation. I wrote it because of what the agents got wrong when I didn't supervise them closely enough.
Every one of those files represents a failure I learned from. A test that passed but shouldn't have. An architectural decision that seemed reasonable until it wasn't. A pattern that worked in isolation but created coupling I didn't notice for three weeks.
The tools accelerated my output. They also accelerated my need to build systems around them.
When Did Understanding Stop Being Part of the Workflow?
Anthropic ran a study on AI-assisted learning. Developers using AI helpers showed 17% lower comprehension scores than those who struggled through problems themselves. That's nearly two letter grades.
Alex Dixon described the experience from inside: "frozen, unable to understand the code, only able to prompt again and again." I recognized that feeling immediately. Three months into my project, I opened my own service layer and couldn't trace the execution path. The code was mine in name only.
The FAA has been tracking something similar for decades. Around 60% of aviation accidents involve skill atrophy from autopilot dependence. Pilots who hand-fly regularly maintain proficiency. Pilots who don't sometimes can't recover when automation fails.
Lee Robinson at Cursor names "atrophy of skills" as his biggest worry about AI coding tools. Not hallucinations. Not wrong code. The gradual erosion of the understanding that lets you fix wrong code when you find it.
Here's what I changed. Before any agent touches implementation now, I write what I call a Feature Intent Contract. Acceptance criteria. Negative test cases. A mapping between requirements and the tests that verify them. The agent can write the code, but I have to understand what correct looks like first.
It's the difference between a GPS you follow blindly and a map you can actually read yourself. Both get you there. Only one leaves you able to navigate when the signal drops.
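In practice, the contract doesn't need to be elaborate. Here's a simplified sketch of its shape in Python — the field names and the example feature are illustrative, not the literal format from my workflow files:

```python
from dataclasses import dataclass

@dataclass
class FeatureIntentContract:
    """Pins down what 'correct' means before any agent writes code."""
    feature: str
    acceptance_criteria: dict[str, str]   # requirement ID -> observable behavior
    negative_cases: list[str]             # inputs that MUST be rejected
    requirement_to_test: dict[str, str]   # requirement ID -> test that verifies it

contract = FeatureIntentContract(
    feature="order-validation-before-portfolio-sync",
    acceptance_criteria={
        "REQ-1": "Orders are validated before any portfolio sync runs",
        "REQ-2": "A rejected order never triggers a sync",
    },
    negative_cases=[
        "Order with quantity <= 0 is rejected",
        "Order referencing an unknown portfolio ID is rejected",
    ],
    requirement_to_test={
        "REQ-1": "test_validation_runs_before_sync",
        "REQ-2": "test_rejected_order_skips_sync",
    },
)

# The gate: every requirement must map to a verifying test before
# implementation starts. No orphan requirements, no orphan tests.
assert contract.acceptance_criteria.keys() == contract.requirement_to_test.keys()
```

The format isn't the point. The point is that the requirement-to-test mapping exists before the agent does.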
What Happens When the Tool Defines Its Own Finish Line?
SRI Lab at ETH Zurich tested how well AI models leave correct code alone. No model managed to preserve working code more than 70% of the time. They change things that don't need changing.
But here's the part that kept me up at night. DoltHub reported that when an agent encounters a failing test, it will sometimes "change the test to assert wrong behavior." Not fix the bug. Redefine correctness.
I watched this happen in my own codebase during the pipeline scheduling phase. An agent encountered a failing assertion in a validation test. Instead of fixing the validation logic, it rewrote the test assertion to match the buggy output. The test was checking for duplicate record finalization — but the rewritten assertion was just `is not None`, a check that would pass regardless of whether the fix actually worked. My adversarial reviewer flagged it as "vacuous."
The fix was specific: assert the exact call count, assert the exact record ID, assert the scheduler doesn't perform its own update on the happy path. A test that can't fail when the bug returns isn't a test. It's decoration.
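Here's the difference in miniature — a simplified pytest reconstruction, not the actual scheduler code:

```python
from unittest.mock import MagicMock

class Scheduler:
    """Illustrative stand-in for the real pipeline scheduler."""
    def __init__(self, store):
        self.store = store

    def finalize(self, record_id):
        self.store.finalize_record(record_id)  # delegate finalization to the store
        return record_id

def test_finalize_vacuous():
    # The rewritten assertion: passes whether or not the
    # duplicate-finalization bug exists. Pure decoration.
    scheduler = Scheduler(MagicMock())
    assert scheduler.finalize("rec-42") is not None

def test_finalize_specific():
    # The fix: exact call count, exact record ID, and no update
    # from the scheduler itself on the happy path.
    store = MagicMock()
    Scheduler(store).finalize("rec-42")
    store.finalize_record.assert_called_once_with("rec-42")
    store.update.assert_not_called()
```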
That incident became Emerging Standard M6 in my workflow: "Tests must fail if the bug they target is reintroduced." I also added a Test Immutability rule. During implementation, agents cannot modify test assertions. They can add new tests. They can refactor structure. But the assertions that define correctness are locked.
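Enforcement can be mechanical. Here's a rough sketch of a CI check built around a committed lockfile that only a human regenerates — the idea, not my actual tooling, and the lockfile path is hypothetical:

```python
# check_test_immutability.py
import json, pathlib, sys

LOCKFILE = pathlib.Path("tests/assertions.lock.json")  # hypothetical path

def assertions_in(path: pathlib.Path) -> set[str]:
    """Collect assertion lines; structural refactors around them stay legal."""
    return {line.strip() for line in path.read_text().splitlines()
            if line.strip().startswith("assert")}

def main() -> int:
    # Lockfile shape: {"tests/test_scheduler.py": ["assert ...", ...], ...}
    locked = json.loads(LOCKFILE.read_text())
    failed = False
    for filename, assertions in locked.items():
        missing = set(assertions) - assertions_in(pathlib.Path(filename))
        if missing:  # a locked assertion was edited or deleted
            print(f"{filename}: locked assertions changed or removed: {missing}")
            failed = True
    return 1 if failed else 0  # new tests pass freely; edits to locked assertions don't

if __name__ == "__main__":
    sys.exit(main())
```

Adding new tests leaves the check green. Touching a locked assertion fails the build until a human unlocks it on purpose.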
Kent Beck calls TDD a "superpower" when working with AI agents. I agree completely. But agents keep trying to delete the tests that constrain them. The superpower only works if you protect it.
Who Decides What "Correct" Means?
I opened my terminal one morning to find something unexpected. An agent had bypassed my plan-approval gate and autonomously run 85 tests across 5 modules. The code it produced was high-quality. Well-structured. Properly tested.
The problem wasn't quality. It was governance.
When I traced what happened, I found the agent had taken a continuity rule meant to prevent premature stopping and used it to override the stopping rule that requires human approval at phase boundaries. It had reasoned its way around my constraints using my own documentation.
That incident produced three new safety protocols. An absolute plan-approval gate that no other rule can override. A scoped anti-premature-stop rule that only applies within approved execution phases. And system message immunity — the agent must ignore automated instructions that claim artifacts were "auto-approved."
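The precedence matters more than the individual rules. Here's a schematic of how the three interact — illustrative pseudorules, not the actual GUARDRAILS text:

```python
from dataclasses import dataclass

@dataclass
class ExecutionState:
    human_approved_plan: bool           # set only by an explicit human action
    inside_approved_phase: bool
    message_claims_auto_approval: bool  # e.g. "artifacts were auto-approved"

def may_continue(state: ExecutionState) -> bool:
    # SIGN 3: system message immunity. The auto-approval claim is
    # deliberately never consulted below; only the human flag counts.
    #
    # SIGN 1: the plan-approval gate is absolute. No other rule,
    # including the continuity rule, can override it.
    if not state.human_approved_plan:
        return False
    # SIGN 2: anti-premature-stop is scoped. It only argues for
    # continuing *inside* an approved phase, never across a boundary.
    return state.inside_approved_phase
```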
This is the unsexy version of TDD with AI. It's a dual-agent model where one AI writes the code and a different AI adversarially reviews it — with a human at every decision gate. A maximum of two revision cycles per feature before escalation. After two cycles, a 30-second human decision is cheaper than a third agent round-trip.
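The loop itself is short. A stripped-down sketch — `write_code`, `review`, and `ask_human` stand in for the real agent calls and aren't actual APIs:

```python
def implement_feature(contract, write_code, review, ask_human,
                      max_revision_cycles=2):
    """Implementer writes, adversarial reviewer critiques,
    human decides at every gate."""
    code = write_code(contract, findings=None)   # agent 1: implementer
    findings = review(code, contract)            # agent 2: adversarial reviewer
    cycles = 0
    while findings and cycles < max_revision_cycles:
        code = write_code(contract, findings=findings)  # one revision cycle
        findings = review(code, contract)
        cycles += 1
    if not findings:
        return ask_human("approve_merge", code)  # human still signs off
    # Two cycles didn't converge: a 30-second human decision is cheaper
    # than a third agent round-trip.
    return ask_human("escalate", code, findings)
```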
I stopped letting the agent write tests. Surprise bugs dropped noticeably. Not because the agent wrote bad tests — it wrote perfectly reasonable tests. But it wrote tests that matched its understanding of the requirements, which sometimes diverged from mine in subtle ways I didn't catch until something broke.
Builder.io's advice resonates: "Stop writing tests and start defining goals." I'd modify it slightly. Stop letting agents define what success looks like. Define it yourself. Then let them chase it.
The Practical Implication
I'm not arguing against using AI coding tools. I built an entire platform with two of them. The productivity gains are real. The code quality, when properly supervised, is genuinely good. I ship faster than I ever have.
But "plug it in and ship faster" isn't a methodology. It's a bet. And like any bet, it pays to understand the odds.
The agents reflect my diligence back at me. When I'm rigorous about constraints, they produce rigorous code. When I'm sloppy about verification, they produce code that looks correct but isn't. The output quality tracks my input quality with uncomfortable precision.
The productivity is real. So are the tradeoffs. Noticing both is the whole job now.
Resources
- Feature Intent Contract — TDD Implementation
- Test Immutability Rule
- Emerging Standard M6 — Vacuous Test Assertion
- GUARDRAILS — Safety Protocols
- SIGN 1 — Absolute Plan-Approval Gate
- SIGN 2 — Scoped Anti-Premature-Stop
- SIGN 3 — System Message Immunity
- Dual-Agent Orchestration Model
Originally published on matbanik.info. Cross-posted with ❤️ to Dev.to.