An AI researcher told me something that won't leave my head:
"If a human cannot outperform or meaningfully guide a frontier model
on a task, the...
This is a brilliant mapping of the 'Post-AI' reality.
The distinction between Cheap vs. Expensive Verification is the real signal. We are entering a 'Velocity Trap' where juniors look like seniors because they can clear syntax hurdles at 10x speed, but they haven't spent time in the trenches where you live with the consequences of a bad architectural pivot for three years.
As you noted, the skill has shifted from generation to curation. In the old world, the 'cost' of code was the effort to write it. In the new world, the cost is the cognitive load required to justify keeping what the AI spit out.
The 'Last Generation Problem' is the real existential threat. If we stop learning through the 'friction of search' and the 'pain of refactoring,' we risk becoming pilots who only know how to fly in clear weather.
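A rough sketch of that cost asymmetry (hypothetical code, not from the article): the correctness question is cheap to verify with one assert, while the design question never shows up in a test run at all.

```python
# Hypothetical example: a helper that is cheap to verify for correctness
# but expensive to verify for design.
_cache = {}

def get_user_discount(user_id, lookup):
    # Cheap verification: does it return the right number? One assert answers it.
    if user_id not in _cache:
        _cache[user_id] = lookup(user_id) * 0.9
    return _cache[user_id]

# Cheap check passes instantly:
assert get_user_discount(1, lambda _: 100) == 90.0

# Expensive check: nothing in the tests reveals that this module-level cache
# never evicts, isn't thread-safe, and serves stale discounts after a price
# change. Verifying THAT costs a design review, not a test run.
assert get_user_discount(1, lambda _: 200) == 90.0  # stale, yet the test "passes"
```

The stale-cache assert is the uncomfortable part: the suite stays green while the architectural problem compounds.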
"velocity trap" is perfect framing.
you've nailed the illusion: juniors clearing syntax at 10x speed LOOK like seniors but haven't lived with architectural consequences.
and your cost shift: "effort to write → cognitive load to justify keeping" captures the economics exactly.
doogal called this "discipline to edit" - abundance requires different skill than scarcity.
"pilots who only fly in clear weather" - this is the metaphor i've been looking for. ai training without friction = clear weather only.
when turbulence hits (production bugs, scale issues, technical debt), they have no instrument training.
anthropic just proved this: juniors using AI finished faster but scored 17% lower on mastery. velocity without understanding.
appreciate you synthesizing this so clearly - "velocity trap" goes in the framework collection.
As we’ve discussed before, the real concern isn’t just immediate performance - it’s that vibe-coding junior developers may lack the skills needed when they eventually have to untangle the spaghetti code that often emerges from AI-first development years later.
"vibe-coding junior developers untangling spaghetti years later".
this is the v2+ problem exactly. tiago forte: AI makes v1 easy, v2+ harder.
juniors building impressive portfolios with AI but never learning what keeps those projects alive.
they're learning generation, not maintenance. the market will pay for maintenance in 2027.
"years later" is the key timeline. the damage isn't immediate, it's deferred. by the time they need those skills, the mentors who could teach them are gone.
Spot on, Daniel. This is one of those issues AI companies knowingly create - and they’d rather we stayed silent about it.
Thank you very much for this article and for explaining, in simple words, complex thoughts about AI and its effects on the dev community, not just on the coding process.
appreciate you reading. the goal was making these abstract patterns concrete and relatable.
Cheap vs expensive verification is the frame I keep coming back to. I work on policy enforcement for MCP tool calls (keypost.ai) and it's exactly this. Checking if an agent's API call returned 200 is trivial. Figuring out whether that agent should have been allowed to make that call in the first place? Nobody catches that until prod breaks.
I also wonder if AI-generated v1 code makes the v2 problem worse than we think. When I write code myself I leave accidental extension points everywhere. AI tends to produce something "complete" that's genuinely harder to crack open later.
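A toy sketch of that gap (hypothetical names, not the actual keypost.ai API): the status check is one comparison, while the authorization question needs a policy that lives outside the agent entirely.

```python
# Hypothetical policy table: which (agent, tool) pairs are permitted.
ALLOWED = {("billing-agent", "read_invoice"), ("billing-agent", "send_email")}

def cheap_check(status_code):
    # What everyone verifies: did the call succeed?
    return status_code == 200

def expensive_check(agent, tool):
    # What nobody verifies until prod breaks: was this agent ever
    # supposed to be allowed to call this tool at all?
    return (agent, tool) in ALLOWED

assert cheap_check(200)
assert not expensive_check("billing-agent", "delete_user")  # 200 OK, still wrong
```

The point of the sketch: both checks are one line of code, but only one of them required someone to sit down and decide what the policy should be.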
Great piece, very sharp and timely!!
You explain the “above vs below the API” idea in a way that actually sticks...
One practical tip I’d add is to always rewrite AI-generated code in your own words before committing it; that’s where real understanding shows up!
It’s a small habit, but it keeps you in the judge’s seat instead of on autopilot.
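A tiny illustration of the habit (hypothetical code, illustrative only): both versions behave identically, but the rewrite is where you prove to yourself you understand what you're committing.

```python
# AI draft: correct, but opaque at review time.
def dedupe_draft(items):
    return list(dict.fromkeys(items))

# Your rewrite: same behavior, but now the intent (keep first occurrence,
# preserve order) is something you've stated, not just accepted.
def dedupe_rewritten(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

data = [3, 1, 3, 2, 1]
assert dedupe_draft(data) == dedupe_rewritten(data) == [3, 1, 2]
```

Whether you keep the terse draft or your explicit version afterwards matters less than having done the translation once.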
"rewrite AI code in your own words before committing" - this is gold.
perfect example of staying Above the API. you're using AI output as a draft, not the final product.
forces you to understand each line before it ships.
this is doogal's "discipline to edit" in practice. AI generates abundance, you curate with judgment.
also aligns with ujja's approach: treat AI like a "confident junior - helpful but needs review".
small habit, huge impact. turning generation into understanding.
👍
Yeah, exactly. The rewrite is the point. If you can’t restate the code in your own words, you don’t really own it yet. AI’s fine as a draft, but the edit is where understanding and responsibility kick in...
exactly. "if you can't restate it, you don't own it".
this is the litmus test. can you explain to someone else WHY this approach vs the alternatives? if not, you're below the API.
also: "edit is where responsibility kicks in" - perfect. AI doesn't take responsibility for production failures. you do.
making this practice default = building Above the API muscle.
Interesting approach and ideas. Thanks for sharing. But I've been wondering for quite some time whether this human input is still needed at large scale. I think there are safe bets like critical thinking. But let's take "pattern mismatch" across a huge codebase. Is this an actual issue when AI is the only one maintaining it? We invented DRY because it is easy to overlook stuff, and it is easier to have a central place to control things. That is still true when AI is working on the code. But AI is much better at detecting similar code structures and updating them when needed. I am not saying that this is good. But I do think we will get a new AI-first codebase paradigm, where some of our patterns and approaches are no longer needed or, even worse, become anti-patterns.
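A quick sketch of the DRY trade-off being questioned here (illustrative numbers and names, not from any real codebase): humans centralize a rule because we miss scattered copies; a tool that can diff structure might keep the copies in sync mechanically instead.

```python
# Duplicated style: two call sites each embed the 25% VAT rule.
# Humans invented DRY because keeping these copies in sync by hand fails.
def invoice_total_dup(net):
    return net * 1.25  # 25% VAT, copy 1

def quote_total_dup(net):
    return net * 1.25  # 25% VAT, copy 2

# DRY style: one place to change.
VAT_RATE = 0.25

def with_vat(net):
    return net * (1 + VAT_RATE)

assert invoice_total_dup(100) == with_vat(100) == quote_total_dup(100)
```

The open question from the comment stands: if a model can reliably find and update every copy, does the second style stay mandatory, or does it become one option among several?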
fascinating question. you're pushing on real assumptions.
you're right that AI might handle pattern consistency better across huge codebases.
DRY violations, mechanical refactoring: AI excels at these.
but AI maintaining AI code assumes the original architecture is sound.
uncle bob: "AI piles code making mess." if the initial structure is flawed, consistency amplifies the flaw faster.
"AI only maintaining" assumes no human consequences. but someone still decides WHAT to build, verifies it solves the right problem, and handles the unforeseen edge cases.
new AI-first patterns coming, yes. question: do those require human judgment to establish, or can AI bootstrap good architecture?
anthropic study: AI creates velocity without understanding. if no human understands the system, who catches the systemic issues?