Andrew Kew
Full agentic coding corrodes the very skills it needs to work

The pitch for full agentic coding sounds clean: you write specs, agents write code, you review and steer. The human stays "in the loop" as the expert orchestrator.

But buried in Anthropic's own research on how AI is transforming work inside the company is a sentence that should give every engineer pause:

"Effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse."

That's not a blogger's hot take. That's the vendor itself naming the contradiction.

What's actually happening

Studies from MIT and Microsoft, along with Anthropic's own internal research, are converging on the same finding: heavy AI tool use measurably degrades critical thinking and coding skills — often within months. And not just for junior devs. Simon Willison, with nearly 30 years of experience, has reported losing a "firm mental model" of applications he built with heavy AI assistance.

The data points are stacking up:

  • Anthropic's research found a 47% drop-off in debugging skills among heavy AI users
  • A LinkedIn Director of Engineering overseeing 50 engineers asked his team to avoid AI for "tasks that require critical thinking or problem-solving"
  • During a recent Claude Code outage, teams were at a standstill — their own ability to code had quietly atrophied

Why this isn't just "another abstraction"

The "we've seen this before with compilers/FORTRAN/jQuery" argument doesn't hold. When developers moved from assembly to C, they didn't report brain fog. When sysadmins moved to AWS, they didn't lose their ability to reason about networking.

LLMs don't just abstract away complexity — they insert ambiguity and non-determinism into the loop. They invert a good developer's priority list: from understand first, ship second to ship first, understand maybe.

And coding isn't just typing. As the creator of OpenCode (an open-source coding agent) put it:

"Me typing out code is the process by which I figure out what we should even be doing. I have a really tough time just sitting there, writing out a giant spec."

What to do

The answer isn't ditching AI — it's demoting it.

  • Stay in the code. Use LLMs for specs and planning, but do the implementation yourself (at least partially).
  • Set a review budget. Never generate more than you can review in one sitting. If it's too much, split the task.
  • Don't delegate the unknown. Don't ask an LLM to implement something you couldn't do yourself — except explicitly for learning.
  • Watch for lock-in signals. If your workflow stops when the API goes down, that's the warning sign.
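The review-budget rule can even be enforced mechanically. Here's a minimal sketch (the `REVIEW_BUDGET` value and the hook wiring are my assumptions, not anything from the article): a pre-commit-style script that sums the changed lines in your staged diff via `git diff --cached --numstat` and refuses to proceed when the diff is bigger than you can review in one sitting.

```python
#!/usr/bin/env python3
"""Pre-commit-style check: refuse a commit bigger than your review budget.

REVIEW_BUDGET is an illustrative number -- tune it to however many changed
lines you can genuinely read with care in one sitting.
"""
import subprocess
import sys

REVIEW_BUDGET = 400  # hypothetical: max changed lines per review sitting


def parse_numstat(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, deleted, _path = line.split("\t", 2)
        # Binary files report "-" instead of a count; skip them.
        if added != "-":
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total


if __name__ == "__main__":
    out = subprocess.run(
        ["git", "diff", "--cached", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    n = parse_numstat(out)
    if n > REVIEW_BUDGET:
        print(f"{n} changed lines exceeds the budget of {REVIEW_BUDGET}: split the task.")
        sys.exit(1)
    print(f"{n} changed lines: within budget.")
```

Dropped into `.git/hooks/pre-commit` (and made executable), this turns "never generate more than you can review" from a resolution into a hard stop.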

Token costs are unpredictable. Models shift with every release. The skills you atrophy today are the ones you'll need when the rug gets pulled.


Source: Agentic Coding is a Trap — Lars Faye

✏️ Drafted with KewBot (AI), edited and approved by Drew.
