Like many working developers, I have been increasingly writing code with the help of artificial intelligence, and the company I work for has been keen to encourage us to do so. Also like many, I was somewhat resistant at first. To be honest, I never liked the early autocomplete feature. I found it too intrusive: it would interrupt my train of thought as I started typing. I'd already formed an idea of what I wanted to do, and now I had to push that aside, inspect the suggested code, and decide whether it properly implemented what I wanted and whether it was better or worse than my original plan.
When more advanced features like Copilot were released, I actually began to find them exciting and genuinely helpful for learning more deeply. But I was very strict in my use. I would not allow the agent to change files. Instead, I would ask questions about how to do something, then manually type any suggested code myself into the relevant files, asking clarifying questions along the way. Suddenly, I could work far more confidently in all sorts of new areas, learning as I went. It was like having a personal senior engineer standing beside me and teaching me step by step!
Unfortunately, that phase didn't last long enough. As my career advanced, I was expected to work on more complex issues alone, the codebase was becoming more complicated, and management continued to press us to make as much use of the latest AI tools as possible. Perhaps most importantly, a new expectation to hit a certain number of weekly PRs, along with shake-ups in the larger company, created intense pressure to be constantly and visibly productive. I was still growing and learning, but it was different now. Taking the time to write code by hand and learn deeply along the way suddenly seemed a luxury I couldn't afford. Claude, take the wheel!
Now, I want to be clear that I don't think this shift toward letting coding agents write the code is necessarily problematic. It genuinely speeds up the implementation of boilerplate and small, well-contained algorithms. It's great for thinking through possible solutions, for searching codebases for patterns or areas for improvement, and for allowing those with general programming knowledge to work in domains or languages outside their expertise (within limits) or on more complex issues. It even forces users to think at a more architectural, product-oriented level, which makes software engineers better in general.
At the same time, "use it or lose it" definitely still applies. I and countless other devs have noticed skill atrophy in basic syntax and in code that used to feel simple. Some recent research suggests we aren't just imagining it.
A 2024 paper by Macnamara et al. predicted these effects based on known results from other domains. Studies of airline pilots have shown that over-reliance on autopilot causes skill decay, and Macnamara et al. argue the effects on experienced software developers may be even worse. Because AI takes over more advanced cognitive processes than basic automation (which depends on deterministic rules), and because cognitive skills decay faster than physical ones, AI-induced skill decay is likely to be more severe than what's been observed with simpler automated systems. And for those still developing skills, the paper warns that AI learning aids may create illusions of understanding, something well known to anyone who has coded along with multiple tutorials only to find they could not recreate the code on their own or start a similar project from scratch.
Anthropic itself performed a study that lends some support to these predictions. The study had 52 developers learn a new Python library (Trio, for asynchronous programming). One group used AI assistance alongside web search and instructions; the control group coded manually with the same resources. The AI-assisted group completed tasks at similar speeds to those working manually, but scored 17% lower on comprehension immediately afterward, with the gap widest on debugging questions. The study doesn't directly measure skill atrophy in experienced developers, but it illuminates the likely mechanism: if passive AI use impairs learning even in a short, focused session, the long-term effect on practicing developers across thousands of hours seems worth taking seriously.
If something like this is happening, then the problem may be a lack of deliberate, unassisted engagement with code. There are various ways one might try to remedy this, but I want to make a case for one that might sound eccentric: writing code by hand, on paper.
Neuroscience consistently shows that on many types of learning tasks, handwriting outperforms typing for learning and retention. Brain imaging studies reveal that handwriting synchronizes motor, visual, sensory, and memory regions more richly than keyboard input. Handwriting slows you down naturally, encouraging summarization, reflection, and reorganization of ideas, processes linked to better long-term recall and conceptual grasp. Studies on note-taking, for instance, find that students who write by hand demonstrate superior understanding and memory compared to those who type, with enhanced connectivity in brain networks essential for learning. (See a nice summary of the research here.)
Though there haven't been any direct studies of handwriting code versus other methods, it's plausible that these benefits transfer, at least in part. Handwriting forces you to recall syntax, structure logic flows, and anticipate errors without the crutches and aids now available. The tactile act of drawing out variables, loops, and function signatures engages fine motor skills in ways that build durable mental models. One study even linked handwriting on physical paper to stronger brain activity during later memory retrieval.
If there's something to this, the obvious next question is what it actually looks like in practice. Making it a habit requires some intentionality. Before opening your editor, sketch data structures or pseudocode on paper; it clarifies thinking free from syntax distractions. When learning something new, handwrite key examples line by line and annotate the margins as if explaining them to someone else. For debugging, reproduce a buggy snippet from memory and trace its execution by hand; the slower pace reveals things that rapid AI-assisted fixes obscure. Even 20-30 minutes of purely unaided problem-solving daily, before comparing your solution to what an AI suggests, can rebuild the feedback loop that passive use erodes.
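To make the debugging exercise concrete, here's the kind of small, subtly wrong snippet that rewards a slow trace on paper. This is my own invented example (not from any study or workbook): a function meant to return the largest value in an array, with a classic bad-initial-value bug that only a careful trace of the negative-input case exposes.

```javascript
// Intended: return the largest number in nums.
// Bug: "best" starts at 0 instead of -Infinity (or nums[0]),
// so any array of all-negative numbers silently returns 0.
function maxValue(nums) {
  let best = 0; // <-- the bug hides here
  for (const n of nums) {
    if (n > best) best = n;
  }
  return best;
}

maxValue([3, 7, 2]);    // 7 — looks fine, the bug stays hidden
maxValue([-5, -2, -9]); // 0 — wrong: the real max is -2
```

Tracing each iteration by hand with the all-negative input (best stays 0 because no element exceeds it) surfaces the flaw in a way that glancing at an AI-proposed fix never would.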
Doing this consistently is a bit harder than it sounds. I found myself wanting something more structured, so I ended up putting together a workbook of problems that I could work through a little each day, much as a musician practices scales or an athlete does mobility work. It's not for absolute beginners, but it isn't interview-style algorithms either; it's just problems that require some focused thought. If you want to see what this feels like, I pulled together a few sample problems you can print and work through on the sample problems page. The book can be purchased on Amazon. (The JavaScript edition is available now, and several other editions are in the works.)
I said at the beginning that I found AI most helpful when I treated it like a senior developer pair programming with me—patient, explanatory, always ready to suggest an alternative approach. That worked because I was still doing plenty of the thinking myself. The moment I handed over the keyboard entirely, the teaching slowed or stopped. Getting some of that back requires a willingness to slow down and sit with the uncomfortable process of moving from uncertainty to understanding.