When my dad tried to teach me to drive a manual car, I quickly realized I was out of my depth. I'd been cruising in an automatic for years. Suddenly, I was stalling the engine left and right. At one point, I nearly drove into a gutter while trying to downshift on a turn. My dad kept repeating, "Feel the car, feel the biting point, let it breathe, be part of the car." Be part of the car? I barely felt like part of myself. But the way he said it… you could tell he loved it as much as I love computers. I thought it would be simple. It was anything but. I almost quit.
Meanwhile, my friend Elorm seemed to take to it naturally. I shared my ordeal with him, and he kept saying the same thing: "Feel the biting point. Find that resistance, then slowly release the pressure." He laughed through every mistake I described. I started thinking about why I struggled so much. The automatic had done all the work for me for so long that I had never learned what it actually takes to drive. The timing, the feel, the control. Automatic gets you where you're going, but manual teaches you how the machine actually works.
That's what I see with engineers and AI. Many hit a problem, immediately ask ChatGPT for the answer, copy the solution, and move on. No thinking. No experimenting. No growth. It's like learning to drive but leaving the cruise control on the whole time. You reach the destination, but you never really drive. And there's research to back this up. An MIT study found that people who rely on AI for complex tasks like writing essays actually show weaker brain connectivity. Strategic thinking, memory, creativity, they all take a hit. The researchers call it "cognitive debt." You borrow mental effort from AI, but the interest stacks up fast.
I think about it like this: I'm the craftsman, AI is the builder. I handle the reasoning, the trade-offs, the strategic thinking. AI handles the syntax, the boilerplate, the pattern matching. That's the distinction that matters. It keeps you valuable while still leveraging AI's power. When I first learned HTML and CSS, I used Notepad. Python? Just the basic IDLE. Java? I read [Herbert Schildt's Java Reference Book](https://www.accessengineeringlibrary.com/content/book/9781260463415). No code helpers, no shortcuts, no AI. Just me and the docs, making mistake after mistake. That struggle built something in me: the neural pathways that make you actually understand what you're doing instead of just copy-pasting solutions. It's like knowing why a recipe works versus just following the steps. But here's the thing: in a fast-moving world, you can't learn everything the hard way. You need speed. You just can't let it replace understanding.
So where's the line? It's what I call the "biting point," the same as with that manual transmission. It's the moment when you've struggled enough to understand the problem, but you're ready for AI to help you implement faster. You're not skipping the thinking. You're accelerating the doing.

Like driving manual, it takes adjusting. You mess up. You refine. Eventually, it becomes muscle memory. You know when to engage first gear, when to let the clutch out, when to give it gas. Same with coding. You struggle through a problem until you understand the principles. Then you iterate. You build those neural pathways. Eventually, you can spot similar problems instantly and know exactly how to approach them. That's how you avoid the "cognitive offloading" the research warns about. You're not skipping the understanding phase. You're just speeding up implementation.

The system works in phases: pure struggle to learn, AI assistance while you maintain control, validation where you test everything, and finally mastery where you recognize patterns instantly. You become that manual transmission driver who can feel exactly what the car needs.
My approach is simple but takes discipline. When learning something new? No AI. Zero. You build understanding layer by layer. AI would just shortcut that process and rob you of real learning. Set your own goals. Struggle through it. Build the mental models. For implementation? That's when AI becomes useful. You already know what you want. AI writes it. You validate. You iterate. The thinking stays yours. For validation? Use AI to find how others solved similar problems. Study their patterns. Then come back and make yours better. The key is agency. AI responds to your questions. It doesn't drive. You decide when you need help.
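To make that implementation-plus-validation loop concrete, here's a minimal sketch in Python. The problem and test cases are hypothetical, purely for illustration: suppose I've asked an AI to draft a function that merges overlapping intervals. I already know the behavior I want, so I check its draft against cases I worked out myself before the code goes anywhere near my project.

```python
# Hypothetical example: an AI-drafted function for merging overlapping
# intervals. I review it, but I don't trust it until it passes checks
# that came out of my own head.

def merge_intervals(intervals):
    """Merge overlapping (start, end) pairs into non-overlapping ranges."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous range: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# My test cases, written before reading the AI's code.
assert merge_intervals([]) == []
assert merge_intervals([(1, 3), (2, 6), (8, 10)]) == [(1, 6), (8, 10)]
assert merge_intervals([(5, 7), (1, 2)]) == [(1, 2), (5, 7)]
assert merge_intervals([(1, 4), (4, 5)]) == [(1, 5)]
print("All checks passed: the draft matches the behavior I specified.")
```

The point isn't the function. It's that the expectations came from my head first, so I can tell immediately when the AI hands me something wrong.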
Steve Yegge nailed it recently. What makes someone "senior" in the AI era? Two things: knowing what you want before AI writes it, and catching when AI gives you garbage. That's exactly what the research shows. When you lose the ability to validate AI output, your brain literally stops engaging. It's cognitive offloading in action.
Here's my go-to prompt when I need AI help:
> Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let's do this iteratively and dig into every relevant detail. Remember, only one question at a time.
It forces me to think through each step. I maintain ownership. I develop understanding instead of just accepting whatever AI spits out. The real danger? It's not that AI will replace engineers. It's that engineers will replace themselves with AI-dependent versions that can't think independently.
Think about building intelligent systems. You can't build what you don't understand. We're creatures of reasoning. How do you build something you have no clue about? LLMs can't go beyond their training. Sure, they remix existing solutions in new ways. But critical thinking? Take the flat earth versus spherical earth debate. If LLMs had existed back then, they'd only work within the flat earth paradigm. It took someone to challenge the premise. To say, "Hey, that doesn't seem right." Newton observing an apple. Archimedes discovering density. That's human reasoning breaking new ground.
As 3Blue1Brown points out, LLMs are pattern matching systems. Sophisticated ones, but still just pattern matchers. They generate content based on similarity. They don't understand context. They don't actually "know" anything. They predict what comes next based on training data. When you outsource thinking to a system that doesn't think, you're not delegating work. You're letting your skills atrophy.
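If you want to feel how shallow "predicting what comes next" really is, here's a toy sketch. It's a deliberately tiny bigram frequency model, nothing like a production LLM in scale or mechanism, but the basic idea of continuing text from patterns in training data is the same:

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the model can only ever remix what is in here.
corpus = "the earth is flat the earth is old the sky is blue".split()

# Count which word follows which (the simplest possible pattern matcher).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("earth"))      # 'is'   -> sounds plausible
print(predict_next("is"))         # 'flat' (ties resolve to the earliest continuation seen)
print(predict_next("spherical"))  # None   -> no concept outside its training data
```

Scale that idea up by billions of parameters and the output gets far more fluent, but the premise stays the same: it continues the patterns it has already seen.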
The solution isn't avoiding AI. That's not realistic. The solution is using it strategically while keeping your edge sharp. Build deep understanding of algorithms, data structures, system design. Get operations experience. Learn to validate AI output and catch bad guidance. Cross-validate with multiple AI systems. Use AI as your turbocharger, not your replacement. Stay the craftsman. Let AI be the builder.
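Cross-validation can be as simple as asking the same question of more than one model and treating disagreement as a cue to slow down and reason it out yourself. A rough sketch follows, where the `ask_model_a` and `ask_model_b` helpers are hypothetical placeholders for whatever model clients you actually use:

```python
# Sketch of cross-validating with multiple AI systems. The two ask_model_*
# helpers are hypothetical stand-ins; swap in your real model clients.

def ask_model_a(question: str) -> str:
    # Placeholder: call your first model here.
    return "Use a context manager so the file is always closed."

def ask_model_b(question: str) -> str:
    # Placeholder: call your second model here.
    return "Wrap the open() call in try/finally and close it explicitly."

def cross_validate(question: str) -> str:
    answer_a = ask_model_a(question)
    answer_b = ask_model_b(question)
    if answer_a.strip().lower() == answer_b.strip().lower():
        # Agreement isn't proof, but it raises confidence.
        return answer_a
    # Disagreement is my cue to reason through both answers myself
    # before accepting either one.
    return f"MODELS DISAGREE, review needed:\nA: {answer_a}\nB: {answer_b}"

print(cross_validate("What's the safest way to handle file cleanup in Python?"))
```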
Your brain is a muscle. It needs exercise. Challenges make you grow. Reasoning through problems makes you better. Don't let AI steal that from you. The projects I work on and build today are way more complex than anything I imagined years ago. But I understand them deeply. I kept my thinking sharp while using AI to move faster. That's the difference. Using AI versus being used by it.
Everyone's replaceable, believe it or not. In a world riddled with mediocrity, I'm still enthused by what the human mind can really do. I've met some truly remarkable people whose thinking never fails to amaze me. AI will always be a supportive partner, a Tony Stark and Jarvis kind of thing. But just that. May we find our place as masters of these tools in the world and the season we are ushering in.
The research referenced in this post comes from "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" by Kosmyna et al. from MIT Media Lab. The study provides compelling evidence for the cognitive impacts of AI dependence.