TL;DR
I used to measure my productivity by how fast I could generate code. Then I realized I was optimizing for the wrong metric.
The trap: LLMs make code generation nearly free. But faster code generation ≠ better problem-solving. The bottleneck isn't typing; it's reasoning.
The reality: I was shipping more features but accumulating technical debt faster. The code looked perfect, but I didn't always understand what I was building.
The shift: Value comes from understanding what to build, why to build it that way, and what will break when you do. LLMs can't do that reasoning for you.
I've been using LLMs in my daily workflow for over two years. At first, I was amazed. I could scaffold a new API endpoint in minutes instead of hours. I could generate test cases, refactor modules, and write boilerplate at speeds that felt superhuman.
My productivity metrics looked incredible. I was shipping more features, closing more tickets, and generating more code than ever before.
But something felt off.
I was moving fast, but I wasn't sure I was moving in the right direction. I was generating code quickly, but I wasn't always confident I understood what I was building. I was shipping features, but I was also accumulating technical debt faster than I could pay it down.
The Wake-Up Call
The moment I realized I had a problem was during a code review. A teammate asked me why I'd chosen a particular approach for handling pagination. I stared at the code—code I'd generated with ChatGPT just two days earlier—and realized I couldn't explain the decision.
The code worked. It passed tests. It looked professional. But I had no idea why I'd made certain choices. I'd just accepted whatever the LLM suggested because it looked right.
That's when I understood: I was mistaking speed for skill, generation for understanding, and output for impact.
A Real Example: The Pagination Incident
Last month, I was asked to "add pagination to the user list endpoint." I opened ChatGPT, pasted the request, and got 50 lines of clean, working code in 30 seconds. I tested it locally—it worked! I opened a PR in 20 minutes.
By naive metrics, I was incredibly productive. But here's what I didn't consider:
- The endpoint was used by three different clients (web dashboard, mobile app, admin tool). Adding pagination meant all three needed updates.
- The current endpoint was cached at the CDN. With pagination, cache effectiveness would drop because each page is a separate cache entry.
- I didn't investigate why pagination was needed. Was the list slow to load? Was the client having trouble rendering? Different problems, different solutions.
A week later, I spent three hours fixing issues I'd created. The mobile app broke because it wasn't expecting pagination metadata. The caching strategy became ineffective. And I still hadn't solved the actual performance problem.
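To make the break concrete, here's a minimal sketch of the contract change. The shapes are hypothetical, but the failure mode is the one that bit me: the response went from a bare array to a wrapped object, and the mobile client iterated the top-level value directly.

```python
# Hypothetical response shapes, before and after the generated change.
old_response = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

new_response = {
    "items": [{"id": 1, "name": "Ada"}],  # rows moved under a key...
    "page": 1,
    "total_pages": 2000,                  # ...next to pagination metadata
}

def render_users(payload):
    # The mobile client assumed the top-level value was a list of users.
    for user in payload:
        print(user["name"])

render_users(old_response)  # works
render_users(new_response)  # iterates dict KEYS -> TypeError on user["name"]
```

A backwards-compatible option, like keeping the array and moving page metadata into response headers, was available the whole time. I just never looked for it.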
If I'd spent 30 minutes understanding the problem first, I would have written less code, but it would have been the right code.
What Actually Changed
LLMs compressed feedback loops, not cognitive work.
Before LLMs:
- Finding information: 30-45 minutes
- Typing boilerplate: 15-20 minutes
After LLMs:
- Finding information: 2-5 minutes
- Typing boilerplate: 1-2 minutes
The loops are faster. But faster loops don't mean no loops. They don't mean you can skip understanding, judgment, verification, or responsibility.
Before LLMs: The bottleneck was often access to information.
After LLMs: The bottleneck is knowing what information to trust and how to apply it.
The skill that matters now isn't finding answers—it's evaluating answers, asking the right questions, and understanding context deeply enough to know when the AI is wrong.
The Illusion of Understanding
Here's the most dangerous trap: LLMs generate code that looks like it was written by someone who understands the problem. The code is well-structured, follows conventions, includes error handling, and comes with comments. It looks professional.
This creates an illusion of understanding. If the code looks right, it's easy to assume it is right. You skim the PR, run the tests, merge it. Only later do you discover that it doesn't handle an edge case, violates an implicit invariant, or makes assumptions that don't hold in production.
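Here's the kind of thing I mean. This helper is hypothetical, but it's representative: docstring, backoff, error handling, all the trappings of care, with a failure mode hiding in plain sight.

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=1.0):
    """Call `fetch`, retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            # Looks like robust error handling, but it swallows *every*
            # exception -- including genuine bugs like a NameError -- and
            # it sleeps even after the final attempt.
            time.sleep(backoff * 2 ** attempt)
    # When all retries fail, the function quietly returns None, so callers
    # crash later on `result.json()`, far from the actual failure.
    return None
```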
I've been there. I've merged code that looked perfect, only to debug it at 2 AM when it broke in production. The problem wasn't the code—it was that I didn't understand what the code was actually doing.
The Thinking That Matters
Effective developers operate in a cycle: understand → hypothesize → implement → verify → reflect.
LLMs tempt you to short-circuit this cycle by jumping straight to implementation.
Understanding: What is the actual problem? Not the stated problem, the actual problem. When someone says "the dashboard is slow," is it slow queries? Inefficient rendering? Too much data? This requires investigation—profiling, tracing, reading code. An LLM can help, but it can't do the investigation itself.
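In practice, that investigation often starts with measurement rather than a prompt. A minimal sketch, with stand-in functions in place of real dashboard code:

```python
import cProfile
import pstats
import time

def slow_query():
    time.sleep(0.3)                       # stand-in for the real DB call
    return list(range(1000))

def render(rows):
    return "".join(str(r) for r in rows)  # stand-in for templating

def handle_dashboard_request():           # hypothetical handler
    return render(slow_query())

# Measure before hypothesizing: is the time in the query or the rendering?
cProfile.run("handle_dashboard_request()", "dashboard.prof")
pstats.Stats("dashboard.prof").sort_stats("cumulative").print_stats(5)
```

The profile answers "which kind of slow is it?", a question no amount of generated code answers for you.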
Hypothesizing: Based on your understanding, what might fix it? This is where experience matters. An LLM can suggest solutions, but it can't prioritize them based on your specific context. It doesn't know that your database is already IO-bound, or that your team struggles with cache invalidation.
Implementing: This is where LLMs excel. Once you know what to build, they can help you build it quickly. But "quickly implementing the right thing" is very different from "quickly implementing something."
Verifying: Does it actually work? Not "does it compile," but "does it solve the problem?" An LLM can generate tests, but it can't tell you if the tests are meaningful. It can't catch the subtle bug that only appears in production with 10,000 concurrent users.
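Here's what that gap looks like. The helper and both tests are hypothetical, but the pattern is common: the auto-generated test restates the happy path, while a meaningful test encodes an invariant and actually fails against this implementation.

```python
import pytest

def paginate(items, page, per_page):
    # The kind of helper an LLM produces: clean, conventional, no validation.
    start = (page - 1) * per_page
    return items[start:start + per_page]

# The test an LLM tends to generate alongside it: it passes, but it only
# restates the happy path.
def test_first_page():
    assert paginate(list(range(10)), page=1, per_page=3) == [0, 1, 2]

# A meaningful test encodes an invariant. With page=-1, `start` is -6 and
# Python's negative slicing silently returns rows from the MIDDLE of the
# list. This test fails against the helper above -- exactly the feedback
# the happy-path test never gives you.
def test_invalid_page_is_rejected():
    with pytest.raises(ValueError):
        paginate(list(range(10)), page=-1, per_page=3)
```

Only someone who understands the data and the callers knows which invariants are worth encoding.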
Reflecting: What did you learn? Why did this solution work (or not work)? An LLM has no memory of this iteration. Every conversation starts fresh. It can't learn from your mistakes. You can.
This cycle is where reasoning lives. LLMs can accelerate the implementation step, but they can't replace the rest of the cycle. And the rest of the cycle is where the value is created.
What This Means for Your Career
If your value as a developer came primarily from knowing syntax, remembering API signatures, or typing fast, then yes, your value has decreased.
But if your value comes from understanding systems, making trade-offs, debugging complex issues, designing solutions, and knowing your domain, then your value is higher than ever. You can now move faster through the mechanical parts and spend more time on the parts that actually matter.
The gap between mediocre and excellent work might be widening. If you use LLMs as a crutch, you'll ship faster but learn slower. If you use them as an accelerator, you'll both ship faster and deepen your expertise.
My Framework Now
Before generating code, I ask myself:
Do I understand the problem? Not just the stated problem, but the actual problem. What's the root cause? What are the constraints?
Do I know what I want to build? Can I explain it clearly to a colleague? If not, I need to think more, not generate more.
Am I using the LLM for mechanical work or thinking? Boilerplate, syntax, repetitive patterns—good. Problem-solving, architecture, critical business logic—risky.
Can I explain why this code works? If I can't explain it, I don't understand it. And if I don't understand it, I can't maintain it.
What happens when this breaks? Will I be able to debug it at 2 AM? Will I understand the failure modes?
The rule: If I can't confidently answer these questions, I slow down. I think more. I generate less.
The Bottom Line
LLMs are powerful tools. They can generate syntactically correct code in seconds. But they can't understand your infrastructure, your constraints, or your operational requirements.
Use AI to accelerate implementation, but maintain ownership of the reasoning.
The code that looks perfect might have subtle bugs that only become obvious when you understand systems deeply. And most importantly: if you can't explain why the code works, don't ship it.
Speed is not the same as skill. Faster code generation doesn't make you a better developer—better reasoning does.
What's your experience with the productivity illusion? Have you caught yourself optimizing for speed over understanding? Share your thoughts in the comments below!
If you found this useful, I've documented this and other frameworks for working with LLMs strategically in my book, "Being a Software Developer After LLMs". It covers how to use LLMs as accelerators without losing your engineering edge.