Last summer I spent two weeks vibe-coding an app with Lovable. The result looked great in the demo. Under the hood, everything lived in a handful of bloated files, state scattered like confetti across components that probably shouldn't have existed. I closed the tab and went back to writing code the old way.
That was the wrong lesson.
The dismissal
I think most developers went through some version of this. You try the shiny AI tool, it produces something that kind of works, the code makes you wince, and you walk away convinced that AI-assisted development is a toy. I did. I used Cursor for a while after that, but only for the safe stuff — utility functions, test scaffolding, things I could verify at a glance. I didn't trust it with anything that mattered.
Then I started using Claude Code with custom skills — feeding it my architectural patterns, my naming conventions, the way I structure projects. And something shifted. The output stopped feeling like a stranger's code. It felt like mine, written by a junior developer who'd actually read the docs.
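"Custom skills" here just means giving the model a standing, written description of how you work instead of re-explaining it every session. Claude Code reads project-level instruction files (a CLAUDE.md at the repo root is the common convention); the sketch below is illustrative of the kind of conventions file I mean, not my actual setup — every path, naming rule, and line limit in it is an example, not a prescription.

```markdown
# CLAUDE.md — project conventions (illustrative example)

## Architecture
- Feature-first layout: each feature owns its components, hooks, and tests
  under src/features/<name>/.
- No cross-feature imports; shared code lives in src/shared/ only.

## Naming
- Components: PascalCase files (UserCard.tsx). Hooks: useXxx.ts.
- One exported component per file; keep files small and focused.

## Process
- Every new module gets a test file next to it.
- Before writing anything new, search for an existing implementation
  and extend it rather than duplicating it.
```

The specifics matter far less than the fact that they're written down. The model can't infer your architecture from vibes, but it can follow rules it can read.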
The flip
Here's the thing nobody tells you about rubber duck debugging. The duck doesn't need to understand your code. You do. The act of explaining makes complexity visible — you hear yourself say something illogical and catch it before the words finish leaving your mouth.
We've spent the last two years treating AI as our rubber duck. We explain what we want, the model generates something, and we learn from the sparring. Better model, better result.
That relationship is flipping.
The models are getting good enough that they don't need us to explain the problem — they need us to answer the questions they can't figure out alone. Where should this service live? Should we prioritize consistency or flexibility here? Is this abstraction earning its complexity? These aren't questions a non-developer can answer. They require years of watching systems grow, rot, and collapse under their own weight.
The smarter you are, the better the AI performs. You've become the rubber duck.
The vibe-coding lesson
There's a popular narrative that AI doesn't care about code quality. That architecture is a human vanity. That you can just vibe your way to a working product and let the machine figure out the details.
Wrong.
LLMs suffer from bad code quality just as humans do. Point an agent at a messy codebase and watch it fix one bug while introducing three new ones. The same tangled dependencies that confuse a junior developer confuse the model. It hallucinates connections between modules that shouldn't know about each other. It duplicates logic because it can't find the existing implementation buried in a 2,000-line file.
Clean code isn't about aesthetics anymore — it's about making your AI collaborators effective. Good naming, clear boundaries, small modules. The things we always said mattered but sometimes let slide when we were the only ones reading the code. Now there's a team of tireless agents navigating your codebase around the clock, and every shortcut you took becomes their stumbling block.
From production to direction
We spent years grinding the skills that are now being commoditized. Syntax mastery. Writing components. Debugging line-by-line. The raw speed of cranking out code. These were the things that made you a "fast" developer, and fast developers got promoted.
That ladder is dissolving.
What compounds now is everything we used to call "soft" skills — the tech lead skills. Knowing what good looks like. System thinking. Asking the right questions. Deciding what to build, not just how to build it. The irony is that these skills were always more valuable. We just couldn't prove it because the bottleneck was production speed. Remove the bottleneck, and suddenly the person who can steer a system matters more than the person who can type one out.
If you're a developer who spent ten years building intuition about what makes software resilient, what makes abstractions worth their cost, what makes teams ship without tripping over each other — those years weren't wasted. They were preparation for a role that didn't fully exist until now.
The fear underneath
I hear the same thing from developers everywhere. "We spent years building skills that are no longer relevant." It's not usually said that bluntly. It shows up as skepticism toward the tools, as insistence that "real" developers don't need AI, as a quiet refusal to engage.
I get it. The fear isn't irrational. But it's pointed at the wrong thing.
Your skills aren't obsolete. They're applied differently. The developer who understood why a certain pattern fails at scale still understands that — and now has agents that can implement the alternative in minutes instead of days. The developer who could spot a leaky abstraction from a code review still spots it — and now the fix ships the same afternoon.
What actually holds people back isn't a lack of technical skill. It's the fear of stepping away from what's comfortable. The fear of admitting that the thing you were great at — raw code production — is no longer the differentiator.
The new differentiator
Smarter human, better result. That's the equation now.
Not "faster typist, better result." Not "more Stack Overflow answers memorized, better result." The developer who reads widely, thinks in systems, and can articulate why a decision matters — that developer gets more out of AI than someone who can write a React component from memory.
So the question isn't whether AI will replace developers. It's whether you're ready to be the person the AI turns to when it gets stuck. Whether you're ready to sit on the other side of the desk, answer the hard questions, and steer a system you didn't write every line of.
Are you ready to become the rubber duck?