DEV Community

Kevin

I've Been Vibe Coding for a Year. Here's What Actually Happened.

The first time I shipped a feature using nothing but natural language prompts and a coding agent, I felt like I'd discovered fire. A full REST endpoint — auth, validation, tests — in about 40 minutes. I sent a smug message to my tech lead and went for a walk.

That was February 2025. A year later, I have thoughts.


The Hype Was Real (Partially)

Let me give credit where it's due. Coding agents got good. Not "impressive demo" good — actually, genuinely useful good. The jump from GPT-4-era autocomplete to the agentic tools we're using now is not incremental. It's categorical.

I'm talking about tools that can read your codebase, understand your conventions, write the boilerplate, run the tests, see them fail, and fix the failure — without you touching the keyboard. That's a different thing. That's not autocomplete. That's closer to pair programming with someone who never gets tired and doesn't complain about your variable names.
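That write-run-fix loop can be simulated in miniature. This is a toy sketch, not any real tool's implementation: `fake_model` is a scripted stub standing in for the LLM, and the "codebase" is a single function held in memory.

```python
# Toy simulation of the agentic write-run-fix loop. Everything here is a
# stand-in: `fake_model` is scripted, not a real model, and the "patch" is
# just a function swap.

def buggy_add(a, b):
    return a - b  # the model's flawed first draft

def fixed_add(a, b):
    return a + b  # the repair the model produces after seeing the failure

def fake_model(failure):
    """Stub model: returns a first draft, then a fix once it sees an error."""
    return buggy_add if failure is None else fixed_add

def run_tests(impl):
    """Return None on success, or an error message the agent can read."""
    try:
        assert impl(2, 3) == 5
        return None
    except AssertionError:
        return f"add(2, 3) returned {impl(2, 3)}, expected 5"

def agent_loop(max_iters=5):
    """Write, run tests, read the failure, fix; no human keystrokes."""
    impl = fake_model(None)  # first draft
    for _ in range(max_iters):
        failure = run_tests(impl)
        if failure is None:
            return impl  # tests pass; done
        impl = fake_model(failure)  # feed the failure back, get a patch
    raise RuntimeError("agent gave up; hand back to the human")

assert agent_loop()(2, 3) == 5
```

The whole point of the loop is that the failure output goes back into the model instead of in front of a person.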

In raw throughput terms? I'm probably shipping 2x the surface area I was 18 months ago. Some weeks more. For greenfield work — new features, new services, scaffolding — the speed increase is real and I'd fight anyone who says otherwise.

But Here's the Part Nobody Talks About

Maintenance is a different story.

Code that gets generated fast doesn't necessarily get understood fast. I've got modules in one of my side projects that I genuinely could not explain under pressure, because I watched them get written in 3 minutes and said "looks good" without really tracking the logic. That was fine until I needed to debug a race condition in one of them at 11pm.

Here's the dynamic nobody warned me about: vibe coding creates a debt that isn't technical debt in the classic sense. It's comprehension debt. The code might work fine. It might even be well-structured. But if you weren't in the driver's seat when it got written, there's a gap between what the system does and what's in your head — and that gap compounds.

Small projects, low stakes? Probably fine. Ship fast, iterate, if it breaks you rewrite it. But anything you're going to maintain for 18+ months, anything with multiple contributors, anything with non-obvious business logic baked in — that comprehension gap starts to hurt.

The "Just Read the Code" Counterargument

I know what you're thinking. Just read the generated code before you merge it.

Yes. Obviously. And I do. But there's a real psychological thing that happens when an agent produces 200 lines of correct-looking code in 8 seconds: your review threshold shifts. You spot-check instead of read. You run the tests and if they pass, you move on. The speed that makes the tool valuable is the same speed that makes it easy to not really engage with the output.

This isn't a complaint about the tools. It's a complaint about the workflow culture that grew up around them. "Just ship it, the AI tested it" is the new "works on my machine."

Where It's Actually Transformative

There are categories where I've genuinely never looked back:

Prototyping. Want to explore an idea without committing? Having an agent scaffold a working prototype in an afternoon changes how you make product decisions. You're not arguing about feasibility in a doc — you're poking at a real thing.

The tedious stuff. Writing serializers, migration scripts, type definitions, test fixtures — all the work that's mechanical and low-creativity. Offloading that is a genuine quality-of-life improvement. I actually like my job more now because I spend less time doing the parts I found mind-numbing.

Crossing unfamiliar territory. Working in a language or framework I don't know well? The agent basically acts as a tutor who also writes the code. I shipped a Rust service last summer. I don't know Rust. It works in production. I have mixed feelings about this.
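To make the "tedious stuff" category concrete, here's a hypothetical example of the mechanical, low-creativity code I mean (the names `User` and `to_dict` are mine, not from any real project): a dataclass, a hand-rolled serializer, and a test fixture.

```python
# Illustrative sketch of boilerplate an agent can churn out: a dataclass,
# a JSON-friendly serializer, and a test fixture. All names are hypothetical.
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class User:
    id: int
    email: str
    joined: date


def to_dict(user):
    """Serialize a User to a JSON-friendly dict (dates become ISO strings)."""
    d = asdict(user)
    d["joined"] = user.joined.isoformat()
    return d


# A typical fixture: boring to write by hand, trivial for an agent.
fixture = User(id=1, email="a@example.com", joined=date(2025, 2, 1))
assert to_dict(fixture) == {"id": 1, "email": "a@example.com", "joined": "2025-02-01"}
```

Nothing in that block requires creativity, which is exactly why it's a good thing to offload.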

The Thing That Surprised Me Most

I expected the biggest change to be speed. I was wrong.

The biggest change is scope of ambition. When the cost of trying something drops this much, you try more things. I've experimented with architectural patterns I would've dismissed as too complex to justify the implementation time. Some of them were bad ideas — but I found that out by building them, not by theorizing.

There's something valuable in that. The filter between "interesting idea" and "thing that exists" is much thinner than it used to be. Whether that's unambiguously good depends on your discipline, your context, and honestly your personality.

Where We Are Right Now

March 2026, and the discourse has split into two camps that are both kind of wrong. Camp A says AI coding agents are replacing developers — look at the productivity numbers, look at how much smaller engineering teams are getting at startups. Camp B says none of this matters because the hard problems are still hard and "real engineering" is untouched.

Both camps are being selective about the evidence.

The honest picture: for a large and growing percentage of software work — the scaffolding, the boilerplate, the obvious feature implementation, the bug that looks like the bug you fixed last week — AI agents are already doing most of the lifting. That work isn't gone, but it takes a fraction of the human time it used to.

The work that's still fully human is shrinking more slowly. Complex distributed systems reasoning, debugging subtle emergent behaviors, making judgment calls about product direction, doing the political and collaborative work of aligning engineering with business context — that's all still human-shaped work.

But that's a smaller fraction of the total job than it used to be. And it's getting smaller.


So What Do You Do With That?

Honestly? Get comfortable being a reviewer, a director, an editor. The skill that matters most right now isn't writing code. It's evaluating code quickly and accurately. Knowing when the output is subtly wrong. Knowing when the tests are testing the wrong thing. Knowing when "this works" isn't the same as "this is right."

That's not a lesser skill. It might actually be harder to develop than raw implementation ability, because it requires deep understanding without the forcing function of having written the thing yourself.
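A contrived sketch of what "the tests are testing the wrong thing" looks like in practice (the `split_bill` function and its test are hypothetical, invented for illustration):

```python
# Hypothetical example of a passing test that misses the actual requirement:
# remainder cents should still be paid by someone, but the test never checks.
def split_bill(total_cents, people):
    """Split a bill evenly; remainder cents should go to the first payers."""
    share = total_cents // people
    return [share] * people  # bug: the remainder is dropped entirely


def test_split_bill():
    # Passes, because 100 divides evenly by 4. The remainder case is never
    # exercised, so green tests here do not mean correct code.
    assert split_bill(100, 4) == [25, 25, 25, 25]


test_split_bill()
# split_bill(101, 4) returns [25, 25, 25, 25]: one cent vanishes,
# and no test notices.
```

Catching that gap is the reviewer skill: the suite is green, the demo works, and the code is still wrong.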

I've been vibe coding for a year. I'm faster. I'm also more aware than ever of how much I don't know about code I've shipped.

Both of those things are true, and I'm still working out what to do about it.
