We stopped writing code manually six months ago. Not because we got lazy — because we got smart about where our time should actually go.
I know, I know. "Vibe coding is just letting AI do the work while you take credit." We've heard it all. But here's what three months of running AI-assisted development in production at Gerus-lab actually taught us — and why every senior dev calling it a gimmick is probably losing ground to teams who don't.
What Vibe Coding Actually Is (Not What Twitter Says)
Andrej Karpathy coined the term, and then almost immediately wanted to retire it because people started using it as either a badge of honor or a slur. The actual practice is simpler: you describe the outcome you need, let the model generate an implementation, verify the result actually works, and move on.
That's it. No magic. No "AI replaces developers." Just a dramatically different allocation of where human judgment goes.
At Gerus-lab, we build products across Web3, AI infrastructure, GameFi, and SaaS. Our clients range from fintech startups on TON blockchain to mid-size companies automating internal workflows. What we found is that vibe coding isn't one thing — it's a spectrum, and where you sit on that spectrum determines whether you ship faster or whether you ship garbage faster.
The Three Modes We Actually Use
Mode 1: Scaffolding at Speed
When we kick off a new project — say, a Telegram bot connected to a TON smart contract — the first 20% of the work is architecture decisions and boilerplate. With AI-assisted development, we compress that from 3 days to 4 hours. We describe the component structure, the data flow, the auth pattern. The AI generates the scaffolding. We review it, fix the parts that don't match our internal conventions, and we're off.
Result: more time on the actual interesting problems. More cognitive budget for the parts that matter.
Mode 2: Feature Implementation with Human Review Gates
This is the bread and butter. For any feature that isn't touching critical security or core business logic, we write the spec, let Claude Code or Cursor generate the implementation, run our test suite, and human-review only the parts that interact with existing system state.
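The routing logic behind that review gate can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the path patterns are made up, and a real project would tune them to its own layout. The idea is simply that after an AI-generated change passes the test suite, only files matching "stateful" patterns get flagged for a human.

```python
# Hypothetical sketch of a review gate: route only files that touch
# existing system state (auth, schema, money) to a human reviewer.
# Patterns are illustrative, not from a real project.
from fnmatch import fnmatch

HUMAN_REVIEW_PATTERNS = [
    "*/migrations/*",   # database schema changes
    "*/auth/*",         # anything touching authentication
    "*/billing/*",      # anything touching money
]

def needs_human_review(changed_files):
    """Return the subset of changed files a human must look at."""
    return [
        path for path in changed_files
        if any(fnmatch(path, pattern) for pattern in HUMAN_REVIEW_PATTERNS)
    ]

changed = ["app/ui/button.py", "app/auth/session.py", "app/billing/invoice.py"]
print(needs_human_review(changed))
# ['app/auth/session.py', 'app/billing/invoice.py']
```

The point of encoding the gate as data rather than tribal knowledge is that it gets applied every time, not just when someone remembers.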
On a recent project — a SaaS platform with complex user permission hierarchies — this let us implement 14 features in a sprint where we'd normally have shipped 8. Not because the AI wrote perfect code. It didn't. But the iteration cycle was so fast that we caught and fixed mistakes in minutes instead of hours.
Mode 3: Exploration Without Commitment
This is underrated. When we're evaluating whether a technical approach is viable — can we make this Solana integration work? will this cache invalidation strategy hold under load? — we can now generate and discard 5 implementations in the time it used to take to write one. The ones that don't work teach us something. The one that does work becomes the real solution.
You can read more about how we apply this in real client projects at gerus-lab.com.
The Argument Against (That Actually Has Merit)
Here's where I'll give the skeptics their due, because they're not entirely wrong.
Vibe coding produces brittle code when you don't know what you're looking at. If you're a junior dev who can't evaluate whether the AI's database indexing strategy is going to cause problems at scale, you're going to ship something that looks fine until it absolutely isn't.
The failure mode isn't "AI wrote bad code." The failure mode is "human didn't catch bad code because they didn't know what to look for." That's a skills problem, not an AI problem — but the AI absolutely amplifies it.
We've also seen it in open source: repositories that look polished and well-documented but have fundamental architectural decisions that clearly came from an AI that had no context about the project's history. The AI said "yes" to a pattern that anyone with 6 months on the codebase would have said "absolutely not" to.
The fix: you still need people who can read the output critically. Vibe coding doesn't remove the need for engineering judgment. It changes where that judgment gets applied.
What It's Actually Doing to Our Team
We're a lean team. That's intentional — we think small, senior-weighted teams with strong tooling beat bloated org charts almost every time. AI-assisted development has made that calculus even more favorable.
Things that changed:
- Our onboarding time for new projects dropped by 40%. New client context gets synthesized faster.
- We do fewer code reviews of implementation details, more reviews of architectural decisions.
- Junior developers on our team are actually developing faster because they can generate an implementation, get feedback from the AI on why it's wrong, and learn from that loop rather than waiting for a senior dev to have time to explain.
Things that didn't change:
- Security reviews. We still do these manually, carefully, every time. The AI's opinion on "is this input validation sufficient" is not the final word.
- Client-facing architecture decisions. Those still require human judgment with context about the client's business, constraints, and risk tolerance.
- Anything touching funds or on-chain state in our blockchain projects. You do not vibe code around money. Full stop.
We've written more about our engineering philosophy at gerus-lab.com.
The Actual Threat Model
The developers who should be worried aren't the ones who understand their domain deeply and are now using AI as a force multiplier.
The ones who should be worried are the ones whose primary value was writing boilerplate quickly and accurately. That work is now table stakes. If your professional advantage was "I can set up a CRUD API faster than most people," that advantage has been significantly compressed.
The developers who are thriving right now are the ones who understand systems, who can reason about tradeoffs, who can look at AI-generated code and immediately see the three things it got wrong. Those people are 5x more productive than they were two years ago. They're not being replaced. They're being amplified.
We've been hiring with this in mind. When we evaluate candidates, we care less about whether they can write a quicksort from memory and more about whether they can take a large, messy, AI-generated codebase and tell us what's wrong with it.
How We Think About This Going Forward
Vibe coding is not the future. Agentic engineering is the future — and vibe coding is just the awkward adolescence of that.
What we're moving toward at Gerus-lab is structured AI collaboration: clear specs, human-defined architecture guardrails, AI-generated implementation with targeted human review, automated test coverage requirements, and a feedback loop that gets tighter as the AI gets better context about your project.
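One of those guardrails, the automated coverage requirement, is simple enough to sketch. This is a minimal illustration under assumed inputs; the threshold and the function name are invented here, not taken from any real pipeline:

```python
# Hedged sketch of an automated coverage floor that AI-generated
# changes must clear before they reach human review.
# The 80% threshold is illustrative, not a recommendation.
def coverage_gate(covered_lines, total_lines, floor=0.80):
    """Return True if line coverage meets the required floor."""
    if total_lines == 0:
        return True  # nothing to cover, nothing to gate
    return covered_lines / total_lines >= floor

print(coverage_gate(85, 100))  # True
print(coverage_gate(60, 100))  # False
```

A gate like this is deliberately dumb: it can't judge test quality, but it stops the common failure of AI output that compiles, demos well, and is tested nowhere.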
The teams that figure this out in the next 12 months are going to open a structural lead that will be very hard to close. The teams still debating whether AI coding is "real" engineering are going to notice the gap when they're trying to compete on delivery speed.
We've seen this play out in our own client base. The clients who are integrating AI into their development workflow — with appropriate oversight — are moving faster than they ever have. The ones who are resistant are watching timelines slip.
What We'd Tell Teams Starting This Now
Don't start with production. Use AI for internal tools, proofs of concept, and exploratory work first. Build your intuition for what the AI gets right and what it consistently screws up before you depend on it.
Define your review gates before you start. Know in advance which categories of code require senior human review. Don't make that decision in the moment when you're excited about shipping.
Invest in your specs. The quality of AI output is almost entirely determined by the quality of your input. Sloppy prompt → sloppy code. A well-structured technical spec will produce dramatically better results than "write me an auth system."
Track failures, not just successes. When the AI-generated code causes a bug, document it. You'll find patterns. Those patterns tell you where to apply more scrutiny.
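The failure log doesn't need to be fancy to be useful. Here's a minimal sketch of the idea; the categories and notes are made up for illustration, and a real log would live in an issue tracker or a shared document rather than in memory:

```python
# Minimal sketch of an AI-failure log: record a category each time
# AI-generated code causes a bug, then tally to see where extra
# scrutiny pays off. All entries below are invented examples.
from collections import Counter

failure_log = []

def record_failure(category, note):
    failure_log.append({"category": category, "note": note})

record_failure("off-by-one", "pagination returned 11 items per page")
record_failure("cache-invalidation", "stale permissions after role change")
record_failure("off-by-one", "date range excluded the final day")

tally = Counter(entry["category"] for entry in failure_log)
print(tally.most_common(1))
# [('off-by-one', 2)]
```

Once a category dominates the tally, that's the signal to add a review gate or a targeted test for that class of mistake.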
Keep the humans who understand the domain. The worst outcome is using AI to justify cutting your most experienced engineers. Those are exactly the people you need to make AI-assisted development work safely.
If you're building something where these tradeoffs matter — Web3 infrastructure, AI-powered products, complex SaaS systems — we'd be glad to talk through the approach with you. We've made the mistakes so you don't have to.
The vibe coding debate is a proxy debate. The real question is: how do you restructure your engineering workflow to get the benefits of AI assistance without the failure modes? That's an engineering problem. It has engineering solutions.
We're working on them every day at Gerus-lab.