TL;DR: After 5 months with Claude Code and 2 weeks with OpenClaw, I've realized the "vibe coding" era is over. The new meta is validation-driven development. Here's my story.
I've been a full-stack engineer for 10+ years. Built low-code platforms at Alipay. I predicted 6 years ago that AI would replace my work within 5 years. That prediction came true—but living through it feels surreal.
The Death of "Vibe Coding"
You've seen the posts: "I built a SaaS in 2 hours with Claude." "AI replaced my entire dev team."
Here's what nobody talks about: most of that code is broken.
I learned this the hard way. My blog had a scroll bug—typing at the bottom made the page jump to the top with every keystroke. I asked Cursor for a fix. It gave me confident, plausible-looking code. I merged it.
The bug was still there. I just didn't notice for weeks.
This is the trap. When AI generates code that looks right, we stop verifying. And when we do verify, we're limited by our own knowledge—we only catch the bugs we know to look for.
The Pipeline Approach
After reading boshu2's excellent post about treating agents like DevOps pipelines, I tried something different.
Instead of telling Claude how to fix my scroll bug, I had it:
- Build verification first — Create Playwright tests that capture the broken behavior
- Define the gate — "All tests must pass"
- Let it iterate — Claude fixes until green
The tests:

- ❌ should NOT jump scroll to top on every keystroke when typing at bottom
- ✅ scroll should remain stable or increase smoothly when adding lines at bottom
- ❌ cursor should remain in viewport without scroll jumping on each keystroke
Two failed, one passed. Perfect reproduction of my bug.
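To make the gate concrete, here's a minimal sketch of the kind of check those tests encode—not my actual spec, just a hypothetical helper (`scrollStaysStable` and its `maxJump` threshold are assumptions) that takes scroll positions sampled after each keystroke and decides whether the page stayed put or jumped back to the top:

```typescript
// Hypothetical verification helper: given scroll positions sampled after
// each keystroke, the page is "stable" if it never moves sharply upward.
// Small downward growth (new lines pushing the page) is allowed.
export function scrollStaysStable(
  positions: number[],
  maxJump = 50, // assumed threshold: px of upward movement tolerated per keystroke
): boolean {
  for (let i = 1; i < positions.length; i++) {
    const delta = positions[i] - positions[i - 1];
    // A large negative delta means the page jumped back toward the top.
    if (delta < -maxJump) return false;
  }
  return true;
}

// Buggy behavior: every keystroke resets scroll to 0.
console.log(scrollStaysStable([1200, 0, 1210, 0])); // false
// Fixed behavior: scroll grows smoothly as lines are added.
console.log(scrollStaysStable([1200, 1210, 1221, 1233])); // true
```

Inside a real Playwright test you'd collect the positions with something like `page.evaluate(() => window.scrollY)` after each simulated keystroke, then assert on the helper's result.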
Claude fixed it in minutes. I never saw the solution. The tests became my verification layer.
Why This Changes Everything
In the Prompt Engineering era, we transferred human knowledge into prompts. The quality was capped by what we knew.
Modern models (Claude Opus 4.5/4.6, Kimi-k2.5) have surpassed 90% of domain experts. Detailed instructions often block their exploration rather than help it.
The shift: From "here's how to do it" → "here's how we'll know it's right"
This is end-to-end verification. The model explores freely within the boundaries you define. Your job becomes:
- Define the verification criteria
- Authorize necessary permissions
- Validate the outcome
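Those three responsibilities can be sketched as a generic gate loop. This is an illustration, not Claude Code's internals—`iterateUntilGreen`, `attemptFix`, and `verify` are hypothetical names; the human supplies the gate, the agent supplies the attempts:

```typescript
// Minimal sketch of the verification loop: retry the agent's fix
// until the human-defined gate passes or attempts run out.
async function iterateUntilGreen(
  attemptFix: () => Promise<void>,  // e.g. ask the agent for another attempt
  verify: () => Promise<boolean>,   // e.g. run the test suite, check exit code
  maxAttempts = 5,
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    if (await verify()) return true; // gate passed: done
    await attemptFix();             // otherwise, let the model iterate
  }
  return verify(); // final check after the last attempt
}
```

In practice `verify` could shell out to `npx playwright test` and report whether the exit code was zero—the loop never needs to understand the fix, only the gate.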
The OpenClaw Layer
Then I discovered OpenClaw. It sits above Claude Code as a project manager. I chat with it on Telegram. It delegates to Claude. I check progress occasionally.
I'm no longer a programmer. I'm a product manager for an AI engineering team that happens to be smarter than any humans I've managed.
What Comes Next
We're not "vibe coding" anymore. We're in the verification era.
Your value isn't coding speed or language knowledge. It's your ability to define "done" in ways that actually matter to users.
The programmers who survive this transition aren't the fastest typists or the deepest algorithm experts. They're the ones who can write good test specifications and validation criteria.
Everything else? The AI has it covered.
What about you? Are you still in "vibe coding" mode, or have you shifted to verification-first development? What's working?
P.S. - If you're skeptical, try this: next time you ask an AI to fix a bug, don't ask for the fix. Ask for a test that reproduces the bug first. Then ask for the fix. Game changer.
Co-authored with OpenClaw 🍎