There is a moment in every founder's journey when you realize you are not just building a product, you are betting your livelihood on it. For me, that moment came when I decided that Avery.dev, our AI coding platform, would build itself.
Not as a marketing stunt. As a responsibility.
Can we trust our own product to build itself before asking others to trust it to build their products?
The Background Nobody Asks For (But Explains Everything)
I've spent over two decades in the trenches of AI and machine learning, back when "AI" meant writing your own neural networks from scratch, not prompting a chatbot. I've co-founded two tech startups, both of which had successful exits. I know how to code. I actually enjoy it.
But I love solving problems more.
So when the vibe coding wave hit in 2024 (Lovable, Bolt, Replit, and the rest), I was genuinely excited. Finally, I could raid my idea graveyard. All those side projects I never had time to build? Now I could ship them in an afternoon.
I signed up for the $25/month plans. Wrote my first prompts. Watched the beautiful previews render.
And then I tried to ship something to production. That's where the facade came crumbling down.
The Gap Nobody Talks About
Here's what I discovered: these platforms are spectacular at generating demos. The UI looks polished. The preview works. You feel like a wizard.
Then you peek behind the curtain.
The backend wires are dangling. The data is placeholder. Authentication is half-implemented. Basic functionality that looked correct was held together with duct tape and optimism.
No problem, I thought. AI makes mistakes. I'll just iterate.
And that's when I hit the wall: the credit limit wall.
The Perverse Economics of Credit-Based AI Coding
Most AI coding platforms charge by the credits or tokens consumed and the iterations used. On the surface, this seems fair. You pay for what you use.
But think about what this actually means:
- When the AI makes a mistake, you pay to fix it.
- When the AI hallucinates broken code, you pay for the hallucination AND the repair.
- When you are 80% of the way to production and the AI goes in circles, you are burning credits on its confusion.
The incentives are fundamentally misaligned. The platform profits whether you succeed or fail. In fact, they profit more when you struggle, because struggling means more iterations, more credits, more charges.
I have been in AI long enough to know that AI will make mistakes. That's not a bug; it's the nature of probabilistic systems. The question isn't whether the AI will mess up. The question is: who bears the cost when it does?
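To put rough numbers on it, here's a back-of-the-envelope sketch. All the figures are illustrative assumptions I made up for the example, not any platform's real pricing:

```python
# Back-of-the-envelope cost model for credit-based AI coding.
# CREDITS_PER_ITERATION and PRICE_PER_CREDIT are assumed numbers,
# chosen only to illustrate the shape of the problem.

CREDITS_PER_ITERATION = 5     # assumed credits burned per prompt/iteration
PRICE_PER_CREDIT = 0.10       # assumed dollars per credit

def user_cost(iterations: int) -> float:
    """Under credit billing, the user pays for every iteration,
    including the ones spent fixing the AI's own mistakes."""
    return iterations * CREDITS_PER_ITERATION * PRICE_PER_CREDIT

# A feature that "should" take 10 iterations...
print(user_cost(10))   # 5.0  -- five dollars on the happy path
# ...but the AI hallucinates, circles, and needs 50:
print(user_cost(50))   # 25.0 -- the user funds the confusion
```

The numbers don't matter; the slope does. Every wasted iteration is billed to the person who didn't cause it.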
The Dogfooding Decision
This is where Avery's origin story gets personal.
I didn't set out to build another AI coding platform. The market didn't need another Lovable clone. But I couldn't shake the incentive problem. It felt wrong that users were penalized for AI's limitations.
So I asked myself: what would I want?
The answer was simple:
- A system that the creators trust - not just pretty demos
- A reliable path to production
- A predictable monthly fee
If the AI takes 50 tries to get something right, that's the platform's problem to optimize, not the user's wallet to drain.
But here's the thing about building a product on conviction: you have to actually believe it works. So we made a decision that terrified me at first: Avery would build Avery.
Not just small features. Core functionality. Production code. The stuff that runs our business.
What Happens When Your Product Builds Itself
Dogfooding at this level changes everything.
When a bug ships because Avery generated flawed code, we feel it immediately in our own product, with our own users. There's no abstraction layer. No "well, users should review the output more carefully." We ARE the users.
This created a relentless feedback loop:
- Avery generates code for itself
- We deploy it
- Something breaks (or doesn't)
- We improve Avery's generation based on real pain
Every improvement we make directly benefits us. Every shortcut we tolerate comes back to bite us. The incentives are perfectly aligned because we are on both sides of the equation.
When I say "we don't profit when the AI messes up," I mean it literally. If Avery burns 100 iterations to solve something that should take 10, that's a compute cost we absorb. We are motivated financially, operationally, and existentially to make Avery get it right faster.
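For contrast, here's the same miss sketched under a flat monthly fee (hypothetical numbers again). The wasted iterations come out of the platform's margin, not the user's wallet:

```python
# Flat-fee model: revenue is fixed, so wasted iterations eat the
# platform's margin instead of the user's budget.
# MONTHLY_FEE and COMPUTE_COST_PER_ITER are hypothetical numbers.

MONTHLY_FEE = 50.0            # assumed flat subscription price
COMPUTE_COST_PER_ITER = 0.02  # assumed platform-side cost per iteration

def platform_margin(iterations: int) -> float:
    """The user pays MONTHLY_FEE regardless; every extra iteration
    is compute the platform absorbs."""
    return MONTHLY_FEE - iterations * COMPUTE_COST_PER_ITER

print(platform_margin(10))    # 49.8 -- the AI gets it right quickly
print(platform_margin(100))   # 48.0 -- the AI circles; we pay for it
```

Same mistake, opposite payer. That's the whole alignment argument in two functions.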
The Quiet Satisfaction
I've built enterprise ML systems. I've written code in more languages than I can remember.
But there's something different about watching someone who couldn't code six months ago ship a real SaaS product. Not a demo. Not a prototype. A business.
That's what this was always about.
Not AI hype. Not disrupting developers. Just... removing the friction between "I have an idea" and "I have a business."
The vibe coding revolution promised this. But it delivered beautiful screenshots and credit card charges. The prototype-to-production gap remained as wide as ever; you just paid more to fall into it.
What I Learned
After 20+ years in AI, here's what building Avery.dev taught me:
1. Incentive alignment isn't a feature; it's a philosophy. Every pricing decision, every architectural choice flows from this. When you profit from user failure, you will unconsciously optimize for it.
2. Dogfooding isn't about eating your own cooking. It's about being hungry. We didn't use Avery because it would look good in marketing. We used it because we needed it to work for our own survival.
3. "AI makes mistakes" is not an excuse; it's a design constraint. Build systems that assume errors and optimize for recovery, not systems that punish users for inevitable failures.
4. The real democratization of software isn't generating code; it's shipping products. Anyone can make a demo now. The bottleneck moved. It's now about getting from demo to revenue.
If any of this resonates, I'd love to hear your experiences with AI coding tools: what's worked, what's frustrated you, where you've hit walls.
The conversation about vibe coding is just getting started. I suspect the best practices are still being discovered, and they will come from practitioners sharing real stories, not from marketing demos.
What's been your experience?