As developers, we're all excited by the speed that LLMs offer. But anyone who has tried to build a production-grade feature with them has likely run into a hard truth: they are fundamentally unreliable.
After building three separate AI apps, I found myself spending more time debugging AI hallucinations than writing my own code. I realized we need to move beyond "prompt engineering" and towards a more structured "Intent Engineering."
I want to share a simple, pragmatic framework that has saved me countless hours and made my AI-generated code dramatically more robust. I call it WBS: What-Boundaries-Success.
The Problem: A vague prompt like "build a login endpoint" forces the AI to guess at a dozen critical details: which hashing algorithm, what rate limits, what the error responses should look like.
The WBS Framework: Instead, you provide a structured spec:
1. What (The Intent):
- "Handle user login with email/password."
- "Issue a JWT on success."
2. Boundaries (The Constraints):
- "Must use bcrypt for password hashing."
- "Max 5 login attempts per IP per hour."
- "Do not expose user PII in error logs."
3. Success (The Verifiable Outcome):
- "A valid login returns a 200 with a JWT."
- "An invalid login returns a 401."
- "The 6th attempt from an IP within an hour returns a 429."
By defining the solution space before generation, you're not just hoping for a good result; you're engineering one. You're giving the AI a "reasoning layer."
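The Success criteria are deliberately written so a machine can check them. Here's a rough sketch of that check against the endpoint above, assuming Jest and supertest; the seeded test user and the module path are assumptions for illustration.

```typescript
import request from "supertest";
import { app } from "./app"; // hypothetical module path for the endpoint above

describe("WBS success criteria", () => {
  // Assumes a test user (user@example.com / correct-password) is seeded in
  // the store, and that the rate limiter is reset between tests so earlier
  // attempts don't skew the per-IP count.

  it("returns 200 with a JWT for a valid login", async () => {
    const res = await request(app)
      .post("/login")
      .send({ email: "user@example.com", password: "correct-password" });
    expect(res.status).toBe(200);
    expect(res.body.token).toBeDefined();
  });

  it("returns 401 for an invalid login", async () => {
    const res = await request(app)
      .post("/login")
      .send({ email: "user@example.com", password: "wrong" });
    expect(res.status).toBe(401);
  });

  it("returns 429 on the 6th attempt from one IP within an hour", async () => {
    for (let i = 0; i < 5; i++) {
      await request(app).post("/login").send({ email: "a@b.co", password: "x" });
    }
    const sixth = await request(app)
      .post("/login")
      .send({ email: "a@b.co", password: "x" });
    expect(sixth.status).toBe(429);
  });
});
```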
I wrote a much deeper post about the journey that led me to this framework and the bigger theories behind it (like the "Reasoning Ceiling" and "Bora's Law").
You can read the full story and theory here: https://chrisbora.substack.com/p/how-i-vibe-coded-3-saas-apps-with
I'm now building this framework into a full-fledged tool called The ReasoningAPI. Hope this framework is useful for your own projects!