Originally published at chudi.dev
"Should work."
The AI said it. I believed it. Six hours later, I was still debugging a fundamental error that existed from line one.
Evidence-based completion for AI code means blocking confidence phrases and requiring proof before any task is marked done. Not "should work"--actual build output. Not "looks good"--actual test results. The psychology is simple: confidence without evidence is gambling. Forced evaluation achieves 84% compliance because it makes evidence the only path forward. Research on LLM reliability confirms that AI-generated outputs require external verification to catch hallucinations and errors that models present with high confidence.
Why Do We Skip Verification?
The pattern is universal. You describe what you want. The AI generates code. It looks reasonable. You paste it in.
That moment of hesitation--the one where you could run the build, could write a test, could verify the output--gets skipped. The code looks right. The AI sounds confident. What could go wrong?
That specific shame of shipping broken code--the kind where you have to message the team "actually, there's an issue"--became my recurring experience.
I trust AI completely. That's why I verify everything.
The paradox makes sense once you've been burned enough times.
What Makes "Should Work" Psychologically Dangerous?
The phrase creates false confidence through three mechanisms:
1. Authority Transfer
The AI presents its answer with confidence. We transfer that confidence to the code itself, as if certainty of delivery equaled certainty of correctness.
2. Completion Illusion
"Should work" feels like a finished state. The task feels done. Moving to verification feels like extra work rather than essential work.
3. Optimism Bias
We want it to work. We've invested time. Verification risks discovering problems we'd rather not face.
I thought I was being thorough. Well, it's more like... I was being thorough at the wrong stage. Careful prompting, careless verification.
What Phrases Trigger the Red Flag System?
Here's the list that gets blocked:
- "Should work"
- "Probably fine"
- "I'm confident"
- "Looks good"
- "Seems correct"
Each of these phrases indicates a claim without evidence. They're not wrong to think--they're wrong to accept as completion.
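If you want this enforced rather than remembered, a few lines of pattern matching go a long way. Here's a minimal sketch of the gate--the phrase list and evidence markers are illustrative, and checkCompletion is a name I'm making up for this post, not a library API:

```typescript
// Hypothetical red-flag gate: reject completion claims that lean on
// confidence phrases instead of evidence. Phrase and evidence lists
// are illustrative, not a fixed specification.
const RED_FLAGS = [
  /should work/i,
  /probably fine/i,
  /i'm confident/i,
  /looks good/i,
  /seems correct/i,
];

const EVIDENCE_MARKERS = [
  /exit code:\s*0/i,
  /tests passing:\s*\d+\/\d+/i,
  /screenshot/i,
  /lighthouse/i,
];

interface GateResult {
  accepted: boolean;
  reason: string;
}

export function checkCompletion(aiResponse: string): GateResult {
  const flagged = RED_FLAGS.filter((pattern) => pattern.test(aiResponse));
  const hasEvidence = EVIDENCE_MARKERS.some((pattern) => pattern.test(aiResponse));

  if (flagged.length > 0 && !hasEvidence) {
    return {
      accepted: false,
      reason: `Red flag without evidence: ${flagged.map(String).join(", ")}`,
    };
  }
  return { accepted: true, reason: "Evidence present or no red flags detected." };
}
```

The point isn't the regexes. It's that the check runs every time, whether or not you feel like verifying today.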
What Evidence Replaces Confidence Claims?
The replacement is specific, verifiable proof:
Build Evidence
Build completed successfully:
- Exit code: 0
- Duration: 9.51s
- Client bundle: 352KB
- No errors, 2 warnings (acceptable)
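Capturing that output doesn't have to be manual. Here's a sketch of a small script that runs the build and writes the evidence to a file--the npm script name and the evidence/ output path are assumptions; adapt them to your stack:

```typescript
// Sketch: run the build and record the evidence fields shown above.
// Assumes an `npm run build` script and an evidence/ output folder.
import { spawnSync } from "node:child_process";
import { mkdirSync, writeFileSync } from "node:fs";

const started = Date.now();
const result = spawnSync("npm", ["run", "build"], { encoding: "utf8" });
const durationSec = ((Date.now() - started) / 1000).toFixed(2);

mkdirSync("evidence", { recursive: true });

const evidence = [
  `Build ${result.status === 0 ? "completed successfully" : "FAILED"}:`,
  `- Exit code: ${result.status}`,
  `- Duration: ${durationSec}s`,
  `- Warnings: ${((result.stderr ?? "").match(/warning/gi) ?? []).length}`,
].join("\n");

writeFileSync("evidence/build.txt", evidence + "\n");
console.log(evidence);

// A broken build should never be reported as done.
if (result.status !== 0) process.exit(result.status ?? 1);
```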
Test Evidence
Tests passing: 47/47
- Unit tests: 32/32
- Integration tests: 15/15
- Coverage: 78%
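The same idea works for tests. A sketch assuming Jest--the --json and --outputFile flags and the numPassedTests/numTotalTests fields are Jest-specific, and the evidence/ directory is assumed to exist; other runners have equivalent machine-readable modes:

```typescript
// Sketch: run the suite with Jest's JSON output and summarize the counts.
// Assumes Jest; swap in your runner's equivalent machine-readable report.
import { spawnSync } from "node:child_process";
import { readFileSync } from "node:fs";

spawnSync("npx", ["jest", "--json", "--outputFile=evidence/tests.json"], {
  stdio: "inherit",
});

const report = JSON.parse(readFileSync("evidence/tests.json", "utf8"));
console.log(`Tests passing: ${report.numPassedTests}/${report.numTotalTests}`);

if (report.numFailedTests > 0) {
  console.error(`Failing tests: ${report.numFailedTests}`);
  process.exit(1); // failing evidence blocks completion
}
```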
Visual Evidence
Screenshots captured:
- Mobile (375px): layout correct
- Tablet (768px): responsive breakpoint working
- Desktop (1440px): full layout verified
- Dark mode: all components themed
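Screenshots are the evidence people skip most often because they're tedious by hand. A sketch using Playwright, assuming a dev server at http://localhost:3000 and an evidence/ directory for output (both assumptions):

```typescript
// Sketch: capture the screenshots listed above with Playwright.
// The dev-server URL and output paths are assumptions.
import { chromium } from "playwright";

const viewports = [
  { name: "mobile-375px", width: 375, height: 812 },
  { name: "tablet-768px", width: 768, height: 1024 },
  { name: "desktop-1440px", width: 1440, height: 900 },
];

async function captureEvidence(url = "http://localhost:3000") {
  const browser = await chromium.launch();

  for (const vp of viewports) {
    const page = await browser.newPage({
      viewport: { width: vp.width, height: vp.height },
    });
    await page.goto(url);
    await page.screenshot({ path: `evidence/${vp.name}.png`, fullPage: true });
    await page.close();
  }

  // One dark-mode pass at desktop width to verify theming.
  const darkPage = await browser.newPage({
    viewport: { width: 1440, height: 900 },
    colorScheme: "dark",
  });
  await darkPage.goto(url);
  await darkPage.screenshot({ path: "evidence/dark-mode.png", fullPage: true });

  await browser.close();
}

captureEvidence().catch((err) => {
  console.error("Screenshot capture failed:", err);
  process.exit(1);
});
```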
Performance Evidence
Lighthouse scores:
- Performance: 94
- Accessibility: 98
- Best Practices: 100
- SEO: 100
Bundle size: 287KB (-3KB from previous)
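And for the performance numbers, the Lighthouse CLI can emit a JSON report you can read the category scores out of. A sketch, assuming the app is served locally and the evidence/ directory exists:

```typescript
// Sketch: run Lighthouse headlessly and pull out the category scores.
// Assumes the lighthouse CLI is available via npx and the app is served locally.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

execSync(
  "npx lighthouse http://localhost:3000 --output=json " +
    '--output-path=evidence/lighthouse.json --quiet --chrome-flags="--headless"',
  { stdio: "inherit" }
);

const report = JSON.parse(readFileSync("evidence/lighthouse.json", "utf8"));
const categories = report.categories as Record<string, { title: string; score: number }>;

for (const category of Object.values(categories)) {
  console.log(`${category.title}: ${Math.round(category.score * 100)}`);
}
```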
That hollow confidence of claiming something works--replaced with facts that prove it.
How Does Forced Evaluation Achieve 84% Compliance?
Research on skill activation showed a stark difference, a pattern consistent with findings from the NIST AI Risk Management Framework on the importance of structured validation in AI-assisted workflows:
- Passive suggestions: 20% actually followed
- Forced evaluation: 84% actually followed
The mechanism is a 3-step mandatory protocol:
Step 1: EVALUATE
For each potentially relevant skill:
- master-debugging: YES - error pattern detected
- frontend-guidelines: NO - not UI work
- test-patterns: YES - need verification
Step 2: ACTIVATE
For every YES answer:
Activating: master-debugging
Activating: test-patterns
Step 3: IMPLEMENT
Only after evaluation and activation complete:
Proceeding with implementation...
The psychology works because evaluation creates commitment. Writing "YES - need verification" makes you accountable to the claim. Skipping feels like breaking a promise to yourself.
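If you want the protocol to be more than a convention, encode it so implementation can't start until every skill has an explicit decision with reasoning. A sketch--the skill names come from the example above, the rest is hypothetical:

```typescript
// Sketch of the evaluate -> activate -> implement protocol.
// Skill names mirror the example above; nothing here is a fixed API.
type Decision = { skill: string; relevant: boolean; reasoning: string };

function evaluateSkills(decisions: Decision[]): string[] {
  // Step 1: EVALUATE -- every skill needs an explicit decision with reasoning.
  for (const d of decisions) {
    if (!d.reasoning.trim()) {
      throw new Error(`No reasoning recorded for ${d.skill}; evaluation incomplete.`);
    }
  }
  // Step 2: ACTIVATE -- every YES gets activated, no silent skips.
  const activated = decisions.filter((d) => d.relevant).map((d) => d.skill);
  activated.forEach((skill) => console.log(`Activating: ${skill}`));
  return activated;
}

// Step 3: IMPLEMENT -- only runs once evaluation and activation succeed.
const activated = evaluateSkills([
  { skill: "master-debugging", relevant: true, reasoning: "error pattern detected" },
  { skill: "frontend-guidelines", relevant: false, reasoning: "not UI work" },
  { skill: "test-patterns", relevant: true, reasoning: "need verification" },
]);
console.log(`Proceeding with implementation (skills: ${activated.join(", ")})...`);
```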
What Are the 4 Pillars of Quality Gates?
Every verification maps to one of four pillars--the same categories as the evidence above:
- Build: the project compiles and bundles cleanly
- Tests: automated tests pass, with counts to prove it
- Visual: the UI renders correctly across breakpoints and themes
- Performance: scores and bundle size stay within budget
How Does Self-Review Automation Work?
The system includes prompts that make the AI review its own work:
Primary Self-Review Prompts
- "Review your own architecture for issues"
- "Explain the end-to-end data flow"
- "Predict how this could break in production"
The Pattern
1. Generate solution
2. Self-review with prompts
3. Fix identified issues
4. Re-review
5. Only then mark complete
Self-review catches issues before they become bugs. The AI is good at finding problems in code--including its own code, when asked explicitly. Anthropic's documentation covers how to structure these review prompts for reliable results.
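Here's one way to wire that pattern into a loop so the re-review step can't be quietly skipped. askModel is a placeholder for whatever model call you use, and the "issues found" heuristic is deliberately crude:

```typescript
// Sketch of the generate -> self-review -> fix -> re-review loop.
// `askModel` is a placeholder for your own model call; the prompts
// come straight from the list above.
const REVIEW_PROMPTS = [
  "Review your own architecture for issues",
  "Explain the end-to-end data flow",
  "Predict how this could break in production",
];

async function selfReview(
  askModel: (prompt: string) => Promise<string>,
  maxRounds = 2
): Promise<void> {
  for (let round = 1; round <= maxRounds; round++) {
    let issuesFound = false;
    for (const prompt of REVIEW_PROMPTS) {
      const answer = await askModel(prompt);
      // Crude heuristic: treat any mention of a problem as a finding.
      if (/issue|bug|break|missing/i.test(answer)) {
        issuesFound = true;
        await askModel(`Fix the problems you just identified: ${answer}`);
      }
    }
    // Only mark complete when a full review round comes back clean.
    if (!issuesFound) {
      console.log("Self-review clean; task can be marked complete.");
      return;
    }
  }
  console.log("Issues still open after re-review; task stays incomplete.");
}
```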
What Happens When Verification Fails?
The system handles failures through structured escalation:
Level 1: Soft Block
Red flag phrase detected. Request clarification:
"You mentioned 'should work'. What specific evidence supports this? Please provide build output or test results."
Level 2: Hard Block
Completion claimed without evidence. Block the completion:
"Task cannot be marked complete. Required: build output showing success OR test results passing."
Level 3: Rollback Trigger
Critical functionality broken after completion:
"Verification failed post-completion. Initiating rollback to last known good state."
The escalation makes cutting corners progressively harder. Evidence is the only path through.
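Collapsed into code, the three levels look something like this. The messages are the ones above; the rollback call is a placeholder for your own deploy tooling:

```typescript
// Sketch of the three-level escalation. Messages mirror the levels above;
// the rollback hook is a placeholder, not a real deploy API.
type Verification = {
  redFlagDetected: boolean;
  completionClaimed: boolean;
  evidenceProvided: boolean;
  postCompletionFailure: boolean;
};

function escalate(v: Verification, rollback: () => void): string {
  if (v.postCompletionFailure) {
    // Level 3: Rollback Trigger
    rollback();
    return "Verification failed post-completion. Initiating rollback to last known good state.";
  }
  if (v.completionClaimed && !v.evidenceProvided) {
    // Level 2: Hard Block
    return "Task cannot be marked complete. Required: build output showing success OR test results passing.";
  }
  if (v.redFlagDetected) {
    // Level 1: Soft Block
    return "Red flag phrase detected. What specific evidence supports this? Please provide build output or test results.";
  }
  return "No escalation needed.";
}
```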
FAQ: Implementing Verification Gates
Why is 'should work' dangerous in AI development?
It indicates a claim without evidence. The AI (or developer) is expressing confidence without verification. This confidence often masks untested assumptions, missing edge cases, or fundamental errors.
What is forced evaluation mode?
A mandatory 3-step protocol: evaluate each skill (YES/NO with reasoning), activate every YES, then implement. Research shows 84% compliance vs 20% with passive suggestions. The commitment mechanism creates follow-through.
What phrases indicate unverified AI code?
Red flags include: 'Should work', 'Probably fine', 'I'm confident', 'Looks good', 'Seems correct'. These all express certainty without evidence of testing, building, or verification.
What evidence should replace confidence claims?
Specific proof: 'Build completed: exit code 0', 'Tests passing: 47/47', 'Screenshot at 375px shows correct layout', 'Bundle size: 287KB'. Facts, not feelings.
How do I implement verification gates for AI code?
Add hooks that run after AI responses. Check for red flag phrases and reject them. Require build output, test results, or screenshots before marking tasks complete. Make evidence the only path forward.
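How you wire the hook depends on your tooling--Claude Code, for instance, supports hooks that run shell commands around tool and session events (check the current docs for exact event names and payload shapes). Here's a generic sketch of the script side: read the response from stdin, apply the gate from earlier, and exit nonzero to reject. The payload shape and the import path are assumptions:

```typescript
// Generic post-response hook sketch: read the assistant's text from stdin,
// apply the red-flag gate, and exit nonzero to reject the completion.
// The payload shape ({ text: string }) and import path are assumptions --
// adapt them to whatever your hook system actually passes.
import { checkCompletion } from "./check-completion"; // the gate sketched earlier

let input = "";
process.stdin.setEncoding("utf8");
process.stdin.on("data", (chunk) => (input += chunk));
process.stdin.on("end", () => {
  let text = input;
  try {
    text = JSON.parse(input).text ?? input;
  } catch {
    // Plain-text payload; use it as-is.
  }

  const result = checkCompletion(text);
  if (!result.accepted) {
    console.error(result.reason);
    process.exit(2); // nonzero exit = block the completion
  }
});
```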
What 84% Compliance Actually Looks Like
The difference between passive suggestions and forced evaluation isn't subtle. Here's a real before/after from my workflow.
Before: Passive verification (20% compliance)
I'd finish a feature and the AI would say something like: "The authentication flow should work. I've added the JWT utility, updated the login endpoint, and configured the cookie settings."
I'd read that, nod, move on. Three hours later, a 500 error in production. The cookie wasn't being set correctly because the response headers weren't configured for httpOnly. The AI was confident. I trusted the confidence.
After: Forced evaluation (84% compliance)
Now the same session ends with:
Build evidence:
npm run build → exit code 0 (9.51s)
No TypeScript errors
Test evidence:
Auth flow test: PASS
Cookie setting test: PASS
JWT expiry test: PASS
Screenshots:
Login form: renders correctly at 375px
Session persistence: verified across refresh
The AI runs the build. Shows the output. Takes screenshots. States the exit code.
The 84% compliance number comes from sessions where I tracked whether I actually ran verification before marking tasks done. Forced evaluation--where the AI had to provide evidence before I'd accept completion--produced compliance in 84 out of 100 sessions. Passive suggestions produced it in about 20.
The remaining 16%? Those were genuine time-sensitive situations where I consciously chose to accept risk. That's fine. The point isn't 100% verification. It's making the default behavior evidence-based instead of confidence-based.
Every time I skipped verification and got lucky, it reinforced the wrong habit. Every time the build caught something I'd have shipped, it reinforced the right one. Eventually the new habit won.
I hated the extra friction at first. But I love the part where I stop spending evenings debugging production issues.
I thought the problem was AI accuracy. Well, it's more like... the problem was my verification laziness. The AI generates good code most of the time. But "most of the time" isn't good enough for production.
Maybe the goal isn't trusting AI less. Maybe it's trusting evidence more--and building systems that make "should work" impossible to accept.
Related Reading
This is part of the Complete Claude Code Guide. Continue with:
- Quality Control System - Two-gate enforcement that blocks implementation until gates pass
- Context Management - Dev docs workflow that prevents context amnesia
- Token Optimization - Save 60% with progressive disclosure