Most AI apps today try to do things for you automatically.
But when it comes to personal data — like money or health — that can feel risky.
So I explored a different pattern:
What if AI suggests actions, but never executes them without approval?
🧠 The Idea: Approval-Gated AI
Instead of this:
User → AI → Action happens
I built around this:
User → AI → Draft → User approves → Action happens
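The second flow can be sketched as a tiny state machine where the AI only ever produces drafts, and only the user can move a draft forward. This is a minimal sketch; the type and function names (`DraftAction`, `approve`, etc.) are illustrative, not the app's real API:

```typescript
// A draft lifecycle as a discriminated union: the AI emits "drafted",
// and only a user action transitions it to "approved" or "rejected".
type ExpenseEntry = { description: string; amount: number };

type DraftAction =
  | { status: "drafted"; entry: ExpenseEntry }
  | { status: "approved"; entry: ExpenseEntry; savedAt: Date }
  | { status: "rejected" };

function approve(draft: DraftAction): DraftAction {
  if (draft.status !== "drafted") return draft; // only pending drafts can be approved
  return { status: "approved", entry: draft.entry, savedAt: new Date() };
}

function reject(draft: DraftAction): DraftAction {
  if (draft.status !== "drafted") return draft; // idempotent: already-decided drafts are untouched
  return { status: "rejected" };
}
```

Encoding the gate in the type system makes the invariant hard to bypass: there is simply no code path where the AI's output reaches "approved" on its own.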
💬 Example
You type:
"Add coffee for 40"
The AI:
- Parses the intent
- Prepares the expense entry
- Shows it to you
You:
- ✅ Approve → saved
- ❌ Reject → nothing happens
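For a command this simple, the intent-parsing step can be illustrated without an LLM at all. A hypothetical sketch, assuming an "Add <description> for <amount>" command shape (the regex and field names are mine, not the app's real parsing logic):

```typescript
// Parse "Add coffee for 40" into a draft expense entry, or null if it
// doesn't match. The draft is only shown to the user; nothing is
// persisted until they approve it.
type ExpenseDraft = { description: string; amount: number };

function parseExpense(input: string): ExpenseDraft | null {
  const m = input.match(/^add\s+(.+?)\s+for\s+(\d+(?:\.\d+)?)$/i);
  if (!m) return null;
  return { description: m[1], amount: Number(m[2]) };
}
```

In the real app an LLM would handle messier phrasing, but the contract is the same: free text in, a structured draft out, and the save happens only after approval.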
⚖️ Why This Matters
In domains like:
- Finance
- Health / nutrition
- Personal logs
Users care about control and accuracy.
Problems with full automation
- Silent mistakes
- Wrong assumptions
- Loss of trust
Problems with manual apps
- Too slow
- Too much friction
🔄 This Pattern Sits in Between
Approval-gated AI is:
- Faster than forms
- Safer than automation
🧪 Where I Applied It
I've been prototyping this idea in a small app where AI helps with:
- Expense and income tracking
- Budget alerts
- Nutrition logging (calories and macros from descriptions)
- Receipt parsing
Everything goes through user approval first.
🤔 Trade-offs I Observed
✅ Pros
- Builds trust quickly
- Prevents unexpected changes
- Makes AI behavior transparent
❌ Cons
- Adds an extra step
- Can feel repetitive for simple actions
🧩 Open UX Questions
This is where I'd love input:
- Should small actions auto-approve?
- Should users be able to set confidence thresholds?
- When does confirmation become annoying instead of helpful?
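One direction I've been considering for the first two questions, sketched below. All names and numbers here are hypothetical: auto-approve only when the model's confidence clears a user-set threshold, the action is small, and it is easily reversible.

```typescript
// A possible auto-approve rule combining a user-set confidence threshold
// with an action-size limit. Irreversible actions always require approval.
type DraftMeta = { confidence: number; amount: number; reversible: boolean };
type UserPrefs = { confidenceThreshold: number; autoApproveLimit: number };

function shouldAutoApprove(draft: DraftMeta, prefs: UserPrefs): boolean {
  return (
    draft.reversible &&
    draft.confidence >= prefs.confidenceThreshold &&
    draft.amount <= prefs.autoApproveLimit
  );
}
```

Even then, an "undo" affordance would matter more than the threshold itself: auto-approval only feels safe if reversal is one tap away.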
🚀 Implementation Stack
- React 19
- Hono
- Cloudflare Workers + D1 (SQLite)
- Drizzle ORM
- Tailwind
- Radix UI
🔗 Demo & Project
- 🎥 Demo: https://www.youtube.com/watch?v=V09mr6b7RXU
- 🌐 App: https://keepmylog.com
💬 Final Thought
AI doesn't always need to be autonomous.
Sometimes the best UX is:
AI that assists decisively, but lets humans stay in control.
Curious how others are thinking about this pattern.