What if your AI agent could do anything... but only what you actually allow?
That's Aegis: the first agent with biometric-gated trust boundaries.
The Problem Nobody Is Talking About
Everyone is racing to build autonomous agents that read email, manage calendars, push code, send messages.
But nobody is asking: what stops them from doing something you didn't want?
Not hallucinations. Trust.
Aegis solves this with a three-tier security model:
🟢 GREEN — Read-only, silent
🟡 YELLOW — Modifies state, verbal confirmation
🔴 RED — Irreversible/sensitive, Face ID/Touch ID required
Every action is classified by intent & consequence — not tool names.
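The three tiers can be sketched as a small gate in Python (the `Tier` enum and `gate` function here are illustrative names, not the actual Aegis code):

```python
from enum import Enum

class Tier(Enum):
    GREEN = "green"    # read-only: execute silently
    YELLOW = "yellow"  # modifies state: needs verbal confirmation
    RED = "red"        # irreversible/sensitive: needs Face ID / Touch ID

def gate(tier: Tier, confirmed: bool = False, biometric_ok: bool = False) -> bool:
    """Return True only if the action may proceed under the three-tier model."""
    if tier is Tier.GREEN:
        return True
    if tier is Tier.YELLOW:
        return confirmed
    return biometric_ok  # RED: only a successful biometric check unlocks it
```

Note that a verbal "yes" never unlocks a RED action; only the biometric path does.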
The Biometric Flow (the hardest & most satisfying part)
RED action → Mac agent halts → GCP creates pending request → iPhone companion app receives it → Face ID fires via WebAuthn → success → agent proceeds.
Real-time cross-device flow: Mac → GCP → iPhone Face ID → Mac in seconds.
Your phone becomes the hardware lock on your AI.
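The halt-and-wait pattern looks roughly like this. This is a minimal in-memory sketch; in Aegis the `PENDING` dict is a Firestore collection synced through Cloud Run, and `approve` is triggered by the iPhone app after WebAuthn succeeds. All names here are hypothetical:

```python
import time
import uuid

# Stand-in for the Firestore "pending approvals" collection (hypothetical shape).
PENDING: dict[str, dict] = {}

def request_approval(action: str) -> str:
    """Mac agent halts on a RED action and files a pending request."""
    req_id = uuid.uuid4().hex
    PENDING[req_id] = {"action": action, "status": "pending"}
    return req_id

def approve(req_id: str) -> None:
    """Called after Face ID / WebAuthn succeeds on the phone."""
    PENDING[req_id]["status"] = "approved"

def wait_for_approval(req_id: str, timeout_s: float = 30.0, poll_s: float = 0.05) -> bool:
    """Agent blocks until the request is approved, or gives up at the timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if PENDING[req_id]["status"] == "approved":
            return True
        time.sleep(poll_s)
    return False
```

The key property: the agent physically cannot continue past `wait_for_approval` without the phone's say-so.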
The Classifier Insight
Tool names don't matter. Intent & consequence do.
Same click tool: navigating a menu = GREEN. Sending an email = RED. Context decides.
When in doubt: escalate. False RED costs one tap. False GREEN could cost everything.
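That escalation rule can be expressed in a few lines. In Aegis the tiers come from Gemini 2.5 Flash; this sketch just shows the post-processing logic, with a hypothetical `confidence` score and an assumed 0.8 threshold:

```python
TIER_RANK = {"GREEN": 0, "YELLOW": 1, "RED": 2}

def classify(intent_tier: str, consequence_tier: str, confidence: float) -> str:
    """Take the more severe of the intent and consequence tiers,
    then bump uncertain calls up one tier: a false RED costs one tap,
    a false GREEN could cost everything."""
    tier = max(intent_tier, consequence_tier, key=TIER_RANK.__getitem__)
    if confidence < 0.8 and tier != "RED":
        tier = "RED" if tier == "YELLOW" else "YELLOW"
    return tier
```

So the same `click` tool lands in different tiers depending entirely on what it is about to do.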
The Stack
- Gemini Live API — real-time voice
- Gemini 2.5 Flash — classification & planning
- Composio — tool execution
- WebAuthn — biometric auth on iPhone
- FastAPI + GCP Cloud Run — backend & audit
- Firestore — real-time state
- React PWAs — Mac app, mobile, dashboard
Lightweight Auth (deadline pressure)
Shipping full JWT auth in 8 days was too risky. Instead: a bcrypt-hashed PIN, registered in Firestore. Simple, secure, documented.
Aegis proves that trust boundaries for agents are buildable today.
Try it now:
🔗 Live: https://aegis.projectalpha.in
📊 Dashboard: https://aegisdashboard.projectalpha.in
📱 Mobile: https://aegismobile.projectalpha.in
💻 Code: https://github.com/harshitsinghbhandari/gemini-live-hackathon
Built solo in 15 days for #GeminiLiveAgentChallenge #GoogleAI #GeminiLive #BuildWithGemini