omu inetimi

I built a ghost that haunts your browser

I built Muriel—a productivity ghost that haunts your browser for #Kiroween. She watches what you browse, tracks your tasks, and delivers snarky AI-powered commentary when you stray from your goals. The twist: her personality scales from gentle encouragement to savage roasts based on a "snark level" slider (1-10).

Huge props to Kiro for helping me ship this Chrome extension with spec-driven development and steering docs that kept Muriel's personality consistent.

The Problem

Productivity tools are boring. They nag you with sterile notifications and guilt-inducing statistics. Nobody wants another app that makes them feel bad about checking Twitter. I wanted something with personality—a companion that could roast you with love when you've been doom-scrolling for 45 minutes instead of finishing that report.

The challenge: making AI commentary feel consistent and characterful across hundreds of possible interactions, while keeping all user data private.

The Setup

I wrote three spec documents covering the core systems: AI Integration (on-device Gemini Nano with cloud fallback), Productivity Tracking (privacy-first local storage, task keyword matching), and Ghost UI (mood states, draggable positioning, accessibility). Each spec had requirements, design decisions, and implementation tasks.
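
To give a feel for the task-matching piece: a minimal sketch of how on-task vs. off-task scoring could work, entirely locally. The task shape, field names, and hit-ratio heuristic here are my guesses for illustration, not the spec's actual contents.

```javascript
// Sketch: score how relevant the current page is to the active task, all on-device.
// The keyword list and the hit-ratio heuristic are illustrative assumptions.
function taskRelevance(task, pageTitle, pageUrl) {
  const haystack = `${pageTitle} ${pageUrl}`.toLowerCase();
  const keywords = task.keywords.map((k) => k.toLowerCase());
  const hits = keywords.filter((k) => haystack.includes(k)).length;
  return keywords.length ? hits / keywords.length : 0; // 0 = off-task, 1 = on-task
}
```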

The key innovation was the steering docs. Two files that Kiro referenced throughout development:

coding-standards.md: Enforced vanilla JS, ES6 modules, Chrome extension patterns, and the BYOK (Bring Your Own Key) model
ai-prompts.md: Defined Muriel's personality across all 10 snark levels with prompt templates

What Changed

Normally with AI coding assistants, you're constantly re-explaining character voice. "Remember, she's snarky but not mean." "Keep responses under 50 words." With steering docs, you explain once. When I asked to "add ElevenLabs with a scarier voice," Kiro already knew the privacy constraints, the BYOK model, and that voice output needed to queue to prevent overlap.

The most impressive generation was the complete ai-service.js—Gemini Nano capability detection, session management, graceful API fallback, and prompt construction that incorporates snark level, current site, time spent, and task relevance. All in one coherent file that just worked.
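
For a rough picture of what that file does, here's a minimal sketch. Chrome's built-in Prompt API has changed shape across versions, so the ai.languageModel calls are an assumption of the origin-trial surface, and names like buildPrompt, getComment, and cloudFallback are mine, not Muriel's actual code.

```javascript
// ai-service.js (sketch): Nano capability check, session reuse, prompt construction.
// Assumes the origin-trial Prompt API shape (ai.languageModel.*); verify against
// the Chrome version you target. cloudFallback is the BYOK path sketched later.

let session = null;

async function nanoAvailable() {
  if (!self.ai?.languageModel) return false;
  const caps = await self.ai.languageModel.capabilities();
  return caps.available === 'readily';
}

function buildPrompt({ snarkLevel, site, minutes, task }) {
  return [
    "You are Muriel, a ghost who comments on the user's browsing.",
    `Snark level: ${snarkLevel}/10. Keep it under 50 words.`,
    `They have spent ${minutes} minutes on ${site}.`,
    task ? `Their current task is: "${task}".` : "They have no active task.",
    "Write one short comment in Muriel's voice.",
  ].join('\n');
}

export async function getComment(context) {
  const prompt = buildPrompt(context);
  if (await nanoAvailable()) {
    session ??= await self.ai.languageModel.create();
    return session.prompt(prompt);
  }
  return cloudFallback(prompt); // only fires with a user-supplied key (BYOK)
}
```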

Another win was the speech-service.js audio queue system. Multiple AI comments could trigger simultaneously (mood change + time threshold + task mismatch), and without queuing they'd overlap into chaos. Kiro generated a queue processor that handles the Web Speech API and ElevenLabs TTS with automatic fallback.
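
In shape, it's roughly the following (a sketch under my own names; speakWithElevenLabs stands in for the real ElevenLabs call, which only runs if you've supplied your own key):

```javascript
// speech-service.js (sketch): serialize comments so audio never overlaps.
// speakWithElevenLabs is a placeholder for the user's BYOK TTS call;
// the Web Speech API fallback (speechSynthesis) ships with the browser.

const queue = [];
let speaking = false;

export function enqueueSpeech(text) {
  queue.push(text);
  if (!speaking) processQueue();
}

async function processQueue() {
  speaking = true;
  while (queue.length) {
    const text = queue.shift();
    try {
      await speakWithElevenLabs(text); // preferred voice, only with a user key
    } catch {
      await speakWithWebSpeech(text);  // automatic fallback, no key needed
    }
  }
  speaking = false;
}

async function speakWithElevenLabs(text) {
  // Placeholder: Muriel wires this to ElevenLabs TTS with the user's own key.
  throw new Error('no ElevenLabs key configured');
}

function speakWithWebSpeech(text) {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.onend = resolve;
    utterance.onerror = resolve; // never let one failure stall the queue
    speechSynthesis.speak(utterance);
  });
}
```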

The Snark Scale

Muriel's personality ranges from supportive friend to savage critic:

Level 1-3 (Gentle): "You're doing great! Maybe check on that report when you have a moment? 👻"

Level 4-6 (Moderate): "Interesting choice spending 20 minutes on Reddit when you said you'd finish the presentation..."

Level 7-10 (Savage): "Oh, we're on Twitter AGAIN? Bold strategy for someone with three overdue tasks. I'm not mad, just disappointed. Actually, I'm a little mad."

The steering doc meant every AI-generated prompt automatically included the right tone instructions without me re-specifying each time.
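
In code terms, that boils down to a small lookup the prompt builder can splice in. The band boundaries and wording below are my guesses, not the actual contents of ai-prompts.md:

```javascript
// Sketch: map the snark slider to a tone instruction for the system prompt.
// Bands and phrasing are illustrative; the real table lives in ai-prompts.md.
const TONE_BY_SNARK = [
  { max: 3,  tone: 'warm and encouraging; gentle nudges, never guilt' },
  { max: 6,  tone: 'dry and pointed; note the gap between plans and behavior' },
  { max: 10, tone: 'full roast; theatrical disappointment, but never cruel' },
];

function toneInstruction(snarkLevel) {
  return TONE_BY_SNARK.find((band) => snarkLevel <= band.max).tone;
}
```

A buildPrompt like the one sketched earlier would append toneInstruction(snarkLevel) to every request, which is exactly the consistency the steering doc buys you.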

Privacy First

Everything stays local. Muriel uses Chrome's built-in Gemini Nano for on-device AI inference—your browsing data never leaves your machine. Cloud APIs (Gemini, ElevenLabs) only activate if you explicitly provide your own keys AND Nano isn't available. No bundled API keys, no tracking servers, no data collection.

The BYOK model was a core requirement in the specs, and Kiro enforced it across every feature.
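
The gate itself is tiny. As a sketch (key names and helpers are illustrative; chrome.storage.local is the real extension storage API):

```javascript
// Sketch: cloud calls are opt-in twice over. A user-supplied key must exist
// AND on-device Nano must be unavailable. No keys ship with the extension.
async function cloudFallback(prompt) {
  if (await nanoAvailable()) return null;  // Nano works: data never leaves the machine
  const { geminiApiKey } = await chrome.storage.local.get(['geminiApiKey']);
  if (!geminiApiKey) return null;          // no key provided: stay silent
  return callGeminiApi(geminiApiKey, prompt); // hypothetical wrapper for the cloud call
}
```

nanoAvailable is the same check from the ai-service sketch above; callGeminiApi stands in for whatever cloud request the user's own key authorizes.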

The Takeaway

Steering docs aren't just style guides—they're persistent character sheets that shape every interaction. The snark level system meant I could iterate on features while Kiro ensured Muriel's voice stayed consistent. Specs for architecture, steering for personality, vibe coding for polish.

The combination worked perfectly. Muriel shipped in a weekend with a coherent personality across hundreds of possible comments.

Check out the project: here