Fitness Copilot - 🎃 Kiroween 2025
Inspiration
What if you could snap a photo of your meal or workout and get instant, context-aware feedback? Not just "that's 500 calories" but "you've got 600 left for today, and your leg workout is still pending."
The inspiration came from wanting to build something ambitious under hackathon constraints—something that stitches together incompatible systems and makes them cooperate. When you combine proper guardrails, spec-driven development, and tight steering docs, you can build things that would normally take weeks.
What it does
Fitness Copilot is an AI-powered fitness tracking app that combines:
- Vision-based logging: Snap a photo of your meal or exercise, and Google Gemini Vision analyzes it
- Context-aware coaching: The AI knows your training plan, today's progress, and recent conversation before responding
- Validated tracking: All nutrition and exercise data is validated through Pydantic schemas before hitting PostgreSQL
- Dual interface:
  - Monitor: a typical dashboard with standard fitness-app metrics
  - Chat: an interactive chat interface for unstructured input and coaching
The system doesn't just track—it understands your full situation and provides personalized guidance.
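To make "validated tracking" concrete, here is a minimal sketch of what a Pydantic layer between the AI's estimates and the database could look like. The model names, field ranges, and allowed-exercise set are illustrative assumptions, not the app's actual schema.

```python
from pydantic import BaseModel, Field, field_validator

# Hypothetical allowed list; the real app's list would come from steering docs.
ALLOWED_EXERCISES = {"squat", "deadlift", "bench press", "running"}

class MealLog(BaseModel):
    description: str
    # Range check rejects absurd AI calorie estimates before they hit the DB.
    calories: int = Field(ge=0, le=5000)

class ExerciseLog(BaseModel):
    name: str
    sets: int = Field(ge=1, le=20)
    reps: int = Field(ge=1, le=100)

    @field_validator("name")
    @classmethod
    def known_exercise(cls, v: str) -> str:
        # Enforce the "exercise names must come from an allowed list" rule.
        if v.lower() not in ALLOWED_EXERCISES:
            raise ValueError(f"unknown exercise: {v}")
        return v.lower()
```

Anything Gemini hallucinates that falls outside these bounds raises a `ValidationError` instead of silently landing in PostgreSQL.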
How we built it
Product Discovery Phase
Before the spec-driven stage, and in parallel with building the guardrails, several sessions in Google Gemini AI Studio were key to deciding which features were in or out of scope for the MVP.
Working in a Codex/Gemini-style UI/UX lets you focus on the frontend and run what I'd call a product discovery stage. This can be done in Kiro too, but AI Studio is a delightful way to do it and works really well for early exploration.
Guardrails: Starting with a Strong Foundation
It's important to have good guardrails. The full-stack FastAPI template helped a lot here:
- Having a README.md helps the developer understand the project
- Kiro's steering commands are very useful so that the LLM's and the human's interactions with the project stay as similar as possible
- Having a polished task runner file (a `justfile` in our case, but anything fits here) has been very useful
Even when following SDD, some drift appears (software engineering, in a nutshell). It's important to test each slice thoroughly before moving on to the next.
Steering: Teaching Kiro the Constraints
We used tight steering docs, inspired by Kiro's "Stop Repeating Yourself" blog post and by the /reflect and /verify patterns popular in the SDD ecosystem.
Steering documents tell Kiro not just what to build, but what NOT to build. They encode architectural decisions as rules:
- "CSV plans are immutable"
- "All calories must be validated"
- "Exercise names must come from an allowed list"
Spec-Driven Development: The Core Workflow
Every major capability in the app has:
- A requirements spec in `.kiro/specs/...`
- Optional design notes
- A clear mapping to tests or acceptance criteria
The FastAPI + React code is consistently generated or refactored under those specs:
- Backend routes mirror spec sections
- Frontend types are derived from OpenAPI
- The Update DSL is treated as a first-class contract
This structure ensures that before writing any code, you know exactly what "done" looks like. The specs become the source of truth, and Kiro generates implementations that match them.
Agent Hooks: Automation with a Twist
Compounding Context: We utilized Agent Hooks to validate the stitching between Frontend types and Backend models, automatically updating documentation whenever the schema changed.
However, I learned that while hooks are useful for detecting drift from the specs, most of the time I find manually triggered hooks more useful. For example, I trigger a fix-lint hook in the background when I'm wrapping up a feature and checking everything over.
Why manual > automatic? Because during active development, you want things to be temporarily broken while you explore. Automatic hooks break your flow. Manual hooks let you run validation when YOU'RE ready.
The Architecture
What makes this work is how we stitched together incompatible systems:
- AI Vision (flexible estimation) → Pydantic Validation (enforces schema & ranges) → PostgreSQL (stores structured data)
- Natural language input → Two-tier parser (keyword matching + LLM fallback) → Structured logs
- Oracle Chat (adaptive, conversational) ↔️ Monitor Dashboard (rigid, mathematical)
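The two-tier parser from the second pipeline can be sketched as follows. The keyword patterns and the `call_llm` placeholder are assumptions; the real fallback would call Gemini.

```python
import re

# Tier 1: cheap keyword patterns for the common phrasings (illustrative).
KEYWORD_PATTERNS = {
    "food": re.compile(r"\b(ate|had|eat)\b", re.I),
    "exercise": re.compile(r"\b(ran|lifted|did|workout)\b", re.I),
}

def call_llm(text: str) -> dict:
    """Placeholder for the Gemini fallback; returns an 'unknown' marker here."""
    return {"kind": "unknown", "raw": text}

def parse(text: str) -> dict:
    # Tier 1: keyword matching handles most inputs without an API call.
    for kind, pattern in KEYWORD_PATTERNS.items():
        if pattern.search(text):
            return {"kind": kind, "raw": text}
    # Tier 2: anything the keywords miss goes to the LLM.
    return call_llm(text)
```

The design choice is cost and latency: most messages resolve in tier 1, and only the ambiguous remainder pays for an LLM round trip.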
The key innovation is context injection: Before every AI request, we inject the user's training plan, today's progress, and recent conversation. The AI doesn't just see a photo—it understands the full situation.
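A minimal sketch of what that context injection could look like; every field and format here is an assumption about shape, not the app's real data model or prompt template.

```python
def build_prompt(user_message: str, plan: str, progress: dict,
                 history: list[str]) -> str:
    """Assemble plan, today's progress, and recent turns ahead of the user message."""
    context = [
        f"Training plan:\n{plan}",
        f"Today: {progress['calories_left']} kcal left, "
        f"workout {'done' if progress['workout_done'] else 'pending'}",
        # Keep only the last few turns to bound prompt size.
        "Recent conversation:\n" + "\n".join(history[-5:]),
    ]
    return "\n\n".join(context) + f"\n\nUser: {user_message}"
```

With this in place, even a bare photo upload arrives at the model alongside the user's plan and remaining budget, which is what lets it answer "you've got 600 left for today" instead of just "that's 500 calories".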
Next Steps
Future work to make this production-ready:
- Polish the initial user setup flow.
- Remove hard-coded paths (many, lol).
- Make it more robust against edge cases.
- Improve context engineering to make responses even better.
- Add streaming for video and audio responses.
- Leverage the AI assistant for any training/nutrition questions.
Key Takeaways
- SDD makes ambitious projects shippable under hackathon constraints
- Product discovery (AI Studio) → Guardrails (templates) → Implementation (specs + Kiro)
- Steering docs prevent repeated mistakes by teaching constraints
- Manual hooks > automatic hooks during active development
- Context injection is what makes AI feel alive and personalized
References
Built for Kiroween 2025
#codewithkiro #kiroween #specsnotcode