Over the past several months, I built and shipped MovieMonk-AI — an AI-powered movie and TV discovery platform.
- 🌐 Live: https://moviemonk-ai.vercel.app
- 💻 GitHub: https://github.com/mfscpayload-690/moviemonk-ai
- 🗓️ Project start (first commit): 15 Nov 2025
This started as a personal engineering challenge: build something users can actually use daily, while going deep on product architecture, AI integration, and reliability.
## What is MovieMonk?
MovieMonk combines:
- TMDB for factual metadata (titles, cast, crew, ratings, images, release details)
- Groq (Llama 3.1) for AI-generated editorial content (summaries, spoiler/full-plot breakdowns, notes)
The goal is simple: make movie discovery feel smarter, faster, and more personal — without sacrificing trust.
## Core capabilities
MovieMonk currently includes:
- Intent-aware search with disambiguation logic
- Natural-language vibe search (constraints like genre, runtime, language, tone)
- Real-time autocomplete with caching
- Discovery rails with regional/global balancing
- Cloud watchlists with Supabase sync
- Watched tracking with toggle + undo flows
- Shareable watchlist links via tokenized public URLs
- Release Radar recommendations derived from watchlist signals
- User preference controls (motion/performance/experience toggles)
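The shareable watchlist links above can be sketched roughly as follows. This is a minimal illustration, not MovieMonk's actual implementation — the token format, registry, and URL shape are all assumptions:

```typescript
import { randomBytes } from "node:crypto";

// Generate a URL-safe, unguessable share token.
function createShareToken(): string {
  return randomBytes(16).toString("base64url");
}

// Server-side registry mapping tokens to watchlists
// (a database table in practice; a Map here for illustration).
const shareTokens = new Map<string, string>();

// Publishing a watchlist mints a token and returns a public URL
// that exposes no user identifiers.
function publishWatchlist(watchlistId: string): string {
  const token = createShareToken();
  shareTokens.set(token, watchlistId);
  return `https://moviemonk-ai.vercel.app/share/${token}`;
}

// Resolving an unknown token simply yields nothing.
function resolveShareToken(token: string): string | undefined {
  return shareTokens.get(token);
}
```

The point of the indirection is that the public URL carries only a random token, so revoking a share is just deleting a registry row.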
## Tech stack

### Frontend
- React 19
- TypeScript
- Vite
- Tailwind + custom CSS
### Backend / Infra
- Vercel Serverless Functions
- Supabase (Auth + DB + sync)
- Redis/KV-style API caching
- External integrations: TMDB, Groq, search providers
### Testing
- Jest test suite across hooks, components, and flow-critical logic
## High-level architecture
MovieMonk is split into two major layers:
**Client App Layer**
- Discovery/search/detail/watchlists/settings UI
- Session-level caching + optimistic UI patterns
- Controlled interaction states (undo, rollback-safe actions)
**Serverless API Layer**
- Proxies upstream APIs securely
- Sanitizes/validates requests
- Handles aggregation and response shaping
- Applies caching for expensive/high-frequency endpoints
This keeps the browser lean while protecting secrets and enforcing consistency.
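To make the sanitize/validate/cache steps concrete, here is a minimal sketch of the kind of request shaping a proxy layer like this performs. The parameter whitelist, limits, and function names are illustrative assumptions, not MovieMonk's actual code:

```typescript
// Whitelist of query params the proxy will forward upstream.
const ALLOWED_PARAMS = new Set(["query", "page", "language", "region"]);

// Drop unknown params and clamp values before forwarding to TMDB.
function sanitizeSearchParams(
  raw: Record<string, string>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (!ALLOWED_PARAMS.has(key)) continue; // reject unexpected params
    out[key] = value.slice(0, 200);         // bound input length
  }
  // Clamp pagination to a sane, cache-friendly range.
  const page = Number(out.page ?? "1");
  out.page = String(
    Number.isInteger(page) && page >= 1 ? Math.min(page, 500) : 1
  );
  return out;
}

// Deterministic cache key so identical requests hit the KV cache
// regardless of original param order.
function cacheKey(
  endpoint: string,
  params: Record<string, string>
): string {
  const sorted = Object.keys(params)
    .sort()
    .map((k) => `${k}=${params[k]}`)
    .join("&");
  return `tmdb:${endpoint}?${sorted}`;
}
```

Sorting the params before building the key is what makes caching effective for high-frequency endpoints: two clients asking the same question produce the same key.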
## A few implementation details I’m proud of
### 1) Trust-aware AI + factual separation
A common early issue in AI products is model hallucination around factual fields.
I addressed this by separating data responsibilities:
- Factual fields come from TMDB/service data
- Creative fields are AI-generated (summaries, editorial text, spoiler narration)
This dramatically reduced trust issues while still keeping AI value high.
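The separation can be enforced at the type level, so AI output structurally cannot reach factual fields. This sketch uses assumed type and field names, not MovieMonk's actual schema:

```typescript
// Factual fields only ever come from TMDB responses.
interface FactualData {
  title: string;
  releaseYear: number;
  runtimeMinutes: number;
  cast: string[];
}

// Creative fields only ever come from the LLM.
interface EditorialData {
  summary: string;
  spoilerBreakdown?: string;
}

interface MoviePage {
  factual: FactualData;
  editorial: EditorialData;
}

// Compose the page so AI output cannot leak into factual fields,
// even if the model "helpfully" returns a title or year of its own.
function composeMoviePage(
  tmdb: FactualData,
  aiOutput: Record<string, unknown>
): MoviePage {
  return {
    factual: tmdb, // authoritative, passed through untouched
    editorial: {
      summary: String(aiOutput.summary ?? ""),
      spoilerBreakdown:
        typeof aiOutput.spoilerBreakdown === "string"
          ? aiOutput.spoilerBreakdown
          : undefined,
    },
  };
}
```

Because the composer only reads the creative fields from the model's output, a hallucinated title or year is silently discarded rather than shown to users.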
### 2) Undo/rollback architecture for watchlists
Instead of naive optimistic updates, operations produce receipt-like state transitions so actions can be safely reversed.
This improved reliability, especially with cloud sync race conditions.
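A minimal version of the receipt idea looks like this — every mutation returns its own inverse, so a failed sync can be reversed exactly. Names and shapes here are illustrative assumptions, not the shipped code:

```typescript
type Watchlist = string[]; // movie IDs

// A receipt pairs an operation with the inverse that undoes it.
interface Receipt {
  apply: (list: Watchlist) => Watchlist;
  undo: (list: Watchlist) => Watchlist;
}

// Adding an item produces a receipt whose undo removes exactly that item.
function addItem(id: string): Receipt {
  return {
    apply: (list) => (list.includes(id) ? list : [...list, id]),
    undo: (list) => list.filter((x) => x !== id),
  };
}

// Optimistic flow: apply locally first; if the cloud sync fails,
// run the receipt's undo to return to the pre-action state.
async function optimisticAdd(
  list: Watchlist,
  id: string,
  sync: () => Promise<void>
): Promise<Watchlist> {
  const receipt = addItem(id);
  const next = receipt.apply(list);
  try {
    await sync();
    return next;
  } catch {
    return receipt.undo(next); // safe rollback
  }
}
```

The same receipt doubles as the user-facing undo action, which is what makes toggle + undo flows cheap once the plumbing exists.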
### 3) Vibe query parsing
Queries like:
“cozy thriller under 100 minutes, not horror, in Korean”
are parsed into structured filters and ranking hints.
This bridges natural-language intent into deterministic discover queries.
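A deliberately simplified sketch of that bridge, using regexes over a small vocabulary — the real parser is presumably far more capable, and these patterns, genre lists, and field names are assumptions:

```typescript
interface VibeFilters {
  includeGenres: string[];
  excludeGenres: string[];
  maxRuntime?: number; // minutes
  language?: string;   // ISO 639-1 code
}

const GENRES = ["thriller", "horror", "comedy", "drama", "romance"];
const LANGUAGES: Record<string, string> = {
  korean: "ko",
  french: "fr",
  japanese: "ja",
};

function parseVibeQuery(query: string): VibeFilters {
  const q = query.toLowerCase();
  const filters: VibeFilters = { includeGenres: [], excludeGenres: [] };

  for (const genre of GENRES) {
    if (!q.includes(genre)) continue;
    // "not horror" / "no horror" → exclusion; otherwise inclusion.
    const negated = new RegExp(`(?:not|no)\\s+${genre}`).test(q);
    (negated ? filters.excludeGenres : filters.includeGenres).push(genre);
  }

  // "under 100 minutes" → runtime ceiling.
  const runtime = q.match(/under\s+(\d+)\s*min/);
  if (runtime) filters.maxRuntime = Number(runtime[1]);

  // "in korean" → language filter.
  const lang = q.match(/in\s+(\w+)/);
  if (lang && LANGUAGES[lang[1]]) filters.language = LANGUAGES[lang[1]];

  return filters;
}
```

The output is deterministic and cacheable, which is the property that matters: the fuzzy part ends at parse time, and everything downstream is an ordinary filtered discover query.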
### 4) Personalization without heavy onboarding
Release Radar and discovery weighting use implicit signals (watchlist patterns, saved behavior) to provide useful personalization early.
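One way to derive discovery weights from those implicit signals is a simple weighted genre count. The scoring scheme below (watched counts double) is an assumption for illustration, not the shipped heuristic:

```typescript
interface WatchlistEntry {
  genres: string[];
  watched: boolean;
}

// Count genre occurrences, weighting watched titles higher than
// merely-saved ones, then normalize so the weights sum to 1.
function genreWeights(entries: WatchlistEntry[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const entry of entries) {
    const weight = entry.watched ? 2 : 1; // completed titles are stronger signals
    for (const genre of entry.genres) {
      counts.set(genre, (counts.get(genre) ?? 0) + weight);
    }
  }
  const total = [...counts.values()].reduce((a, b) => a + b, 0);
  const weights = new Map<string, number>();
  for (const [genre, count] of counts) {
    weights.set(genre, count / total);
  }
  return weights;
}
```

Even a handful of saves produces a usable distribution, which is what lets personalization kick in without an onboarding questionnaire.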
## Security approach
Security was treated as a default engineering requirement, not a final checklist.
Key practices include:
- Server-side environment variables for all secrets
- Request validation/sanitization before upstream forwarding
- Origin controls + strict security headers (CSP/HSTS/etc.)
- Token validation on shared resource flows
- Dependency hygiene and regular security updates
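The header practices above can be centralized in one helper that every serverless response passes through. The specific policy values here are illustrative defaults, not MovieMonk's exact configuration:

```typescript
// One place to define the response headers; handlers merge these in
// rather than setting headers ad hoc.
function securityHeaders(allowedOrigin: string): Record<string, string> {
  return {
    // Restrict sources; TMDB's image CDN is the only external origin assumed here.
    "Content-Security-Policy":
      "default-src 'self'; img-src 'self' https://image.tmdb.org",
    // Two-year HSTS window covering subdomains.
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    // Explicit origin only — never a wildcard on authenticated endpoints.
    "Access-Control-Allow-Origin": allowedOrigin,
  };
}
```

Centralizing the policy means a header change is one diff, and no endpoint can quietly ship without it.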
## UX improvements shipped recently
A major UX pass included:
- Replacing native browser `confirm`/`alert`/`prompt` with branded dialogs/sheets
- Better empty states (clear next actions instead of dead ends)
- Watchlist drag/drop ordering and bulk actions
- Better mobile ergonomics and interaction consistency
- Preference-driven behavior controls (e.g., reduced motion / autoplay choices)
These were less “flashy” than new features, but made daily usability much better.
## What didn’t work (and why that mattered)
Not every shipped experiment survived:
- Some AI provider fallback complexity created operational overhead
- Earlier interaction patterns were too heavy for primary user flows
- Sync behavior needed multiple rewrites before becoming reliable
Cutting features was as important as building them.
The biggest product lesson: a smaller, sharper feature set wins over broader complexity.
## What I learned
Building MovieMonk sharpened my understanding of:
- Designing resilient full-stack systems as a solo developer
- Balancing product speed with long-term maintainability
- Building AI experiences that are useful and trustworthy
- Making performance and UX quality first-class architecture concerns
## What’s next
Current direction includes:
- Continued improvement of vibe search intelligence
- Better SEO/indexability strategy for detail pages
- More robust personalization controls
- Additional quality/performance hardening as usage grows
## Try it / feedback
If you’re interested in AI product engineering, search systems, or full-stack architecture, I’d genuinely value your feedback.
Thanks for reading 🙌