April 2026 · Lazy Developer EP.04
After building Apsity in EP.02, feedback from 12 apps started pouring in. Emails, reviews, DMs. At first I organized them in a spreadsheet. But "please add dark mode" and "my eyes hurt at night" are the same request, just written by different people in different words. Grouping similar requests, assigning priorities, notifying when they're resolved. All manual. The spreadsheet kept growing but never got organized.
There's a service called Canny. It does exactly this. Feedback collection, voting, roadmap. But the pricing starts at $79/month. Too much for an indie developer. If existing tools are too expensive or don't fit, I build my own.
I decided to build a SaaS with everything: feedback collection, AI auto-classification, public roadmap, changelog, voting, and email notifications. Named it FeedMission. This post is the record of how it started.

The finished FeedMission landing page / GoCodeLab
Quick Overview
- Canny at $79/mo → too expensive for indie devs → decided to build it myself
- Designed AI clustering on top of the Next.js + Supabase stack I learned from Apsity
- Handed Claude a structured spec → MVP of 10,742 lines in 52 minutes
- 9 DB models, 12 APIs, 8 dashboard pages, widget, AI pipeline — all in one commit
- The real work came after the MVP — structural changes, performance, and security took far more time
I defined what I was building
I didn't just tell Claude "make me a feedback tool." I had the Apsity experience. I knew that specific requirements produce specific results. I signed up for Canny, Nolt, and Fider and used them myself. Features they all shared: feedback boards, voting, roadmap, changelog. That's the baseline. But I wanted one more thing — when feedback piles up, automatically group similar items together.
```
// My FeedMission requirements
Core: Feedback collection widget + public board + voting
Management: Roadmap kanban + changelog + email notifications
AI: Auto-classify similar feedback (embeddings + clustering)
AI: Sentiment analysis + auto-generated insights
Revenue: FREE / STARTER $9 / PRO $19 plans
Platforms: Script + React + iOS + Android + iframe + GTM
```
The MVP came out in 52 minutes
March 26, 9:41 AM. Started the project with create-next-app. Fed Claude the organized requirements and started building.
10:33 AM. Pushed the commit.
```
234006b feat: FeedMission full MVP implementation
73 files changed, 10742 insertions(+)
```
52 minutes. 73 files. 10,742 lines. Vibe coding is fast, but the reason isn't "Claude wrote the code" — it's "I knew exactly what I was building." When requirements are clear, Claude's output is precise.
What was inside the MVP
```
// 9 DB models (Prisma)
User, Project, Feedback, Cluster, RoadmapItem,
Changelog, Vote, NotificationLog, Subscription

// 12 API routes
/api/feedback — feedback CRUD + widget CORS
/api/clusters — AI cluster view/edit
/api/roadmap — roadmap kanban CRUD
/api/changelog — changelog + auto email on publish
/api/dashboard — stats aggregation (8 queries in parallel)
/api/insights — AI insight card generation

// 8 dashboard pages
Overview, Feedback, Clusters, Roadmap,
Changelog, Notifications, Widget, Settings

// 3 AI pipeline files
clustering.ts — feedback → embedding → cluster assignment
embeddings.ts — Voyage AI vector generation + Claude sentiment analysis
summaries.ts — Claude generates cluster titles/summaries + insights
```
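The `/api/dashboard` route listed above aggregates stats with queries running in parallel. A minimal sketch of the idea in TypeScript — the query functions here are stubs with made-up counts standing in for the real Prisma aggregations, not FeedMission's actual code:

```typescript
// Sketch: run dashboard aggregations concurrently instead of awaiting
// them one by one. Each stub stands in for something like
// prisma.feedback.count() in the real route.
type DashboardStats = {
  totalFeedback: number;
  openFeedback: number;
  totalVotes: number;
  clusterCount: number;
};

const countFeedback = async (): Promise<number> => 128;
const countOpen = async (): Promise<number> => 42;
const countVotes = async (): Promise<number> => 913;
const countClusters = async (): Promise<number> => 17;

async function getDashboardStats(): Promise<DashboardStats> {
  // Promise.all fires all queries at once; total latency is the slowest
  // query, not the sum of all of them.
  const [totalFeedback, openFeedback, totalVotes, clusterCount] =
    await Promise.all([countFeedback(), countOpen(), countVotes(), countClusters()]);
  return { totalFeedback, openFeedback, totalVotes, clusterCount };
}

getDashboardStats().then((s) => console.log(s));
```

With eight sequential round-trips to a remote DB, the savings from this one change alone are significant.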
The Feedback model has an embedding vector(1024) column. Feedback text gets converted into 1024 numbers via Voyage AI and stored. pgvector handles similarity search on these numbers. "Please add dark mode" and "my eyes hurt at night" end up with similar number patterns and automatically get grouped together.
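The comparison pgvector performs is cosine similarity. A toy illustration of the idea — the vectors below are tiny made-up 4-dimensional stand-ins, not real 1024-dimensional Voyage AI output:

```typescript
// Cosine similarity: dot product of two vectors divided by the product
// of their lengths. Close to 1 means "similar direction" → similar meaning.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical embeddings for three pieces of feedback:
const darkMode  = [0.9, 0.1, 0.8, 0.2];   // "please add dark mode"
const eyesHurt  = [0.85, 0.15, 0.75, 0.3]; // "my eyes hurt at night"
const exportCsv = [0.1, 0.9, 0.05, 0.8];  // "let me export to CSV"

console.log(cosineSimilarity(darkMode, eyesHurt).toFixed(2));  // ~0.99 → same cluster
console.log(cosineSimilarity(darkMode, exportCsv).toFixed(2)); // ~0.26 → kept apart
```

The same computation runs inside Postgres via pgvector's distance operators, so the grouping happens as an indexed query rather than in application code.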

FeedMission Dashboard Overview / GoCodeLab
The gap between "working code" and "product"
It built successfully. No type errors either. But the moment I actually used it, things to fix started piling up.
First: the sidebar was eating too much screen space. Switching to a top navigation took 4 minutes. Second: UUIDs were baked into the URLs. I refactored to slug-based routing — 13 files were referencing params.projectId. Third: after deploying to production, it was slow. The Vercel Function was running in the US, while the Supabase DB was in Seoul. Every query was crossing the Pacific Ocean.
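The region fix can be pinned in project config so it survives redeploys. A sketch, assuming the standard `vercel.json` at the project root (`icn1` is Vercel's Seoul region, matching the Supabase DB location from the post):

```json
{
  "regions": ["icn1"]
}
```

Colocating the function runtime with the database turns every query from a trans-Pacific round-trip into a same-region hop.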
The reality of vibe coding
This is why you can't ship AI-generated code as-is. Region settings, middleware optimization, security vulnerabilities, cumulative layout shift (CLS) — these only become visible when you actually run and use the code. Claude generates the first draft quickly, and I spend my time asking: "Why is this slow?", "Is this URL structure right?", "Should this data really be exposed?"
What happened over the next few days
By midnight on Day 1, I had 5 performance-related commits stacked up. Changed the Vercel region to Seoul (icn1), skipped unnecessary auth calls for public routes in middleware, added Prisma singleton caching, and matched skeleton heights to eliminate layout shift.
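The Prisma singleton fix follows a common Next.js pattern: cache the client on `globalThis` so hot reloads and repeated invocations reuse one connection pool instead of opening a new one each time. A self-contained sketch — `FakePrismaClient` is a stub standing in for `@prisma/client`'s `PrismaClient`, which isn't available here:

```typescript
// Stub that counts how many times it gets constructed, so we can see
// that the cache works. In real code this would be PrismaClient.
class FakePrismaClient {
  static instances = 0;
  constructor() {
    FakePrismaClient.instances++;
  }
}

// Stash the client on globalThis; module-level state can be re-evaluated
// on hot reload, but globalThis survives.
const globalForPrisma = globalThis as unknown as { prisma?: FakePrismaClient };

function getClient(): FakePrismaClient {
  globalForPrisma.prisma ??= new FakePrismaClient();
  return globalForPrisma.prisma;
}

console.log(getClient() === getClient()); // same instance both times
console.log(FakePrismaClient.instances);  // 1 — one client, one pool
```

Without this, each request under load can exhaust the database's connection limit.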
For 5 days I didn't touch the code and just used it myself.
On Day 6: improved 38 files in one go. 7 security patches, 6 DB indexes, dashboard parallel query optimization. Expanded the widget SDK to 5 types, built iOS SwiftUI and Android Kotlin native widgets. Integrated LemonSqueezy payments and pivoted pricing from KRW to USD. Along the way, I accidentally committed 686K lines of node_modules and pushed a deletion commit 28 seconds later.

7-day timeline of 51 commits / GoCodeLab
| Metric | Value |
|---|---|
| Total Commits | 51 |
| Claude Co-Authored | 37 (72.5%) |
| Active coding days | 3 (out of 7) |
| MVP generation time | 52 min |
72.5% was AI, the rest was judgment
37 out of 51 commits have the Claude Co-Authored-By tag. 72.5%. This doesn't mean "Claude built 72.5% of it." I organize the requirements, Claude generates code, I review, modify, and commit.
This is why vibe coding isn't "letting AI do everything." Build fast, use it fast, decide fast. What speeds up isn't code generation — it's the entire feedback loop.
FAQ
Is a 52-minute MVP actually usable?
"Working code" came out in 52 minutes. But bringing it to product quality took the remaining 6 days. The MVP is a starting point, not the finish line.
Is building your own better than using Canny?
Depends on team size and budget. If $79/month is a stretch and you need custom features like AI auto-classification, building your own might be the way to go.
How does AI clustering work?
Feedback text gets converted into 1024 numbers (embeddings). Sentences with similar meanings produce similar number patterns. Items whose similarity exceeds 0.85 are grouped into the same cluster. Covered in detail in EP.05.
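The grouping step can be sketched as a greedy pass: put each item into the first cluster it's similar enough to, otherwise open a new one. This is a toy illustration with made-up 3-dimensional vectors, not FeedMission's actual pipeline:

```typescript
// Cosine similarity between two vectors (1 = same direction).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const THRESHOLD = 0.85;

// Greedy clustering: compare each item against the first vector of every
// existing cluster; join the first match above the threshold or start fresh.
function cluster(items: number[][]): number[][][] {
  const clusters: number[][][] = [];
  for (const item of items) {
    const match = clusters.find((c) => cosine(c[0], item) > THRESHOLD);
    if (match) match.push(item);
    else clusters.push([item]);
  }
  return clusters;
}

const groups = cluster([
  [0.9, 0.1, 0.8],    // "please add dark mode"
  [0.85, 0.15, 0.75], // "my eyes hurt at night"
  [0.1, 0.9, 0.05],   // "let me export to CSV"
]);
console.log(groups.length); // 2 — the first two items land in one cluster
```

A production pipeline would compare against cluster centroids and re-embed periodically, but the threshold idea is the same.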
Is code quality from vibe coding acceptable?
It works at the MVP stage, but you can't ship it as-is. I separately fixed 7 security vulnerabilities and 4 performance issues. AI generates the first draft, but human review is always required.
Originally published at GoCodeLab