Six weeks ago I shipped the first version of MelodyCraft AI, a
single-page AI music generator. Today there are 21 tool pages live — rap generators, name generators, a lyrics-to-song
workflow, an 8-page band-name cluster, and a few experiments that worked despite my better judgment.
I'm a solo dev. No co-founder, no design team. The whole thing runs on Next.js 15 (App Router), Drizzle ORM, Better
Auth, Tailwind 4, and a thin abstraction over two AI providers. One Git repo, deployed on Vercel.
This is a brain dump of what I'd tell myself before week 1. Not a brag list — half of it is mistakes. Some of these
are technical, some are product, some are weirdly specific to the way solo devs accidentally make their own lives
harder.
Here's the rundown.
## 1. The stack matters less than the page checklist
Everyone obsesses over stack selection. I did too — Next.js 15 vs SvelteKit, Drizzle vs Prisma, Better Auth vs
NextAuth vs roll-your-own. In the end the stack delta was maybe 2-3 days over 6 weeks. What actually saved me was a
checklist.
After page #2 I noticed I kept missing the same things — meta tags wrong, og:image not 1200x630, internal links
rendering grey-on-grey, hreflang missing, schema.org missing. So I wrote a 100-item checklist split into 11 buckets:
SEO, Performance, Accessibility, Mobile, Internal Links, Schema, OG, Analytics, Anti-Abuse, Copy QA, Credits/Pricing.
Every new tool page has to pass the checklist before it hits main. Sounds bureaucratic. It is. But pages 3-21 each took roughly half the time pages 1 and 2 did, and the post-deploy bug rate dropped to almost zero.
If you're a solo dev, write the checklist after page 2. Earlier is wasted effort; later is too late.
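Parts of the checklist automate nicely. A hypothetical pre-merge smoke check covering a few of the buckets (not my actual script, and the regex checks are deliberately crude):

```ts
// check-page.ts: hypothetical smoke check. Run with: npx tsx check-page.ts <url>
const url = process.argv[2];
if (!url) throw new Error("usage: check-page.ts <url>");

const html = await (await fetch(url)).text();
const failures: string[] = [];

// SEO bucket: title and meta description must exist.
if (!/<title>[^<]{10,}<\/title>/.test(html)) failures.push("missing or short <title>");
if (!/<meta[^>]+name="description"/.test(html)) failures.push("missing meta description");

// OG bucket: og:image must be declared (the 1200x630 check needs an image fetch).
if (!/<meta[^>]+property="og:image"/.test(html)) failures.push("missing og:image");

// Schema bucket: at least one JSON-LD block.
if (!html.includes("application/ld+json")) failures.push("missing schema.org JSON-LD");

if (failures.length) {
  console.error("checklist failures:\n- " + failures.join("\n- "));
  process.exit(1);
}
console.log("passed the automated subset of the checklist");
```

It won't catch grey-on-grey links, but it turns the boring half of the checklist into a 2-second command.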
## 2. Don't multi-provider until you have to
Early on I plugged in two AI providers behind a TypeScript interface:
```ts
interface AIProvider {
  createTask(params: TaskParams): Promise<Task>;
  getTaskStatus(taskId: string): Promise<TaskStatus>;
}

const provider = getProvider(env.PROVIDER || "evolink");
```
The reasoning was "what if A goes down?" Sounded responsible. In practice, for the first 5 weeks I used exactly one provider; the abstraction added friction every time the two APIs diverged (different param names, different error shapes, different webhook payloads); and I built glue code I never used.
Premature provider abstraction is a YAGNI tax. Build for one provider, ship, abstract when the second one actually
earns its keep. The interface is a 30-minute refactor when you genuinely need it — not before.
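For contrast, the one-provider version is barely any code. A minimal sketch, assuming a hypothetical REST API (endpoint paths, env var name, and response shapes are all made up):

```ts
// lib/music.ts: direct calls to the one provider actually in use.
const BASE = "https://api.example-provider.com/v1";
const KEY = process.env.PROVIDER_API_KEY;

export async function createTask(prompt: string): Promise<{ taskId: string }> {
  const res = await fetch(`${BASE}/tasks`, {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`createTask failed: ${res.status}`);
  return res.json();
}

export async function getTaskStatus(taskId: string): Promise<{ status: string; audioUrl?: string }> {
  const res = await fetch(`${BASE}/tasks/${taskId}`, {
    headers: { Authorization: `Bearer ${KEY}` },
  });
  if (!res.ok) throw new Error(`getTaskStatus failed: ${res.status}`);
  return res.json();
}
```

When a second provider genuinely earns its keep, this is the module you wrap with the interface above.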
## 3. A bad polling pattern can burn $200 in 24 hours
This one stings.
Around week 4 I rewrote the lyrics-to-song page (https://melodycraftai.com/lyrics-to-song?utm_source=devto). The new
client polled the task status endpoint every 1 second instead of every 5. I tested locally with one task, it worked, I
shipped.
Next morning Vercel sent me an email: Edge Requests at 99.7% of the monthly quota. By dinner I'd been auto-upgraded to
Pro and was watching the meter tick. The fix took 3 minutes — bump the interval, add exponential backoff, gate the
polling behind tab visibility. The damage was real money.
What I should have done: back-of-envelope math before shipping (active users × poll frequency × average task duration × 30 days). At 1 poll per second, a 3-minute generation is 180 requests per task; a few hundred tasks a day is already over a million Edge Requests a month. I now keep a polling-budget.md next to the code and update it whenever a polling rate changes.
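The shape of the fix, as a minimal sketch (intervals, endpoint path, and status values are illustrative, not my exact code):

```ts
// Poll a task status endpoint with exponential backoff, capped, and paused
// while the tab is hidden.
async function pollTask(taskId: string, signal?: AbortSignal) {
  let delay = 2_000;        // start at 2s, not 1s
  const maxDelay = 15_000;  // cap so long tasks don't back off forever

  while (!signal?.aborted) {
    // Don't burn Edge Requests while nobody is looking at the page.
    while (document.visibilityState === "hidden") {
      await new Promise<void>((resolve) =>
        document.addEventListener("visibilitychange", () => resolve(), { once: true })
      );
    }

    const res = await fetch(`/api/tasks/${taskId}`, { signal });
    const task = await res.json();
    if (task.status === "completed" || task.status === "failed") return task;

    await new Promise((r) => setTimeout(r, delay));
    delay = Math.min(delay * 1.5, maxDelay); // exponential backoff
  }
  throw new Error("polling aborted");
}
```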
If you don't track polling math, your Edge function bill will track it for you.
## 4. One template, eight pages — the band-name cluster
After page 13 I stopped writing tool pages from scratch. A search-volume scan turned up 8 viable band-name generator queries — metal band name, kpop band name, punk band name generator, etc. Each had 200-1000 monthly searches on its own, low keyword difficulty, and no obvious dominant ranker.
So I extracted a NameToolTemplate component:
```tsx
<NameToolTemplate
  genre="Metal"
  promptPrefix="Generate aggressive, dark band names suitable for thrash/death metal..."
  examples={metalBandExamples}
  schemaDescription="Free metal band name generator..."
/>
```
8 pages shipped in 2 days. Each one ranks for its specific query without cannibalizing the others, because the page
content is genuinely different — different examples, different FAQ, different prompt seed — even though the shell is
identical.
When you see ≥ 5 pages with the same shape, build the template. Anything less, hand-write it. The break-even is
sharper than it looks.
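For the curious, the prop surface is roughly this shape (field names approximate the idea, not copied from the repo):

```ts
interface NameToolTemplateProps {
  genre: string;             // "Metal", "K-pop", "Punk", ...
  promptPrefix: string;      // seeds the AI prompt, so output differs per page
  examples: string[];        // hand-picked examples, unique per page
  schemaDescription: string; // feeds the schema.org JSON-LD block
}

// One entry per query cluster; each drives a static route.
const bandNamePages: ({ slug: string } & NameToolTemplateProps)[] = [
  {
    slug: "metal-band-name-generator",
    genre: "Metal",
    promptPrefix: "Generate aggressive, dark band names...",
    examples: ["Iron Requiem", "Static Howl"], // illustrative, not real output
    schemaDescription: "Free metal band name generator...",
  },
  // ...seven more entries, one per query
];
```

The per-page uniqueness lives entirely in the config; the shell never changes.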
## 5. Programmatic SEO almost killed the blog plan
I had this idea: programmatically generate ~50 /blog posts from a genre × use-case matrix. Looked clean on paper,
would have shipped in a weekend.
Then I read the 2024-2026 Google Helpful Content + Spam Updates more carefully. Programmatic content with thin
variation now actively hurts site-wide rankings, not just the offending pages. So 50 templated blog posts could drag
the 21 tool pages down with them.
I killed the idea last week. Replaced it with one hand-crafted long-form blog per week. Lower volume, much higher
ceiling, no Google risk. The bandwidth I freed went into a YouTube channel — different distribution, different
algorithm, doesn't compete with the main SEO thread.
Templates are great for tool pages where the user actually wants the tool. Templates for content — where the user
wants information — are a Google trap in 2026.
## 6. Anti-abuse: gate the gift, not the user
I had a fingerprint + card dedup that auto-banned users looking like duplicates. Felt smart. Wasn't.
A reference SaaS in the same niche ran the same logic and later discovered, via angry support tickets, that the
"duplicates" were often a single VIP customer using multiple devices and multiple cards (work card, personal card,
family card). Banning them killed revenue.
I refactored. The dedup now runs at exactly one place: the new-user free credit gift. Same fingerprint or same card on
signup → no second free gift. But you can still register, log in, buy credits, and use the tool normally. This stops
free-credit cycling without burning real customers.
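In code, the whole gate is one function inside the signup flow. A minimal sketch with hypothetical helpers (the real version sits on Drizzle queries):

```ts
// Hypothetical persistence helpers; swap in your own queries.
declare function findGiftByFingerprintOrCard(fp: string, card?: string): Promise<{ id: string } | null>;
declare function addCredits(userId: string, amount: number): Promise<void>;
declare function recordGift(userId: string, fp: string, card?: string): Promise<void>;

const FREE_SIGNUP_CREDITS = 20; // illustrative amount

// Abuse signals block only the free gift, never the account.
async function grantSignupGift(userId: string, fingerprintHash: string, cardHash?: string) {
  const priorGift = await findGiftByFingerprintOrCard(fingerprintHash, cardHash);
  if (priorGift) {
    // Repeat device/card: skip the gift, but the user can still log in,
    // buy credits, and use every tool normally.
    return { granted: false, reason: "duplicate-signal" as const };
  }
  await addCredits(userId, FREE_SIGNUP_CREDITS);
  await recordGift(userId, fingerprintHash, cardHash);
  return { granted: true as const };
}
```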
Small product call. The difference between "we lose ~$0 to abuse" and "we lose 30% of VIP revenue to false positives" is purely where the gate sits.
## 7. Internal links are weirdly hard
The dumbest lesson of the lot, listed last, but a real one.
After shipping the blog index (https://melodycraftai.com/blog?utm_source=devto), I noticed a few internal links
rendered the same color as body text — grey on white. Users (and Googlebot) couldn't tell they were clickable.
I'd assumed Tailwind's prose class handled this. It does — for external links. The way my MDX renderer wired up the
<a> component stripped the color class on internal anchors specifically.
Fix was 4 lines. But it had been live for 5 days and nobody had clicked through.
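For anyone wiring MDX components the same way, the shape of the fix (component name and classes are illustrative):

```tsx
import Link from "next/link";
import type { AnchorHTMLAttributes } from "react";

// Custom <a> for the MDX renderer. The original bug: the internal branch
// returned a bare <Link> with no className, so the link color never applied.
export function MdxLink({ href = "", className, ...rest }: AnchorHTMLAttributes<HTMLAnchorElement>) {
  const linkClass = className ?? "text-blue-600 underline hover:text-blue-800";

  if (href.startsWith("/")) {
    // Internal link: same visible styling as external ones.
    return <Link href={href} className={linkClass} {...rest} />;
  }
  return <a href={href} className={linkClass} target="_blank" rel="noopener noreferrer" {...rest} />;
}
```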
Every time you ship a new content surface, click 3 internal links yourself, on mobile, in incognito. The cost is 10
seconds. The cost of not doing it is invisible CTR loss for a week.
---
## What I'd do differently next time
- Page checklist on day 1, not after page 2
- One AI provider until you literally cannot ship without a second
- Polling math is a checklist item, not an optimization
- Hand-crafted blog from day 1 — programmatic is for tool pages, not for content
- Anti-abuse logic gates on gifts, not on signups
- Click your own internal links every release
Source layout for the 21 tools is in this open repo: https://github.com/kekelele19851224-lgtm/melodycraft-ai-public-.
It's a README index — useful as a reference if you're building a similar AI tool catalog.
If you're a solo dev shipping AI tools, the meta-lesson is: most of the work is plumbing, not the AI part. The AI
providers are a stable interface you call. The hard parts are the 100 tiny things that make a page rank, convert, and
not blow up your Vercel bill at 3 AM.
That, and not getting too clever before week 6.
---
MelodyCraft AI is at melodycraftai.com (https://melodycraftai.com/?utm_source=devto). Happy to write up specific
subsystems — the credit system FIFO logic, the polling backoff, the NameToolTemplate generator — if there's interest.