DEV Community

Saviel Yamani

I Spent $312 Testing AI UGC Ads for SaaS. The Boring Hook Won.

  • I assumed a clever hook would win. A boring one did, by 4.2x.
  • Spent $312.47 over 11 days testing five video variants for my Postgres tool.
  • The lesson wasn't about creative quality. It was about removing my own taste from the loop.

The Hypothesis I Was Sure About

When I finally accepted that my Postgres schema visualizer wasn't going to grow itself, I sat down on a Sunday morning with too much cold brew and a hypothesis I was genuinely excited about: developers are tired of generic SaaS marketing, so a sharp, contrarian hook would crush a boring one.

I'd been writing software for ten years. Of course I knew my audience. I was sure.

This post is about how that hypothesis got demolished, and what I learned about testing AI UGC ads for SaaS when you're a solo founder with no marketing team and no patience for vibes-based decisions.

Spoiler: the winning ad sounds like something a tired tech lead would say at standup. No punchline. No edge. Just a problem statement.

What I Thought Would Win

My stack is Node, Postgres, and Stripe, with a thin React frontend. The product helps devs visualize complex schemas without dragging tables around in pgAdmin like it's 2009. I had ~40 paying users from a Hacker News spike and then a four-week flatline on MRR.

So I wrote five hooks. Just in my head, I'd already ranked them:

  1. "Your ORM is lying to you about your schema." (Edgy. Contrarian. My personal favorite.)
  2. "I rewrote our migration system after a 3 AM incident. Here's what I use now." (Story-driven.)
  3. "Stop opening seven psql tabs. There's a better way." (Pain-focused.)
  4. "Onboard a new dev to your codebase in under 10 minutes." (Utility, kind of dry.)
  5. "What your database GUI isn't showing you." (Mystery.)

If you'd asked me to bet money, I'd have put it on #1. It was sharp. It started a fight. It would absolutely stop the scroll.

I was so wrong it's embarrassing.

The Filming Disaster I'd Rather Forget

Before I get to the test itself, a quick aside, because I think every solo founder needs to hear this. I tried to film these myself first. Bought a $43 ring light off Amazon. Set it up in front of the only wall in my apartment that doesn't have a leaky AC stain on it. Did 17 takes of hook #1.

They were unwatchable. I kept doing this thing where I'd glance at my notes mid-sentence and my eyes would dart sideways like I was committing a crime. My partner walked in, watched ten seconds, and said "you sound like you're being held hostage." Fair.

I burned a Saturday on this. I had nothing to show for it except a slightly sunburned forehead from the ring light.

The Tool Comparison

So I gave up on filming and looked at AI UGC video generators. I evaluated four:

| Tool | Why I considered it | Why I didn't pick it |
| --- | --- | --- |
| HeyGen | Best-known, polished avatars | $89/mo starter tier was over my budget for a five-video test |
| Synthesia | Strong enterprise reputation | Output looked corporate; wrong vibe for TikTok-style UGC |
| Arcads | Designed specifically for ad creative | Limited avatar library at the time I tested |
| UGCVideo.ai | Native-looking phone-style output | Picked it for the per-video pricing; I needed five outputs once, not a subscription |

I went with the last one purely because I didn't want a recurring charge sitting on my Stripe statement reminding me of this experiment if it failed. That was the entire decision.

Two honest gripes after using it: the lip-sync drifts noticeably on words ending in hard consonants, so "Postgres" sometimes lands a beat late, and the export queue can stall during what I assume are peak US hours — I had one render sit at 94% for 38 minutes before I refreshed and re-queued it. Not a dealbreaker for batch testing, but I wouldn't trust it for a same-day turnaround.

The Test Setup

Five scripts, five videos, one Meta campaign with Dynamic Creative Optimization, $25/day budget, 11 days. I tracked everything in a Notion doc because I refuse to pay for another tool. The columns were `hook_id`, `spend`, `ctr`, `cpc`, `signups`, and `notes`.

Total burn: $312.47. Roughly what I'd pay for two months of a mid-tier SaaS subscription, which felt about right for a learning budget.

The Results

Here's what the data looked like after day 11:

  • Hook #1 (the contrarian one I loved): 0.41% CTR, 2 signups
  • Hook #2 (the 3 AM story): 0.78% CTR, 4 signups
  • Hook #3 (psql tabs pain): 0.92% CTR, 3 signups
  • Hook #5 (mystery): 0.33% CTR, 1 signup
  • Hook #4 (boring onboarding utility): 1.74% CTR, 19 signups

Hook #4 outperformed my favorite by 4.2x on CTR and pulled in 9.5x the signups. The script was 23 seconds long and contained zero rhetorical flourishes. It just described a problem and showed the product solving it.
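The headline multipliers are just ratios of the bullets above. A quick sanity check in Node, using only the numbers already in this post (no per-hook spend breakdown, so cost per signup is the blended rate):

```javascript
// Per-hook results from the 11-day test, copied from the bullets above.
const hooks = {
  contrarian: { ctr: 0.41, signups: 2 },  // hook #1
  story:      { ctr: 0.78, signups: 4 },  // hook #2
  pain:       { ctr: 0.92, signups: 3 },  // hook #3
  utility:    { ctr: 1.74, signups: 19 }, // hook #4
  mystery:    { ctr: 0.33, signups: 1 },  // hook #5
};

const ctrLift = hooks.utility.ctr / hooks.contrarian.ctr;            // ~4.24x
const signupLift = hooks.utility.signups / hooks.contrarian.signups; // 9.5x

// Blended cost per signup across the whole $312.47 burn.
const totalSignups = Object.values(hooks).reduce((s, h) => s + h.signups, 0); // 29
const costPerSignup = 312.47 / totalSignups; // ~$10.77

console.log(ctrLift.toFixed(1), signupLift.toFixed(1), costPerSignup.toFixed(2));
```

Roughly eleven bucks per trial signup for the winner's audience is not amazing, but for a first test with zero prior creative data, I'll take it.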

I sat with that result for a while. The reason it won, I think, is that "onboard a new dev in 10 minutes" is a thing engineering managers are actively, painfully searching for. It maps to a budget line. The contrarian hook was something I wanted to say, not something a buyer was looking to hear.

The Workflow I Use Now

This is the part I'd actually save if I were you. Whenever I push a feature worth promoting, I run this loop:

# my actual checklist, lives in a .md file in the repo
1. grep last 30 days of support emails for repeated phrases
2. extract 3 phrases that sound like job-to-be-done statements
3. write 5 hooks: 2 from those phrases, 3 from my own ideas
4. generate all 5 as UGC-style videos in one batch
5. ship to Meta DCO with $20-30/day cap
6. wait 7 days minimum before judging anything
7. kill bottom 3, double the winner, archive the data

The two non-obvious rules: never let yourself pre-rank the hooks (write them in a random order in the doc), and always include at least two hooks pulled verbatim from customer language. Your taste is the bug. The customer's words are the fix.
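Step 1 is the only part of the checklist with any machinery. I do it with grep, but the idea fits in a few lines of Node; this is a sketch, and the `support-emails/` folder name is my own placeholder:

```javascript
// Sketch of step 1: surface repeated 3-word phrases from support emails.
// Phrases that recur verbatim across emails are the best hook candidates,
// because they're the customer's words, not mine.
function phraseCounts(texts, n = 3) {
  const counts = new Map();
  for (const text of texts) {
    const words = text.toLowerCase().match(/[a-z']+/g) || [];
    for (let i = 0; i + n <= words.length; i++) {
      const phrase = words.slice(i, i + n).join(" ");
      counts.set(phrase, (counts.get(phrase) || 0) + 1);
    }
  }
  // Keep only phrases that appear more than once, most frequent first.
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .sort((a, b) => b[1] - a[1]);
}

// Usage against a folder of exported .txt emails (path is hypothetical):
// const fs = require("fs");
// const texts = fs.readdirSync("support-emails")
//   .map((f) => fs.readFileSync(`support-emails/${f}`, "utf8"));
// console.log(phraseCounts(texts).slice(0, 3)); // top job-to-be-done candidates
```

The winning "onboard a new dev" hook came out of exactly this kind of pass: it was a phrase customers kept typing at me, not one I invented.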

The thing I keep coming back to is that I spent ten years learning to write code that doesn't trust my assumptions — unit tests, type checks, assertions, the whole stack. And then the first time I tried to do marketing, I trusted my gut completely. The boring hook didn't win because it was clever. It won because I finally let the data overrule me.
