If you want viral content ideas, one thing works embarrassingly well: YouTube outliers. Videos that blow past their channel's average, usually on small channels nobody follows yet. A channel that does 50K views suddenly drops one at 500K? Something hit. Title, thumbnail, topic, timing, whatever. Worth studying.
There's a tool for that. It's called 1of10. It costs $29 this month. Or $39. I stopped tracking their pricing page, it keeps moving. Plus a dashboard full of features nobody asked for, no API, nothing I can wire into my own workflow.
So I built my own. Not in the heroic "one afternoon, 200 lines, look at me" sense. In the boring sense: I typed a handful of short prompts at Claude and the thing worked.
TL;DR — This is not a tutorial. It's how I actually work with AI when I build a small tool. The code isn't interesting. The pivot prompt is: the one where I stopped trying to specify a solution and handed Claude a problem instead. That move is the whole article.
The Paywall and the Spec
The seed was a forum post I won't link (Medium links the hard way and credibility dies in the click). Paraphrasing:
Train Claude on viral headlines and YouTube titles. The article itself doesn't really matter. The title and the thumbnail do. We use 1of10 to find outlier videos and reverse-engineer the patterns.
Valid workflow. Annoying paywall. And I had a hunch 1of10 was three API calls in a trench coat.
The spec, written in my head and typed nowhere. A search query in. The top 10 YouTube videos that overperform their own channel's average out. Score formula: views / channelAverage. Free YouTube Data API, 10K quota units per day, roughly 100 full searches. Cost: $0 forever.
The metric is one division. That's the whole reason 1of10 is overpriced: the math they're selling is trivial, the dashboard is where the billing lives.
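To make "one division" concrete, here's a minimal sketch of the metric. The field names are illustrative, not the YouTube Data API's actual response shape; you'd compute `channelAverage` yourself from the channel's recent uploads.

```typescript
// Outlier score: how far a video outperforms its own channel's baseline.
// These field names are hypothetical, not the YouTube Data API schema.
interface VideoStats {
  title: string;
  views: number;
  channelAverage: number; // mean views across the channel's recent uploads
}

function outlierScore(v: VideoStats): number {
  // Guard against channels with no baseline yet (new or empty channels).
  if (v.channelAverage <= 0) return 0;
  return v.views / v.channelAverage;
}
```

The 50K-channel-drops-a-500K-video case from the intro scores a 10. That's it. That's the product.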
The Prompt That Did The Actual Work
The setup is uninteresting. Install the app. No, put it in the right project folder. Use gh instead of curl. Three prompts, three minutes. The scaffolding of the Astro app and the algorithm itself had happened in an earlier Claude session I no longer have logs for (back up your Claude Code projects, I learned this one the hard way).
App running locally. I type a search. The UI is ugly. The ranking is noisy. A top-10 sorted by outlier score mixes real outliers with normal-ish videos and I can't tell where the real signal ends. My eyes see three obvious winners. My brain cannot name the algorithm that says the same thing.
So I typed this:
can u improve the interface ? [screenshot]
not readable, and i'd want a one-click
copy of the best outlier titles
(the top 3 here really stand out,
idk how you calculate that,
where it really jumps mathematically)
Three asks. UI fix. Copy button. And the one that matters: idk how you calculate that, where it really jumps.
That last bit is the whole article.
Here is what I would have typed on a bad day, the kind of day where I try to sound like an engineer: implement a function that flags the top N statistical outliers using z-score or percentile thresholding, with proper edge case handling and unit tests.
That prompt gets z-scores. Z-scores don't work on this data. The distribution is nowhere near normal, outlier rankings are power-law at best, and I'd end up with a function that flags ten results on a smooth curve and zero on a jagged one. Useless.
What I typed instead was the problem, not the solution. "Where does it really jump."
Claude came back with three candidate algorithms (z-score, gap ratio, percentile threshold), explained why gap ratio fits this kind of data best, and wrote it:
for each pair of consecutive scores (sorted desc):
ratio = score[i] / score[i+1]
keep track of the largest ratio seen
mark the cut at that position
if the best ratio is below 1.5x: only the first result is an outlier
Ten lines. Run it on [45, 28, 20, 3.2, 2.8, 2.1], the cut falls between 20 and 3.2 (a 6.25x gap), and the first three are flagged as real outliers. Exactly what my eyes saw. Nothing I could have specced myself.
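In runnable form, the gap detector is roughly this. A sketch of the idea, not the exact code Claude produced:

```typescript
// Gap-ratio cut: scan consecutive scores (sorted descending), find the
// largest ratio between neighbors, and cut the list at that gap.
// Everything above the cut counts as a real outlier.
function countOutliers(scores: number[], minRatio = 1.5): number {
  if (scores.length < 2) return scores.length;
  let bestRatio = 0;
  let cut = 1;
  for (let i = 0; i < scores.length - 1; i++) {
    const ratio = scores[i] / scores[i + 1];
    if (ratio > bestRatio) {
      bestRatio = ratio;
      cut = i + 1; // results before the biggest gap are the outliers
    }
  }
  // No real jump anywhere: only the top result qualifies.
  return bestRatio < minRatio ? 1 : cut;
}
```

On `[45, 28, 20, 3.2, 2.8, 2.1]` the biggest ratio is 20 / 3.2 = 6.25, so the cut lands after the third score and you get exactly the three winners the eye picks out.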
Describe the problem, not the solution. The model has read ten thousand papers on ranking and segmentation. You've read zero.
Three More Prompts, Same Shape
YOUTUBE_API_KEY not configured / it's in infisical
The dev server had crashed on a missing secret. I type the error message plus one hint: it's in Infisical. Infisical is a secrets manager. It keeps API keys out of plain .env files. The kind of hygiene I only started caring about after the third time I leaked one. Claude wrapped the launch command with infisical run, the app booted, we moved on. Two seconds. This is how you tell an AI where the secret is without pasting the secret.
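The fix amounts to one wrapper around the launch command, something like this (assuming the dev server starts with `npm run dev`; adjust for your own script):

```shell
# Inject secrets from Infisical into the process environment at launch,
# instead of keeping YOUTUBE_API_KEY in a plain .env file.
infisical run -- npm run dev
```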
no find the titles that overperform on a given topic [long paste of the forum thread]
Claude had drifted into an adjacent feature. Looked cool, wasn't what I wanted. I killed it with "no" and pasted the original spec from the forum, verbatim. Recenter by repetition, not by explanation. Do not argue with the model. Re-anchor.
the blue on black isn't readable [screenshot] and it's good to copy min 5 results to identify patterns
Second UI pass. Screenshot of unreadable blue links, plus a business rule smuggled into the feedback: min 5 results to spot a pattern. Claude adjusted the contrast, killed the blue, highlighted the real outliers with a coral accent, grayed the rest. The rule became the default for the export button: minimum 5, bumps up if the gap detector finds more. One turn.
The ones I skipped were ok, A, C, ok, ok. Validations and multiple-choice answers to Claude's own design questions. Plus one cost check mid-session (is the Google API free?) because you don't want to finish building something you can't afford to run. The rhythm is all information, no ceremony.
24 Hours Later, I Wanted It Inside Claude Code
App shipped. Running locally. Worked. And already I hated using it.
Because the flow was: open the browser, type the query, wait two seconds, click "copy top N", switch back to Claude, paste, ask for angles. Six steps for something I do twenty times a week. Unacceptable.
The next day. One commit. 234 lines. I moved the algorithm into my personal MCP server (the one that already holds my Medium stats tools, my YouTube transcript puller, my article archive search). A Convex action wrapping the same findOutliers logic, exposed as a tool called find_youtube_outliers.
If MCP is new to you, think of it as a standard protocol that lets Claude Code call your functions as if they were built into the assistant. The function lives on your server. Claude decides when to call it based on the user's message.
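The wiring looks roughly like this with the official TypeScript MCP SDK. A sketch only: the article's actual setup routes through a Convex action, and the server name and schema here are hypothetical.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-tools", version: "1.0.0" });

// Expose the same findOutliers logic the web app used,
// as a tool Claude Code can call on its own.
server.tool(
  "find_youtube_outliers",
  { query: z.string() },
  async ({ query }) => {
    const results = await findOutliers(query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
);

await server.connect(new StdioServerTransport());
```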
Now when I'm brainstorming with Claude I just type "find outliers for X and suggest 5 angles." Claude calls the tool itself, reads the titles, proposes hooks, often chains straight into get_youtube_transcript on the top result to sample the hook pattern, then into search_articles against my own archive to check if I've already covered the angle.
No browser. No TSV. No paste. No context switch.
One caveat. I went MCP here because I'm the only user and Claude Code is the only caller. For a one-user-one-machine tool, MCP is fine. If you plan to call your outlier finder from cron jobs, another script, a Discord bot, anywhere else, a plain CLI is usually the better shape. I went deeper on the tradeoffs between CLI and MCP integration elsewhere.
The real shift here isn't MCP vs CLI. It's that your tool stops being an app you open and becomes a capability the model reaches for. Same code. Different gravity. 🛠️
Why Every "1of10 Alternative" Article Is an Ad
I googled "1of10 alternative" before building mine. Every article on page 1 was an affiliate piece. Same template every time:
1of10 is great BUT it's expensive. Here's [affiliate link], only $9/month, much better value.
These aren't alternatives. They're cheaper SaaS on the same business model. You pay every month, the data stays on their servers, and you wait for a product team to ship the filter you need. Which they won't, because their roadmap is driven by what gets them on Product Hunt, not by your brainstorming loop.
The real alternative is "I wrote the thing and I own it." Nobody writes that article because there is no affiliate link in it.
Same move I made when Anthropic killed my $200/month scraping setup and I rebuilt it for $15. Same principle every time: the SaaS is almost always a UI on top of something free. If you can describe what you want to a model, you're paying for the skin.
Which, if you can't code, is fair. If you can code but prefer paying to prompting, also fair. But if you enjoy the build, know the paywall is optional. And getting more optional every month.
Six months from now there will be fifteen new "1of10 alternative, only $9/month!" on Product Hunt. All built on the same free public API. All with a dashboard, a dark mode toggle, an onboarding call nobody wants to do.
And then there are the ones who open their editor. Who use AI to build tools the exact shape of their workflow. Like Japanese artisans, making their own chisels for the wood they carve.
No churn. No AI pivot. Just a for loop.
Sources
(*) The cover is AI-generated. I tried prompting "a viral thumbnail outlier", just to see. The model returned a conference poster.