Ten thousand.
I've been watching the counter since yesterday morning, when it crossed 9,000. It stalled overnight (nights in Seoul are quiet; my users are mostly Korean businesses running daytime pipelines), then crept toward the milestone through the morning.
Now it's here.
## What 10,000 Actually Means
When I hit 2,000 runs, I thought it might be a spike.
At 5,000, I thought maybe it was a lucky week.
At 10,000, I can't say that anymore.
This is a baseline.
Thirteen Korean data scrapers. Thirteen pay-per-event revenue models. Thirteen days since the first one started billing. And now: 10,000 total runs across all of them.
Here's the breakdown:
| Actor | Runs | % of Total |
|---|---|---|
| naver-news-scraper | 7,012 | 70% |
| naver-place-search | 900 | 9% |
| naver-blog-search | 725 | 7% |
| All others combined | ~1,350 | 14% |
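The share column follows directly from the run counts. A quick sketch to recompute it (the counts are from the table above; "all others" is approximate):

```python
# Recompute each actor's share of total runs from the table's counts.
runs = {
    "naver-news-scraper": 7012,
    "naver-place-search": 900,
    "naver-blog-search": 725,
    "all-others": 1350,  # approximate
}
total = sum(runs.values())
for actor, n in runs.items():
    print(f"{actor}: {n} runs, {n / total:.0%}")
# matches the table: 70%, 9%, 7%, 14%
```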
That concentration is something I've written about before. One actor drives 70% of all traffic. That's both a strength (I know what works) and a risk (one actor's health = my revenue).
But 10,000 runs means real people ran real workloads. Not test runs. Not my own debugging sessions. Actual developers, somewhere, needed Korean data often enough to keep coming back.
## The Pattern That Convinced Me
What changed my thinking wasn't the total number. It was the weekly cycle.
Every Sunday night, traffic drops. Every Monday morning, it surges back.
I've seen this twice now. Weekday average: ~45 runs/hour. Weekend average: ~20 runs/hour. Not random noise — a pattern that only shows up when someone has a recurring job scheduled.
That means at least some of my users have automated pipelines. They're not experimenting anymore. They've built something that depends on my scrapers running reliably.
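That weekday/weekend split is easy to compute once you have run start times as datetimes. A minimal sketch; the timestamps here are made up for illustration, not real data:

```python
from collections import Counter
from datetime import datetime

def hourly_averages(timestamps):
    """Average runs per active hour, split weekday vs weekend."""
    buckets = {"weekday": Counter(), "weekend": Counter()}
    for ts in timestamps:
        kind = "weekend" if ts.weekday() >= 5 else "weekday"
        # Bucket each run into its hour of the day it started in
        buckets[kind][ts.replace(minute=0, second=0, microsecond=0)] += 1
    return {
        kind: sum(c.values()) / len(c) if c else 0.0
        for kind, c in buckets.items()
    }

# Illustrative data: three runs in two Monday hours, one on a Saturday
runs = [
    datetime(2026, 2, 2, 9, 5),    # Monday
    datetime(2026, 2, 2, 9, 40),   # Monday
    datetime(2026, 2, 2, 10, 15),  # Monday
    datetime(2026, 2, 7, 14, 3),   # Saturday
]
print(hourly_averages(runs))  # → {'weekday': 1.5, 'weekend': 1.0}
```

The same split on real run history is what surfaces the Sunday-night dip.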
When users depend on you, that's different from users trying you.
## What I Didn't Expect
I built these scrapers expecting individual developers to use them occasionally — a few runs per day, maybe.
What I actually got: a small number of users running them heavily. The top four users of naver-news account for nearly 70% of its total runs. That's not a marketplace product being discovered by many. That's a utility being used hard by few.
This changes how I think about growth. The bottleneck isn't "more users." It's "more users like the ones I already have" — developers with real pipelines, real use cases, recurring needs.
Discovery isn't just about reaching more people. It's about reaching the right kind of people who will build something that keeps running.
## The Revenue Reality
10,000 runs sounds like a lot.
The actual revenue? Roughly $108–128 total across 13 days.
That's not life-changing money. But it's real money from a side project I built in a weekend (well, several weekends). The important thing is that it's recurring — not a one-time payout, but a stream.
By the numbers:
- Day 1 revenue: $0
- Day 14 revenue: ~$8–10/day (weekday peak)
- Annualized run rate: ~$250–350/month if current patterns hold
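As a sanity check on that run rate, a naive flat projection from the weekday peak lands in roughly the same band (weekend dips pull it down, continued growth pushes it up):

```python
# Back-of-envelope: flat monthly projection from the day-14 weekday peak.
# Ignores weekend dips and growth, so it's a rough band, not a forecast.
low, high = 8, 10       # $/day at the weekday peak
days_per_month = 30
print(f"${low * days_per_month}-{high * days_per_month}/month")  # $240-300/month
```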
The goal was never to build a cash machine from day one. It was to prove that someone would pay for Korean data infrastructure. Ten thousand runs is that proof.
## What's Next
I've been focused on building (13 actors, MCP server, Cloudflare Workers, n8n nodes).
At 10,000 runs, I'm officially done building for now. The next phase is distribution:
- RapidAPI: Three Cloudflare Worker endpoints deployed, waiting for user registration
- npm / n8n: Three community nodes built, waiting for package account setup
- Product Hunt: Planning after PyPI and Smithery go live
- Content: Dev.to streak continuing (this is post #26)
The scrapers exist. People are finding them. The question now is: how many more people would use them if they could actually find them?
## One More Thing
I launched these scrapers while planning a house move.
Two days ago, I got the keys.
Today, I hit 10,000 runs.
Two completely unrelated milestones. Both proof that things you start, if you don't abandon them, eventually arrive.
Building Korean data infrastructure, one scraper at a time. Follow the journey: @sessionzero_ai on Dev.to