I've been watching my 13 Korean scrapers since March 13th. Most of the time, the numbers blur together — total runs, total users, growth percentages.
But this week I noticed something that changed how I think about what I'm actually building.
Two of my most-used actors couldn't be more different.
The Numbers Side by Side
naver-news-scraper: 8,483 total runs. 6 users.
naver-place-search: 1,113 total runs. 22 users.
On the surface, naver-news looks like the winner. 7.6x more runs. More API calls, more usage, more revenue per month.
But look at the user column again.
naver-news: 6 users. 8,483 runs. That's ~1,414 runs per user.
naver-place-search: 22 users. 1,113 runs. That's ~51 runs per user.
These aren't just different numbers. They're different businesses.
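The per-user math is just division, but it's worth making explicit. A quick sketch using the exact numbers from the dashboard above:

```python
# Runs-per-user depth for the two actors (numbers from the post).
actors = {
    "naver-news-scraper": {"runs": 8483, "users": 6},
    "naver-place-search": {"runs": 1113, "users": 22},
}

for name, m in actors.items():
    depth = m["runs"] / m["users"]
    print(f"{name}: ~{depth:.0f} runs per user")
# naver-news-scraper: ~1414 runs per user
# naver-place-search: ~51 runs per user
```

A ~28x gap in per-user depth is the whole story of this post.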
The News Actor: One Heavy User Running the Show
If I lose my top naver-news user, I probably lose 60-70% of my naver-news revenue overnight. Maybe more.
I don't know who they are. I've never seen their face or their use case. But I can read the pattern: large-batch runs, consistent timing, hundreds of calls per session. This is a data pipeline, not someone experimenting.
They've built something that depends on this actor. They'll be back tomorrow.
That's the fragile beauty of a concentrated-use actor. High revenue per user. High dependency risk.
The Place Search Actor: Distributed and Resilient
naver-place-search has 22 users spread across different industries, time zones, and use cases. Some are researchers. Some are businesses monitoring competitors. Some are developers testing integrations.
If I lose any single one, I lose 4-5% of volume. The actor keeps running.
But here's the other side: no single user needs me enough to care if I disappear. The barrier to switching is low. The attachment is shallow.
Wide. Resilient. But harder to retain.
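The dashboard only shows me totals, not per-user breakdowns, so here's a hypothetical split (invented numbers that sum to the real run totals) just to illustrate how differently a single-user loss hits each shape:

```python
# HYPOTHETICAL per-user run counts -- invented for illustration,
# chosen only so each list sums to the actor's real total.
concentrated = [6000, 1500, 500, 300, 150, 33]   # 6 users, sums to 8483
distributed  = [51] * 21 + [42]                  # 22 users, sums to 1113

def top_user_share(runs):
    """Fraction of total volume carried by the single heaviest user."""
    return max(runs) / sum(runs)

print(f"concentrated: {top_user_share(concentrated):.0%} of volume in one user")
print(f"distributed:  {top_user_share(distributed):.0%} of volume in one user")
# concentrated: 71% of volume in one user
# distributed:  5% of volume in one user
```

Under this (made-up) split, losing the top concentrated user erases roughly 71% of that actor's volume, while losing any distributed user costs about 5%, which is the resilience trade-off in one number.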
What This Means for an API Business
Most API businesses fear the same thing: "what if no one uses this?"
The actual danger splits into two opposite failure modes:
Failure Mode 1: Too shallow. Lots of casual users, but nobody depending on you seriously. Volume never builds. Revenue stays small. One bad month and half your users churn.
Failure Mode 2: Too concentrated. One or two heavy users carrying everything. They leave — maybe because they built their own solution, maybe because a competitor undercut you — and the business collapses.
The ideal is somewhere between naver-news and naver-place-search: enough heavy users to drive meaningful volume, enough distributed users to survive churn.
I don't have that balance yet. But now I know what to aim for.
The Pattern in the Data
Looking at all 13 actors together:
- 4 actors follow the "concentrated" pattern (news, blog-reviews, place-reviews, blog-search in its early days)
- 9 actors follow the "distributed" pattern (place-search, kin, webtoon, musinsa, daangn, bunjang, melon, yes24, photos)
The concentrated actors generate most of the revenue. The distributed actors represent most of the resilience.
If I want to grow, I need more concentrated users — devs and businesses who integrate my actors into their pipelines.
If I want to survive, I need to keep building the distributed base — the long tail of users who find these tools useful enough to come back occasionally.
What I'm Watching This Week
Today is Monday. The naver-news runs will spike again — the weekly cycle is consistent now: roughly 45 runs per hour on weekdays vs 19 per hour on weekends.
What I want to watch: does naver-place-search have a similar Monday spike? Or does distributed usage mean the cycle is flatter?
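A back-of-envelope projection from those hourly rates, assuming (a real assumption — the rates almost certainly vary by hour) that they hold around the clock:

```python
# Rough weekly volume implied by the observed run rates.
weekday_rate = 45  # runs/hour, weekdays (from the run logs)
weekend_rate = 19  # runs/hour, weekends

weekday_runs = weekday_rate * 24 * 5   # 5400
weekend_runs = weekend_rate * 24 * 2   # 912
weekly_total = weekday_runs + weekend_runs

print(f"~{weekly_total} runs/week, {weekday_runs / weekly_total:.0%} of it on weekdays")
# ~6312 runs/week, 86% of it on weekdays
```

If naver-place-search shows a materially flatter weekday share than that ~86%, that's evidence the distributed base really does smooth out the cycle.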
Two actors. Two business models. Running in parallel, telling me different things about who actually needs Korean data infrastructure.
I'll report back.
This is post 32 in my Building Korean Data APIs on Apify series — a transparent log of building and monetizing Korean data scrapers on Apify Store.
If you're building something that needs Korean data — Naver, Melon, Daangn, Musinsa — the actors are live and pay-per-event. Check the Apify Store profile.