10 Users, 10,792 Runs: The Automation Pattern Hiding Inside My Quietest Korean Scraper
I have 13 Korean data scrapers on Apify. They've collectively crossed 14,000 runs from 122 users.
On the surface, naver-news-scraper looks unremarkable: 10 users, no complaints, running quietly in the background. It's not my most popular actor by user count. But last month it logged 10,792 runs.
That's 1,079 runs per user.
Compare that to naver-place-search — my most popular actor by users (27), which logged 840 runs in the same period. That's 31 runs per user.
Same Naver data underneath. Completely different usage patterns. Here's what that gap reveals.
The Two Archetypes Hidden in Your User Count
When you sell APIs, your instinct is to track user count. More users = more adoption = more revenue. But user count alone misses a critical distinction: how users run your actors.
Looking across my 13 actors, two clear archetypes emerge:
Automation users — they integrate once, then schedule forever.
- naver-news-scraper: 10 users, 10,792 runs/month → 1,079 runs/user
- naver-blog-search: 18 users, 678 runs/month → 38 runs/user
- naver-place-reviews: 18 users, 478 runs/month → 27 runs/user
Query users — they run when they need data, not on a schedule.
- naver-place-search: 27 users, 840 runs/month → 31 runs/user
- naver-kin-scraper: 6 users, 81 runs/month → 13 runs/user
- naver-webtoon-scraper: 6 users, 27 runs/month → 4 runs/user
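The runs-per-user numbers above are just monthly runs divided by monthly users. A quick sketch in Python, using the figures quoted above, makes the outlier jump out when you sort by it:

```python
# (monthly users, monthly runs) for each actor, as quoted above
actors = {
    "naver-news-scraper": (10, 10_792),
    "naver-blog-search": (18, 678),
    "naver-place-reviews": (18, 478),
    "naver-place-search": (27, 840),
    "naver-kin-scraper": (6, 81),
    "naver-webtoon-scraper": (6, 27),
}

runs_per_user = {name: runs / users for name, (users, runs) in actors.items()}

# Sort descending to surface the automation-heavy actors first
for name, rpu in sorted(runs_per_user.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:24s} {rpu:7.0f} runs/user")
```

Nothing fancy, but it's the calculation I wasn't doing: the dashboard shows users and runs side by side, never their ratio.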
The outlier is stark. A news scraper runs hourly or more — someone built an automated pipeline. A place search scraper runs when someone needs to find a restaurant.
What Made the Difference
News data has a shelf life measured in hours. You don't manually trigger a news scraper when you need to monitor a Korean brand — you set it to run every hour and forget about it.
Search data has a shelf life measured by the question. Someone scraping Naver Place searches for "barbecue near Hongdae" runs it once, gets their answer, and comes back next month for a different query.
This isn't a product decision I made. It emerged from the data type. I just built the scraper and watched what happened.
The lesson: the data's natural freshness cycle determines the user's automation pattern. News → hourly automation. Reviews → weekly batch. Search → on-demand query.
Why This Changes How I Think About Revenue
An automation user is worth more than their user count implies. 10 automation users generating nearly 11,000 monthly runs contribute far more per user than 27 query users generating 840 runs, and their usage is far more predictable.
But query users are easier to acquire. They discover your actor, try a search, see results. No infrastructure to set up. The barrier is one run, not a scheduled pipeline.
So the acquisition funnel actually works backwards from what you'd expect:
- Query users discover you (low friction, high volume)
- Some of them have recurring needs and automate
- The automated users become your baseline revenue
My naver-place-search actor with 27 users is probably my best acquisition channel. My naver-news-scraper with 10 users is probably my most reliable revenue source.
Designing for Both
Once I saw this pattern, I started thinking about actor design differently.
For automation-oriented data (news, scheduled monitoring):
- Make input schemas support recurring queries (saved searches, keyword lists)
- Return structured output that feeds directly into monitoring pipelines
- Document cron-schedule examples in the README
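To make the cron-schedule point concrete, here's the kind of snippet I'd put in a README: a minimal sketch that builds the POST request for Apify's documented "run actor" endpoint, to be fired hourly from a scheduler. The helper name, actor ID, token, and input fields are all placeholders of mine, not a real actor's schema; Apify's built-in Schedules feature covers the same need without any of this code.

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"

def build_run_request(actor_id, token, run_input):
    """Build the POST that starts one run via Apify's "run actor" endpoint."""
    url = f"{API_BASE}/acts/{actor_id}/runs?token={token}"
    body = json.dumps(run_input, ensure_ascii=False).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

# Illustrative saved search; these field names are NOT a real actor's schema.
req = build_run_request(
    "username~naver-news-scraper",
    "APIFY_TOKEN",
    {"keywords": ["브랜드명"], "maxItems": 100},
)
# A crontab line like `0 * * * * python3 scrape_news.py` plus
# `urllib.request.urlopen(req)` turns this into an hourly pipeline.
print(req.full_url)
```

The point of the example isn't the HTTP call; it's that the saved input (`keywords` as a list, not a single query) is what makes the run replayable on a schedule.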
For query-oriented data (place search, product lookup):
- Make the first run as fast as possible — reduce time-to-value
- Return enough data that a single run is useful without needing a follow-up
- Document one-liner CLI examples prominently
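For the one-liner, the README snippet I have in mind looks like this, using Apify's synchronous run-and-get-items endpoint so a single command returns data. The actor path, token, and the `query` field are placeholders, not the actor's actual input schema:

```
curl -s -X POST \
  "https://api.apify.com/v2/acts/USERNAME~naver-place-search/run-sync-get-dataset-items?token=APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "홍대 바베큐"}'
```

One command, one answer: exactly the shape of a query user's first run.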
I haven't fully implemented either of these yet. But the data told me where to invest.
The Metric That Matters
User count is how you measure discovery. Runs per user is how you measure stickiness.
If your runs per user is low across the board, you have a discovery channel but not a retention mechanism. If it's high for some actors and low for others, you have two different businesses inside one portfolio.
I had 10 users generating 10,000 runs right in front of me. I just wasn't measuring it.
I build Korean data scrapers on Apify — Naver, Daangn, Bunjang, Musinsa and more. All actors are in the Apify Store.