On Day 11, I finally got the data I was missing.
I had been measuring my Korean scraper traffic in snapshots — always during business hours, always getting the flattering daytime numbers. Then at midnight Seoul time I ran a measurement, and the number that came back was 2.8 runs per hour.
Daytime: 45.6/h.
Night: 2.8/h.
Roughly a 16:1 ratio. That changes everything about how I model monthly revenue.
## The Traffic Profile Across 24 Hours
After 11 days of running 13 Korean data scrapers on Apify, here is what the actual usage pattern looks like:
| Time Window | Traffic | Who Is Using It |
|---|---|---|
| Weekday daytime (9am–11pm KST) | 40–55/h | Enterprise automation, SMBs |
| Weekday overnight (11pm–9am KST) | 2–3/h | Scheduled pipelines only |
| Weekend daytime | 15–20/h | SMBs, freelancers |
| Weekend overnight | ~2/h | Automated only |
One actor — naver-blog-search — was the only one showing meaningful overnight activity (+15 in 6 hours). It is embedded in an automated pipeline that runs regardless of time. The other 12 actors? Near-zero overnight.
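To split runs into daytime and overnight buckets, I classify each run's start time against the KST overnight window. Here is a minimal sketch of that bucketing; the function name and the sample timestamps are illustrative, not my actual run logs:

```python
from collections import Counter
from datetime import datetime, timezone, timedelta

KST = timezone(timedelta(hours=9))  # Korea Standard Time (UTC+9, no DST)

def overnight_share(run_times_utc):
    """Fraction of runs starting in the 11pm-9am KST overnight window."""
    buckets = Counter()
    for ts in run_times_utc:
        hour = ts.astimezone(KST).hour
        buckets["overnight" if hour >= 23 or hour < 9 else "daytime"] += 1
    total = sum(buckets.values())
    return buckets["overnight"] / total if total else 0.0

# Hypothetical run starts: 0-3 UTC land in KST daytime (9am-noon),
# while 16-17 UTC land in the overnight window (1-2am KST next day).
runs = [datetime(2025, 1, 15, h, tzinfo=timezone.utc) for h in (0, 1, 2, 3, 16, 17)]
print(f"{overnight_share(runs):.0%}")  # 33%
```

Anything above a few percent overnight share is worth a closer look, since it usually means a scheduled pipeline rather than a human.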
## Why This Is a Korean Market Signal
This is not just "users sleep at night." It is a structural signal:
The primary users are Korean businesses running daytime workflows.
They pull Naver data during business hours because that is when their teams process it. They are not running global 24/7 pipelines — they are running scheduled jobs that fit Korean working hours.
Compare this to US-based API users targeting US data: you would see more distributed global traffic because teams work across time zones. Korean data → Korean users → Korean business hours. The signal is clean.
## Revising the Monthly Revenue Model
My earlier estimates were extrapolated from daytime rates. Corrected:
| Period | Rate | Hours/Day | Daily Runs |
|---|---|---|---|
| Weekday active (14h) | 40/h | 14 | 560 |
| Weekday overnight (10h) | 3/h | 10 | 30 |
| **Weekday total** | | | **~590** |
| Weekend active (14h) | 15/h | 14 | 210 |
| Weekend overnight (10h) | 2/h | 10 | 20 |
| **Weekend total** | | | **~230** |
Monthly estimate: ~15,000 runs — more conservative than my earlier 17K–25K projection.
At current PPE rates, that is roughly $150–200/month at current user count. If users keep growing (they have been, consistently), the ceiling is higher.
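The table and the revenue estimate above reduce to a few lines of arithmetic. In this sketch the per-run price band is backed out from my own $150–200 figure, not taken from Apify's published pricing:

```python
# Back-of-envelope run model from the observed hourly rates.
WEEKDAY_ACTIVE_RATE = 40   # runs/hour, 9am-11pm KST
WEEKDAY_NIGHT_RATE = 3
WEEKEND_ACTIVE_RATE = 15
WEEKEND_NIGHT_RATE = 2
ACTIVE_HOURS, NIGHT_HOURS = 14, 10

weekday_runs = WEEKDAY_ACTIVE_RATE * ACTIVE_HOURS + WEEKDAY_NIGHT_RATE * NIGHT_HOURS
weekend_runs = WEEKEND_ACTIVE_RATE * ACTIVE_HOURS + WEEKEND_NIGHT_RATE * NIGHT_HOURS

# ~22 weekdays and ~8 weekend days in a month
monthly_runs = weekday_runs * 22 + weekend_runs * 8

# Implied effective revenue per run at the $150-200/month band
low, high = 150 / monthly_runs, 200 / monthly_runs

print(weekday_runs)    # 590
print(weekend_runs)    # 230
print(monthly_runs)    # 14820
print(f"${low:.4f}-${high:.4f} per run")
```

That implies an effective rate of about a cent per run, which is the number to watch as the mix of events per run shifts.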
## The Outlier: naver-news-scraper
One actor does not follow this pattern at all.
At midnight on Day 11, naver-news-scraper was sitting at exactly 4,998 total runs, two shy of 5,000, and had completely stopped.
Not slowed down. Stopped.
That is because it is operated by a small number of users running large scheduled batches. When the batch is done, it is done. No trickle traffic. This creates a fundamentally different revenue profile: spiky, predictable, and high-value per user.
Those users are worth more per head than a dozen casual users. Overnight silence from this actor is not a warning — it is evidence of intentional, scheduled usage.
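A toy heuristic for spotting this batch profile in hourly run counts; the threshold and the sample series are made up for illustration:

```python
def looks_like_batch_user(hourly_counts, spike_threshold=50):
    """Heuristic: a burst of runs followed by total silence suggests
    a scheduled batch job, not organic trickle traffic."""
    peak = max(hourly_counts)
    tail = hourly_counts[-3:]  # the most recent three hours
    return peak >= spike_threshold and sum(tail) == 0

# A spike that ends in silence reads as a batch user...
print(looks_like_batch_user([0, 120, 300, 80, 0, 0, 0]))   # True
# ...while steady moderate traffic reads as organic usage.
print(looks_like_batch_user([40, 45, 42, 44, 41, 43, 40])) # False
```

In practice you would tune the threshold per actor, but the shape of the signal is the same: spike, then flatline.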
## What This Means for Building Korean Data APIs
If you are building scrapers targeting Korean platforms, design for Korean business hours:
- Optimize reliability for 9am–6pm KST — that is where 80%+ of revenue comes from
- Watch for batch users — they appear as sudden spikes followed by complete silence
- Overnight persistence = sticky revenue signal — anything running at 2am KST is embedded in a real production pipeline
- Test during Korean hours — running performance tests in the middle of a Korean night gives you misleadingly low baseline numbers
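For the "test during Korean hours" point, a small standard-library helper (hypothetical, not from any scraper SDK) can tell you whether a UTC timestamp falls inside the 9am–6pm KST weekday window:

```python
from datetime import datetime, timezone, timedelta

KST = timezone(timedelta(hours=9))  # Korea Standard Time, no DST

def is_korean_business_hours(ts_utc: datetime) -> bool:
    """True if a UTC timestamp falls in the 9am-6pm KST weekday window."""
    local = ts_utc.astimezone(KST)
    return local.weekday() < 5 and 9 <= local.hour < 18

# 03:00 UTC on a Wednesday is noon KST -> business hours
print(is_korean_business_hours(datetime(2025, 1, 15, 3, 0, tzinfo=timezone.utc)))   # True
# 17:00 UTC is 2am KST the next day -> overnight
print(is_korean_business_hours(datetime(2025, 1, 15, 17, 0, tzinfo=timezone.utc)))  # False
```

Gate your load tests and baseline measurements on this check and you avoid the 2.8/h trap I fell into.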
## What Is Coming Next
Tomorrow, the 13th and final actor — musinsa-ranking-scraper — completes its Pay-Per-Event setup. That means every single one of my 13 Korean scrapers will be generating revenue on every run.
And the total run count is sitting at ~7,633 as of this writing. The 10,000 milestone is about 2,400 runs away.
At the current pace of roughly 500–600 runs per day, that is about four days out.
When it hits, it will hit during Korean business hours.
This is part of an ongoing series documenting the build-and-revenue journey of 13 Korean data scrapers on Apify. New posts every few days with real data.