4,000 Runs in 7 Days: The Overnight Pattern That Proved Enterprise Demand for Korean Data

Seven days ago, I flipped the monetization switch on my portfolio of Korean web scrapers. The first day brought 19 runs. Today, we crossed 4,000.

But the number itself isn't the story. The story is in a 12-hour gap where nothing happened — and what that silence revealed.

The Overnight Test

Between 6 PM and 6 AM KST on the night of March 18-19, my 13 actors generated exactly 6 runs. Total.

Here's what the per-actor delta looked like during that 12-hour window:

| Actor | Runs (18:00) | Runs (06:00) | Δ |
| --- | --- | --- | --- |
| naver-news-scraper | 1,521 | 1,521 | 0 |
| naver-blog-search | 391 | 394 | +3 |
| naver-place-search | 609 | 610 | +1 |
| naver-blog-reviews | 591 | 592 | +1 |
| naver-place-photos | 22 | 23 | +1 |
| All others | — | — | 0 |

Zero. The actor that drives 47% of all my runs produced exactly zero runs between 6 PM and 6 AM.
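
The Δ column is just a diff between two run-count snapshots. If you want to track your own actors the same way, a minimal sketch is enough; the dicts below carry the figures from the table, pulled by hand:

```python
# Run-count snapshots at 18:00 and 06:00 KST; figures from the table above.
runs_1800 = {"naver-news-scraper": 1521, "naver-blog-search": 391,
             "naver-place-search": 609, "naver-blog-reviews": 591,
             "naver-place-photos": 22}
runs_0600 = {"naver-news-scraper": 1521, "naver-blog-search": 394,
             "naver-place-search": 610, "naver-blog-reviews": 592,
             "naver-place-photos": 23}

deltas = {name: runs_0600[name] - count for name, count in runs_1800.items()}
print(deltas)                # per-actor overnight delta
print(sum(deltas.values()))  # 6 runs total across the portfolio
```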

The day before, I'd identified a corporate automation pattern: naver-news-scraper fires at 53 runs/hour during Korean business hours (10-18 KST) and goes silent overnight. The overnight data didn't just confirm this pattern — it confirmed the user type. This is a company with employees, office hours, and a daily news monitoring workflow.
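
For anyone who wants to run the same check on their own run logs, here's a minimal sketch of the classification. The timestamps below are hypothetical stand-ins (any per-run log with UTC timestamps works), and the 95% threshold is my own choice, not a magic number:

```python
from datetime import datetime, timezone, timedelta

KST = timezone(timedelta(hours=9))  # Korean Standard Time, UTC+9

def bucket_runs_by_kst_hour(run_timestamps_utc):
    """Count runs per hour of day in KST from a list of UTC datetimes."""
    counts = [0] * 24
    for ts in run_timestamps_utc:
        counts[ts.astimezone(KST).hour] += 1
    return counts

def looks_like_corporate_pipeline(counts, threshold=0.95):
    """Flag an actor whose runs fall almost entirely in 10:00-18:00 KST."""
    total = sum(counts)
    business = sum(counts[10:18])  # hours 10..17 KST
    return total > 0 and business / total >= threshold

# Hypothetical example: three runs, all during Korean business hours
runs = [
    datetime(2026, 3, 18, 1, 5, tzinfo=timezone.utc),   # 10:05 KST
    datetime(2026, 3, 18, 4, 30, tzinfo=timezone.utc),  # 13:30 KST
    datetime(2026, 3, 18, 8, 45, tzinfo=timezone.utc),  # 17:45 KST
]
print(looks_like_corporate_pipeline(bucket_runs_by_kst_hour(runs)))  # True
```

Bucketing by KST hour rather than UTC matters here: 10-18 KST is 01:00-09:00 UTC, which looks like overnight usage if you forget the conversion.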

The Week in One Chart

Here's the full 7-day growth from PPE (pay-per-event) activation to 4,000 runs:

| Day | Date | Δ Runs/24h | Cumulative | Key Event |
| --- | --- | --- | --- | --- |
| D+0 | Mar 13 | +19 | 1,525 | PPE activated (6 actors) |
| D+1 | Mar 14 | +206 | 1,731 | 5 more actors join PPE |
| D+2 | Mar 15 | +186 | 1,917 | Steady state |
| D+3 | Mar 16 | +269 | 2,186 | 2,000 milestone |
| D+4 | Mar 17 | ~889 🔥 | 3,075 | News scraper goes parabolic |
| D+5 | Mar 18 | ~576 | 3,651 | Record +9 new users |
| D+6 | Mar 19 | ~411* | 4,062 | 4,000 milestone |

*D+6 is a partial day (business hours only).
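
As a quick sanity check, the Cumulative column should equal the running sum of the daily deltas on top of the pre-PPE baseline (1,525 - 19 = 1,506 runs):

```python
# Daily run deltas from the table above; D+6 is a partial day.
deltas = [19, 206, 186, 269, 889, 576, 411]
baseline = 1_525 - 19  # cumulative runs before PPE activation on D+0

cumulative = []
total = baseline
for d in deltas:
    total += d
    cumulative.append(total)

print(cumulative)
# [1525, 1731, 1917, 2186, 3075, 3651, 4062] -- matches the table
```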

The trajectory tells a story: slow start → steady growth → explosion → sustained high volume. This is what product-market fit looks like in developer tools — not a viral moment, but compounding usage from users who integrate your tool into their workflow.

Three Segments Crystallized

After a week of data, the 13-scraper portfolio has organized itself into clear tiers:

Tier 1: Corporate Pipelines

naver-news-scraper — 1,893 runs, 4 users. One power user runs it like clockwork during business hours. This actor alone generates ~47% of all runs and an estimated ~60% of revenue. It's the backbone.

Tier 2: Growth Engines

  • naver-blog-search — 409 runs, 10 users. Went from 6 to 10 users in a single day (+67%). Korean brand monitoring is the killer use case.
  • naver-place-search — 615 runs, 14 users. The highest user count. Diverse use cases: restaurant research, competitor analysis, location intelligence.
  • naver-blog-reviews — 592 runs, 3 users. High volume from a few power users.
  • naver-place-reviews — 331 runs, 13 users. Second only to place-search in user count. The review data that companies need.

Tier 3: The Long Tail

Eight actors (webtoon, music charts, fashion, books, marketplaces) with 23-36 runs each. Low revenue, but they serve as content marketing — users discover these niche tools, explore the profile, and end up adopting the Tier 1/2 actors.
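
If you want to reproduce the tiering, a crude rule of thumb is enough: very high run volume reads as a pipeline, broad adoption or steady volume reads as a growth engine, and everything else is long tail. A sketch with thresholds I picked to match the numbers above (the webtoon entry's figures are illustrative, not from the post):

```python
def classify_actor(runs: int, users: int) -> str:
    """Rough portfolio tiering; thresholds are judgment calls, not doctrine."""
    if runs >= 1_000:
        return "Tier 1: corporate pipeline"  # huge volume from a few heavy users
    if runs >= 300 or users >= 10:
        return "Tier 2: growth engine"       # broad adoption or steady volume
    return "Tier 3: long tail"               # niche discovery / content marketing

portfolio = {
    "naver-news-scraper":  (1_893, 4),
    "naver-blog-search":   (409, 10),
    "naver-place-search":  (615, 14),
    "naver-blog-reviews":  (592, 3),
    "naver-place-reviews": (331, 13),
    "kr-webtoon-scraper":  (29, 2),  # illustrative long-tail entry
}
for name, (runs, users) in portfolio.items():
    print(f"{name}: {classify_actor(runs, users)}")
```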

Revenue: Approaching $50

| Metric | Value |
| --- | --- |
| Confirmed revenue (D+3) | $20.11 |
| Estimated through D+6 | $45-50 |
| Platform costs | ~$6-8 |
| Estimated margin | ~70% |
| First payout | April 11, 2026 |

The revenue concentration mirrors the usage concentration: naver-news-scraper's corporate pipeline is the primary revenue driver. This is both a strength (reliable, high-volume user) and a risk (single point of failure).

The naver-blog-search cost problem from the last post is fixed: v0.1.5 cut compute costs by 60%, flipping it from loss-making to profitable. A price adjustment has been submitted for April activation.
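
To make the loss-to-profit flip concrete, here's the unit math with hypothetical per-run figures; the actual event price and compute cost aren't disclosed in this post:

```python
# Hypothetical unit economics for naver-blog-search; real figures not disclosed.
price_per_run = 0.005        # what the user pays per run (assumed)
compute_cost_before = 0.008  # compute cost per run before v0.1.5 (assumed)
compute_cost_after = compute_cost_before * (1 - 0.60)  # 60% cost reduction

margin_before = price_per_run - compute_cost_before  # -0.0030 -> loss per run
margin_after = price_per_run - compute_cost_after    # +0.0018 -> profit per run

print(f"before: {margin_before:+.4f} $/run, after: {margin_after:+.4f} $/run")
```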

What 4,000 Runs Taught Me

  1. Enterprise users self-identify through usage patterns. You don't need surveys. Business-hour-only usage with consistent throughput is an unmistakable signal. Design your reliability standards around these users.

  2. The niche moat is real. After a week of growing usage, no competitor has appeared in the Korean web data space on Apify. The 13-actor portfolio covering Naver, Melon, Daangn, Bunjang, Musinsa, and YES24 is a moat through completeness, not through any single actor.

  3. PPE pricing is a filter, not a wall. Users who need the data keep using it after monetization. Users who were just curious drop off. The remaining users are higher quality — they integrate, automate, and generate sustained revenue.

  4. Overnight silence is a bullish signal. If your users only run during business hours, they're professionals with real workflows. That's more reliable than 24/7 hobby usage that could evaporate any day.

What's Next

  • 20 unique external users: Currently at ~16-18 estimated unique users. A marketing push (Reddit, GeekNews) should close the gap.
  • $50 cumulative revenue: On track by Day 8-9 at current pace.
  • Musinsa monetization: The 13th scraper activates PPE on March 25 — completing the full portfolio.
  • Korean Data MCP: The MCP server that lets AI assistants access all these scrapers directly. PR pending for awesome-mcp-servers listing.

The Honest Take

4,000 runs and ~$47 in a week, from code I wrote and deployed in two weeks. The margin is about 70%. Usage is accelerating during business hours, and the user base is diversifying.

The risk hasn't changed: heavy concentration in one corporate user for revenue, and in two actors (news + blog search) for growth. Mitigation is straightforward — more marketing, more users, more diversification. The product works; distribution is the next challenge.

This is post #16 in my series documenting the journey from zero to revenue with Korean web scrapers. Previous: A Record Growth Day Revealed Who's Actually Using My Korean Scrapers

The full collection of 13+ scrapers: Apify Store - Session Zero
