A Record Growth Day Revealed Who's Actually Using My Korean Scrapers

Day 6 of monetization. 3,651 runs. And the data just told me something I didn't expect.

The Numbers That Stopped Me

Yesterday brought the biggest single-day user growth since I launched 13 Korean web scrapers on Apify:

+9 new users in 24 hours.

To put that in context: it took my first two weeks to accumulate 15 unique users. Then in one day, 9 more showed up.

But the raw number isn't what's interesting. It's who they are and how they're using the scrapers.

The Blog Search Explosion

The biggest surprise was naver-blog-search. This actor went from 6 users to 10 overnight, a +67% jump in a single day:

Actor                Before   After   Change
naver-blog-search       6       10    +4 🔥
naver-place-search     11       13    +2
naver-kin-scraper       2        4    +2
naver-news-scraper      3        4    +1

Why blog search? I have a theory.

Korean companies rely heavily on Naver Blog for brand monitoring. Unlike Google, where SEO is king, in Korea blog posts are the primary discovery channel for consumers. If you're a Korean brand, you need to know what bloggers are saying about you, and naver-blog-search is the easiest way to extract that data programmatically.
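
For anyone who wants to pull those mentions themselves, here's a minimal sketch using the official apify-client Python package. The actor path and the query/maxItems input fields are placeholders rather than the actor's documented schema, so check the actor page for the real input:

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Placeholder actor path and input fields -- see the actor's README for the real schema.
run = client.actor("<username>/naver-blog-search").call(
    run_input={
        "query": "브랜드명",   # the brand name you want to monitor
        "maxItems": 100,
    }
)

# Each dataset item is one matching blog post (field names are assumptions).
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), item.get("link"))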

The 4-user jump in one day suggests word-of-mouth within a specific community (marketing teams? SEO agencies?) rather than organic discovery.

The Corporate Automation Pattern

But the most fascinating finding came from naver-news-scraper. Look at this usage pattern:

Time Period               | Runs in Period | Rate
--------------------------+----------------+----------
3/17 15:04 - 3/18 10:00   |              1 | ~0/hour
(19h overnight)           |                |
--------------------------+----------------+----------
3/18 10:00 - 3/18 18:00   |            421 | ~53/hour
(8h business hours)       |                |

Zero runs overnight. 53 runs per hour during Korean business hours (10 AM - 6 PM KST).

This isn't a developer testing things out. This is a corporate automation pipeline.

Someone, likely a PR monitoring firm or a newsroom, has integrated naver-news-scraper into their daily workflow. It fires up when the office opens, pulls news articles every ~68 seconds, and shuts down when people go home.
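
I can't see their code, of course, but a business-hours polling loop that produces exactly this pattern is easy to imagine. A rough sketch, with a placeholder actor path and input (Python 3.9+ for zoneinfo):

import time
from datetime import datetime
from zoneinfo import ZoneInfo

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")
KST = ZoneInfo("Asia/Seoul")

def office_hours() -> bool:
    """True between 10:00 and 18:00 KST."""
    return 10 <= datetime.now(KST).hour < 18

while True:
    if office_hours():
        # Placeholder actor path and input -- not the actor's real schema.
        client.actor("<username>/naver-news-scraper").call(
            run_input={"keyword": "회사명", "maxItems": 50}
        )
    time.sleep(68)  # ~53 runs/hour, matching the observed cadence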

This single user accounts for 1,521 of my 3,651 total runs (41.6%). And they've maintained a perfect 100% success rate across 1,510+ runs in the last 30 days.

What This Tells Me About the Market

Here's how my 13 scrapers break down by usage pattern as of Day 6:

# Current stats (March 19, 2026)
actors = {
    'naver-news-scraper':     {'runs': 1521, 'users': 4,  'pattern': 'corporate'},
    'naver-place-search':     {'runs': 609,  'users': 13, 'pattern': 'diverse'},
    'naver-blog-reviews':     {'runs': 591,  'users': 3,  'pattern': 'power_user'},
    'naver-blog-search':      {'runs': 391,  'users': 10, 'pattern': 'growing'},
    'naver-place-reviews':    {'runs': 325,  'users': 13, 'pattern': 'diverse'},
    'musinsa-ranking-scraper':{'runs': 34,   'users': 4,  'pattern': 'niche'},
    'naver-kin-scraper':      {'runs': 31,   'users': 4,  'pattern': 'niche'},
    'daangn-market-scraper':  {'runs': 29,   'users': 3,  'pattern': 'niche'},
    'naver-webtoon-scraper':  {'runs': 26,   'users': 4,  'pattern': 'niche'},
    'melon-chart-scraper':    {'runs': 24,   'users': 2,  'pattern': 'niche'},
    'bunjang-market-scraper': {'runs': 23,   'users': 3,  'pattern': 'niche'},
    'yes24-book-scraper':     {'runs': 23,   'users': 2,  'pattern': 'niche'},
    'naver-place-photos':     {'runs': 22,   'users': 2,  'pattern': 'niche'},
}

total_runs = sum(a['runs'] for a in actors.values())  # 3,651
total_users = sum(a['users'] for a in actors.values())  # 68

Three clear segments emerge (quantified in the sketch after this list):

  1. Corporate pipelines (news scraper): few users, massive run volume. These are your revenue backbone.
  2. Growing tools (blog search, place search): many users, moderate runs. This is where user acquisition happens.
  3. Long-tail niche (webtoon, music, books): few users, few runs, but they fill gaps no one else covers.
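
One quick way to see that split is runs per user, computed straight from the actors dict in the stats block above:

# Runs per user from the `actors` dict above -- high ratios mean pipelines, low ratios mean browsers.
for name, a in sorted(actors.items(), key=lambda kv: kv[1]['runs'] / kv[1]['users'], reverse=True):
    print(f"{name:25} {a['runs'] / a['users']:6.1f} runs/user  ({a['pattern']})")

naver-news-scraper lands near 380 runs per user, naver-blog-reviews near 200 (a few heavy users), the search and review actors roughly 25-50, and the long tail mostly under 15.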

Revenue Update

Confirmed revenue as of Day 3: $20.11

Estimated cumulative revenue through Day 6: $40-47 (based on run volumes and per-run pricing, pending console confirmation).
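
That range is nothing fancier than run volume times per-run price. A back-of-the-envelope sketch; the prices below are made up for illustration, and the real ones live in the Apify console:

# Illustrative per-run prices only -- not the actual pricing.
price_per_run = {
    'naver-news-scraper':   0.02,
    'naver-place-search':   0.01,
    'naver-blog-reviews':   0.01,
    'naver-blog-search':    0.01,
    'naver-place-reviews':  0.01,
    # ...one entry per remaining actor
}

estimate = sum(actors[name]['runs'] * price for name, price in price_per_run.items())
print(f"Rough cumulative estimate: ${estimate:.2f}")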

For 13 scrapers that took about 2 weeks to build and deploy, generating ~$40+ in the first week of monetization with zero marketing budget feels like validation.

The revenue split tells a story too:

  • naver-news-scraper alone likely accounts for ~60% of total revenue (highest per-run cost × most runs)
  • The "popular" actors (place search, blog search) contribute less per-user because they're lower-cost operations

What I'm Doing Differently Now

Based on these patterns, my priorities shifted:

1. Reliability > Features

That corporate news scraper user doesn't care about new features. They care about 100% uptime during business hours. Every failed run is a missed article in their monitoring pipeline. My job is to never break their workflow.
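
In practice that means catching a failed run before they notice it. A small sketch of polling recent run statuses with apify-client (the actor path is again a placeholder; wire the alert into whatever you actually use):

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Placeholder actor path -- whichever actor the pipeline depends on.
recent = client.actor("<username>/naver-news-scraper").runs().list(limit=20, desc=True)

failed = [run for run in recent.items if run["status"] == "FAILED"]
if failed:
    # Replace with a real alert (email, Slack webhook, ...).
    print(f"{len(failed)} of the last 20 runs failed -- check before 10:00 KST")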

2. Blog Search Needs Attention

The 4-user explosion in blog search suggests untapped demand. I need to:

  • Make the README more discoverable (SEO for "naver blog monitoring", "korean brand mention tracking")
  • Consider adding features these users might want (sentiment indicators, date filtering, batch queries); a possible input shape is sketched below
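
None of this exists yet; purely as a sketch, the run input could grow from a single keyword into something like the following, where every field is hypothetical:

# Hypothetical future input for naver-blog-search -- none of these fields exist today.
run_input = {
    "queries": ["브랜드명", "제품명"],    # batch queries instead of a single keyword
    "postedAfter": "2026-03-01",          # date filtering
    "postedBefore": "2026-03-19",
    "includeSentimentHints": True,        # crude positive/negative signal per post
    "maxItemsPerQuery": 200,
}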

3. The Long Tail Is Marketing

Actors like melon-chart-scraper (K-pop data) and yes24-book-scraper (Korean book market) get few runs but attract curious users. They're content marketing disguised as products: people discover my Apify profile through them and end up using the business-oriented scrapers.

Looking Ahead

At the current growth rate (~500 runs/day), I'll hit 4,000 total runs today or tomorrow. The 13th scraper (Musinsa fashion rankings) activates for monetization on March 25.

But the real milestone isn't a round number. It's that moment when you see your tool integrated into someone's daily business workflow, running like clockwork, 53 times per hour, 8 hours a day.

That's when you know you've built something people actually need.


This is post #15 in my series documenting the journey of building and monetizing Korean web scrapers on Apify. Previous post: 3,000 Runs and First Revenue

The full collection of 13 scrapers: Apify Store - Session Zero

GitHub: leadbrain / korean-data-mcp

🇰🇷 MCP server for Korean web data: Naver, Melon, Daangn, Bunjang, Musinsa via Apify

🇰🇷 Korean Data MCP

Real-time Korean web data for AI assistants, powered by Apify actors.


A Model Context Protocol (MCP) server that gives Claude, Cursor, and other AI tools direct access to live Korean web data, including Naver reviews, Melon music charts, Daangn/Bunjang marketplace listings, Korean news, and Musinsa fashion rankings.


🛠 Available Tools

Tool                     Description
get_naver_place_reviews  Fetch reviews for any Naver Place (restaurant, cafe, shop, etc.)
get_melon_chart          Real-time / daily / weekly Korean music chart (실시간 차트)
search_daangn            Search Daangn Market (당근마켓) C2C listings
search_bunjang           Search Bunjang (번개장터) marketplace
search_naver_news        Search Naver News articles by keyword
search_naver_places      Search Naver Map places by keyword + location
get_musinsa_ranking      Musinsa fashion ranking by category

🚀 Quick Start

1. Get an Apify API Token

Sign up at apify.com (free tier: $5/month credit included).
Copy your token from console.apify.com/account/integrations.

2. Install

pip install korean-data-mcp

Or with uv (recommended):

uv add korean-data-mcp
…
