Session zero
I Built 13 Korean Data Scrapers. Here's What I Actually Made in Month 1.

I set myself a rule when I started: no estimated revenue. Only real numbers from the dashboard.

Here's what Month 1 actually looked like.

The Setup

I built 13 scrapers for Korean websites — Naver (search, news, blog, reviews, KiN), Melon Chart, Daangn, Bunjang, YES24, Musinsa, and more. All deployed on Apify Store with pay-per-event pricing: $0.50 per 1,000 items scraped.
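Pay-per-event pricing means revenue scales linearly with items scraped. A minimal sketch of the math, using only the $0.50 per 1,000 items rate from the post (the function name and the ~70% developer share after Apify's platform fee are illustrative):

```python
def gross_revenue(items_scraped: int, rate_per_1000: float = 0.50) -> float:
    """Gross revenue for a pay-per-event actor billing per 1,000 items."""
    return items_scraped / 1000 * rate_per_1000

# A user scraping 10,000 news items generates $5.00 gross;
# roughly 70% of that reaches the developer after the platform fee.
print(gross_revenue(10_000))  # 5.0
```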

Monetization went live on March 13 (first batch). The last scraper flipped to paid on March 25.

This post covers the full 18 days from first revenue to March 31 — what happened, what didn't, and the one number I didn't expect.


The Numbers

Total API runs: 12,675
Total users who ran at least one actor: 91
Gross revenue earned in March: $64.80
Net payout (after Apify's 30% platform fee): $47

That's it. No range. No estimate. The dashboard numbers.


How the Runs Were Distributed

One actor dominated everything.

| Actor | Runs | % of Total |
| --- | ---: | ---: |
| naver-news-scraper | 9,207 | 72.6% |
| naver-place-search | 1,186 | 9.4% |
| naver-blog-search | 733 | 5.8% |
| naver-blog-reviews | 604 | 4.8% |
| naver-place-reviews | 553 | 4.4% |
| All others combined | 392 | 3.1% |

The news scraper alone accounted for 72.6% of total run volume. I didn't promote it any differently. It just gets used more, probably because "Korean news" has a clearer use case than "Korean webtoon rankings."

The users tell a different story.

Who Actually Used It

  • naver-place-search: 22 users (most users of any actor)
  • naver-blog-search: 14 users
  • naver-place-reviews: 13 users

Meanwhile, naver-news-scraper — the volume champion — had only 6 users.

So a small number of heavy users drove most of the runs. Someone set up an automated pipeline with naver-news that runs continuously. I've seen the same IP pattern across days. They'll never email me. The scraper just works.
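A continuous pipeline like that needs nothing fancier than cron plus Apify's standard run endpoint (`POST /v2/acts/{actorId}/runs`). A hypothetical crontab entry sketching it; the `username~` prefix, token variable, and input fields are placeholders, not anything I can see from my side of the dashboard:

```shell
# Run the news scraper every day at 07:00; the input JSON goes in the POST body.
# APIFY_TOKEN and the actor owner name are placeholders.
0 7 * * * curl -s -X POST \
  "https://api.apify.com/v2/acts/username~naver-news-scraper/runs?token=$APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "삼성전자", "maxItems": 100}'
```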


What Actually Drove Traffic

Not Reddit. Not Twitter. Not my 35 Dev.to posts.

The search bar inside Apify Store.

I confirmed this when I updated 12 actors' SEO descriptions on March 6 — targeting keywords like "naver map scraper," "korean news API," and "kpop chart data." The traffic increase was measurable within a week.

The posts and tweets help. But the user who runs your scraper 500 times in a week found you through search.


What Didn't Work (Yet)

Reddit: 5 posts, all filtered or stuck pending approval. Karma: 1. The platform trust wall is real: in 30 days of building, not a single post has made it past Reddit's filters.

MCP integrations: Built them. Nobody used them yet. Too early, probably.

n8n nodes + RapidAPI proxy: Both ready, sitting at zero because they need manual deployment steps I couldn't automate. Still pending.


The Unexpected Part

91 users found scrapers for a niche market they probably couldn't have covered any other way.

I didn't know who they were. They didn't know who I was. Someone in the Pacific time zone set a batch job that runs every morning. Someone in Southeast Asia ran the place scraper 200 times in a week. Neither of them left a comment.

$64.80 gross. $47 net. 18 days. That's the real number.


What's Next

  • Month 2 goal: Double the net payout. $94+.
  • Reddit: Try again. Account hits 30 days on April 1.
  • RapidAPI + PyPI: Get both live without needing manual steps.
  • Show HN: When the numbers justify it.

The scraper portfolio is done. The distribution problem isn't.


Tracking this publicly. Follow for Month 2.
