Yesterday I wrote about hitting 2,000 runs in 3 days. Thirty hours later, we blew past 3,000. Here's the full Week 1 breakdown, including the first confirmed revenue, an actor that went from profitable to losing money, and the code fix that saved it.
The 3,000 Milestone
3,075 total runs as of March 17, 2026. That's +889 in 30 hours (normalized to ~711/day), a 2.6x acceleration over the previous best day.
For context, here's the daily trajectory since PPE activation:
| Day | Δ Runs/24h | Cumulative |
|---|---|---|
| D+0 (Mar 13) | +19 | 1,525 |
| D+1 (Mar 14) | +206 | 1,731 |
| D+2 (Mar 15) | +186 | 1,917 |
| D+3 (Mar 16) | +269 | 2,186 |
| D+4 (Mar 17) | ~711 🔥 | 3,075 |
That D+4 number isn't a typo. Something fundamentally changed.
What's Driving It: naver-news-scraper Goes Parabolic
The answer is one actor: naver-news-scraper. It went from 504 runs to 1,098 in 30 hours, a gain of +594 runs and a 5x acceleration over its previous pace of ~96/day.
Here's the thing that makes this interesting: PPE pricing went live on this actor on March 15. Every run after that costs real money. My working assumption was that some users would drop off. Instead, usage exploded.
| Period | Runs/24h | Notes |
|---|---|---|
| Pre-PPE (D+0) | +101 | Free era |
| Post-PPE D+1 | +96 | Slight dip, as expected |
| Post-PPE D+2-3 | ~475 | 5x acceleration |
939 out of 939 external runs succeeded (100% success rate over 30 days). Someone, or multiple someones, decided this scraper is worth paying for and started integrating it heavily.
Takeaway: PPE conversion didn't just "not kill" usage; it validated demand. If users are willing to pay per event and usage increases, you've found product-market fit for that actor.
naver-blog-search: Two Days of Sustained Explosion
In my last post, I highlighted naver-blog-search's sudden +82/day spike. That could have been a one-off. It wasn't.
Day 2: +159 runs in 30 hours (normalized ~127/day). That's 1.5x the already-explosive previous day. The actor went from 133 total runs to 292.
Two consecutive days of acceleration means this isn't someone testing; it's someone who built an automated pipeline around my scraper and is running it in production. This is exactly how PPE revenue compounds: one power user can generate more value than dozens of casual ones.
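For a sense of what that looks like from the caller's side, here's a minimal sketch of such a pipeline using Apify's official Python client. The actor ID, input fields, and the downstream step are all illustrative assumptions, not the scraper's real schema:

```python
# pip install apify-client
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Actor ID and input fields are illustrative -- check the actor's
# README on Apify Store for its real input schema.
run = client.actor("username/naver-blog-search").call(
    run_input={"query": "맛집 추천", "maxResults": 100},
)

# Each run writes its results to a default dataset; a production
# pipeline iterates the items on a schedule and feeds them downstream.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), item.get("url"))  # placeholder downstream step
```

Wire that into a cron job or a workflow tool and you get exactly the sustained, automated run pattern showing up in these numbers.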
The Revenue: $20.11 Confirmed
Let me share the actual numbers from the Apify Console:
- Revenue: $20.11
- Platform costs: $5.99
- Profit: $14.12
- Margin: 70.2%
This was confirmed on March 16. With the naver-news-scraper explosion since then, estimated cumulative revenue is likely $30-35 by now (exact API confirmation pending).
Apify pays out monthly on the 11th with a $20 minimum for PayPal. At $20.11, I just crossed that threshold. First payout: April 11, 2026.
Is $20 life-changing? Obviously not. But it's proof that the model works. Code I wrote generates revenue while I sleep. The margin is 70%. And usage is accelerating, not declining.
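Sanity-checking those Console numbers is one-liner arithmetic, but it's worth scripting once you have multiple actors. A quick sketch, using the $20 PayPal payout minimum mentioned above:

```python
def ppe_summary(revenue: float, platform_costs: float,
                payout_minimum: float = 20.00) -> dict:
    """Summarize pay-per-event economics from Apify Console figures."""
    profit = revenue - platform_costs
    margin = profit / revenue if revenue else 0.0
    return {
        "profit": round(profit, 2),
        "margin_pct": round(margin * 100, 1),
        "payout_eligible": revenue >= payout_minimum,
    }

# Week 1 figures from the Console: $20.11 revenue, $5.99 platform costs
print(ppe_summary(20.11, 5.99))
# {'profit': 14.12, 'margin_pct': 70.2, 'payout_eligible': True}
```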
The Problem: naver-blog-search Was Losing Money
Here's the part most "passive income" posts skip: not every actor is profitable.
When I checked the unit economics for naver-blog-search, I found it was net negative: -$0.22 overall. The actor's compute cost per run exceeded the PPE revenue each run generated.
This is the hidden trap of PPE pricing: if your actor is compute-heavy relative to the price you set, more usage means more losses. That 159-run explosion? Each run was costing me money.
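The check that catches this is trivial; the trick is remembering to run it per actor, not just portfolio-wide. A sketch with made-up numbers (the prices here are illustrative, not my actual PPE rates):

```python
def run_economics(ppe_revenue_per_run: float,
                  compute_cost_per_run: float) -> dict:
    """Flag actors whose compute cost exceeds PPE revenue per run."""
    margin = ppe_revenue_per_run - compute_cost_per_run
    return {"margin_per_run": round(margin, 4), "profitable": margin > 0}

# Illustrative: charging $0.002 per run while burning $0.003 of compute
# means every extra run -- and every new power user -- deepens the loss.
print(run_economics(0.002, 0.003))
# {'margin_per_run': -0.001, 'profitable': False}
```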
The Fix: v0.1.5 and 60% Cost Reduction
Instead of raising prices (which might kill the momentum), I optimized the code:
naver-blog-search v0.1.5 shipped with:
- Reduced unnecessary API calls per search query
- Smarter pagination: stop fetching when results are exhausted instead of always hitting the maximum page count (see the sketch below)
- Leaner data extraction: only parse the fields users actually need
Result: ~60% reduction in compute cost per run.
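Here's roughly what the pagination and extraction changes look like. This is a simplified sketch, not the actor's actual code; fetch_page() and the field names stand in for the real Naver request and parsing logic:

```python
def fetch_page(query: str, page: int) -> list[dict]:
    """Stand-in for the real Naver request + parse (hypothetical)."""
    return []  # real version issues the HTTP call and parses results

def collect_results(query: str, max_pages: int = 10) -> list[dict]:
    """Fetch paginated results, stopping as soon as they run out."""
    results: list[dict] = []
    for page in range(1, max_pages + 1):
        items = fetch_page(query, page)
        if not items:
            break  # exhausted -- the old version kept paying for empty pages
        # Leaner extraction: keep only the fields users actually query,
        # instead of parsing the full raw document.
        results.extend(
            {"title": it["title"], "url": it["url"], "date": it["date"]}
            for it in items
        )
    return results
```

Under the old always-fetch-max-pages behavior, short queries paid for empty pages; the early break plus trimmed parsing is where the compute saving comes from.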
The actor went from losing money per run to being profitable, without changing what users pay. This is the unsexy but critical work of running a scraper business: your code efficiency is your margin.
The Full Scoreboard: D+4
| Actor | Total Runs | Δ30h | Users | PPE |
|---|---|---|---|---|
| naver-news-scraper | 1,098 | +594 🔥 | 3 | ✅ |
| naver-blog-reviews | 590 | +5 | 3 | ✅ |
| naver-place-search | 580 | +59 | 11 | ✅ |
| naver-place-reviews | 315 | +53 | 13 | ✅ |
| naver-blog-search | 292 | +159 🔥 | 6 | ✅ |
| musinsa-ranking | 32 | +2 | 4 | ⏳ Mar 25 |
| daangn-market | 28 | +1 | 3 | ✅ |
| naver-kin | 26 | +8 | 2 | ✅ |
| naver-webtoon | 25 | +2 | 4 | ✅ |
| melon-chart | 23 | +2 | 2 | ✅ |
| bunjang-market | 22 | +2 | 3 | ✅ |
| naver-place-photos | 21 | +1 | 2 | ✅ |
| yes24-book | 21 | +1 | 2 | ✅ |
58 total users (with overlap), estimated ~15 unique external users, all organic with zero marketing spend.
What I've Learned in Week 1
1. The PPE fear is overblown. I spent days worrying about the free-to-paid transition. The result? My highest-traffic actor accelerated after PPE activation. Users who need Korean data need it regardless of price.
2. Power users make the economics work. Three actors drive 83% of growth. Two or three users drive most of the runs. This is normal for developer tools: you're not building for the masses, you're building for the few who build on top of you.
3. Revenue without margin is a trap. naver-blog-search was my fastest-growing actor and simultaneously my only money-loser. If I hadn't checked unit economics, I'd be celebrating growth while bleeding cash.
4. Code optimization > price optimization. The 60% cost reduction from v0.1.5 was worth more than any price increase would have been. Raising prices risks killing adoption; making your code leaner is pure upside.
5. Niche is a moat. Nobody else is building Korean-specific scrapers with this coverage on Apify. The 13-actor portfolio covers Naver, Melon, Daangn, Bunjang, Musinsa, and YES24. For anyone needing Korean web data, there aren't many alternatives.
What's Next
- 4,000 runs: At current pace, 1-2 days away
- Price optimization: naver-blog-search pricing needs adjustment (submitted for review, effective ~April 11)
- Marketing: Reddit, GeekNews, and continued Dev.to coverage. Organic discovery is working, but it has a ceiling
- MCP integration: The Korean Data MCP server makes these scrapers available to AI agents. Pending registration on Glama.ai and Smithery.ai
The Honest Take
Week 1 produced $20 in confirmed revenue, 3,000+ runs, ~15 organic users, and one code fix that saved an actor from losing money. The trajectory is good ā daily runs are accelerating, not decaying.
But the concentration risk is real. Two actors (naver-news and naver-blog-search) account for the majority of recent growth. If those power users churn, the numbers drop fast. Diversifying the user base through marketing is the next priority.
I'm documenting this in real-time because most "I built a SaaS" stories skip the messy middle. This is the messy middle. It's $20, not $20,000. But the growth curve is pointing up, and the unit economics work.
All 13 scrapers are open-source on Apify Store. Previous posts: individual guides, MCP server, D-Day, 2,000 runs.
Top comments (1)
The naver-blog-search cost discovery is the part of this post that most builders skip over. I'm dealing with the exact same unit economics problem from the other side: I sell Claude Skills (AI automation tools) on Gumroad, and the equivalent trap is building a feature-heavy skill that takes 45 minutes of context window to run when the user only needed the core 5 minutes. More features = more perceived value, but also more compute cost eating your margin.
Your point about code optimization > price optimization maps perfectly. When I was building a financial data analyzer skill that pulls from Yahoo Finance for 8,000+ tickers, the first version fetched every available data point. Trimming it to just the fields users actually query (similar to your "leaner data extraction" fix) cut execution time by roughly 60% too, the same number you hit. That ratio seems to be a natural ceiling for first-pass optimization on data-heavy tools.
The concentration risk you flagged is real, and I'd push back slightly on the marketing plan being the fix. For niche tools like Korean scrapers or financial data, the user base ceiling might just be small, and that's fine. The moat is that nobody else bothers to serve 13 Korean platforms with this coverage. I'd watch whether those 2-3 power users stabilize into predictable monthly spend rather than chasing a larger user count that dilutes your support bandwidth. Curious whether you've considered tiered pricing for those heavy users instead of flat PPE?