I shipped my first Apify actor in late February. Two weeks later, I had 10 live scrapers on the Apify Store. The one I was most excited about? Zero traction. The one I almost didn't build? It's my top performer.
Here's the honest story of what happened when I went from "I should build some scrapers" to running a small portfolio on Apify's marketplace.
Why Apify
I'd been building scraping tools for a while, but always as one-off scripts. The appeal of Apify was simple: they handle billing, infrastructure, and distribution. You build the actor, they run it on their cloud, users pay per usage. 80% revenue share for creators.
The alternative was self-hosting everything on my own server (which I also do at frog03-20494.wykr.es — free API, no signup required). But Apify gives you access to 130K+ users who are already looking for scrapers. That's a distribution channel you can't easily replicate.
What I Built
Ten scrapers across three rough categories:
Social & Content Platforms
- Bluesky Posts Scraper
- Substack Article Scraper
- Reddit Scraper
- Telegram Channel Scraper
- Hacker News Scraper (two variants — full scraper + lightweight top stories)
Reviews & Discovery
- Trustpilot Review Scraper
- Product Hunt Scraper
Commerce
- eBay Product Scraper
Plus the original crypto signals actor that started the whole thing.
You can see them all at apify.com/cryptosignals.
3 Things That Actually Worked
1. Detailed READMEs with real examples
Every actor page on Apify has a README section. Most creators write a few sentences and call it done. I wrote proper documentation: what the scraper does, what fields it returns, example output in JSON, common use cases, and limitations.
Why? Because Apify's search is largely keyword-based. A detailed README means more keyword matches. But more importantly, users deciding between two similar scrapers will pick the one where they can actually see what they're getting.
2. Building for underserved platforms
This was the real insight. When I looked at the Apify Store, the top actors are predictable: Google Maps (297K users), Instagram (191K), Amazon, LinkedIn. These are dominated by established creators with years of reviews and optimization.
But Telegram? When I published my Telegram Channel Scraper, there was essentially zero competition for public channel scraping. Bluesky was similarly underserved — the AT Protocol is open and doesn't require authentication for public data, which makes it ideal for scraping. Substack's RSS feeds make it straightforward to build a reliable scraper.
The lesson: don't compete with the Google Maps scraper that has 297K users. Find the platforms nobody has bothered with yet.
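To illustrate why a platform like Substack is so tractable: every publication exposes a standard RSS 2.0 feed at `/feed`, which Python's standard library can parse with no anti-bot evasion at all. This is a minimal sketch, not my actor's actual code; the sample XML below is illustrative, not real Substack output.

```python
import xml.etree.ElementTree as ET

# Stand-in for a real feed fetched from https://<publication>.substack.com/feed
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Newsletter</title>
    <item>
      <title>First Post</title>
      <link>https://example.substack.com/p/first-post</link>
      <pubDate>Mon, 03 Mar 2025 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Extract title/link/date dicts from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "published": item.findtext("pubDate"),
        }
        for item in root.iter("item")
    ]

articles = parse_feed(SAMPLE_FEED)
print(articles[0]["title"])  # First Post
```

When the hard part of a scraper is fifteen lines of feed parsing, the rest of your effort can go into output quality and documentation instead of evasion.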
3. Writing companion articles
Every scraper I published got a companion article on dev.to explaining the use case. "How to scrape Bluesky posts for sentiment analysis." "Monitoring Telegram channels for crypto signals." These articles serve double duty: they're genuinely useful content AND they drive traffic to the actor pages.
The articles also helped with something less obvious — they forced me to articulate why someone would actually want this data. If I couldn't write a convincing 800-word article about the use case, maybe the scraper wasn't worth building.
3 Things That Didn't Work
1. Anti-bot systems are no joke
I had plans for a G2 Reviews scraper. DataDome killed it. I spent two days trying different approaches — rotating proxies, residential IPs, browser fingerprint randomization — and couldn't get reliable results. Some sites simply aren't worth the engineering effort when the anti-bot system is actively maintained and evolving.
Same story with a few other targets I experimented with. The gap between "I can scrape this from my laptop" and "I can scrape this reliably at scale from datacenter IPs" is enormous.
2. Reddit from datacenter IPs is painful
Reddit aggressively blocks datacenter IP ranges. My Reddit scraper works, but it's flaky. Users get inconsistent results depending on which proxy pool Apify routes their requests through. The scraper's stats tell the story: two users, six runs.
I should have either committed to requiring residential proxies (which increases cost for users) or skipped Reddit entirely. Half-measures don't work for platforms with aggressive bot detection.
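One partial mitigation I could have shipped is a retry helper that treats block responses (403/429) as a signal to back off and try again, since the proxy pool often rotates between attempts. This is a generic sketch of the idea, not the actual fix for Reddit's detection; `fetch` here is any hypothetical callable returning a status code and body.

```python
import time

def fetch_with_retry(fetch, url, max_attempts=4, base_delay=1.0):
    """Retry a fetch when block responses (403/429) are detected.

    `fetch` is any callable returning (status_code, body). Each retry
    waits exponentially longer, which helps when the platform routes
    the next attempt through a different proxy.
    """
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status not in (403, 429):
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body  # still blocked after all attempts
```

Retries paper over occasional blocks; they don't fix a fundamentally hostile target, which is why residential proxies (or skipping the platform) remain the honest options.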
3. Apify Store discoverability is opaque
This one surprised me. I expected that publishing a scraper for an underserved niche would automatically surface it when users search for that platform. That's... partially true.
Apify's search algorithm factors in things like number of users, runs, recency, and rating. New actors with zero users start at a disadvantage. There's a cold-start problem that the detailed READMEs help with (keyword matching), but don't fully solve.
I still don't fully understand what makes one actor rank above another. Some of mine show up on the first page of search results, others are buried. The ranking factors aren't documented publicly.
The Billing Reality
Apify is transitioning its creator billing model. The old "rental" model (where users pay a flat monthly fee to use your actor) is being sunset on April 1, 2026. The replacement is pay-per-event pricing using Actor.charge().
What this means in practice: instead of hoping users subscribe, you define billable events (like "one search completed" or "100 results scraped") and charge micro-amounts. My crypto signals actor charges $0.01 per scan and $0.005 per analysis.
The pay-per-event model is theoretically better for creators building usage-based tools. But it also means your revenue is directly proportional to actual usage, with no subscription floor. If nobody runs your actor, you earn nothing.
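The arithmetic is simple: revenue is event counts times the prices you define, and I'm assuming here that the 80% creator share mentioned earlier applies to event charges the same way (my reading of Apify's terms, not a documented guarantee). A toy calculation using my crypto signals actor's actual rates:

```python
# Event prices from my crypto signals actor (USD).
PRICES = {"scan": 0.01, "analysis": 0.005}
CREATOR_SHARE = 0.80  # assumed to apply to event charges as it does elsewhere

def monthly_revenue(event_counts, prices=PRICES, share=CREATOR_SHARE):
    """Creator's cut of gross event charges, given {event_name: count}."""
    gross = sum(prices[name] * count for name, count in event_counts.items())
    return round(gross * share, 2)

# e.g. 1,000 scans and 400 analyses in a month:
print(monthly_revenue({"scan": 1000, "analysis": 400}))  # 9.6
```

Nine and a half dollars for a thousand runs makes the "no subscription floor" point concrete: at micro-prices, you need real volume before the numbers matter.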
Honest Numbers
Let me be real about where things stand:
- 10 actors live on the Apify Store
- ~38 total users across all actors (several actors sit at just 2 users, the default contributed by Apify's own test accounts)
- ~400 total runs across all actors
- Top performer: Bluesky scraper (7 users, 117 runs)
- Revenue so far: Negligible. We're talking single-digit dollars.
This is not an "I made $10K in my first month" story. This is an "I built a portfolio of tools that are slowly gaining traction" story.
The Bluesky and Substack scrapers are showing real organic growth. The Hacker News scraper gets consistent usage. The eBay and Product Hunt scrapers are essentially dead.
What I'd Do Differently
Start with 3 scrapers, not 10. I spread myself too thin. If I'd spent two weeks perfecting just Bluesky, Substack, and Telegram — the three with the best market positioning — I'd probably have more users than I do across all ten combined.
Validate demand before building. I built the eBay scraper because "everyone needs e-commerce data." But the Apify Store already has mature eBay scrapers with hundreds of users. I added nothing new to the market. I should have checked existing competition more carefully.
Invest in the cold-start problem. The first 10 users are the hardest. I should have spent more time promoting the actors outside of Apify — in relevant communities, on social media, through targeted outreach to potential users. Organic Apify Store discovery isn't enough for new creators.
Build what you actually need. My best actors are the ones I built because I personally needed the data. The Bluesky scraper started because I wanted to analyze AT Protocol adoption. The crypto signals actor exists because I was already monitoring those channels. When you're your own first user, you build better tools.
The Bottom Line
Publishing 10 scrapers on Apify in two weeks taught me more about the scraping marketplace than six months of building one-off scripts. The economics aren't life-changing yet, but the portfolio approach means I have multiple shots on goal.
If you're thinking about building on Apify: pick underserved platforms, write great documentation, be honest about anti-bot limitations, and don't expect overnight success. The Apify Store is a real marketplace with real users, but like any marketplace, it rewards patience and iteration.
I'll keep building. The scrapers that nobody uses today might be exactly what someone needs tomorrow. And in the meantime, each one is a tiny experiment in what the market actually wants.
All my scrapers are open on the Apify Store. If you need raw scraping power without any signup, I also run a free API on a tiny server that handles basic requests.