# Why we built ScraperNode
We kept running into the same problem: we needed structured data from platforms like LinkedIn, Instagram, and TikTok, and every time we'd write a custom scraper, it would break within weeks. Layout changes, rate limits, auth flows — maintaining scrapers for major platforms is a full-time job.
So we turned it into an actual product. ScraperNode is a scraping API with maintained scrapers for 11 platforms. You send a request, you get structured data back.
## What we cover
22 scrapers across 11 platforms:
| Platform | What you can extract |
|---|---|
| LinkedIn | Profiles, companies, posts, job listings, people search |
| Instagram | Profiles, posts, reels, hashtag search |
| TikTok | Profiles, posts, hashtag search |
| YouTube | Channels, videos, comments |
| Twitter/X | Profiles, tweets |
| Facebook | Profiles, pages, posts, groups, marketplace |
| Indeed | Job listings, company info |
| Glassdoor | Reviews, salaries, job listings |
| Yelp | Business listings, reviews |
| GitHub | Repository data |
| Crunchbase | Company data |
## The hard parts
The reason we built this as a service instead of an open-source library: these scrapers need constant maintenance. LinkedIn alone changes their markup every few weeks. Instagram's auth flow has changed three times in the last year. TikTok's anti-bot detection gets more aggressive every month.
We handle the infrastructure — proxy rotation, browser fingerprinting, session management, rate limiting — so you don't have to.
## How it works
### REST API
You send a request with the data you want, you get JSON back.
```shell
curl https://api.scrapernode.com/v1/linkedin/profile \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"url": "https://linkedin.com/in/someone"}'
```
You get back structured data — name, headline, experience, education, skills — not raw HTML you have to parse.
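As an illustration, a response from the profile endpoint above might look something like this (field names are illustrative, based on the fields listed here, not the actual schema — check the API docs for the real shape):

```json
{
  "name": "Jane Doe",
  "headline": "Engineering Manager at Example Corp",
  "experience": [
    { "title": "Engineering Manager", "company": "Example Corp" }
  ],
  "education": [
    { "school": "Example University", "degree": "BSc Computer Science" }
  ],
  "skills": ["Python", "Data Engineering"]
}
```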
### n8n community node
If you use n8n for workflow automation, we have a community node:
- Go to Settings > Community Nodes
- Install `n8n-nodes-scrapernode`
- Add your API key in credentials
The node has three operations:
- Create — start a scraping job
- Get — check job status
- Get Results — retrieve the structured data
A typical workflow looks like: Trigger → ScraperNode Create → Wait → Get Results → Process data
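The same create → wait → get-results pattern applies if you call the REST API directly. Here's a minimal polling sketch in Python; the callable names and status values are assumptions modeled on the node's three operations, not the documented API:

```python
import time

def run_job(create, get_status, get_results, poll_interval=2.0, timeout=60.0):
    """Start a scraping job, poll until it finishes, then fetch results.

    create/get_status/get_results are callables wrapping the API's
    Create, Get, and Get Results operations (names are assumptions).
    """
    job_id = create()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)  # assumed statuses: pending/running/completed/failed
        if status == "completed":
            return get_results(job_id)
        if status == "failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```

In n8n the Wait node plays the role of `time.sleep` here, and you'd loop back to the Get operation until the job reports completion.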
The node is also AI agent compatible, so you can use it with n8n's AI workflow nodes to build intelligent scraping pipelines.
## Pricing
Credit-based, pay per scrape. No monthly minimum. Most scrapers cost 1-2 credits per result.
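For rough budgeting, a quick sketch using the top of the quoted 1-2 credits/result range (actual per-scraper costs may differ):

```python
def credits_needed(results: int, credits_per_result: int = 2) -> int:
    """Estimate credits for a batch at a flat per-result cost (1-2 in practice)."""
    return results * credits_per_result

# e.g. 5,000 results at 2 credits each
print(credits_needed(5000))  # 10000
```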
## What people use it for
- Lead generation — pull company and people data from LinkedIn for outreach
- Market research — track competitors on social platforms
- Job market analysis — aggregate listings from Indeed and Glassdoor
- Content monitoring — track brand mentions across platforms
- AI training data — feed structured social data into models and workflows
- n8n automations — plug into automated workflows without writing scraper code
## Other things we've built
We also put together awesome-n8n-templates — 8,697 n8n workflow templates organized by category, integration, and use case. We use n8n heavily for our own data pipelines and this repo makes it easier to find relevant templates.
## Links
- scrapernode.com — main site
- API docs — full reference
- n8n community node — npm package
- awesome-n8n-templates — 8,697 workflow templates