I got tired of manually researching company contact info.
Not because I'm lazy — because it doesn't scale. If you need emails and phone numbers for 20 companies, sure, spend an afternoon clicking through websites. But when you need contact data for 1,000 companies? 10,000? That's weeks of mind-numbing work that a script can do in minutes.
So I built a tool that does exactly that. You give it a list of URLs, it visits each website, finds the contact and team pages automatically, and gives you back structured data: emails, phone numbers, names, job titles, and social links. One clean record per company.
## The Problem With Manual Contact Research
A human researcher follows a predictable pattern:
- Open the company website
- Look for a "Contact" or "About" page
- Write down any emails, phone numbers, and names
- Move to the next company
- Repeat 999 more times
At maybe 10 companies per hour, that's 100 hours of work for 1,000 companies. At a conservative $20/hour, that's $2,000 in labor — for data that goes stale within months.
The scraper follows the same pattern, but processes 1,000 websites in about 15 minutes for roughly $0.50 in compute.
## What You Get Back
For each website, you get a single structured record:
```json
{
  "url": "https://buffer.com",
  "domain": "buffer.com",
  "emails": ["hello@buffer.com"],
  "phones": ["+1-555-0123"],
  "contacts": [
    { "name": "Joel Gascoigne", "title": "Founder & CEO" },
    { "name": "Caro Kopprasch", "title": "Chief of Staff" },
    { "name": "Jenny Terry", "title": "VP of Finance & Operations" }
  ],
  "socialLinks": {
    "linkedin": "https://www.linkedin.com/company/bufferapp",
    "twitter": "https://x.com/buffer",
    "facebook": "https://www.facebook.com/bufferapp",
    "instagram": "https://www.instagram.com/buffer"
  },
  "pagesScraped": 2,
  "scrapedAt": "2026-02-06T23:48:25.255Z"
}
```
That Buffer example pulled 48 team members with names and titles from their about page. Every email is deduplicated, every phone number is validated, and social links are extracted from across the site. One row per company, ready for your spreadsheet or CRM.
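If you want to flatten these records yourself rather than export from the platform, it only takes a few lines. A minimal sketch, assuming the record shape shown above (the `records_to_csv` helper below is my own, not part of the tool):

```python
import csv
import io

def records_to_csv(records):
    """Flatten scraper records into one CSV row per company.

    Assumes the field names from the example output above
    ("domain", "emails", "contacts", "socialLinks", ...).
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["domain", "emails", "phones", "top_contact", "linkedin"])
    for r in records:
        contacts = r.get("contacts", [])
        top = f"{contacts[0]['name']} ({contacts[0]['title']})" if contacts else ""
        writer.writerow([
            r.get("domain", ""),
            "; ".join(r.get("emails", [])),
            "; ".join(r.get("phones", [])),
            top,
            r.get("socialLinks", {}).get("linkedin", ""),
        ])
    return buf.getvalue()
```

Multi-valued fields are joined with `"; "` so each company stays on a single row, which is what most CRM importers expect.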
## Quick Start — Python

```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("ryanclinton/website-contact-scraper").call(
    run_input={
        "urls": [
            "https://stripe.com",
            "https://basecamp.com",
            "https://buffer.com",
        ],
        "maxPagesPerDomain": 5,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['domain']}: {item['emails']}")
```
## Quick Start — JavaScript

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('ryanclinton/website-contact-scraper').call({
    urls: [
        'https://stripe.com',
        'https://basecamp.com',
        'https://buffer.com',
    ],
    maxPagesPerDomain: 5,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    console.log(`${item.domain}: ${item.emails.join(', ')}`);
});
```
## The Real Power: Google Maps → Contact Data Pipeline
Scraping three websites is a demo. The real value is chaining this with other data sources to build a full lead pipeline.
Say you need leads for every dentist in Austin, TX:
```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

# Step 1: Get businesses from Google Maps
maps_run = client.actor("compass/crawler-google-places").call(
    run_input={"searchStringsArray": ["dentists in Austin, TX"]}
)

# Step 2: Extract website URLs
websites = []
for biz in client.dataset(maps_run["defaultDatasetId"]).iterate_items():
    if biz.get("website"):
        websites.append(biz["website"])
print(f"Found {len(websites)} business websites")

# Step 3: Scrape contact info from all of them
contacts_run = client.actor("ryanclinton/website-contact-scraper").call(
    run_input={"urls": websites, "maxPagesPerDomain": 5}
)

# Step 4: Full lead database
for item in client.dataset(contacts_run["defaultDatasetId"]).iterate_items():
    emails = ", ".join(item["emails"]) if item["emails"] else "no email"
    names = len(item["contacts"])
    print(f"{item['domain']}: {emails} | {names} contacts found")
```
You go from "dentists in Austin" to a spreadsheet with names, emails, phone numbers, and social profiles in under 20 minutes. The total cost for 200 businesses is under $2.
## Other Pipelines That Work Well
Found names but no emails? Feed the results into Email Pattern Finder. It detects the company's email format (first.last@, f.last@, etc.) and generates email addresses for every team member.
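The pattern-generation idea is easy to sketch. Assuming you've already detected the company's format from one known address, a candidate email for each scraped name can be built like this (the `generate_email` helper is illustrative, not the actor's actual code):

```python
def generate_email(name, pattern, domain):
    """Build a candidate address from a detected email pattern.

    `pattern` is one of the common corporate formats; the names come
    from the scraper's "contacts" field. This is a sketch of the
    technique, not a verified-deliverability implementation.
    """
    parts = name.lower().split()
    first, last = parts[0], parts[-1]
    local = {
        "first.last": f"{first}.{last}",
        "f.last": f"{first[0]}.{last}",
        "firstlast": f"{first}{last}",
        "first": first,
    }[pattern]
    return f"{local}@{domain}"
```

Real pattern finders also verify candidates against the mail server before handing them to you; this sketch only covers the generation half.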
Want to prioritize your leads? Run the results through B2B Lead Qualifier. It scores each company on 30+ business quality signals so you contact the best prospects first.
Want the whole thing automated? B2B Lead Generation Suite chains all three steps — contact scraping, email generation, and lead scoring — into a single run. One input, one output.
## Performance and Cost
| Websites | Time | Cost |
|---|---|---|
| 10 | ~10 seconds | < $0.01 |
| 100 | ~2 minutes | ~$0.05 |
| 1,000 | ~15 minutes | ~$0.50 |
| 10,000 | ~2.5 hours | ~$5.00 |
The first 100 websites are free. No credit card required to test.
It's fast because it uses HTTP requests instead of spinning up a browser for every page. Most business websites serve their contact info as plain HTML, so you don't need a browser to read it.
## What It Won't Do
I want to be upfront about limitations:
- JavaScript-heavy sites — If a site renders its contact page entirely with client-side JavaScript (React SPAs with no server rendering), you won't get results. This works for the vast majority of business sites, but not all.
- Contact forms only — Some sites have no visible email, just a form. You'll get names and social links but no emails.
- Login-protected pages — Public pages only. No authentication or cookie handling.
For bulk lead generation, hitting 85% of sites at a fraction of the cost of browser automation is the right tradeoff. For the remaining 15%, you'd need a browser-based solution.
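If you do want to route that remaining slice to a browser-based scraper, the empty records tell you exactly which domains need it. A small sketch, assuming the output shape shown earlier (the helper name is mine):

```python
def needs_browser_fallback(records):
    """Return URLs where the HTTP-only pass found nothing,
    so a browser-based scraper can retry just those sites."""
    return [
        r["url"]
        for r in records
        if not r.get("emails") and not r.get("contacts")
    ]
```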
## Who's Using This
The tool has been used for:
- Sales teams building prospect lists from industry directories
- Recruiters finding hiring managers at target companies
- Market researchers building competitive intelligence databases
- Agencies enriching client CRM data in bulk
- Freelancers finding decision-makers for cold outreach
If you're manually copying emails from websites into spreadsheets, you're doing it wrong.
## Try It
The tool is live on the Apify Store. First 100 websites are free — just paste your URLs and hit Start.
For anything more than a one-off run, the API examples above let you integrate it directly into your workflow. Schedule it, chain it with other tools, or pipe the output straight into your CRM.
We build data infrastructure and trading analytics at Nydar. Follow along on dev.to for more on what we're building.