Vhub Systems

Stop Paying $200/Month for Rank Tracking — Automate It with Apify in 30 Minutes

If you're tracking keyword rankings for a site or client, you've hit the same wall: rank trackers are expensive. SEMrush starts at $130/month. Ahrefs at $99. Moz at $99.

But here's what those tools are actually doing: fetching Google HTML and parsing it. That's a solved engineering problem — and you can automate it yourself for about $1/month.

I built this pipeline to track 47 keywords across 3 small client sites. It runs every Monday without me touching it and logs results to a Google Sheet. This article shows you exactly how to replicate it.


The Real Pain

You have a site. You want to know if it's ranking for 20–50 keywords. You don't need competitive intelligence dashboards or backlink graphs. You just need: "Where does my site show up for this query today?"

Manual checking doesn't work. You forget. You get inconsistent results based on your location, login state, and device. After keyword #10, you stop doing it entirely.

Scheduled automation fixes all three problems.


What We're Building

A simple 3-part pipeline:

[Apify Actor] → runs on schedule → [webhook] → [n8n workflow] → [Google Sheet]
  1. Apify's Google SERP scraper fetches real search results on a cron schedule
  2. Apify webhook fires when the run succeeds and triggers n8n
  3. n8n workflow fetches the dataset, finds your domain's rank, and appends a row to Google Sheets

No servers. No databases. No DevOps. All free or near-free.


Step 1: Set Up the Actor

The actor: lanky_quantifier/google-serp-scraper — community-built, 98.3% success rate across 172+ external runs in the last 30 days. Reliable enough to trust for weekly tracking.

Input JSON:

{
  "queries": [
    "best python web scraping tutorial",
    "apify vs scrapy",
    "automate google search python"
  ],
  "countryCode": "US",
  "languageCode": "en",
  "maxPagesPerQuery": 1
}

Run it once manually to verify your keywords return data. Check that your domain appears in organicResults. If you're not in the top 10, organicResults won't contain your URL — that's expected, and it's a useful signal.
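If you'd rather do that verification run from Python than from the Apify console, a minimal sketch looks like this. It assumes the apify-client package (`pip install apify-client`) and an API token; the actor name and input fields are the ones from this article, and `build_run_input` is a hypothetical helper.

```python
# Sketch: run the actor once and eyeball the results before scheduling anything.
# Assumes: apify-client installed, YOUR_APIFY_TOKEN replaced with a real token.

def build_run_input(keywords, country="US", language="en"):
    """Build the actor input JSON for a single top-10 check per keyword."""
    return {
        "queries": list(keywords),
        "countryCode": country,
        "languageCode": language,
        "maxPagesPerQuery": 1,
    }

if __name__ == "__main__":
    from apify_client import ApifyClient

    client = ApifyClient("YOUR_APIFY_TOKEN")
    run = client.actor("lanky_quantifier/google-serp-scraper").call(
        run_input=build_run_input(["apify vs scrapy"])
    )
    # Iterate the dataset the run produced; one item per keyword
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["searchQuery"]["term"], len(item.get("organicResults", [])))
```

If your domain is nowhere in the printed results, fix that before wiring up the automation — there's no point logging a column of blanks.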


Step 2: Understand the Output

Each result looks like:

{
  "searchQuery": { "term": "best python web scraping tutorial" },
  "organicResults": [
    { "position": 1, "url": "https://realpython.com/...", "title": "..." },
    { "position": 2, "url": "https://yourdomain.com/...", "title": "..." }
  ]
}

To find your rank across all keywords:

def find_rankings(results, your_domain):
    """
    Returns a list of (keyword, position_or_None) tuples.
    position is None if domain not found in top 10.
    """
    rankings = []
    for item in results:
        keyword = item["searchQuery"]["term"]
        position = None
        for result in item.get("organicResults", []):
            if your_domain in result.get("url", ""):
                position = result["position"]
                break  # Found it — stop scanning this keyword's results
        rankings.append((keyword, position))
    return rankings

Note: The outer loop iterates keywords; the inner loop scans positions within that keyword's results. The break is inside the inner loop — we stop scanning positions once we find the domain, then move to the next keyword. Don't put the return inside the outer loop, or you'll only ever process the first keyword.
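Since the sheet also wants the date and the matching URL, here's a self-contained variant of the same scan that produces rows ready to append. `rankings_to_rows` is a hypothetical helper (not part of the actor's output), but the item shape is the one shown above.

```python
# Turn raw actor items into (Date, Keyword, Position, URL) rows for the sheet.
from datetime import date

def rankings_to_rows(items, your_domain, on=None):
    """items: actor dataset items; returns one sheet row per keyword."""
    on = on or date.today().isoformat()
    rows = []
    for item in items:
        keyword = item["searchQuery"]["term"]
        position, url = None, ""
        for result in item.get("organicResults", []):
            if your_domain in result.get("url", ""):
                position, url = result["position"], result["url"]
                break  # stop scanning this keyword once the domain is found
        rows.append([on, keyword, position if position else "Not in top 10", url])
    return rows

sample = [{
    "searchQuery": {"term": "apify vs scrapy"},
    "organicResults": [{"position": 3, "url": "https://yourdomain.com/post"}],
}]
print(rankings_to_rows(sample, "yourdomain.com", on="2024-03-25"))
# → [['2024-03-25', 'apify vs scrapy', 3, 'https://yourdomain.com/post']]
```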


Step 3: Automate with n8n + Google Sheets

3a. Set up the Google Sheet

Create a sheet called Rankings with these headers:

Date | Keyword | Position | URL

3b. Create the n8n workflow

n8n is free to self-host. The workflow has 4 nodes:

  1. Webhook trigger — receives the Apify notification
  2. HTTP Request — fetches the run's dataset from the Apify API:
     GET https://api.apify.com/v2/datasets/{{$json.resource.defaultDatasetId}}/items
  3. Code node — finds your domain's position (same logic as the Python snippet above, in JavaScript)
  4. Google Sheets node — appends rows

3c. Wire the Apify webhook

In Apify → Actor → Integrations → Webhooks:

  • Event type: ACTOR.RUN.SUCCEEDED
  • URL: Your n8n webhook trigger URL
  • Payload template (default is fine — n8n gets resource.defaultDatasetId from it)

That's it. Now every time the scheduled run completes successfully, the sheet gets updated.
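For orientation, the notification n8n receives looks roughly like this — a trimmed sketch, with placeholder IDs; the only field the workflow above actually reads is resource.defaultDatasetId:

```json
{
  "eventType": "ACTOR.RUN.SUCCEEDED",
  "resource": {
    "id": "<run-id>",
    "status": "SUCCEEDED",
    "defaultDatasetId": "<dataset-id>"
  }
}
```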


Step 4: Schedule the Actor

In Apify → Actor → Schedules, create a schedule using cron syntax:

0 9 * * 1

Every Monday at 9am UTC. Your data arrives before the workweek begins.
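If weekly is too sparse, the same five-field cron syntax (minute, hour, day-of-month, month, day-of-week) covers other cadences:

```
0 9 * * 1    # every Monday at 09:00 UTC (as above)
0 9 * * *    # every day at 09:00 UTC
0 9 1 * *    # the first day of each month
```

Remember each extra run consumes credit, so daily checks roughly multiply the cost estimate below by seven.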


What This Costs

| Component | Cost |
| --- | --- |
| Apify free tier | $5/month credit included |
| ~50 keyword checks/week | ~$0.50–$1.00/month |
| Google Sheets | Free |
| n8n (self-hosted) | Free |
| **Total** | **~$1/month** |

vs. $99–$200/month for a SaaS rank tracker.

The math works because pay-per-event billing means you only pay for actual compute — not for the product team, sales team, and VC returns baked into SaaS pricing.


What You See After 4 Weeks

| Date | Keyword | Position | URL |
| --- | --- | --- | --- |
| Mar 25 | best python scraping tutorial | 7 | https://... |
| Mar 18 | best python scraping tutorial | 9 | https://... |
| Mar 11 | best python scraping tutorial | 12 | https://... |
| Mar 4 | best python scraping tutorial | Not in top 10 | |

Position 12 → 7 in 3 weeks. That's an actionable signal. You can chart it in Sheets in 2 minutes, or set up a Sheets formula to alert you when any keyword drops below position 15.
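The alert formula can be as simple as this — a sketch that assumes Position is in column C with data starting at row 2, and that misses are logged as text ("Not in top 10") rather than numbers:

```
=IF(AND(ISNUMBER(C2), C2 > 15), "Dropped below 15", "")
```

Fill it down a helper column, then sort or conditionally format on it.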


Caveats Worth Knowing

Geo variance: Google results differ by location. The countryCode input pins the location, but results can still vary slightly. Good enough for trend tracking; not reliable for precise position reporting to clients who check manually.

Top 10 only: maxPagesPerQuery: 1 returns ~10 organic results per keyword. If you're not on page 1, you'll see null — which is still useful ("not ranking yet").

Google's anti-bot measures: The actor handles proxy rotation. You don't need to manage this, but it's why running this at home without proxies tends to fail. The Apify platform's residential proxy pool is what makes the 98.3% success rate possible.


The Pattern Scales Further

The same architecture works for other data types:

  • Price monitoring → swap for lanky_quantifier/amazon-product-scraper, track competitor prices weekly
  • Job market signals → use lanky_quantifier/linkedin-job-scraper to track skill demand in your market
  • Content gap analysis → run SERP checks on competitor brand keywords to find topics they're ranking for that you're not

The Apify platform handles proxies, browser rendering, and rate limiting. You write the business logic.


Get Started

  1. Create a free Apify account — $5/month credit, no card required
  2. Open lanky_quantifier/google-serp-scraper and run it with your keywords
  3. Set up the Google Sheet + n8n workflow (~20 minutes)
  4. Add the schedule and webhook

The first week you'll catch a ranking change you'd have missed. By month two, you'll have a trend line that tells you whether your content efforts are actually working.


Stack preferences vary — the same pattern works with Airtable, Notion, or a plain SQLite file on a VPS. The actor output is JSON. Pipe it wherever you want.
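As one example of "pipe it wherever you want," here's a minimal sketch of the same append step against a local SQLite file instead of Google Sheets — the table and column names are illustrative, not prescribed by the actor.

```python
# Append ranking rows to a local SQLite file (stdlib only).
import sqlite3

def append_rankings(db_path, rows):
    """rows: iterable of (date, keyword, position, url) tuples."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS rankings (
                   date TEXT, keyword TEXT, position INTEGER, url TEXT)"""
        )
        conn.executemany("INSERT INTO rankings VALUES (?, ?, ?, ?)", rows)

append_rankings("rankings.db", [("2024-03-25", "apify vs scrapy", 3, "https://...")])
```

Querying trends is then a one-liner: `SELECT date, position FROM rankings WHERE keyword = ? ORDER BY date`.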
