I used to pay $119/month for an SEO rank tracking tool. It worked fine. Every morning I got an email showing where my pages sat for 30 target keywords. But here is what bothered me: I was not paying for sophistication. I was paying for a cron job wrapped in a nice UI and a database I could not access.
So I tore it down and rebuilt it myself. The new stack costs $5.35 per month, sends me richer alerts, and stores everything in my own Postgres database. The two ingredients are n8n (self-hosted workflow automation) and SerpBase (a Google SERP API with prepaid credits from $0.30 per 1,000 searches).
This post is a walkthrough of exactly what I built, what it costs, and where it falls short.
The problem with all-in-one SEO tools
Rank trackers like Ahrefs, SEMrush, or even specialized tools like AccuRanker are excellent for agencies. They offer historical trends, competitor landscapes, backlink analysis, and slick dashboards. But if you are an indie hacker or a small SaaS founder, you are often paying for the 90% of features you never open.
I only needed three things:
- Daily position checks for 150 keywords across 3 countries.
- An alert when my ranking drops by 3+ positions or a competitor enters the top 10.
- A cheap way to store and query that data.
The tool I was using charged $119/month for 500 keywords. I was using 150. Wasteful.
The new stack: n8n + SerpBase + Postgres
Here is the architecture:
- n8n runs on a $5/month Hetzner VPS (1 vCPU, 2GB RAM). It handles scheduling, logic, and notifications.
- SerpBase provides the search results. I use their Starter Boost ($3 for 10,000 searches) plus a $10 Starter pack (20,000 searches, never expires).
- Postgres stores keyword definitions, daily snapshots, and position history. I run it on the same VPS via Docker Compose.
Total monthly cost: $5.35 for the server. The search credits are prepaid and consumed irregularly. At 150 keywords × 30 days = 4,500 searches per month, a $3 Starter Boost covers two months. Even at the regular $0.50/1k rate, we are talking $2.25/month.
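The arithmetic is simple enough to sanity-check in a few lines ($0.50/1k is the regular pack rate; the Starter Boost works out to $0.30/1k):

```javascript
// How many searches a daily check of N keywords burns in a month.
function searchesPerMonth(keywords, checksPerDay = 1, days = 30) {
  return keywords * checksPerDay * days;
}

// Cost of those searches at a given per-1,000 rate.
function monthlyCost(searches, ratePer1k) {
  return (searches / 1000) * ratePer1k;
}

const searches = searchesPerMonth(150);   // 4500 searches/month
console.log(monthlyCost(searches, 0.50)); // 2.25 at the regular $0.50/1k rate
console.log(monthlyCost(searches, 0.30)); // ≈ 1.35 at the Boost's $0.30/1k rate
```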
Building the workflow step by step
Step 1: Define keywords in Postgres
I created a simple table:
CREATE TABLE keywords (
id SERIAL PRIMARY KEY,
keyword TEXT NOT NULL,
country TEXT DEFAULT 'us',
language TEXT DEFAULT 'en',
target_url TEXT,
alert_threshold INT DEFAULT 3
);
I seeded it with my 150 keywords. The target_url column is the page I want to track (e.g., my pricing page), and alert_threshold is how many positions a drop must exceed before I get notified.
Step 2: The n8n workflow
The workflow triggers every day at 6 AM UTC. Here is the node chain:
- Schedule Trigger: Cron expression 0 6 * * *.
- Postgres node: SELECT * FROM keywords. Returns all 150 rows.
- Split In Batches: Processes 10 keywords per batch. I learned the hard way that firing 150 concurrent HTTP requests to SerpBase trips the rate limit (status 1029). Batching is essential.
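n8n's Split In Batches node does this declaratively, but outside n8n the same idea is a short loop (a sketch; the hypothetical `worker` stands in for the per-keyword SerpBase call):

```javascript
// Process items in batches so we never fire 150 concurrent requests
// at once -- SerpBase answers that with status 1029 (rate limited).
async function inBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Requests within a batch run concurrently; batches run sequentially.
    results.push(...await Promise.all(batch.map(worker)));
  }
  return results;
}
```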
- HTTP Request node (SerpBase): This is the core. SerpBase uses a POST endpoint, not GET. For each keyword, I call:
POST https://api.serpbase.dev/google/search
Content-Type: application/json
X-API-Key: {{ $env.SERPBASE_API_KEY }}
The body is JSON:
{
"q": "{{ $json.keyword }}",
"gl": "{{ $json.country }}",
"hl": "{{ $json.language }}",
"page": 1
}
Response time averages 1.4 seconds. I set a 10-second timeout and let n8n retry twice on failure. One important detail: SerpBase returns HTTP 200 even on some errors. You must check the status field in the JSON body. status: 0 means success. 1020 means you are out of credits.
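Outside n8n, the same request looks roughly like this (a sketch using Node 18+ `fetch`; the endpoint, headers, and body-level `status` field are as described above):

```javascript
const SERPBASE_URL = 'https://api.serpbase.dev/google/search';

// Build the request separately from the network call so it is testable.
function buildSerpRequest(keyword, country = 'us', language = 'en') {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.SERPBASE_API_KEY ?? '',
    },
    body: JSON.stringify({ q: keyword, gl: country, hl: language, page: 1 }),
    signal: AbortSignal.timeout(10_000), // mirror the 10-second timeout set in n8n
  };
}

// HTTP 200 does not mean success: the `status` field in the JSON body
// is the real result code (0 = OK, 1020 = out of credits).
async function fetchSerp(keyword, country, language) {
  const res = await fetch(SERPBASE_URL, buildSerpRequest(keyword, country, language));
  const data = await res.json();
  if (data.status !== 0) {
    throw new Error(`SerpBase status ${data.status} for "${keyword}"`);
  }
  return data;
}
```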
- IF node (Validate response): Checks {{ $json.status }} === 0. If not, I log the error and skip to the next batch. Early on I had a bug where I was burning credits on malformed requests and did not notice because the HTTP status was 200.
- Code node (Position extraction): I parse the organic array to find where my target_url appears. If it is not in the top 100, the position is recorded as 0.
const organic = $input.first().json.organic || [];
// target_url is carried along from the keywords row on the same item
const target = $input.first().json.target_url;
const match = organic.find(r => r.link && r.link.includes(target));
// SerpBase names the field `rank`, not `position`; 0 means "not ranked"
const position = match ? match.rank : 0;
const pageTitle = organic[0]?.title || 'no results';
return [{ json: { position, page_title: pageTitle } }];
Note the field names from SerpBase: rank (not position) and link (not url). I got this wrong in my first draft and wondered why all positions were zero.
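A quick way to catch that mistake is to run the extraction against a mocked response. The shape below follows the field names described above; the values are invented for illustration:

```javascript
// Find the position of a target URL in a SerpBase organic result list.
// Note the field names: `rank` and `link`, not `position` and `url`.
function extractPosition(organic, targetUrl) {
  const match = (organic || []).find(r => r.link && r.link.includes(targetUrl));
  return match ? match.rank : 0; // 0 = not in the returned results
}

// Mocked organic results (hypothetical domains).
const organic = [
  { rank: 1, title: 'Some competitor', link: 'https://competitor.example/pricing' },
  { rank: 2, title: 'My pricing page', link: 'https://myapp.example/pricing' },
];
console.log(extractPosition(organic, 'myapp.example/pricing')); // 2
console.log(extractPosition(organic, 'nowhere.example'));       // 0
```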
- Postgres node (Insert snapshot): Store the result in a rankings table (a serial id plus keyword_id, position, checked_at).
INSERT INTO rankings (keyword_id, position, checked_at)
VALUES ({{ $json.id }}, {{ $json.position }}, NOW());
- Postgres node (Compare to yesterday): A query finds the previous day's position for the same keyword. It runs after the insert, so OFFSET 1 skips today's fresh row.
SELECT position FROM rankings
WHERE keyword_id = {{ $json.id }}
ORDER BY checked_at DESC LIMIT 1 OFFSET 1;
- IF node (Alert logic): If the drop is >= threshold, or if a competitor URL (defined in a separate competitors table) appears in the top 10, route to alert.
- Telegram node: Sends me a message like: "ALERT: 'affordable serp api' dropped from #4 to #8 (US). Competitor serpapi.com now at #3."
- Merge node: Recombines batches. The workflow ends silently if nothing is wrong.
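The alert decision itself boils down to two checks. A sketch in plain JavaScript (the `competitors` array stands in for the competitors table; field names follow the SerpBase response):

```javascript
// Decide whether a daily snapshot warrants a Telegram alert.
// A position of 0 means "not ranked", so falling out entirely always alerts.
function shouldAlert({ position, previousPosition, threshold, organic, competitors }) {
  const dropped =
    previousPosition > 0 &&
    (position === 0 || position - previousPosition >= threshold);
  const competitorInTop10 = (organic || []).some(
    r => r.rank <= 10 && competitors.some(c => r.link && r.link.includes(c))
  );
  return dropped || competitorInTop10;
}
```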
Step 3: The dashboard (optional)
I did not build a dashboard. I query directly with psql or Metabase (which I already run for other projects). A typical query:
SELECT keyword, position, checked_at
FROM rankings r
JOIN keywords k ON r.keyword_id = k.id
WHERE checked_at >= NOW() - INTERVAL '7 days'
ORDER BY keyword, checked_at;
If I need a chart, I paste the CSV into Google Sheets. Crude, but it takes 30 seconds and costs $0.
What this stack actually costs in practice
Month 1 (setup + heavy testing):
- VPS: $5.35
- SerpBase Starter Boost: $3 (10,000 searches)
- SerpBase Starter pack: $10 (20,000 searches, never expires)
- Total: $18.35
Month 2 (steady state, 4,500 searches):
- VPS: $5.35
- SerpBase: $0 (used remaining credits)
- Total: $5.35
Month 3 (launched a landing page, expanded to 220 keywords):
- VPS: $5.35
- SerpBase: $3 (bought another Starter Boost)
- Total: $8.35
Three-month average: $10.68/month. The old tool was $119/month. Over a year, the difference is roughly $1,300. That is a month of runway for a bootstrapped product.
Where SerpBase specifically shines in this setup
I tested three SERP APIs before settling on SerpBase. Here is why it won:
Price without traps. Some APIs advertise low rates but require $50 minimum deposits or monthly subscriptions. SerpBase lets me buy $3 of credits and actually use them. No expiration on standard packs means I am not racing against a billing cycle.
Geolocation actually works. I track keywords in the US, UK, and Australia. Setting gl=us or gl=gb in the POST body returns consistently localized results. With a previous provider, UK results sometimes bled US listings, which made my position data useless.
Structured JSON, no parsing. I get organic results as an array of objects with rank, title, link, display_link, snippet. No regex against HTML. No headless browser to maintain. The n8n Code node extracts my target URL in four lines of JavaScript.
Resilience. I have not had a single request fail due to CAPTCHA or bot detection in four months. SerpBase handles session rotation internally. I used to run my own proxy pool for scraping; that alone was $40/month and a maintenance nightmare. Killing it was almost as satisfying as canceling the rank tracker.
Clear status codes. The status field in the response body tells you exactly what went wrong. 0 is success. 1020 is out of credits. 1029 is rate limited. 1502 is an upstream parsing error. This beats guessing from HTTP status codes alone.
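In my validation step I map those codes to actions. The codes are SerpBase's; the handling policy below is my own choice:

```javascript
// Map SerpBase body-level status codes to a handling policy.
function classifyStatus(status) {
  switch (status) {
    case 0:    return { ok: true,  action: 'proceed' };
    case 1020: return { ok: false, action: 'stop-and-alert' };    // out of credits
    case 1029: return { ok: false, action: 'backoff-and-retry' }; // rate limited
    case 1502: return { ok: false, action: 'retry-once' };        // upstream parsing error
    default:   return { ok: false, action: 'log-and-skip' };      // anything unexpected
  }
}
```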
The honest downsides of this DIY approach
You build it, you own it. When a keyword returns position 0 unexpectedly, I debug it. Usually it is a SerpBase timeout (status 1504) or a malformed URL in my target_url field. There is no support ticket to open. I check n8n execution logs, inspect the JSON, fix the data, and rerun the node.
Feature gap vs. enterprise tools. I do not get SERP feature tracking (featured snippets, local packs, image carousels) unless I write the parsing logic myself. I do not get competitor traffic estimates or keyword difficulty scores. If you need those, pay for Ahrefs. This stack is for people who just want position data.
n8n has a learning curve. My first version of this workflow did not batch requests. It fired 150 parallel HTTP calls and SerpBase rightfully throttled me with status 1029. Fixing it meant learning how Split In Batches works. Took 40 minutes. Zapier would have handled rate limits automatically. Self-hosting means self-solving.
SerpBase Starter Boost expires. The $3/10k pack is monthly. Forget to use it, and it is gone. I set a calendar reminder. The regular packs never expire, so I keep a $10 buffer and buy Boosts when I know I will use them.
Who should actually build this?
If you are an agency managing 50 client sites, buy a proper rank tracker. The reporting features and white-label dashboards are worth it.
If you are an indie hacker, a technical founder, or a developer who wants to track 50–500 keywords without a recurring subscription, this stack is ideal. It costs under $10/month, stores data where you control it, and scales by adding prepaid credits rather than upgrading pricing tiers.
The real unlock is not just the money saved. It is the composability. Because SerpBase returns clean JSON and n8n can route that data anywhere, I have started using the same search data for other workflows: content brief generation, competitor page monitoring, and even a weekly "SERP features" audit that checks if my pages are winning featured snippets.
One API. One workflow engine. Infinite combinations. That is the point.
Try the minimal version in 30 minutes
If you want to test this without committing:
- Sign up at SerpBase. Grab the 100 free searches. No credit card.
- Install n8n locally:
docker run -it --rm --name n8n -p 5678:5678 n8nio/n8n
- Build one workflow: Schedule → HTTP Request (SerpBase POST to https://api.serpbase.dev/google/search) → Telegram (send yourself the top result).
- Run it once. Inspect the JSON. Check that status is 0. Decide if you want more.
That is how I started. One keyword, one notification, one evening. Six months later it runs 150 keywords silently every morning while I drink coffee.