<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SerpBase</title>
    <description>The latest articles on DEV Community by SerpBase (@serpbase).</description>
    <link>https://dev.to/serpbase</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887432%2Ff0b59942-f096-4e34-af92-4f4a4a9fbef5.png</url>
      <title>DEV Community: SerpBase</title>
      <link>https://dev.to/serpbase</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/serpbase"/>
    <language>en</language>
    <item>
      <title>MetaSearchMCP: A Metasearch Backend Built for AI Agents</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Tue, 05 May 2026 01:49:34 +0000</pubDate>
      <link>https://dev.to/serpbase/metasearchmcp-a-metasearch-backend-built-for-ai-agents-4a9b</link>
      <guid>https://dev.to/serpbase/metasearchmcp-a-metasearch-backend-built-for-ai-agents-4a9b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;While traditional metasearch engines are still optimizing for browser tabs, MetaSearchMCP has already redefined the paradigm of machine-consumable search.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Overlooked Problem
&lt;/h2&gt;

&lt;p&gt;Most developers think of metasearch and immediately picture SearXNG — an excellent project, but one fundamentally designed for human eyes in a browser window. When you're building an AI agent that needs to retrieve information autonomously, SearXNG's HTML output becomes a liability. You end up writing scrapers, handling anti-bot measures, managing timeouts, and then normalizing a mess of heterogeneous results into something an LLM can actually reason about.&lt;/p&gt;

&lt;p&gt;That's not a knock on SearXNG. It was built for humans. AI agents need something entirely different: &lt;strong&gt;a machine-consumable search API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gefsikatsinelou/MetaSearchMCP" rel="noopener noreferrer"&gt;MetaSearchMCP&lt;/a&gt; was built for exactly this gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is It?
&lt;/h2&gt;

&lt;p&gt;MetaSearchMCP is an open-source metasearch backend that exposes both an &lt;strong&gt;HTTP API&lt;/strong&gt; and an &lt;strong&gt;MCP server&lt;/strong&gt;. It unifies &lt;strong&gt;20+ search providers&lt;/strong&gt; — Google, DuckDuckGo, Brave, arXiv, GitHub, Stack Overflow, and more — behind a single standardized interface that returns structured JSON.&lt;/p&gt;

&lt;p&gt;Quick stats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;39 Stars&lt;/strong&gt; | Python 99.9% | MIT licensed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;6 search categories&lt;/strong&gt;: Web, Knowledge, Developer, Academic, Financial, and a dedicated Google chain&lt;/li&gt;
&lt;li&gt;Native &lt;strong&gt;MCP protocol&lt;/strong&gt; support (works out of the box with Claude Desktop, Cline, and Continue)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Core Design Philosophy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Concurrent Multi-Provider Aggregation
&lt;/h3&gt;

&lt;p&gt;The traditional approach: pick one search API and hope it stays up. MetaSearchMCP's approach: &lt;strong&gt;query multiple engines simultaneously and take the best of what comes back&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;POST&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;search&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fastapi vs django performance 2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;web&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;  &lt;span class="c1"&gt;# Auto-selects DuckDuckGo + Bing + GitHub + Stack Overflow
&lt;/span&gt;  &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_results&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A built-in &lt;strong&gt;deduplication engine&lt;/strong&gt; merges results pointing to the same URL across providers, re-ranks them by relevance, and returns a uniform schema.&lt;/p&gt;
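&lt;p&gt;The merge step is conceptually simple. Here is a rough Python sketch of the idea (illustrative only, with an assumed &lt;code&gt;url&lt;/code&gt; field name; the project's real logic lives in &lt;code&gt;merge.py&lt;/code&gt;):&lt;/p&gt;

```python
from urllib.parse import urlsplit

def normalize_url(url):
    """Strip scheme, "www.", and trailing slash so near-identical
    links from different providers compare equal."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    return f"{host}{parts.path.rstrip('/')}"

def merge_results(provider_results):
    """Merge per-provider result lists, keeping the first hit for each
    normalized URL and counting how many providers agreed on it."""
    merged = {}
    for results in provider_results:
        for item in results:
            key = normalize_url(item["url"])
            if key in merged:
                merged[key]["sources"] += 1  # agreement boosts relevance
            else:
                merged[key] = {**item, "sources": 1}
    # URLs surfaced by more providers rank higher.
    return sorted(merged.values(), key=lambda r: -r["sources"])
```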

&lt;h3&gt;
  
  
  2. Provider-Level Failure Isolation
&lt;/h3&gt;

&lt;p&gt;One engine times out or throws an error? The rest keep running. Each provider gets its own timeout budget (default 10s). Partial failures are handled gracefully — you still get results from the providers that responded.&lt;/p&gt;
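&lt;p&gt;The isolation pattern is easy to picture with &lt;code&gt;asyncio&lt;/code&gt;. This is a simplified sketch of the idea, not the orchestrator's actual code:&lt;/p&gt;

```python
import asyncio

async def query_provider(name, delay, fail=False):
    # Stand-in for a real provider call.
    await asyncio.sleep(delay)
    if fail:
        raise RuntimeError(f"{name} errored")
    return [f"{name}-result"]

async def search_all(providers, timeout=10.0):
    """Run every provider concurrently; a slow or failing provider
    only loses its own results, never the whole request."""
    async def guarded(spec):
        try:
            return await asyncio.wait_for(query_provider(**spec), timeout)
        except Exception:
            return []  # isolate the failure, keep partial results
    batches = await asyncio.gather(*(guarded(s) for s in providers))
    return [r for batch in batches for r in batch]
```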

&lt;h3&gt;
  
  
  3. Agent-Friendly Payload Control
&lt;/h3&gt;

&lt;p&gt;AI agents have limited context windows. MetaSearchMCP enforces &lt;code&gt;max_results_per_provider&lt;/code&gt; caps to prevent dumping entire pages of HTML into a prompt. The output is clean, structured, and sized for LLM consumption.&lt;/p&gt;
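&lt;p&gt;Conceptually, the cap is a trim pass over each provider's results. A minimal sketch (field and parameter names here are assumptions, not the project's actual schema):&lt;/p&gt;

```python
def cap_payload(results_by_provider, max_per_provider=5, snippet_chars=300):
    """Trim each provider's list and truncate snippets so the combined
    payload stays small enough for an LLM prompt."""
    capped = {}
    for provider, results in results_by_provider.items():
        capped[provider] = [
            {**r, "snippet": r.get("snippet", "")[:snippet_chars]}
            for r in results[:max_per_provider]
        ]
    return capped
```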

&lt;h3&gt;
  
  
  4. MCP-First Architecture
&lt;/h3&gt;

&lt;p&gt;Beyond the HTTP API, the core of the project is an &lt;strong&gt;MCP server&lt;/strong&gt; that exposes tools over stdio:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_web&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;General web search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_google&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Google-dedicated search chain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_academic&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Academic paper search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_github&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Code and repository search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;compare_engines&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Compare results across multiple engines&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This means you can say to Claude Desktop: &lt;em&gt;"Find me recent papers on RAG evaluation"&lt;/em&gt; — and the search happens inline, without leaving the conversation.&lt;/p&gt;
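&lt;p&gt;Under the hood, the client turns that request into a JSON-RPC &lt;code&gt;tools/call&lt;/code&gt; message over stdio, as defined by the MCP specification. A minimal sketch of constructing such a message (the argument names are assumptions, not MetaSearchMCP's documented schema):&lt;/p&gt;

```python
import json

def make_tool_call(tool, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request per the MCP spec.
    The `arguments` payload is tool-specific; the keys used by a
    caller are illustrative assumptions here."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```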




&lt;h2&gt;
  
  
  Supported Search Providers
&lt;/h2&gt;

&lt;p&gt;MetaSearchMCP's provider coverage is impressively broad:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web Search&lt;/strong&gt;: Google (direct scraping / SerpBase / Serper), DuckDuckGo, Bing, Brave, Yahoo, Yandex, Baidu, Ecosia, Qwant, Startpage, Mojeek, Mwmbl&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer Search&lt;/strong&gt;: GitHub, GitLab, Stack Overflow, Hacker News, Reddit, npm, PyPI, crates.io, Docker Hub, Go Packages&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Academic Search&lt;/strong&gt;: arXiv, PubMed, Semantic Scholar, CrossRef&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge Bases&lt;/strong&gt;: Wikipedia, Wikidata, Internet Archive, Open Library&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Financial Data&lt;/strong&gt;: Yahoo Finance, Alpha Vantage, Finnhub&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/gefsikatsinelou/MetaSearchMCP
&lt;span class="nb"&gt;cd &lt;/span&gt;MetaSearchMCP
python scripts/install.py &lt;span class="nt"&gt;--dev&lt;/span&gt; &lt;span class="nt"&gt;--test&lt;/span&gt; &lt;span class="nt"&gt;--run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run the HTTP API
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; metasearchmcp.server
&lt;span class="c"&gt;# Default: localhost:8000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure Providers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SERPBASE_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_key"&lt;/span&gt;    &lt;span class="c"&gt;# Preferred for Google search chain&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;BRAVE_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_key"&lt;/span&gt;       &lt;span class="c"&gt;# Fallback web search&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_token"&lt;/span&gt;      &lt;span class="c"&gt;# Developer search&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ENABLED_PROVIDERS&lt;/code&gt; env var gives you whitelist control over which engines are active.&lt;/p&gt;
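&lt;p&gt;Assuming a comma-separated value (e.g. &lt;code&gt;ENABLED_PROVIDERS="duckduckgo,github,arxiv"&lt;/code&gt;, a format I am inferring, not quoting from the docs), a whitelist filter might look like this sketch:&lt;/p&gt;

```python
import os

def enabled_providers(available, default=("duckduckgo",)):
    """Filter the available provider list down to the whitelist in
    ENABLED_PROVIDERS (assumed comma-separated, case-insensitive)."""
    raw = os.environ.get("ENABLED_PROVIDERS", "")
    wanted = {p.strip().lower() for p in raw.split(",") if p.strip()}
    if not wanted:
        return list(default)  # unset means fall back to a default set
    return [p for p in available if p in wanted]
```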

&lt;h3&gt;
  
  
  MCP Integration (Claude Desktop)
&lt;/h3&gt;

&lt;p&gt;Add to &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"metasearch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-m"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metasearchmcp.broker"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Architecture at a Glance
&lt;/h2&gt;

&lt;p&gt;The project is modular and lean:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Module&lt;/th&gt;
&lt;th&gt;Responsibility&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;contracts.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pydantic request/response models, unified schema&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;catalog.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Provider discovery and selection (by name or semantic tag)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;orchestrator.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Concurrent execution, response assembly, timeout handling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;merge.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;URL normalization and cross-engine deduplication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;server.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;FastAPI HTTP entrypoint&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;broker.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MCP stdio entrypoint&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No unnecessary abstraction layers. Each module does exactly one thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  When to Use It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI Research Agents&lt;/strong&gt;: Autonomously retrieve papers, code, and documentation to generate literature reviews&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Pipelines&lt;/strong&gt;: Replace a single search engine with diversified context sources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competitive Monitoring&lt;/strong&gt;: Track keywords and content changes across multiple engines simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer Tooling&lt;/strong&gt;: Search Stack Overflow and GitHub inline from your IDE via MCP&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;The author's public todo list includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Caching layer&lt;/strong&gt; with provider-aware query deduplication (reduce redundant API spend)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-provider ranking signals&lt;/strong&gt; — not just deduplication, but weighted relevance scoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streaming aggregation responses&lt;/strong&gt; (SSE push for better agent UX)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provider health telemetry&lt;/strong&gt; (automatically downgrade unstable engines)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;More first-party API integrations&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;MetaSearchMCP solves a real but rarely discussed problem: &lt;strong&gt;what search infrastructure should AI agents actually use?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's not another SearXNG clone. It's a ground-up redesign — MCP-native, structured output, concurrent aggregation, failure isolation. If you're building any system that needs to retrieve information autonomously, this belongs on your evaluation shortlist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project&lt;/strong&gt;: &lt;a href="https://github.com/gefsikatsinelou/MetaSearchMCP" rel="noopener noreferrer"&gt;github.com/gefsikatsinelou/MetaSearchMCP&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;SerpBase is one of the Google search providers built into MetaSearchMCP, delivering structured JSON SERP data.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>serp</category>
      <category>search</category>
    </item>
    <item>
      <title>I Built a Fully Automated SEO Monitoring Stack for $5.30/Month with n8n and SerpBase</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Mon, 04 May 2026 17:48:48 +0000</pubDate>
      <link>https://dev.to/serpbase/i-built-a-fully-automated-seo-monitoring-stack-for-530month-with-n8n-and-serpbase-3e42</link>
      <guid>https://dev.to/serpbase/i-built-a-fully-automated-seo-monitoring-stack-for-530month-with-n8n-and-serpbase-3e42</guid>
      <description>&lt;p&gt;I used to pay $119/month for an SEO rank tracking tool. It worked fine. Every morning I got an email showing where my pages sat for 30 target keywords. But here is what bothered me: I was not paying for sophistication. I was paying for a cron job wrapped in a nice UI and a database I could not access.&lt;/p&gt;

&lt;p&gt;So I tore it down and rebuilt it myself. The new stack costs $5.30 per month, sends me richer alerts, and stores everything in my own Postgres database. The two ingredients are n8n (self-hosted workflow automation) and SerpBase (the Google SERP API that charges $0.30 per 1,000 searches).&lt;/p&gt;

&lt;p&gt;This post is a walkthrough of exactly what I built, what it costs, and where it falls short.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with all-in-one SEO tools
&lt;/h2&gt;

&lt;p&gt;Rank trackers like Ahrefs, SEMrush, or even specialized tools like AccuRanker are excellent for agencies. They offer historical trends, competitor landscapes, backlink analysis, and slick dashboards. But if you are an indie hacker or a small SaaS founder, you are often paying for features, 90% of which you never open.&lt;/p&gt;


&lt;p&gt;I only needed three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Daily position checks for 150 keywords across 3 countries.&lt;/li&gt;
&lt;li&gt;An alert when my ranking drops by 3+ positions or a competitor enters the top 10.&lt;/li&gt;
&lt;li&gt;A cheap way to store and query that data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The tool I was using charged $119/month for 500 keywords. I was using 150. Wasteful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The new stack: n8n + SerpBase + Postgres
&lt;/h2&gt;

&lt;p&gt;Here is the architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;n8n runs on a $5/month Hetzner VPS (1 vCPU, 2GB RAM). It handles scheduling, logic, and notifications.&lt;/li&gt;
&lt;li&gt;SerpBase provides the search results. I use their Starter Boost ($3 for 10,000 searches) plus a $10 Starter pack (20,000 searches, never expires).&lt;/li&gt;
&lt;li&gt;Postgres stores keyword definitions, daily snapshots, and position history. I run it on the same VPS via Docker Compose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total monthly cost: $5.30 for the server. The search credits are prepaid and consumed irregularly. At 150 keywords × 30 days = 4,500 searches/month, a $3 Starter Boost covers two months. Even at the regular $0.50/1k rate, we are talking $2.25/month.&lt;/p&gt;
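&lt;p&gt;The arithmetic is easy to sanity-check:&lt;/p&gt;

```python
# Verify the back-of-envelope numbers from the paragraph above.
keywords, days = 150, 30
monthly_searches = keywords * days        # 4,500 searches/month
boost_rate = 3 / 10_000                   # Starter Boost: $3 per 10k searches
regular_rate = 0.50 / 1_000               # regular rate: $0.50 per 1k

print(monthly_searches)                             # 4500
print(round(monthly_searches * boost_rate, 2))      # 1.35 (dollars/month)
print(round(monthly_searches * regular_rate, 2))    # 2.25 (dollars/month)
print(10_000 // monthly_searches)                   # 2 full months per Boost
```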

&lt;h2&gt;
  
  
  Building the workflow step by step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Define keywords in Postgres
&lt;/h3&gt;

&lt;p&gt;I created a simple table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;keywords&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;keyword&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;country&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="s1"&gt;'us'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;language&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="s1"&gt;'en'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;target_url&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;alert_threshold&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I seeded it with my 150 keywords. &lt;code&gt;target_url&lt;/code&gt; is the page I want to track (e.g., my pricing page), and &lt;code&gt;alert_threshold&lt;/code&gt; is how many positions a drop must exceed before I get notified.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: The n8n workflow
&lt;/h3&gt;

&lt;p&gt;The workflow triggers every day at 6 AM UTC. Here is the node chain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Schedule Trigger&lt;/strong&gt;: Cron expression &lt;code&gt;0 6 * * *&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Postgres node&lt;/strong&gt;: &lt;code&gt;SELECT * FROM keywords&lt;/code&gt;. Returns all 150 rows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Split In Batches&lt;/strong&gt;: Processes 10 keywords per batch. I learned the hard way that firing 150 concurrent HTTP requests to SerpBase trips the rate limiter (error code 1029). Batching is essential.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP Request node (SerpBase)&lt;/strong&gt;: This is the core. SerpBase uses a POST endpoint, not GET. For each keyword, I call:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;   POST https://api.serpbase.dev/google/search
   Content-Type: application/json
   X-API-Key: {{ $env.SERPBASE_API_KEY }}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The body is JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"q"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{ $json.keyword }}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"gl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{ $json.country }}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"hl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{ $json.language }}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"page"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response time averages 1.4 seconds. I set a 10-second timeout and let n8n retry twice on failure. One important detail: SerpBase returns HTTP 200 even on some errors. You must check the &lt;code&gt;status&lt;/code&gt; field in the JSON body. &lt;code&gt;status: 0&lt;/code&gt; means success. &lt;code&gt;1020&lt;/code&gt; means you are out of credits.&lt;/p&gt;
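&lt;p&gt;A small guard function captures this pattern. This is my own sketch of the check, not SerpBase's client code; &lt;code&gt;0&lt;/code&gt; and &lt;code&gt;1020&lt;/code&gt; are the only codes described above, so everything else is treated as a generic failure:&lt;/p&gt;

```python
class SerpBaseError(Exception):
    """Raised when the JSON body signals an error despite HTTP 200."""

def check_response(body):
    """Fail fast on in-body error codes: the HTTP layer can say 200
    while the `status` field in the JSON signals a problem."""
    status = body.get("status")
    if status == 0:
        return body  # success: safe to read the organic results
    if status == 1020:
        raise SerpBaseError("out of credits (status 1020)")
    raise SerpBaseError(f"request failed (status {status})")
```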

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IF node (Validate response)&lt;/strong&gt;: Checks &lt;code&gt;{{ $json.status }} === 0&lt;/code&gt;. If not, I log the error and skip to the next batch. Early on I had a bug where I was burning credits on malformed requests and did not notice because the HTTP status was 200.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code node (Position extraction)&lt;/strong&gt;: I parse the organic array to find where my target_url appears. If it is not in the top 100, position is recorded as 0.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;organic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;first&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;organic&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;first&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target_url&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;organic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;link&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;link&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rank&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pageTitle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;organic&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]?.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no results&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;json&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page_title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;pageTitle&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the field names from SerpBase: &lt;code&gt;rank&lt;/code&gt; (not &lt;code&gt;position&lt;/code&gt;) and &lt;code&gt;link&lt;/code&gt; (not &lt;code&gt;url&lt;/code&gt;). I got this wrong in my first draft and wondered why all positions were zero.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;
&lt;strong&gt;Postgres node (Insert snapshot)&lt;/strong&gt;: Store the result.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;   &lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;rankings&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;keyword_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;checked_at&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;({{&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="p"&gt;}},&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;position&lt;/span&gt; &lt;span class="p"&gt;}},&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;
&lt;strong&gt;Postgres node (Compare to yesterday)&lt;/strong&gt;: A query finds the previous day's position for the same keyword.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;   &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;position&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;rankings&lt;/span&gt;
   &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;keyword_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
   &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;checked_at&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt; &lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;OFFSET&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="9"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IF node (Alert logic)&lt;/strong&gt;: If the drop is &amp;gt;= threshold, or if a competitor URL (defined in a separate competitors table) appears in the top 10, route to alert.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Telegram node&lt;/strong&gt;: Sends me a message like:&lt;br&gt;
"ALERT: 'affordable serp api' dropped from #4 to #8 (US). Competitor serpapi.com now at #3."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Merge node&lt;/strong&gt;: Recombines batches. The workflow ends silently if nothing is wrong.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
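&lt;p&gt;The drop check boils down to a small function. This sketch handles the &lt;code&gt;position = 0&lt;/code&gt; ("not in the top 100") case explicitly; the competitor check against the separate table is omitted:&lt;/p&gt;

```python
def should_alert(previous, current, threshold=3):
    """Decide whether a rank change warrants a Telegram ping.
    Position 0 means "not in the top 100", so falling out of the
    results is treated as the worst possible drop."""
    NOT_RANKED = 101  # pseudo-position just past the tracked top 100
    prev = previous if previous else NOT_RANKED
    curr = current if current else NOT_RANKED
    # Higher position number = worse rank, so a positive delta is a drop.
    return (curr - prev) >= threshold
```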

&lt;h3&gt;
  
  
  Step 3: The dashboard (optional)
&lt;/h3&gt;

&lt;p&gt;I did not build a dashboard. I query directly with psql or Metabase (which I already run for other projects). A typical query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;keyword&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;checked_at&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;rankings&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;
&lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;keywords&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keyword_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;checked_at&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'7 days'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;keyword&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;checked_at&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If I need a chart, I paste the CSV into Google Sheets. Crude, but it takes 30 seconds and costs $0.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this stack actually costs in practice
&lt;/h2&gt;

&lt;p&gt;Month 1 (setup + heavy testing):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPS: $5.35&lt;/li&gt;
&lt;li&gt;SerpBase Starter Boost: $3 (10,000 searches)&lt;/li&gt;
&lt;li&gt;SerpBase Starter pack: $10 (20,000 searches, never expires)&lt;/li&gt;
&lt;li&gt;Total: $18.35&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Month 2 (steady state, 4,500 searches):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPS: $5.35&lt;/li&gt;
&lt;li&gt;SerpBase: $0 (used remaining credits)&lt;/li&gt;
&lt;li&gt;Total: $5.35&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Month 3 (launched a landing page, expanded to 220 keywords):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPS: $5.35&lt;/li&gt;
&lt;li&gt;SerpBase: $3 (bought another Starter Boost)&lt;/li&gt;
&lt;li&gt;Total: $8.35&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three-month average: $10.68/month. The old tool was $119/month. Over a year, the difference is roughly $1,300. That is a month of runway for a bootstrapped product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where SerpBase specifically shines in this setup
&lt;/h2&gt;

&lt;p&gt;I tested three SERP APIs before settling on SerpBase. Here is why it won:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Price without traps.&lt;/strong&gt; Some APIs advertise low rates but require $50 minimum deposits or monthly subscriptions. SerpBase lets me buy $3 of credits and actually use them. No expiration on standard packs means I am not racing against a billing cycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Geolocation actually works.&lt;/strong&gt; I track keywords in the US, UK, and Australia. Setting &lt;code&gt;gl=us&lt;/code&gt; or &lt;code&gt;gl=gb&lt;/code&gt; in the POST body returns consistently localized results. With a previous provider, UK results sometimes bled in US listings, which made my position data useless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured JSON, no parsing.&lt;/strong&gt; I get organic results as an array of objects with &lt;code&gt;rank&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;link&lt;/code&gt;, &lt;code&gt;display_link&lt;/code&gt;, &lt;code&gt;snippet&lt;/code&gt;. No regex against HTML. No headless browser to maintain. The n8n Code node extracts my target URL in four lines of JavaScript.&lt;/p&gt;
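&lt;p&gt;The same extraction works outside n8n too. Here is a minimal Python sketch, assuming the response carries an organic-results array whose records use the &lt;code&gt;rank&lt;/code&gt; and &lt;code&gt;link&lt;/code&gt; fields described above (the exact array key and record shape are my assumptions; check your own responses):&lt;/p&gt;

```python
def find_position(results, target_url):
    """Return the rank of the first organic result whose link
    contains target_url, or 0 if the page is not ranking.
    Field names (rank, link) follow the fields described above."""
    for item in results:
        if target_url in item.get("link", ""):
            return item.get("rank", 0)
    return 0

# Example payload shaped like the fields described above.
organic = [
    {"rank": 1, "title": "Competitor", "link": "https://other.example.com/"},
    {"rank": 2, "title": "My page", "link": "https://mysite.example.com/pricing"},
]
print(find_position(organic, "mysite.example.com"))  # prints 2
```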

&lt;p&gt;&lt;strong&gt;Resilience.&lt;/strong&gt; I have not had a single request fail due to CAPTCHA or bot detection in four months. SerpBase handles session rotation internally. I used to run my own proxy pool for scraping; that alone was $40/month and a maintenance nightmare. Killing it was almost as satisfying as canceling the rank tracker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear status codes.&lt;/strong&gt; The &lt;code&gt;status&lt;/code&gt; field in the response body tells you exactly what went wrong. &lt;code&gt;0&lt;/code&gt; is success. &lt;code&gt;1020&lt;/code&gt; is out of credits. &lt;code&gt;1029&lt;/code&gt; is rate limited. &lt;code&gt;1502&lt;/code&gt; is an upstream parsing error. This beats guessing from HTTP status codes alone.&lt;/p&gt;
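&lt;p&gt;A small dispatcher over those codes keeps the workflow logic readable. The codes below are the ones mentioned in this post; the retry/stop policy attached to each is my own convention, not SerpBase documentation:&lt;/p&gt;

```python
# Status codes mentioned in this post; the action policy is an assumption.
STATUS_ACTIONS = {
    0: "ok",
    1020: "stop",      # out of credits: refill before retrying
    1029: "backoff",   # rate limited: wait, then retry
    1502: "retry",     # upstream parsing error, usually transient
    1504: "retry",     # timeout
}

def classify(status_code):
    """Map a SerpBase status field to a coarse next action."""
    return STATUS_ACTIONS.get(status_code, "investigate")

print(classify(0))      # prints ok
print(classify(1029))   # prints backoff
```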

&lt;h2&gt;
  
  
  The honest downsides of this DIY approach
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You build it, you own it.&lt;/strong&gt; When a keyword returns position 0 unexpectedly, I debug it. Usually it is a SerpBase timeout (status 1504) or a malformed URL in my target_url field. There is no support ticket to open. I check n8n execution logs, inspect the JSON, fix the data, and rerun the node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature gap vs. enterprise tools.&lt;/strong&gt; I do not get SERP feature tracking (featured snippets, local packs, image carousels) unless I write the parsing logic myself. I do not get competitor traffic estimates or keyword difficulty scores. If you need those, pay for Ahrefs. This stack is for people who just want position data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;n8n has a learning curve.&lt;/strong&gt; My first version of this workflow did not batch requests. It fired 150 parallel HTTP calls and SerpBase rightfully throttled me with status 1029. Fixing it meant learning how Split In Batches works. Took 40 minutes. Zapier would have handled rate limits automatically. Self-hosting means self-solving.&lt;/p&gt;
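&lt;p&gt;n8n's Split In Batches node handles this inside the workflow. Outside n8n, the same throttling idea is a short loop; batch size and pause below are illustrative, not SerpBase limits:&lt;/p&gt;

```python
import time

def run_in_batches(keywords, send, batch_size=10, pause_s=1.0):
    """Send keywords in small batches with a pause between them,
    instead of firing every request at once and getting throttled."""
    results = []
    for start in range(0, len(keywords), batch_size):
        batch = keywords[start:start + batch_size]
        results.extend(send(kw) for kw in batch)
        time.sleep(pause_s)
    return results

# Demo with a stub sender in place of a real HTTP call.
sent = run_in_batches([f"kw{i}" for i in range(25)], send=lambda kw: kw,
                      batch_size=10, pause_s=0.0)
print(len(sent))  # prints 25
```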

&lt;p&gt;&lt;strong&gt;SerpBase Starter Boost expires.&lt;/strong&gt; The $3/10k pack is monthly. Forget to use it, and it is gone. I set a calendar reminder. The regular packs never expire, so I keep a $10 buffer and buy Boosts when I know I will use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should actually build this?
&lt;/h2&gt;

&lt;p&gt;If you are an agency managing 50 client sites, buy a proper rank tracker. The reporting features and white-label dashboards are worth it.&lt;/p&gt;

&lt;p&gt;If you are an indie hacker, a technical founder, or a developer who wants to track 50–500 keywords without a recurring subscription, this stack is ideal. It costs under $10/month, stores data where you control it, and scales by adding prepaid credits rather than upgrading pricing tiers.&lt;/p&gt;

&lt;p&gt;The real unlock is not just the money saved. It is the composability. Because SerpBase returns clean JSON and n8n can route that data anywhere, I have started using the same search data for other workflows: content brief generation, competitor page monitoring, and even a weekly "SERP features" audit that checks if my pages are winning featured snippets.&lt;/p&gt;

&lt;p&gt;One API. One workflow engine. Infinite combinations. That is the point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try the minimal version in 30 minutes
&lt;/h2&gt;

&lt;p&gt;If you want to test this without committing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up at SerpBase. Grab the 100 free searches. No credit card.&lt;/li&gt;
&lt;li&gt;Install n8n locally: &lt;code&gt;docker run -it --rm --name n8n -p 5678:5678 n8nio/n8n&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Build one workflow: Schedule → HTTP Request (SerpBase POST to &lt;code&gt;https://api.serpbase.dev/google/search&lt;/code&gt;) → Telegram (send yourself the top result).&lt;/li&gt;
&lt;li&gt;Run it once. Inspect the JSON. Check that &lt;code&gt;status&lt;/code&gt; is &lt;code&gt;0&lt;/code&gt;. Decide if you want more.&lt;/li&gt;
&lt;/ol&gt;
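&lt;p&gt;Step 3 can also be tested as a standalone script before wiring it into n8n. The endpoint comes from this post; the payload keys (&lt;code&gt;q&lt;/code&gt;, &lt;code&gt;gl&lt;/code&gt;, &lt;code&gt;hl&lt;/code&gt;) match the request examples SerpBase uses elsewhere, and the API key header name is an assumption, so check your dashboard for the real one:&lt;/p&gt;

```python
import json
import urllib.request

API_URL = "https://api.serpbase.dev/google/search"  # endpoint from this post

def build_request(query, api_key, gl="us", hl="en"):
    """Build the POST request. The 'X-API-Key' header name is an
    assumption; substitute whatever the dashboard specifies."""
    body = json.dumps({"q": query, "gl": gl, "hl": hl}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

req = build_request("site:example.com", api_key="YOUR_KEY")
print(req.get_full_url())
# A real run would be: resp = urllib.request.urlopen(req)
# then json.loads(resp.read()) and check data["status"] == 0.
```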

&lt;p&gt;That is how I started. One keyword, one notification, one evening. Six months later it runs 150 keywords silently every morning while I drink coffee.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>serpbase</category>
      <category>n8n</category>
    </item>
    <item>
      <title>Google Maps API for Lead Scoring: Search, Place Details, and Fresh Local Signals</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Sun, 03 May 2026 15:55:55 +0000</pubDate>
      <link>https://dev.to/serpbase/google-maps-api-for-lead-scoring-search-place-details-and-fresh-local-signals-52pd</link>
      <guid>https://dev.to/serpbase/google-maps-api-for-lead-scoring-search-place-details-and-fresh-local-signals-52pd</guid>
      <description>&lt;h1&gt;
  
  
  Google Maps API for Lead Scoring: Search, Place Details, and Fresh Local Signals
&lt;/h1&gt;

&lt;p&gt;Most Google Maps lead-generation guides stop at the same place: export a list of businesses, put it in a spreadsheet, and start outreach.&lt;/p&gt;

&lt;p&gt;That is a start, but it is not the part that creates an advantage. Raw exports usually contain duplicates, weak-fit businesses, closed locations, thin listings, and companies that are not worth contacting yet.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;, the goal is not just to give developers raw Google Maps data. The goal is to make Google Maps Search, Place Details, and broader Google search data easier to plug into real workflows.&lt;/p&gt;

&lt;p&gt;The real value starts when you turn local data into a scoring layer.&lt;/p&gt;

&lt;p&gt;A practical Google Maps API workflow should help you answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which businesses are visible but still under-optimized?&lt;/li&gt;
&lt;li&gt;Which locations have demand signals but weak local presence?&lt;/li&gt;
&lt;li&gt;Which companies have enough reviews to care about reputation, but not enough to dominate?&lt;/li&gt;
&lt;li&gt;Which listings have missing websites, weak categories, stale reviews, or inconsistent local context?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why Maps Search and Place Details are stronger together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start with Maps Search
&lt;/h2&gt;

&lt;p&gt;Maps Search is the discovery layer. You give it a query and location, and it returns matching local businesses in a structured format.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"q"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dentists in Austin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"gl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From this first step, a developer usually wants fields such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;business name&lt;/li&gt;
&lt;li&gt;address&lt;/li&gt;
&lt;li&gt;category&lt;/li&gt;
&lt;li&gt;rating&lt;/li&gt;
&lt;li&gt;review count&lt;/li&gt;
&lt;li&gt;coordinates&lt;/li&gt;
&lt;li&gt;place id&lt;/li&gt;
&lt;li&gt;business status&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is enough to build a first-pass candidate list, but it is not enough to decide which leads deserve attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; is designed for this kind of workflow: request Maps Search data, parse clean JSON, and feed the results into your own scoring, enrichment, or automation layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Place Details for Stronger Signals
&lt;/h2&gt;

&lt;p&gt;Place Details is where the candidate list becomes useful.&lt;/p&gt;

&lt;p&gt;A search result tells you that a business exists. A detail request tells you what kind of opportunity it might be.&lt;/p&gt;

&lt;p&gt;Depending on your workflow, Place Details can help you capture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;phone number&lt;/li&gt;
&lt;li&gt;website&lt;/li&gt;
&lt;li&gt;opening hours&lt;/li&gt;
&lt;li&gt;full address&lt;/li&gt;
&lt;li&gt;photos&lt;/li&gt;
&lt;li&gt;review metadata&lt;/li&gt;
&lt;li&gt;coordinates&lt;/li&gt;
&lt;li&gt;categories&lt;/li&gt;
&lt;li&gt;business status&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For local SEO, these fields are not just contact data. They are signals.&lt;/p&gt;

&lt;p&gt;A business with no website may be a web design opportunity. A business with a high rating and low review count may need reputation growth. A business with an outdated or incomplete listing may need local SEO cleanup.&lt;/p&gt;

&lt;p&gt;The point is simple: a good Maps API workflow should not only collect businesses. It should help rank them.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; Place Details support, developers can move from broad discovery to deeper local context without stitching together a separate manual process.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Simple Lead Scoring Model
&lt;/h2&gt;

&lt;p&gt;You do not need machine learning to make Maps data useful. A basic scoring model is often enough for agencies, sales teams, and internal tools.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+20 points: business has no website
+15 points: rating above 4.2
+10 points: review count between 10 and 80
+10 points: category matches your ideal customer profile
+10 points: business is open
+5 points: phone number is available
-20 points: business is permanently closed
-10 points: review count above 1,000 if you only target smaller operators
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The exact weights should change by niche. A dental SEO agency, a SaaS selling booking software, and a B2B research product will not score the same business in the same way.&lt;/p&gt;
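&lt;p&gt;The example weights above translate directly into a function. Field names here (&lt;code&gt;website&lt;/code&gt;, &lt;code&gt;rating&lt;/code&gt;, &lt;code&gt;reviews&lt;/code&gt;, &lt;code&gt;category&lt;/code&gt;, &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;phone&lt;/code&gt;) are illustrative placeholders; map them to whatever your normalized Place Details records actually use:&lt;/p&gt;

```python
def score_lead(biz, target_categories, target_small=True):
    """Apply the example weights above to one normalized record.
    Field names are illustrative, not a fixed API schema."""
    score = 0
    reviews = biz.get("reviews", 0)
    if not biz.get("website"):
        score += 20
    if biz.get("rating", 0) > 4.2:
        score += 15
    if 80 >= reviews >= 10:
        score += 10
    if biz.get("category") in target_categories:
        score += 10
    if biz.get("status") == "open":
        score += 10
    if biz.get("phone"):
        score += 5
    if biz.get("status") == "closed":
        score -= 20
    if target_small and reviews > 1000:
        score -= 10
    return score

biz = {"website": "", "rating": 4.6, "reviews": 42,
       "category": "dentist", "status": "open", "phone": "+1-512-555-0100"}
print(score_lead(biz, {"dentist"}))  # prints 70
```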

&lt;p&gt;But the workflow stays the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for businesses by keyword and location.&lt;/li&gt;
&lt;li&gt;Fetch place details for each candidate.&lt;/li&gt;
&lt;li&gt;Normalize the data.&lt;/li&gt;
&lt;li&gt;Score each business against your ICP.&lt;/li&gt;
&lt;li&gt;Deduplicate by place id, phone, website, or address.&lt;/li&gt;
&lt;li&gt;Push the best records into a CRM, spreadsheet, n8n workflow, or outreach queue.&lt;/li&gt;
&lt;/ol&gt;
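&lt;p&gt;Step 5 above can key on the first stable identifier a record carries. A sketch, assuming records expose &lt;code&gt;place_id&lt;/code&gt;, &lt;code&gt;phone&lt;/code&gt;, &lt;code&gt;website&lt;/code&gt;, and &lt;code&gt;address&lt;/code&gt; fields after normalization:&lt;/p&gt;

```python
def dedupe(records):
    """Keep the first record per identity key: place_id first,
    then phone, website, and full address as fallbacks."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec.get("place_id") or rec.get("phone")
               or rec.get("website") or rec.get("address"))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

rows = [
    {"place_id": "abc123", "name": "Smile Dental"},
    {"place_id": "abc123", "name": "Smile Dental Austin"},  # same place
    {"place_id": "xyz789", "name": "Bright Teeth"},
]
print(len(dedupe(rows)))  # prints 2
```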

&lt;p&gt;&lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; fits into the first two steps: Maps Search for discovery, then Place Details for enrichment. The scoring logic stays fully under your control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fresh Local Signals Matter
&lt;/h2&gt;

&lt;p&gt;Local data changes faster than many teams expect.&lt;/p&gt;

&lt;p&gt;Businesses open, close, move, update hours, collect reviews, change categories, and launch new websites. That means a one-time export gets stale quickly.&lt;/p&gt;

&lt;p&gt;A stronger system reruns the same searches on a schedule and tracks changes over time.&lt;/p&gt;

&lt;p&gt;Useful change signals include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;new businesses entering a category and city&lt;/li&gt;
&lt;li&gt;review count growth&lt;/li&gt;
&lt;li&gt;rating drops&lt;/li&gt;
&lt;li&gt;business status changes&lt;/li&gt;
&lt;li&gt;website field added or removed&lt;/li&gt;
&lt;li&gt;category changes&lt;/li&gt;
&lt;li&gt;new competitors appearing in a local market&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For local SEO and lead generation, these changes are often more valuable than the raw record itself.&lt;/p&gt;

&lt;p&gt;A newly opened business, a listing that just added a website, or a company with fast review growth can be a better lead than a static business that has looked the same for years.&lt;/p&gt;

&lt;p&gt;Because &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; uses a simple API model, this kind of scheduled refresh can be handled by a cron job, background worker, n8n workflow, or internal data pipeline.&lt;/p&gt;
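&lt;p&gt;The change signals listed above fall out of comparing two scheduled snapshots keyed by place id. A minimal sketch, using the same illustrative field names as before:&lt;/p&gt;

```python
def diff_snapshots(old, new):
    """Compare two snapshots keyed by place_id and flag the change
    signals listed above: new listing, review growth, status or
    website changes. Field names are illustrative."""
    changes = []
    for pid, rec in new.items():
        prev = old.get(pid)
        if prev is None:
            changes.append((pid, "new_business"))
            continue
        if rec.get("reviews", 0) > prev.get("reviews", 0):
            changes.append((pid, "review_growth"))
        if rec.get("status") != prev.get("status"):
            changes.append((pid, "status_change"))
        if bool(rec.get("website")) != bool(prev.get("website")):
            changes.append((pid, "website_change"))
    return changes

old = {"a1": {"reviews": 10, "status": "open", "website": ""}}
new = {"a1": {"reviews": 14, "status": "open", "website": "https://example.com"},
       "b2": {"reviews": 3, "status": "open", "website": ""}}
print(diff_snapshots(old, new))
```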

&lt;h2&gt;
  
  
  Where Google Search Still Fits
&lt;/h2&gt;

&lt;p&gt;Maps data is strongest for local entities, but it should not live alone.&lt;/p&gt;

&lt;p&gt;For many workflows, you can combine Maps with Google Search data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maps Search finds the local businesses.&lt;/li&gt;
&lt;li&gt;Place Details enriches each business.&lt;/li&gt;
&lt;li&gt;Google Search checks whether the business ranks for important local queries.&lt;/li&gt;
&lt;li&gt;News or web results help detect recent activity around the company or market.&lt;/li&gt;
&lt;li&gt;Images and videos can support visual checks for certain verticals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; becomes more useful than a single-purpose Maps tool. It supports broader Google data workflows across Search, Maps, News, Images, and Videos, so developers can build one pipeline around the data they actually need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Matters When Workflows Run Every Day
&lt;/h2&gt;

&lt;p&gt;Lead scoring sounds cheap when you test ten records. It gets expensive when you run hundreds of keywords across dozens of cities and refresh the data every week.&lt;/p&gt;

&lt;p&gt;That is why cost, latency, and reliability matter as product features, not just vendor claims.&lt;/p&gt;

&lt;p&gt;A production workflow needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;low enough pricing to run repeatable searches&lt;/li&gt;
&lt;li&gt;stable response times for automation jobs&lt;/li&gt;
&lt;li&gt;structured JSON that does not need browser parsing&lt;/li&gt;
&lt;li&gt;credits that do not disappear before a project is ready&lt;/li&gt;
&lt;li&gt;auto top-up when usage becomes steady&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; is built around that model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;low-cost Google data APIs&lt;/li&gt;
&lt;li&gt;fast and stable responses&lt;/li&gt;
&lt;li&gt;non-expiring standard credits&lt;/li&gt;
&lt;li&gt;auto top-up support&lt;/li&gt;
&lt;li&gt;Google Search, Maps, News, Images, and Videos APIs in one place&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For lead scoring workflows, that matters because usage is rarely perfectly predictable. Some teams test a few cities. Others refresh thousands of local records every week. A pricing model with non-expiring credits and auto top-up is easier to fit into both cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Developer Workflow
&lt;/h2&gt;

&lt;p&gt;A simple local lead intelligence pipeline can look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keyword + city
  -&amp;gt; SerpBase Maps Search
  -&amp;gt; SerpBase Place Details
  -&amp;gt; lead scoring
  -&amp;gt; dedupe
  -&amp;gt; enrichment
  -&amp;gt; CRM or n8n workflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For an agency, this can power weekly prospect lists. For a SaaS product, it can feed an onboarding flow or local market dashboard. For an AI agent, it can provide live local context instead of guessing from old training data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Takeaway
&lt;/h2&gt;

&lt;p&gt;Google Maps data is not just a list source. Used well, it becomes a local intelligence layer.&lt;/p&gt;

&lt;p&gt;Raw Maps exports tell you who exists. Maps Search plus Place Details can help you understand who is worth prioritizing, why they matter, and what changed since the last time you checked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; gives developers a practical way to build that workflow with Google Maps Search, Place Details, and broader Google data APIs in one stack.&lt;/p&gt;

&lt;p&gt;If you are building local SEO tools, B2B lead generation systems, market research dashboards, or AI agents that need local context, start with the scoring model. The API is only the collection layer. The advantage comes from how you turn fresh local signals into decisions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>SERP API Mega Comparison: Why SerpBase Should Be on Your Shortlist</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Thu, 23 Apr 2026 15:48:17 +0000</pubDate>
      <link>https://dev.to/serpbase/serp-api-mega-comparison-why-serpbase-should-be-on-your-shortlist-20lg</link>
      <guid>https://dev.to/serpbase/serp-api-mega-comparison-why-serpbase-should-be-on-your-shortlist-20lg</guid>
      <description>&lt;h2&gt;
  
  
  Short Verdict
&lt;/h2&gt;

&lt;p&gt;If you need a search results API for AI agents, SEO tools, SERP monitoring, or data pipelines, SerpBase is highly competitive: &lt;strong&gt;$0.30-$0.50 per 1,000 requests with 1.4s average latency&lt;/strong&gt;. Regular SerpBase credits never expire, and the entry package is &lt;strong&gt;$3 for 10,000 requests&lt;/strong&gt;, valid for one month. Among the mainstream SERP APIs covered here, SerpBase sits in the lowest pricing tier while still offering latency suitable for real-time product experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Starting Cost&lt;/th&gt;
&lt;th&gt;Estimated SERP Price&lt;/th&gt;
&lt;th&gt;Free / Entry Allowance&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;API&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;a href="https://serpbase.dev" rel="noopener noreferrer"&gt;&lt;strong&gt;SerpBase&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;PAYG; $3 entry package&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.30-$0.50 / 1k&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$3 / 10,000 requests, entry package valid for 1 month; regular credits never expire&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.4s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Benchmark option: low price, low latency, and non-expiring regular credits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;a href="https://serper.dev/" rel="noopener noreferrer"&gt;Serper.dev&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $50 top-up&lt;/td&gt;
&lt;td&gt;$0.30-$1.00 / 1k&lt;/td&gt;
&lt;td&gt;2,500 free queries; 50k credits entry package&lt;/td&gt;
&lt;td&gt;1.2s&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Very strong latency; lowest price usually requires a large credit package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;&lt;a href="https://dataforseo.com/apis/serp-api/pricing" rel="noopener noreferrer"&gt;DataForSEO&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;$50 minimum deposit&lt;/td&gt;
&lt;td&gt;$0.60-$2.00 / 1k&lt;/td&gt;
&lt;td&gt;$1 trial credit&lt;/td&gt;
&lt;td&gt;Not publicly standardized&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Cheap in queue mode; Live Mode costs more&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;a href="https://mapleserp.io/" rel="noopener noreferrer"&gt;MapleSerp&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;PAYG&lt;/td&gt;
&lt;td&gt;From $0.70 / 1k&lt;/td&gt;
&lt;td&gt;100 free credits&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Low-cost newer option, with less public operating history&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;a href="https://avesapi.com/pricing/" rel="noopener noreferrer"&gt;AvesAPI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $50&lt;/td&gt;
&lt;td&gt;$0.70-$2.00 / 1k&lt;/td&gt;
&lt;td&gt;1k free searches&lt;/td&gt;
&lt;td&gt;About 2s&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Pay per successful request; emphasizes Top-100 results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.serphouse.com/pricing" rel="noopener noreferrer"&gt;SERPHouse&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $29.99/mo&lt;/td&gt;
&lt;td&gt;About $0.75 / 1k&lt;/td&gt;
&lt;td&gt;400 free credits; 40k credits entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Google, Bing, and Yahoo support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;&lt;a href="https://hasdata.com/google-serp-api" rel="noopener noreferrer"&gt;HasData&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $49/mo&lt;/td&gt;
&lt;td&gt;From $0.83 / 1k&lt;/td&gt;
&lt;td&gt;100 free; 20k/mo entry plan&lt;/td&gt;
&lt;td&gt;1.75s&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Broad Google SERP, Maps, Images, and News coverage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;&lt;a href="https://serppost.com/pricing/" rel="noopener noreferrer"&gt;SerpPost&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $18&lt;/td&gt;
&lt;td&gt;From $0.90 / 1k&lt;/td&gt;
&lt;td&gt;20k credits entry package&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Credits valid for 6 months&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;&lt;a href="https://oxylabs.io/products/scraper-api/web/pricing" rel="noopener noreferrer"&gt;Oxylabs Web Scraper API&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free trial; from $49/mo&lt;/td&gt;
&lt;td&gt;Google from $0.90-$1.00 / 1k&lt;/td&gt;
&lt;td&gt;Up to 2,000 trial results&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;More of an enterprise data collection platform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;&lt;a href="https://brightdata.com/pricing/serp" rel="noopener noreferrer"&gt;Bright Data SERP API&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;PAYG&lt;/td&gt;
&lt;td&gt;$1.00-$1.50 / 1k&lt;/td&gt;
&lt;td&gt;Free trial&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Strong infrastructure; pay per successful request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.searchapi.io/pricing" rel="noopener noreferrer"&gt;SearchApi.io&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $40/mo&lt;/td&gt;
&lt;td&gt;$1.00-$4.00 / 1k&lt;/td&gt;
&lt;td&gt;100 free requests&lt;/td&gt;
&lt;td&gt;About 2s&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Pricing decreases with larger plans&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;&lt;a href="https://trajectdata.com/serp/value-serp-api/pricing/" rel="noopener noreferrer"&gt;Value SERP&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;PAYG; from $50/mo&lt;/td&gt;
&lt;td&gt;$1.00-$2.50 / 1k&lt;/td&gt;
&lt;td&gt;100 free searches&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Traject Data product; supports PAYG and monthly plans&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;&lt;a href="https://trajectdata.com/pricing/scaleserp-api" rel="noopener noreferrer"&gt;Scale SERP&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $23/mo&lt;/td&gt;
&lt;td&gt;About $1.00-$23 / 1k&lt;/td&gt;
&lt;td&gt;125 free/mo; 1k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Low-volume plans have high effective unit costs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;&lt;a href="https://trajectdata.com/serp/serp-wow-api/" rel="noopener noreferrer"&gt;SerpWow&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $125/mo&lt;/td&gt;
&lt;td&gt;About $1.44-$12.50 / 1k&lt;/td&gt;
&lt;td&gt;Free trial&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Covers many Google verticals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.webscrapingapi.com/pricing/serp-api" rel="noopener noreferrer"&gt;WebScrapingAPI SERP&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free trial; from $28/mo&lt;/td&gt;
&lt;td&gt;$2.20-$2.80 / 1k&lt;/td&gt;
&lt;td&gt;100 trial; 10k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Dedicated Google SERP pricing page&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.scrapingbee.com/pricing/" rel="noopener noreferrer"&gt;ScrapingBee&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free trial; from $49/mo&lt;/td&gt;
&lt;td&gt;Google about $3-$4 / 1k&lt;/td&gt;
&lt;td&gt;1,000 free credits&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Google requests consume credits; better viewed as a general scraping API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;&lt;a href="https://zenserp.com/pricing-plans/" rel="noopener noreferrer"&gt;Zenserp&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $49.99/mo&lt;/td&gt;
&lt;td&gt;About $4.17-$10 / 1k&lt;/td&gt;
&lt;td&gt;50 free/mo; 5k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Broad SERP type coverage, but not cheap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;&lt;a href="https://serpapi.com/pricing" rel="noopener noreferrer"&gt;SerpApi&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $25/mo&lt;/td&gt;
&lt;td&gt;$5-$25 / 1k&lt;/td&gt;
&lt;td&gt;250 free/mo; 1k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Strong brand and coverage, but expensive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;&lt;a href="https://serpstack.com/product" rel="noopener noreferrer"&gt;Serpstack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free; from $29.99/mo&lt;/td&gt;
&lt;td&gt;About $6 / 1k&lt;/td&gt;
&lt;td&gt;100 free/mo; 5k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Older Google SERP API with a small free tier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;&lt;a href="https://spaceserp.com/pricing" rel="noopener noreferrer"&gt;SpaceSerp&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;From $14.99/mo&lt;/td&gt;
&lt;td&gt;About $8-$15 / 1k&lt;/td&gt;
&lt;td&gt;1k/mo entry plan&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Cheap entry price, high effective unit cost&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.scraperapi.com/solutions/structured-data/google-search-scraper/" rel="noopener noreferrer"&gt;ScraperAPI Google Search&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Free trial; from $49/mo&lt;/td&gt;
&lt;td&gt;About $12.25 / 1k Google SERP&lt;/td&gt;
&lt;td&gt;5,000 API credits trial&lt;/td&gt;
&lt;td&gt;Not public&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Google SERP costs about 25 credits per request&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. SerpBase combines low price with real-time latency
&lt;/h3&gt;

&lt;p&gt;Many SERP APIs are either cheap but queue-based, or fast but expensive. SerpBase offers a clearer combination: &lt;strong&gt;$0.30-$0.50 / 1k requests and 1.4s average latency&lt;/strong&gt;. That matters for AI agents, live search enrichment, SEO monitoring, and competitive intelligence workflows where every request cost compounds quickly. Its regular credits also never expire, which is useful for teams with uneven usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Serper.dev is the closest direct competitor
&lt;/h3&gt;

&lt;p&gt;Serper.dev can also reach &lt;strong&gt;$0.30 / 1k&lt;/strong&gt;, but that rate is tied to the Ultimate credit package. The lower entry package is &lt;strong&gt;$50 for 50k credits&lt;/strong&gt;, or &lt;strong&gt;$1.00 / 1k&lt;/strong&gt;. In off-peak tests, Serper.dev averaged around &lt;strong&gt;1.2s&lt;/strong&gt;, which is excellent. For smaller or mid-volume teams, however, SerpBase's &lt;strong&gt;$3 for 10,000 requests&lt;/strong&gt; entry package is easier to test, and its &lt;strong&gt;$0.30-$0.50 / 1k&lt;/strong&gt; pricing range is easier to evaluate.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. DataForSEO is inexpensive, but the latency model matters
&lt;/h3&gt;

&lt;p&gt;DataForSEO's Standard Queue can be very cheap at &lt;strong&gt;$0.60 / 1k&lt;/strong&gt;, but it is queue-based. The Live Mode price is closer to &lt;strong&gt;$2.00 / 1k&lt;/strong&gt;. If your product needs immediate responses after a user action, comparing only the lowest queue price can be misleading.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. General scraping platforms are not always the cleanest SERP API choice
&lt;/h3&gt;

&lt;p&gt;Oxylabs, Bright Data, ScraperAPI, ScrapingBee, and similar vendors have strong infrastructure, but they are built for broader web scraping use cases. For pure SERP data, their pricing can become harder to predict because Google requests may consume multiple credits or vary by proxy, rendering, geo-targeting, or parsing requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Established brands are stable, but often expensive
&lt;/h3&gt;

&lt;p&gt;SerpApi, Serpstack, Zenserp, and SpaceSerp may work well for teams that value brand history, coverage, and mature documentation. But if the main metric is cost per 1,000 SERP requests, they are usually not the best value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendation
&lt;/h2&gt;

&lt;p&gt;SerpBase should be near the top of the shortlist if you are building:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI agents that need live search results as context&lt;/li&gt;
&lt;li&gt;SEO tools that monitor keyword rankings at scale&lt;/li&gt;
&lt;li&gt;RAG or market research pipelines that continuously collect SERP data&lt;/li&gt;
&lt;li&gt;Products that need Google organic results, news, maps, or shopping data&lt;/li&gt;
&lt;li&gt;Systems where predictable API cost matters more than complex credit multipliers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this comparison, SerpBase lands in the top pricing tier at &lt;strong&gt;$0.30-$0.50 per 1,000 requests&lt;/strong&gt;. Its &lt;strong&gt;1.4s average latency&lt;/strong&gt; is also fast enough for real-time products. Since these latency numbers come from off-peak tests or public references, peak-hour latency may vary, but the overall value proposition remains clear: SerpBase combines low cost, practical latency, non-expiring regular credits, and straightforward API access in one package.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;: user-provided pricing and latency data, $0.30-$0.50 / 1k, $3 / 10,000-request entry package, regular credits never expire, 1.4s average latency&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serper.dev/" rel="noopener noreferrer"&gt;Serper.dev pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serpapi.com/pricing" rel="noopener noreferrer"&gt;SerpApi pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.searchapi.io/pricing" rel="noopener noreferrer"&gt;SearchApi.io pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dataforseo.com/apis/serp-api/pricing" rel="noopener noreferrer"&gt;DataForSEO SERP API pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hasdata.com/google-serp-api" rel="noopener noreferrer"&gt;HasData Google SERP API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://brightdata.com/pricing/serp" rel="noopener noreferrer"&gt;Bright Data SERP API pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://oxylabs.io/products/scraper-api/web/pricing" rel="noopener noreferrer"&gt;Oxylabs Web Scraper API pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trajectdata.com/serp/value-serp-api/pricing/" rel="noopener noreferrer"&gt;Value SERP pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://zenserp.com/pricing-plans/" rel="noopener noreferrer"&gt;Zenserp pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://spaceserp.com/pricing" rel="noopener noreferrer"&gt;SpaceSerp pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.scraperapi.com/solutions/structured-data/google-search-scraper/" rel="noopener noreferrer"&gt;ScraperAPI Google Search Scraper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.scrapingbee.com/pricing/" rel="noopener noreferrer"&gt;ScrapingBee pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.webscrapingapi.com/pricing/serp-api" rel="noopener noreferrer"&gt;WebScrapingAPI SERP pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trajectdata.com/serp/serp-wow-api/" rel="noopener noreferrer"&gt;SerpWow API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trajectdata.com/pricing/scaleserp-api" rel="noopener noreferrer"&gt;Scale SERP pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serpstack.com/product" rel="noopener noreferrer"&gt;Serpstack pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serphouse.com/pricing" rel="noopener noreferrer"&gt;SERPHouse pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://avesapi.com/pricing/" rel="noopener noreferrer"&gt;AvesAPI pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mapleserp.io/" rel="noopener noreferrer"&gt;MapleSerp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serppost.com/pricing/" rel="noopener noreferrer"&gt;SerpPost pricing&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Why We’re Turning Our Ad Budget Into Developer Credits</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Wed, 22 Apr 2026 16:09:35 +0000</pubDate>
      <link>https://dev.to/serpbase/why-were-turning-our-ad-budget-into-developer-credits-3kgh</link>
      <guid>https://dev.to/serpbase/why-were-turning-our-ad-budget-into-developer-credits-3kgh</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;, we’ve been thinking a lot about how developer tools usually grow.&lt;/p&gt;

&lt;p&gt;The default playbook is familiar:&lt;br&gt;
buy ads, push impressions, optimize funnels, repeat.&lt;/p&gt;

&lt;p&gt;But we don’t think that’s the best use of budget right now.&lt;/p&gt;

&lt;p&gt;Instead of putting more money into paid distribution, we decided to redirect part of that budget back to the people who actually build with our product.&lt;/p&gt;

&lt;p&gt;So we launched a long-term developer reward program for public GitHub projects that integrate &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we’re doing this
&lt;/h2&gt;

&lt;p&gt;Developer products are strongest when they spread through real usage.&lt;/p&gt;

&lt;p&gt;Not because someone saw a banner.&lt;br&gt;
Not because an ad followed them around the internet.&lt;br&gt;
But because a tool solved a real problem inside a real project.&lt;/p&gt;

&lt;p&gt;We’d rather support that directly.&lt;/p&gt;

&lt;p&gt;If someone is willing to integrate &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; into a public repository, document it, ship it, and build something useful with it, that matters more to us than another round of paid clicks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The idea
&lt;/h2&gt;

&lt;p&gt;If you integrate &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; into a public GitHub repository and include your SerpBase username somewhere in the repository code, comments, config, or docs, you can apply for SerpBase reward credits.&lt;/p&gt;

&lt;p&gt;The reward amount is based on the repository’s public star count at the time of review:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100,000+ stars: $1000&lt;/li&gt;
&lt;li&gt;10,000+ stars: $500&lt;/li&gt;
&lt;li&gt;1,000+ stars: $200&lt;/li&gt;
&lt;li&gt;100+ stars: $50&lt;/li&gt;
&lt;li&gt;&amp;lt;100 stars: $10&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also want to encourage new experiments, not just existing large repos.&lt;/p&gt;

&lt;p&gt;If you create a new public repository specifically for &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;, you may also qualify for a $50 reward.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few rules
&lt;/h2&gt;

&lt;p&gt;We want this to stay useful and fair, so the program follows a few basic principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rewards are reviewed manually.&lt;/li&gt;
&lt;li&gt;A repository is generally rewarded once.&lt;/li&gt;
&lt;li&gt;If a repository’s star count grows significantly after a previous reward, you may apply again for an upgraded reward.&lt;/li&gt;
&lt;li&gt;Credits redeemed from these gift codes never expire.&lt;/li&gt;
&lt;li&gt;If &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;-related integration content is later removed, or if abuse/fraud is identified, we reserve the right to revoke the gift code or reclaim the issued credits.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to apply
&lt;/h2&gt;

&lt;p&gt;To apply, send us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your public repository link&lt;/li&gt;
&lt;li&gt;your &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; username&lt;/li&gt;
&lt;li&gt;a short note about where SerpBase is integrated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can reach us in either of these ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;email: &lt;a href="mailto:support@serpbase.dev"&gt;support@serpbase.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;support ticket through our website&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why this matters to us
&lt;/h2&gt;

&lt;p&gt;We want &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt; to be known for being useful, easy to integrate, and worth building on.&lt;/p&gt;

&lt;p&gt;This program is a simple reflection of that belief.&lt;/p&gt;

&lt;p&gt;If we have budget to spend, we’d rather spend more of it helping developers build, experiment, and ship.&lt;/p&gt;

&lt;p&gt;If you’re building with &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;, we’d love to see what you’re making.&lt;/p&gt;

&lt;p&gt;Website: &lt;a href="https://serpbase.dev" rel="noopener noreferrer"&gt;https://serpbase.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building something simple</title>
      <dc:creator>SerpBase</dc:creator>
      <pubDate>Sun, 19 Apr 2026 14:19:46 +0000</pubDate>
      <link>https://dev.to/serpbase/building-something-simple-2bdb</link>
      <guid>https://dev.to/serpbase/building-something-simple-2bdb</guid>
      <description>&lt;p&gt;At some point, I decided to just build a minimal version for my own use.&lt;/p&gt;

&lt;p&gt;The goal wasn’t to compete with existing tools — just to have something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;works reliably&lt;/li&gt;
&lt;li&gt;has predictable pricing&lt;/li&gt;
&lt;li&gt;is easy to plug into my own scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After using it for a bit, I realized it might actually be useful for other developers dealing with the same issue.&lt;/p&gt;

&lt;p&gt;So I cleaned it up and turned it into a small project called &lt;a href="https://www.serpbase.dev" rel="noopener noreferrer"&gt;SerpBase&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
