Br0ski777

I Built an SEO API That Runs 14 Analysis Modules for $0.003 Per Call

Every time I needed to check a page's SEO, I ended up paying $100+/month for tools that gave me raw data and no clear direction. Semrush, Ahrefs, Screaming Frog -- they are built for agencies with budgets. As a developer, I just wanted an API I could call with a URL and get back a score, a list of what is broken, and what to fix first.

So I built one.

SEO Page Analyzer AI is a REST API that takes any URL, scrapes the rendered page with Playwright, runs it through 14 analysis modules in parallel, and sends the combined metrics to Claude AI for intelligent scoring and actionable recommendations. One call, one response, everything you need.

https://rapidapi.com/Br0ski777/api/seo-page-analyzer-ai

What the 14 modules cover

The analysis is not surface-level meta tag checking. Here is what runs on every request:

  1. Meta Tags -- title, description, OG, Twitter Card, canonical
  2. Headings -- H1-H6 hierarchy validation
  3. Images -- alt text coverage, lazy loading, modern formats (WebP/AVIF)
  4. Links -- internal/external ratio, nofollow audit, empty anchors
  5. Technical -- HTTPS, robots.txt, sitemap, page size, DOM element count
  6. Mobile -- viewport, font readability, tap target sizing
  7. Schema/JSON-LD -- structured data detection and validation
  8. Semantic HTML -- landmark elements, div soup ratio, HTML5 semantics
  9. Performance -- render-blocking resources, resource hints, image optimization
  10. Internal Linking -- anchor diversity, deep link coverage, orphan signals
  11. Keywords -- density, bigrams, TF-IDF, stuffing detection
  12. E-E-A-T -- Experience, Expertise, Authoritativeness, Trustworthiness scoring
  13. GEO/AI Visibility -- AI crawler access, citability score, llms.txt detection
  14. Backlinks and Authority -- outbound trust signals, authority estimation

The E-E-A-T and GEO modules are what make this different from anything else on RapidAPI. No other SEO API scores your page for AI visibility or checks whether GPTBot and ClaudeBot can crawl your content.
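To make the AI-crawler check concrete, here is an illustrative sketch (not the API's actual code) of how a robots.txt body can be scanned for bots that are blocked site-wide. The function name and return shape are my own inventions for the example:

```javascript
// Hypothetical sketch of a GEO-style AI-crawler check: parse a robots.txt
// body and report whether GPTBot and ClaudeBot may crawl the site root.
function aiCrawlerAccess(robotsTxt, bots = ['GPTBot', 'ClaudeBot']) {
  const blocked = new Set();
  let agents = [];      // user-agents in the current group
  let inRules = false;  // true once we've seen rules for this group
  for (const raw of robotsTxt.split('\n')) {
    const line = raw.trim();
    if (!line || line.startsWith('#')) continue;
    const idx = line.indexOf(':');
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (key === 'user-agent') {
      // A User-agent line after rules starts a new group.
      if (inRules) { agents = []; inRules = false; }
      agents.push(value);
    } else if (key === 'disallow') {
      inRules = true;
      // "Disallow: /" blocks the whole site for the active agents.
      if (value === '/') for (const a of agents) blocked.add(a);
    }
  }
  return Object.fromEntries(
    bots.map((b) => [b, !(blocked.has(b) || blocked.has('*'))])
  );
}

const sample = 'User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nDisallow: /admin/';
// GPTBot is fully blocked; ClaudeBot falls under "*", which only blocks /admin/.
console.log(aiCrawlerAccess(sample)); // { GPTBot: false, ClaudeBot: true }
```

A real implementation would also handle Allow rules and path-prefix matching, but the group logic above is the core of the check.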

How it works under the hood

The stack is Bun + Hono + Playwright + Claude Haiku + Upstash Redis. Playwright renders the page with JavaScript enabled (no cheating with raw HTML), and all 14 analyzers run in parallel against the rendered DOM. The combined metrics then go to Claude Haiku, which produces a weighted score (0-100), a letter grade (A+ to F), prioritized issues with specific fixes, and an upgrade roadmap showing exactly what to do to reach the next grade.
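The fan-out pattern is simple enough to sketch. This is illustrative code, not the actual source: the `dom` shape and the three tiny analyzers stand in for the real modules, and `Promise.all` runs them concurrently:

```javascript
// Illustrative sketch of running independent analyzers in parallel over
// one rendered DOM. Module names mirror the list above; all fields of
// `dom` are assumptions for the example.
const modules = {
  meta: (dom) => ({ titleLength: dom.title.length }),
  headings: (dom) => ({ h1Count: dom.h1s.length }),
  images: (dom) => ({ missingAlt: dom.images.filter((i) => !i.alt).length }),
  // ...11 more analyzers in the real API
};

async function analyze(dom) {
  const entries = await Promise.all(
    Object.entries(modules).map(async ([name, fn]) => [name, await fn(dom)])
  );
  return Object.fromEntries(entries);
}

const dom = {
  title: 'Example Store -- Buy Widgets Online',
  h1s: ['Buy Widgets'],
  images: [{ alt: 'widget' }, { alt: '' }],
};
analyze(dom).then(console.log);
```

Because each module only reads the DOM snapshot, there is no shared mutable state and the fan-out is safe.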

Results are cached in Redis for 24 hours, so repeated analyses of the same URL are instant.
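The caching is plain cache-aside. In this standalone sketch a Map stands in for Upstash Redis so it runs without credentials; in production the write would carry a 24-hour TTL on the Redis side instead of the timestamp check here:

```javascript
// Minimal cache-aside sketch of the 24-hour result cache. A Map stands in
// for Redis here; function names are illustrative, not the API's actual code.
const TTL_MS = 24 * 60 * 60 * 1000;
const cache = new Map(); // url -> { result, storedAt }

async function cachedAnalyze(url, analyzeFn) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.storedAt < TTL_MS) return hit.result; // instant
  const result = await analyzeFn(url); // Playwright render + 14 modules
  cache.set(url, { result, storedAt: Date.now() });
  return result;
}

// Usage: the second call for the same URL skips the expensive analysis.
(async () => {
  let calls = 0;
  const fakeAnalyze = async () => ({ score: 72, call: ++calls });
  await cachedAnalyze('https://example.com', fakeAnalyze);
  const second = await cachedAnalyze('https://example.com', fakeAnalyze);
  console.log(second.call); // 1 -- served from cache
})();
```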

Code examples

curl:

curl -X POST https://seo-page-analyzer-ai.p.rapidapi.com/api/v1/analyze \
  -H "Content-Type: application/json" \
  -H "X-RapidAPI-Key: YOUR_KEY" \
  -H "X-RapidAPI-Host: seo-page-analyzer-ai.p.rapidapi.com" \
  -d '{"url": "https://yoursite.com"}'

JavaScript (fetch):

const response = await fetch(
  'https://seo-page-analyzer-ai.p.rapidapi.com/api/v1/analyze',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-RapidAPI-Key': 'YOUR_KEY',
      'X-RapidAPI-Host': 'seo-page-analyzer-ai.p.rapidapi.com'
    },
    body: JSON.stringify({ url: 'https://yoursite.com' })
  }
);
const data = await response.json();
console.log(`Score: ${data.score}/100 (${data.grade})`);
console.log(`Critical issues: ${data.issues.critical.length}`);

Python (requests):

import requests

response = requests.post(
    'https://seo-page-analyzer-ai.p.rapidapi.com/api/v1/analyze',
    headers={
        'Content-Type': 'application/json',
        'X-RapidAPI-Key': 'YOUR_KEY',
        'X-RapidAPI-Host': 'seo-page-analyzer-ai.p.rapidapi.com'
    },
    json={'url': 'https://yoursite.com'}
)
data = response.json()
print(f"Score: {data['score']}/100 ({data['grade']})")
for issue in data['issues']['critical']:
    print(f"  [{issue['category']}] {issue['issue']} -> {issue['fix']}")

Example response (abbreviated)

{
  "url": "https://example-store.com",
  "score": 72,
  "grade": "B",
  "load_time_ms": 641,
  "issues": {
    "critical": [
      {
        "category": "eeat",
        "issue": "No author information or credentials found",
        "impact": "high",
        "fix": "Add author bio with relevant credentials and link to author page"
      },
      {
        "category": "geo",
        "issue": "ClaudeBot and GPTBot blocked in robots.txt",
        "impact": "high",
        "fix": "Allow AI crawlers in robots.txt to improve AI search visibility"
      }
    ],
    "warnings": [
      {
        "category": "images",
        "issue": "6 images missing alt attributes",
        "impact": "medium",
        "fix": "Add descriptive alt text to all images"
      },
      {
        "category": "semantic",
        "issue": "Div soup ratio 78% -- low semantic HTML usage",
        "impact": "medium",
        "fix": "Replace generic divs with semantic elements (nav, main, article, section)"
      }
    ],
    "passed": [
      { "category": "technical", "check": "HTTPS active" },
      { "category": "meta", "check": "Title present (54 characters)" },
      { "category": "mobile", "check": "Viewport configured correctly" }
    ]
  },
  "ai_summary": "This page scores 72/100 (B) with strong technical foundations but weak E-E-A-T signals and blocked AI crawlers. Priority: add author credentials and unblock GPTBot/ClaudeBot in robots.txt for immediate gains.",
  "roadmap": [
    {
      "target_grade": "A",
      "target_score": "80-89",
      "actions": [
        "Add author bio with credentials",
        "Unblock AI crawlers in robots.txt",
        "Add FAQ schema markup",
        "Fix 6 missing image alt attributes",
        "Improve semantic HTML ratio above 50%"
      ],
      "estimated_impact": "+8-12 points"
    },
    {
      "target_grade": "A+",
      "target_score": "90-100",
      "actions": [
        "Add llms.txt file for AI discoverability",
        "Implement complete E-E-A-T signals (reviews, citations)",
        "Optimize all images to WebP/AVIF",
        "Add comprehensive internal linking"
      ],
      "estimated_impact": "+10-18 points"
    }
  ]
}

The upgrade roadmap is the feature I am most proud of. Instead of just telling you what is wrong, it tells you exactly what to fix to go from B to A, and from A to A+, with estimated point gains.
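If you want that roadmap as a flat task list in your own tooling, a flatMap over the response does it. The `data` object below is a trimmed stand-in for a real API response:

```javascript
// Flatten the roadmap steps into a plain checklist, tagging each action
// with the grade it works toward. `data` is abbreviated example data.
const data = {
  roadmap: [
    {
      target_grade: 'A',
      actions: ['Add author bio with credentials', 'Unblock AI crawlers in robots.txt']
    },
    {
      target_grade: 'A+',
      actions: ['Add llms.txt file for AI discoverability']
    }
  ]
};

const checklist = data.roadmap.flatMap((step) =>
  step.actions.map((action) => `[to ${step.target_grade}] ${action}`)
);
console.log(checklist.join('\n'));
```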

Pricing and free tier

The free tier gives you 50 calls/month with 6 analysis modules -- enough to integrate and test. Paid plans start at $29/month for 500 calls with 9 modules and AI scoring. The Pro plan at $99/month unlocks all 14 modules including E-E-A-T, GEO, keywords, and the upgrade roadmap.

For context, Semrush API starts at $500/month. This gives you comparable on-page analysis at 1/5th to 1/50th the cost, with AI recommendations they do not offer at all.

It also ships with an MCP server, so if you are building AI agents or using Claude Code, you can plug the SEO analyzer directly into your agent workflow.
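The article doesn't show the MCP setup, but registering an MCP server in a Claude Desktop config generally looks like this. The command, package name, and env variable below are hypothetical placeholders, not the actual published server:

```json
{
  "mcpServers": {
    "seo-analyzer": {
      "command": "npx",
      "args": ["-y", "seo-page-analyzer-mcp"],
      "env": { "RAPIDAPI_KEY": "YOUR_KEY" }
    }
  }
}
```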

Try it free: https://rapidapi.com/Br0ski777/api/seo-page-analyzer-ai
