Abdul Rehman Khan

I Stopped Paying $99/Month for SEO Tools. So I Built My Own.

This is not a tutorial. This is the story of how I got tired of being outranked by worse content — and what I did about it.


Do this right now.

Open a new tab. Google your last published article topic.

Find your article in the results.

Now look at what's ranking above it.

Read it.

Is it better than what you wrote?

Probably not.

You spent hours on a technically accurate, well-structured article. And some shallow 800-word post from 2022 with zero code examples is sitting at position 1.

This used to drive me insane. Then I figured out why it happens.

And then I built something that fixes it.


Why This Keeps Happening

The articles ranking above yours weren't better written.

They were researched differently — before the first word was typed.

There's a layer of preparation that separates developer blogs that get traffic from ones that don't. It has nothing to do with writing quality, posting frequency, or how technically accurate your content is.

It's about knowing — before you open a blank document:

  • What keywords developers are actually searching right now
  • What heading structure Google is rewarding for your specific topic
  • What questions are appearing in the PAA box that you could own
  • What angles your competitors haven't covered yet

The tools that give you this data cost between $99 and $499 per month. And every single one of them was built for marketing agencies, not developers. They treat your Next.js tutorial like a product listing. They have no concept of what a developer is actually looking for when they search.

So I stopped paying for them. And built my own.


What I Built

I type a topic. I wait about 10 seconds.

What comes back isn't AI-generated fluff — it's a complete content strategy built on live data pulled directly from what developers are searching right now.

Let me show you what that actually looks like, using "react hooks best practices" as a real example.


The Keyword Data

Not keywords from a database last updated three months ago.

Keywords scraped live from Google's own autocomplete signals — right now, for your exact topic.

Here's a sample of what comes back:

| Keyword | Intent | Volume Signal | Difficulty |
|---|---|---|---|
| react hooks best practices 2026 | Informational | 1K–10K | Medium |
| useEffect cleanup best practices | Informational | 100–1K | Low ⭐ |
| custom hooks best practices typescript | Informational | 100–1K | Low ⭐ |
| react hooks testing best practices jest | Informational | 100–1K | Low ⭐ |
| when to use useCallback vs useMemo | Informational | 1K–10K | Medium |
| useReducer vs useState performance | Commercial | 100–1K | Low ⭐ |

Four Low-difficulty keywords in one report. You can rank for those without massive domain authority — you just need a well-structured article that covers them properly.

And that's just a sample. The full report returns 100+ keywords. The mechanism behind how it generates that many — and why it surfaces results that paid tools miss — is one of the more interesting parts of the build. I documented it in the playbook.
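The full mechanism is in the playbook, not this post, but the widely known (unofficial) Google suggest endpoint gives a feel for how autocomplete-based keyword expansion can work. Everything below — the endpoint parameters, the a–z fan-out, the function names — is my illustrative assumption, not the tool's actual code:

```typescript
// Illustrative sketch only — endpoint, parameters, and fan-out strategy
// are assumptions; the tool's real pipeline is not shown in this post.
const SUGGEST_ENDPOINT = "https://suggestqueries.google.com/complete/search";

// Build an autocomplete query URL for a seed topic plus an optional suffix.
function suggestUrl(seed: string, suffix = ""): string {
  const q = suffix ? `${seed} ${suffix}` : seed;
  return `${SUGGEST_ENDPOINT}?${new URLSearchParams({ client: "firefox", q })}`;
}

// Fan one seed out across a–z suffixes: 26 queries, each returning up to
// ~10 suggestions, is how a single topic can yield 100+ raw keywords.
function expansionUrls(seed: string): string[] {
  return [..."abcdefghijklmnopqrstuvwxyz"].map((c) => suggestUrl(seed, c));
}

// The "firefox" client returns JSON shaped like [query, [suggestion, ...]].
async function fetchSuggestions(seed: string): Promise<string[]> {
  const res = await fetch(suggestUrl(seed));
  const [, suggestions] = (await res.json()) as [string, string[]];
  return suggestions;
}
```

Dedupe the fan-out results and you already have a raw keyword pool; the hard part (intent, volume signals, difficulty scoring) sits on top of it.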


The Article Outline

A complete heading hierarchy built around those live keywords:

# React Hooks Best Practices: The Complete 2026 Developer Guide

## 1. Understanding Hook Rules and When They Apply
   ### 1.1 The Rules of Hooks — What They Mean and Why They Exist
   ### 1.2 Common Violations (With the Exact Errors They Cause)
   ### 1.3 ESLint Plugin Setup That Enforces Rules Automatically

## 2. useEffect Best Practices and Cleanup Patterns
   ### 2.1 The Dependency Array: What Actually Goes In It
   ### 2.2 Cleanup Functions: When and How to Write Them
   ### 2.3 Race Conditions in useEffect: Real Examples and Fixes

## 3. Performance Optimization: useCallback vs useMemo
   ### 3.1 When useCallback Actually Helps (And When It Hurts)
   ### 3.2 useMemo for Expensive Computations: Real Benchmarks
   ### 3.3 The useMemo Trap: Why Most Uses Are Premature Optimization

## 4. Custom Hooks Best Practices in TypeScript
   ### 4.1 When to Extract Logic Into a Custom Hook
   ### 4.2 Typing Custom Hooks Correctly
   ### 4.3 Testing Custom Hooks with React Testing Library

... continues for 10–15 H2 sections

Every heading embeds LSI keywords naturally — not stuffed in, but placed where the structure signals topical depth to Google.


The People Also Ask Data

Real PAA questions scraped live from Google's actual results for your topic — not AI predictions of what people might ask:

→ What are the rules for using React hooks?
→ Should I use useEffect or useLayoutEffect?
→ How do you avoid infinite loops in useEffect?
→ When should you create a custom hook?
→ Are React hooks better than class components?
→ How do you test custom React hooks properly?
→ What is the difference between useState and useReducer?
→ Can you use hooks inside conditions or loops?


Build your FAQ section around these and you're answering questions Google already knows developers want answered. That's how you start appearing in PAA boxes.
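One concrete way to use those questions (my sketch, not from the post) is to emit them as schema.org FAQPage structured data alongside the article, so the Q&A pairs are machine-readable. The answer text here is a placeholder:

```typescript
// Hedged sketch: wrap scraped PAA questions in schema.org FAQPage JSON-LD.
// The Faq shape and faqJsonLd name are illustrative assumptions.
interface Faq {
  question: string;
  answer: string;
}

function faqJsonLd(faqs: Faq[]): Record<string, unknown> {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}
```

Render the result in a `<script type="application/ld+json">` tag next to the FAQ section itself.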


The Developer Angles

This is where generic SEO tools completely fail technical content.

Ask a marketing tool for content angles and you get: "write a listicle, add more keywords, include statistics."

Here's what comes back from DevSEO:

→ Performance benchmarking:
   "useCallback vs no useCallback — real render counts
    measured with React DevTools Profiler on a 1,000-item list"

→ Production debugging:
   "React hooks bugs I've hit in production —
    the errors, the debugging process, and exactly how I fixed them"

→ Architecture patterns:
   "Custom hooks as a service layer: separating
    business logic from UI in large React apps"

→ Testing strategy:
   "Testing custom hooks: patterns that work
    vs patterns that look right but don't"

These are angles a developer would actually find interesting. The kind that make your article worth reading when 50 others cover the same topic.


Everything Else

Beyond those four, each report also includes:

| Feature | What You Get |
|---|---|
| Title Variations | 5 options, each using a different click trigger |
| Internal Link Map | Specific anchor text with semantic context |
| Blog Structure | Word count, code examples, CTA placement |
| Blog Preview | Full editorial mockup before you write a word |
| Export | One-click PDF or Markdown |

Try It Free

Everything above — live, right now, no signup:

🔧 devtechinsights.com/free-seo-keyword-tool

Type your next article topic. See what comes back. Judge for yourself.


The Journey (The Honest Version)

This didn't work on the first try.

I went through four different technical approaches before landing on something that actually shipped. The first failed within minutes of testing. The second worked but produced noticeably worse output. The third was excellent quality but couldn't serve more than a handful of users per day.

The fourth worked.

Each failure taught something specific — about rate limits, about model quality trade-offs, about what "free tier" actually means when you're trying to build something real. I documented all of it, including the mistakes that seemed like good decisions at the time.

The architectural shift that finally made everything shippable was less obvious than I expected. It wasn't switching models or optimising prompts. It was rethinking the entire backend structure — eliminating something I thought was necessary and discovering the whole thing worked better without it.

That decision and why I made it is Part 5 of the playbook. I won't summarise it here because the reasoning only makes sense with the context of everything that came before it.


The Bugs That Made It Real

The Hydration Mismatch

Warning: A tree hydrated but some attributes of the
server rendered HTML didn't match the client properties.

The root cause wasn't in my code at all. The fix was one line. It took an afternoon to figure out why.


The Invisible Stale Cache

Reports loading with blank sections. No error. No crash. Just empty panels that looked broken.

The bug was silent because it wasn't a runtime error — it was a data shape mismatch between cached reports and the UI expecting fields that didn't exist yet. The fix required adding validation logic I should have built in from the start.
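A minimal version of that validation, sketched in TypeScript (the field names are hypothetical stand-ins for whatever the real report schema is):

```typescript
// Hypothetical sketch: "topic", "keywords", and "outline" are placeholder
// field names, not the actual report schema from the post.
interface Report {
  topic: string;
  keywords: string[];
  outline: string[];
}

// Accept a cached report only if it has every field the current UI expects;
// anything else is treated as a cache miss.
function validateCachedReport(raw: unknown): Report | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Partial<Report>;
  if (typeof r.topic !== "string") return null;
  if (!Array.isArray(r.keywords)) return null;
  if (!Array.isArray(r.outline)) return null;
  return r as Report;
}
```

A report that fails the check falls through to regeneration instead of rendering blank panels.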


PDF Export: Four Attempts

v1: window.print()          → Print dialog appears    ❌
v2: Capture entire page     → Nav bar included        ❌
v3: Target report div       → Collapsed content missing ❌
v4: Expand → wait → capture → restore                 ✅

The core lesson: client-side PDF generation is fundamentally a screenshot tool. If content isn't visible in the DOM at capture time, it doesn't appear in the PDF. The solution required a specific sequence of DOM operations with a timing dependency I didn't expect.

The exact implementation — and why each step is necessary — is in Part 6.
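Without claiming this is the Part 6 code, the v4 sequence generalises to a small helper. The names and the zero-delay wait are my assumptions; the essential point is that capture must not run until the expanded DOM has painted, and restore must run even if capture fails:

```typescript
// Generic sketch of the expand → wait → capture → restore sequence.
// "expandAll", "restore", and the setTimeout-based wait are illustrative
// assumptions, not the actual implementation from the post.
async function captureExpanded<T>(
  expandAll: () => void,
  restore: () => void,
  capture: () => Promise<T>,
): Promise<T> {
  expandAll(); // 1. make every collapsed section visible in the DOM
  await new Promise((r) => setTimeout(r, 0)); // 2. yield so the browser repaints
  try {
    return await capture(); // 3. screenshot-style capture now sees full content
  } finally {
    restore(); // 4. collapse sections back, even if capture threw
  }
}
```

The `capture` callback is where a screenshot-style library would run; the helper only guarantees the ordering around it.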


What's In The Playbook

68 pages. Everything documented honestly — including the approaches that failed.

Part 1 — Why I built this and how the project was structured
Part 2 — The first backend: what worked and what killed it
Part 3 — The AI engine: four models, the rate limit battles,
          and the prompt technique that changed the output quality
Part 4 — The live data pipeline: how real search data gets
          extracted without paying for a single API
Part 5 — The migration: why I eliminated the entire backend
          and what happened when I did
Part 6 — The frontend: design system, components, 
          the PDF export nightmare, and the blog preview renderer
Part 7 — Live data signals: rank checking and competition
          analysis without expensive third-party tools
Part 8 — Bugs, deployment, and the complete Vercel setup guide
Part 9 — Turning it into a SaaS: pricing, launch strategy,
          and the $0 → $2,000/month revenue roadmap

7 Things I'd Tell Myself Before Starting

  1. Rate limits will kill your first choice of AI provider. Plan for it.
  2. Two servers are twice the failure surface. Consolidate early.
  3. Prompt specificity matters more than model size — explicit constraints on a smaller model beat vague prompts on a larger one.
  4. Remove features that feel unreliable. Honesty beats feature count every time.
  5. TypeScript build errors are a gift. Every one is a production crash you avoided.
  6. File-based caching dies on serverless infrastructure. Design for statelessness from day one.
  7. Raw CSS makes your UI look like a product, not a template.

Get The Playbook

Free sample — covers the data pipeline and the prompt engineering section:

📄 drive.google.com/file/d/1KBbq09Cq24WqQLMsqRLZq8yZPK51-uFM

Full playbook ($29) + source code bundle ($49):

📖 arkhan66.gumroad.com/l/seo-tool


One question for the Dev.to community:

Do you research keywords before writing technical
articles — or write first and think about SEO after?

Curious how other developer bloggers actually approach this.


Abdul Rehman Khan — devtechinsights.com


#seo #webdev #nextjs #javascript #typescript #devtools #blogging #programming
