DEV Community

Alamin Sarker


How to Build a Provider-Agnostic AI SEO Pipeline in Next.js

I spent an entire sprint ripping out OpenAI calls from a client's Next.js app because they wanted to switch to Anthropic. Every metadata generator, every schema builder, every keyword cluster — all tightly coupled to one SDK. Three days of refactoring for what should have been a config change.

That experience turned into an obsession: build an AI SEO tool pipeline that treats the model as a swappable dependency, not the foundation. In this article I'll show you the exact pattern — a provider-agnostic adapter layer in Next.js that generates SEO metadata, structured schema, and keyword suggestions without locking you into any single vendor.

The Problem with Most AI SEO Integrations

Most tutorials go straight to openai.chat.completions.create(...) inside a Route Handler and call it done. That works until:

  • Your client wants to use their own API key on a different provider
  • OpenAI raises prices and Gemini Flash is suddenly 10x cheaper
  • You want to A/B test output quality across models

The fix isn't clever prompt engineering. It's architecture.

Here's the pattern: define a SeoProvider interface, implement it for each vendor, and inject the right one at runtime via an environment variable. Your Next.js App Router handlers never import a specific SDK — they only talk to the interface.

Step 1: Define the Provider Interface

Create a shared type that all providers must satisfy:

```typescript
// lib/seo/types.ts

export interface SeoInput {
  url: string;
  pageContent: string;
  targetKeyword?: string;
}

export interface SeoOutput {
  title: string;
  description: string;
  keywords: string[];
  schema: Record<string, unknown>;
}

export interface SeoProvider {
  generate(input: SeoInput): Promise<SeoOutput>;
}
```

That's your contract. Nothing else in the app needs to know which model is running.
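An immediate payoff of the contract: you can write a fake provider for tests without touching any SDK or API key. Here's a minimal sketch of what that could look like (`MockSeoProvider` is my own name, not part of the article's repo; the types are repeated inline so the snippet stands alone):

```typescript
// Hypothetical test double. Types repeated inline from lib/seo/types.ts
// so this sketch is self-contained.
interface SeoInput { url: string; pageContent: string; targetKeyword?: string; }
interface SeoOutput { title: string; description: string; keywords: string[]; schema: Record<string, unknown>; }
interface SeoProvider { generate(input: SeoInput): Promise<SeoOutput>; }

export class MockSeoProvider implements SeoProvider {
  async generate(input: SeoInput): Promise<SeoOutput> {
    // Deterministic output derived from the input: no network, no API key
    return {
      title: `${input.targetKeyword ?? "Untitled"} | Example`,
      description: input.pageContent.slice(0, 155),
      keywords: input.targetKeyword ? [input.targetKeyword] : [],
      schema: { "@context": "https://schema.org", "@type": "Article" },
    };
  }
}
```

Swap this in during unit tests and your Route Handler logic gets full coverage with zero API spend.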

Step 2: Implement Concrete Adapters

Now write one adapter per provider. They all implement the same interface:

```typescript
// lib/seo/providers/openai.ts
import OpenAI from "openai";
import { SeoProvider, SeoInput, SeoOutput } from "../types";
import { buildPrompt } from "../prompt";

export class OpenAIProvider implements SeoProvider {
  private client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  async generate(input: SeoInput): Promise<SeoOutput> {
    const prompt = buildPrompt(input); // shared prompt builder (see below)

    const res = await this.client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
      response_format: { type: "json_object" },
    });

    return JSON.parse(res.choices[0].message.content ?? "{}");
  }
}
```

```typescript
// lib/seo/providers/anthropic.ts
import Anthropic from "@anthropic-ai/sdk";
import { SeoProvider, SeoInput, SeoOutput } from "../types";
import { buildPrompt } from "../prompt";

export class AnthropicProvider implements SeoProvider {
  private client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

  async generate(input: SeoInput): Promise<SeoOutput> {
    const prompt = buildPrompt(input);

    const res = await this.client.messages.create({
      model: "claude-3-5-haiku-20241022",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    });

    const text = res.content[0].type === "text" ? res.content[0].text : "";
    return JSON.parse(text);
  }
}
```

Both use the same buildPrompt function:

```typescript
// lib/seo/prompt.ts
import { SeoInput } from "./types";

export function buildPrompt(input: SeoInput): string {
  return `
    Analyze the following web page content and return a JSON object with these fields:
    - title: SEO-optimized page title (50-60 chars)
    - description: meta description (150-160 chars)
    - keywords: array of 8-10 relevant keywords
    - schema: a JSON-LD Article schema object

    Target keyword: ${input.targetKeyword ?? "none specified"}
    URL: ${input.url}

    Page content:
    ${input.pageContent.slice(0, 3000)}

    Return ONLY valid JSON. No markdown, no explanation.
  `;
}
```

The prompt is provider-agnostic — the same text goes to any model.
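One caveat worth guarding against: even with "Return ONLY valid JSON" in the prompt, models occasionally wrap their output in markdown fences. A small defensive parser keeps that quirk out of the adapters. This helper (`parseModelJson`) is my own addition, not part of the article's repo:

```typescript
// Defensive JSON parser: strips the optional markdown code fences that
// some models emit despite explicit "JSON only" instructions.
export function parseModelJson<T>(raw: string): T {
  const cleaned = raw
    .trim()
    .replace(/^`{3}(?:json)?\s*/i, "") // leading ```json or ```
    .replace(/`{3}\s*$/, "");          // trailing ```
  return JSON.parse(cleaned) as T;
}
```

Call it in place of the bare `JSON.parse` in each adapter and intermittent parse failures become far rarer.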

Step 3: The Provider Factory + Route Handler

Wire everything together with a factory function and a single API route:

```typescript
// lib/seo/factory.ts
import { SeoProvider } from "./types";
import { OpenAIProvider } from "./providers/openai";
import { AnthropicProvider } from "./providers/anthropic";

export function getSeoProvider(): SeoProvider {
  const provider = process.env.SEO_AI_PROVIDER ?? "openai";

  switch (provider) {
    case "anthropic":
      return new AnthropicProvider();
    case "openai":
    default:
      return new OpenAIProvider();
  }
}
```

```typescript
// app/api/seo/route.ts
import { NextRequest, NextResponse } from "next/server";
import { getSeoProvider } from "@/lib/seo/factory";

export async function POST(req: NextRequest) {
  const body = await req.json();
  const { url, pageContent, targetKeyword } = body;

  if (!url || !pageContent) {
    return NextResponse.json({ error: "Missing required fields" }, { status: 400 });
  }

  try {
    const provider = getSeoProvider();
    const result = await provider.generate({ url, pageContent, targetKeyword });
    return NextResponse.json(result);
  } catch (err) {
    console.error("[SEO Pipeline Error]", err);
    return NextResponse.json({ error: "Generation failed" }, { status: 500 });
  }
}
```

Switch providers by changing one env var:

```bash
# .env.local
# "anthropic", "openai", or whatever you add next
SEO_AI_PROVIDER=anthropic
```

(Keep the comment on its own line: some dotenv versions treat an inline `#` after an unquoted value inconsistently.)

No code changes. No refactoring sprint.
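The same interface also buys you provider fallback, which is exactly what you want when rate limits hit. Here's a hedged sketch of that idea (`FallbackProvider` is my own name, not part of the article's repo; the types are repeated inline so the snippet stands alone):

```typescript
// Hypothetical fallback wrapper: tries each provider in order and moves on
// when one throws (rate limit, outage, bad JSON). Types repeated inline
// from lib/seo/types.ts so this sketch is self-contained.
interface SeoInput { url: string; pageContent: string; targetKeyword?: string; }
interface SeoOutput { title: string; description: string; keywords: string[]; schema: Record<string, unknown>; }
interface SeoProvider { generate(input: SeoInput): Promise<SeoOutput>; }

export class FallbackProvider implements SeoProvider {
  constructor(private readonly providers: SeoProvider[]) {}

  async generate(input: SeoInput): Promise<SeoOutput> {
    let lastError: unknown;
    for (const provider of this.providers) {
      try {
        return await provider.generate(input);
      } catch (err) {
        lastError = err; // e.g. a 429: fall through to the next provider
      }
    }
    throw lastError;
  }
}
```

Because it implements `SeoProvider` itself, the factory can return `new FallbackProvider([new OpenAIProvider(), new AnthropicProvider()])` and the route handler never knows the difference.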

Step 4: Consuming It from Your Frontend

A simple hook to call the pipeline from a Next.js page:

```typescript
// hooks/useSeoGenerator.ts
"use client";

import { useState } from "react";
import { SeoOutput } from "@/lib/seo/types";

export function useSeoGenerator() {
  const [result, setResult] = useState<SeoOutput | null>(null);
  const [loading, setLoading] = useState(false);

  async function generate(url: string, pageContent: string, targetKeyword?: string) {
    setLoading(true);
    try {
      const res = await fetch("/api/seo", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ url, pageContent, targetKeyword }),
      });
      // A non-OK response carries an error payload, not SeoOutput
      setResult(res.ok ? await res.json() : null);
    } finally {
      setLoading(false);
    }
  }

  return { result, loading, generate };
}
```

Call it from any component:

```typescript
const { result, loading, generate } = useSeoGenerator();

// On button click or form submit:
await generate("https://example.com/blog/post", articleText, "Next.js SEO");

// result now contains title, description, keywords, schema — ready to inject
```

You now have a working AI SEO tool pipeline that runs entirely on your infrastructure, with the provider as a runtime config value.

Bonus: Where a Dedicated SEO Package Saves Time

The adapter pattern above handles the AI layer, but you still need to handle things like robots.txt generation, sitemap injection, JSON-LD rendering, and keyword clustering logic. Rolling all of that from scratch adds up.

This is where I started using @power-seo as the pipeline's output consumer — it takes the structured JSON your provider generates and handles the Next.js <Head> injection, schema rendering, and sitemap updates automatically. Worth checking out if you want to skip that plumbing. There's also a deeper writeup on using it with this exact adapter pattern over at ccbd.dev.

What I Learned

  • Couple to the interface, not the SDK. The 30 minutes you spend writing an adapter pays back every time you switch models — and you will switch models.
  • Keep prompts in a shared module. Provider-specific quirks (like Anthropic's JSON formatting behavior) belong in the adapter, not the prompt. One prompt file means one place to improve output quality.
  • Use response_format: json_object (OpenAI) or explicit JSON instructions (Anthropic) — don't parse unstructured text from a model if you need structured data. You'll lose hours chasing intermittent parse errors.
  • Test with the cheapest model first. GPT-4o-mini and Claude Haiku are fast and cheap enough to run on every save in development. Use the heavyweight models only when you're tuning final output quality.
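To make the "intermittent parse errors" point concrete: `JSON.parse` only guarantees valid JSON, not the right shape. A model can happily return `{"keywords": "nextjs"}` and your typed code will crash three components later. A hand-rolled runtime guard closes that gap; this sketch is my own addition (a schema library like zod does the same with less code):

```typescript
// Runtime shape check for model output. SeoOutput repeated inline from
// lib/seo/types.ts; isSeoOutput is my own sketch, not part of the repo.
interface SeoOutput {
  title: string;
  description: string;
  keywords: string[];
  schema: Record<string, unknown>;
}

export function isSeoOutput(value: unknown): value is SeoOutput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.title === "string" &&
    typeof v.description === "string" &&
    Array.isArray(v.keywords) &&
    v.keywords.every((k) => typeof k === "string") &&
    typeof v.schema === "object" &&
    v.schema !== null
  );
}
```

Run the parsed output through this guard inside each adapter and return a 502 from the route handler when it fails, instead of serving a malformed payload to the frontend.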

If you want to try this approach, here's the full writeup with repo links: https://ccbd.dev/blog/ai-seo-tool-that-works-with-every-llm-a-developers-guide-in-2026

What's your setup?

Genuinely curious: what is the best AI SEO tool for Next.js developers in 2026? Are you rolling your own pipeline, using a SaaS wrapper, or going full DIY with raw API calls? Drop your stack in the comments — especially if you've found a clever way to handle provider fallback when rate limits hit.
