My client wanted to switch LLM providers on a Friday afternoon. I lost a full day.
I had an AI-powered SEO pipeline running smoothly — meta descriptions generating on-demand, title tags keyphrase-targeted, the whole thing humming along in a Next.js app. Then the client emailed: "Can we switch from OpenAI to Claude? Their pricing is better."
Reasonable request. Should've been a 10-minute job.
It wasn't. Because I'd built the pipeline on LangChain, and switching providers meant rewriting chain logic, updating import paths, reinstalling packages, and re-testing output formats that behaved differently between providers. A full day gone.
In this article, I'll walk through how I compared three JavaScript libraries — LangChain, Vercel AI SDK, and @power-seo/ai — for AI-powered SEO tasks, and what I learned the hard way.
The Problem: General-Purpose AI Libraries Don't Know What SEO Is
When you're generating meta descriptions with an LLM, you don't just need text back. You need:
- Character count (Google truncates after ~158 chars)
- Pixel width estimate (~6.2px per character)
- A validity flag — is this actually usable?
LangChain and Vercel AI SDK are excellent general-purpose tools. But neither has any concept of SEO validation. That means every team building an SEO pipeline has to write the same boilerplate from scratch:
```js
// You write this yourself. Every time. In every project.
const charCount = raw.trim().length;
const pixelWidth = Math.round(charCount * 6.2);
const isValid = charCount >= 120 && charCount <= 158;
```
This is fine — until you have five developers on the team and five slightly different implementations of the same logic. Or until Google updates its character guidance and you have to hunt down every place you copy-pasted this.
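The usual fix for that drift is to pull the three lines into one shared helper so the thresholds live in exactly one file. A minimal sketch (the name `validateMetaDescription` and the return shape are mine, not from any library), using the ~6.2 px/char estimate and 120-158 character thresholds above:

```typescript
// Hypothetical shared helper: the same three lines every project
// re-implements, centralized so the thresholds have a single home.
interface MetaValidation {
  charCount: number;
  pixelWidth: number; // rough estimate at ~6.2px average glyph width
  isValid: boolean;
}

function validateMetaDescription(raw: string): MetaValidation {
  const charCount = raw.trim().length;
  const pixelWidth = Math.round(charCount * 6.2);
  // Google tends to truncate after ~158 chars; much under ~120 wastes the slot.
  return { charCount, pixelWidth, isValid: charCount >= 120 && charCount <= 158 };
}
```

When Google's guidance changes, you update two numbers in one place instead of grepping five codebases.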
How Each Library Handles the Same Task
Let's look at the same task across all three: generate a meta description for a product page.
LangChain
```js
import { ChatAnthropic } from '@langchain/anthropic';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-opus-4-6',
});

const chain = ChatPromptTemplate.fromMessages([
  ['system', 'Write a meta description between 120-158 characters. Include the focus keyphrase.'],
  ['human', 'Title: {title}\nContent: {content}\nKeyphrase: {focusKeyphrase}'],
]).pipe(model).pipe(new StringOutputParser());

const raw = await chain.invoke({ title, content, focusKeyphrase });

// No built-in SEO validation — you write this yourself
const charCount = raw.trim().length;
const isValid = charCount >= 120 && charCount <= 158;
```
Works. But: 101.2 KB gzipped bundle, 50+ dependencies, blocked on edge runtimes, and zero SEO-specific output.
Vercel AI SDK
```js
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const { text } = await generateText({
  model: anthropic('claude-opus-4-6'),
  system: 'Write a meta description between 120-158 characters. Include the focus keyphrase.',
  prompt: `Title: ${title}\nContent: ${content}\nKeyphrase: ${focusKeyphrase}`,
  maxTokens: 200,
});

// Still no SEO validation built in
const charCount = text.trim().length;
const pixelWidth = Math.round(charCount * 6.2);
const isValid = charCount >= 120 && charCount <= 158;
```
Better DX, edge-safe, excellent streaming. But same problem — SEO validation is your job.
@power-seo/ai
```js
import { buildMetaDescriptionPrompt, parseMetaDescriptionResponse } from '@power-seo/ai';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Step 1: Build the prompt — pure function, no network call
const prompt = buildMetaDescriptionPrompt({ title, content, focusKeyphrase, maxLength: 158 });

// Step 2: Send to your LLM of choice
const response = await anthropic.messages.create({
  model: 'claude-opus-4-6',
  system: prompt.system,
  messages: [{ role: 'user', content: prompt.user }],
  max_tokens: prompt.maxTokens,
});
const raw = response.content[0].type === 'text' ? response.content[0].text : '';

// Step 3: Structured, validated output — no custom logic needed
const result = parseMetaDescriptionResponse(raw);

console.log(result.charCount);         // 142
console.log(result.pixelWidth);        // 880
console.log(result.isValid);           // true
console.log(result.validationMessage); // "Meta description length is optimal."
```
The SEO validation is built into parseMetaDescriptionResponse. You don't write it. You don't maintain it. If Google updates its guidance, a library update covers it.
Bundle size: ~4 KB gzipped, zero dependencies, edge-safe.
The Provider Switch Problem
Back to my Friday afternoon. Here's what switching providers looked like with each approach:
With LangChain:
```js
// Before
import { ChatOpenAI } from '@langchain/openai';
const model = new ChatOpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' });

// After — new package, new imports, new env vars, test chain output format
import { ChatAnthropic } from '@langchain/anthropic';
const model = new ChatAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-opus-4-6' });

// Plus: update chain logic, re-validate output parser
```
That's 4+ file changes, a new package install, and a testing session.
With @power-seo/ai: The prompt builder returns a plain { system, user, maxTokens } object. Switching providers means changing exactly one thing — the LLM client call. The prompt builder and parser don't care which provider you use.
```js
// This never changes regardless of provider
const prompt = buildMetaDescriptionPrompt({ title, content, focusKeyphrase });

// Only this block changes when you switch providers
const response = await anthropic.messages.create({ ... });
// or: await openai.chat.completions.create({ ... });
// or: await gemini.generateContent({ ... });
```
This is what "provider agnostic" actually means in practice — not just a marketing claim, but a structural guarantee.
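You can make that guarantee explicit by typing the boundary. A sketch under my own names (`PromptSpec` mirrors the `{ system, user, maxTokens }` object described above; the echo provider is a stand-in, not a real client):

```typescript
// The prompt layer emits a plain object; no provider types leak into it.
interface PromptSpec {
  system: string;
  user: string;
  maxTokens: number;
}

// A provider is any async function from a PromptSpec to raw text. Wrapping
// anthropic.messages.create or openai.chat.completions.create into this shape
// is the only provider-specific code in the pipeline.
type Provider = (spec: PromptSpec) => Promise<string>;

async function generateMeta(spec: PromptSpec, provider: Provider): Promise<string> {
  // Switching providers means passing a different function here; the spec,
  // the prompt builder, and the parser never change.
  return provider(spec);
}

// Stand-in provider for illustration only.
const echoProvider: Provider = async (spec) => `echo: ${spec.user}`;
```

The Friday-afternoon switch becomes swapping one function argument instead of touching four files.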
When to Use Each One
This isn't a one-size-fits-all answer. Here's when each tool genuinely makes sense:
Use LangChain when your SEO pipeline involves RAG (pulling live content from vector stores), complex multi-step agents, or document loaders. LangChain's ecosystem depth is real and valuable for those use cases. 1.3M weekly downloads don't lie.
Use Vercel AI SDK when your primary need is streaming chat UI in Next.js, or you want the cleanest possible DX for token streaming. The useChat hook and streamText are genuinely excellent. Note: you can combine it with @power-seo/ai — use the SDK for streaming UI, use the SEO library for prompt quality on the server.
Use @power-seo/ai when you're generating SEO content at scale — meta descriptions, title tags, content suggestions — and you want structured, validated output without writing the validation yourself every time. Especially useful if you deploy to edge runtimes where LangChain can't run.
What I Learned
- General-purpose AI libraries are excellent — but SEO has domain-specific requirements that you'll end up building yourself unless you use a purpose-built tool.
- Provider agnosticism matters more than you think. Clients change their minds. Pricing shifts. New models drop. Designing your prompting layer to be independent of your LLM client saves real hours.
- Bundle size affects more than Lighthouse scores. At 101.2 KB gzipped, LangChain is blocked on edge runtimes entirely. That's a hard architectural constraint, not a preference.
- Streaming and SEO validation are different problems. You can solve both at once by combining Vercel AI SDK (transport layer) with @power-seo/ai (prompt + validation layer).
If you want to try the structured SEO approach, the library is open source: Power SEO
Full comparison with performance benchmarks and migration guide: Best AI SEO Library for JavaScript
What's your experience?
Have you run into the provider-switching problem in your own AI pipelines? And if you're building SEO tooling on top of LLMs, how are you handling validation — rolling your own, or found something that works?
Would love to hear how others are approaching this in the comments.
