Mitu Das

How I Used JavaScript to Automate SEO Tasks

I will be honest with you. For the longest time, meta descriptions were the last thing I wrote before publishing. Sometimes I skipped them entirely. Not because I did not care about SEO, but because writing 150 characters of optimized copy for every single page felt like a tax on shipping.

Then I started thinking about it differently. The rules for a good meta description do not change. Include the focus keyphrase. Stay under 158 characters. End with something that makes someone want to click. That is a spec, not creative writing. And if something has a spec, you can automate it.

So I did. Here is exactly how I used JavaScript to automate SEO across my content pipeline, what I learned, and the specific library that made it provider-agnostic from day one.

Why I Wanted to Automate This in JavaScript Specifically

I considered Python. Most SEO tooling lives there. But my content pipeline was already Node.js. My CMS hooks were JavaScript. My CI scripts were JavaScript. Adding a Python subprocess just to generate a meta description felt like the wrong tradeoff.

What I actually needed was a library that would let me build prompts for any LLM I wanted to use, parse the responses into structured data, and drop the whole thing into my existing Node.js workflow without pulling in a new runtime or a heavyweight SDK.

That is what led me to @power-seo/ai. It does exactly one thing: gives you prompt builders that return plain objects and response parsers that accept raw strings. No bundled SDK. No opinions about which model you use. Just JavaScript functions.

The First Thing I Automated: Meta Descriptions

I started here because the pain was most obvious. My site had hundreds of pages with either missing or weak meta descriptions. Fixing them manually would have taken weeks.

Here is the code I wrote:

import { buildMetaDescriptionPrompt, parseMetaDescriptionResponse } from '@power-seo/ai';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function generateMetaDescription(title: string, content: string, keyphrase: string) {
  const prompt = buildMetaDescriptionPrompt({
    title,
    content,
    focusKeyphrase: keyphrase,
  });

  const response = await anthropic.messages.create({
    model: 'claude-opus-4-6',
    system: prompt.system,
    messages: [{ role: 'user', content: prompt.user }],
    max_tokens: prompt.maxTokens,
  });

  const raw = response.content[0].type === 'text' ? response.content[0].text : '';
  return parseMetaDescriptionResponse(raw);
}

const result = await generateMetaDescription(
  'How to Choose a Standing Desk',
  'Full article content here...',
  'standing desk buying guide'
);

console.log(result.description);   // the generated text
console.log(result.charCount);     // e.g. 146
console.log(result.pixelWidth);    // e.g. 874
console.log(result.isValid);       // true

Two things surprised me when I first saw the output. First, the charCount field. I expected that. Second, the pixelWidth field. I had not thought about pixel width before, but it matters more than character count for predicting SERP truncation. Google cuts off at roughly 920px on desktop, not at a fixed character count. An "i" and a "W" are not the same width. The parser estimates rendered width so you can catch truncation before it happens.
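
The pixel-width idea is worth internalizing even if the library computes it for you. Here is a rough estimator of my own; the per-character widths below are illustrative guesses for a SERP-like font, not the actual table @power-seo/ai uses:

```typescript
// Rough SERP pixel-width estimator. The width buckets are hypothetical
// approximations, not Google's real font metrics.
const NARROW = new Set('iljt.,;:!|\'"()[] '.split(''));
const WIDE = new Set('mwMW@'.split(''));

function estimatePixelWidth(text: string): number {
  let px = 0;
  for (const ch of text) {
    if (NARROW.has(ch)) px += 4;                                   // thin glyphs and punctuation
    else if (WIDE.has(ch)) px += 11;                               // wide glyphs
    else if (ch === ch.toUpperCase() && /[A-Z]/.test(ch)) px += 9; // capitals
    else px += 7;                                                  // average lowercase glyph
  }
  return px;
}

// Anything past roughly 920px risks truncation in desktop SERPs.
function risksTruncation(description: string): boolean {
  return estimatePixelWidth(description) > 920;
}
```

Even this crude version shows why "i"-heavy and "W"-heavy descriptions with identical character counts truncate differently.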

I ran this across 200 pages in a weekend. Doing it manually would have taken two weeks.
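
Running it at that scale just takes a little batching so you do not hammer the API. A sketch of the loop I mean; the Page shape and batch size are assumptions, and the generate callback wraps whatever generation function you use (such as the one above):

```typescript
interface Page { title: string; content: string; keyphrase: string; slug: string; }

// Process pages in small batches to stay under provider rate limits.
// batchSize is a guess; tune it to your provider's actual limits.
async function generateAll(
  pages: Page[],
  generate: (p: Page) => Promise<{ description: string; isValid: boolean }>,
  batchSize = 5,
) {
  const results: Record<string, string> = {};
  const failed: string[] = [];

  for (let i = 0; i < pages.length; i += batchSize) {
    const batch = pages.slice(i, i + batchSize);
    const settled = await Promise.allSettled(batch.map(generate));
    settled.forEach((outcome, j) => {
      const page = batch[j];
      if (outcome.status === 'fulfilled' && outcome.value.isValid) {
        results[page.slug] = outcome.value.description;
      } else {
        failed.push(page.slug); // retry later or hand-write these
      }
    });
  }
  return { results, failed };
}
```

The failed list matters: a few pages will always come back invalid or error out, and collecting them beats crashing a 200-page run at page 137.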

Then I Tackled Title Tags

My title tags were inconsistent. Some were too long. Some were missing the keyphrase. Some were just the page name with nothing else.

buildTitlePrompt fixed this by generating five variants per page, each targeting a different angle:

import { buildTitlePrompt, parseTitleResponse } from '@power-seo/ai';

const prompt = buildTitlePrompt({
  content: 'Article about the best noise-cancelling headphones under $100...',
  focusKeyphrase: 'noise cancelling headphones under 100',
  tone: 'informative',
});

const raw = await yourLLMClient.complete(prompt.system, prompt.user, prompt.maxTokens);
const titles = parseTitleResponse(raw);

titles.forEach(({ title, charCount, pixelWidth }) => {
  const status = charCount <= 60 ? 'OK' : 'TOO LONG';
  console.log(`[${status}] "${title}" — ${charCount} chars`);
});

Getting five variants back instead of one changed my workflow. I stopped thinking of the AI as a replacement for my judgment and started treating it as a first draft machine. I pick the best variant, sometimes edit it slightly, and move on. The whole process takes ten seconds per page instead of five minutes. This is basically how an AI SEO tool shifts content work from writing to selection and refinement.
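
If you want to narrow the choice before a human looks, a small filter can shortlist variants. The variant shape matches the destructuring above; the keyphrase check is my own addition, not something the library enforces:

```typescript
interface TitleVariant { title: string; charCount: number; pixelWidth: number; }

// Shortlist variants that fit the length limit and mention the keyphrase,
// sorted shortest-first so the tightest titles surface at the top.
function shortlistTitles(variants: TitleVariant[], keyphrase: string): TitleVariant[] {
  return variants
    .filter(v => v.charCount <= 60)
    .filter(v => v.title.toLowerCase().includes(keyphrase.toLowerCase()))
    .sort((a, b) => a.charCount - b.charCount);
}
```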

The Part I Did Not Expect to Use: Content Suggestions

I added buildContentSuggestionsPrompt almost as an afterthought. I thought it would give me vague advice I already knew.

It did not. It gave me typed, prioritized recommendations across four specific categories: heading structure, paragraph content, keyword placement, and internal linking. Each one came with a priority score.

import { buildContentSuggestionsPrompt, parseContentSuggestionsResponse } from '@power-seo/ai';

const prompt = buildContentSuggestionsPrompt({
  title: 'How to Build a Morning Routine',
  content: pageContent,
  focusKeyphrase: 'morning routine tips',
  analysisResults: 'Current SEO score: 54/100. Thin content on mobile section.',
});

const raw = await yourLLMClient.complete(prompt.system, prompt.user, prompt.maxTokens);
const suggestions = parseContentSuggestionsResponse(raw);

suggestions
  .sort((a, b) => b.priority - a.priority)
  .forEach(({ type, suggestion, priority }) => {
    console.log(`[Priority ${priority}] ${type}: ${suggestion}`);
  });

What I do now is run this against every page that drops in organic traffic. The output becomes a ticket in my content queue. The writer who picks it up knows exactly what to fix and in what order. No editorial meeting needed to figure out why a page is underperforming.
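
Turning the parsed suggestions into a ticket is plain string formatting. A sketch of what my queue entries could look like; the markdown layout is my own convention, not part of the library:

```typescript
interface Suggestion { type: string; suggestion: string; priority: number; }

// Render prioritized suggestions as a markdown checklist for a content ticket.
function toTicketBody(pageUrl: string, suggestions: Suggestion[]): string {
  const lines = suggestions
    .slice() // avoid mutating the caller's array
    .sort((a, b) => b.priority - a.priority)
    .map(s => `- [ ] (${s.priority}) **${s.type}**: ${s.suggestion}`);
  return [`## SEO fixes for ${pageUrl}`, '', ...lines].join('\n');
}
```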

The CI Check That Saved Me Twice

This is the feature I recommend most strongly to other developers. analyzeSerpEligibility is completely deterministic. No LLM. No API call. No cost. It just inspects your schema markup and heading structure and tells you whether your page qualifies for FAQ, HowTo, Product, and Article rich results.

I added it to my deploy pipeline:

import { analyzeSerpEligibility } from '@power-seo/ai';

function checkSerpEligibility(page: { title: string; content: string; schema: string[] }) {
  const results = analyzeSerpEligibility(page);

  const failed = results.filter(r => r.likelihood < 0.5);

  if (failed.length > 0) {
    console.error('SERP eligibility check failed:');
    failed.forEach(({ feature, requirements, met }) => {
      const missing = requirements.filter(r => !met.includes(r));
      console.error(`  ${feature}: missing ${missing.join(', ')}`);
    });
    process.exit(1);
  }
}

It has caught two real regressions. Once when a CMS template change stripped the HowTo schema from a category of pages. Once when a developer refactored the FAQ component and the structured data stopped rendering. Both would have quietly dropped rich results for high-traffic pages. Neither made it to production.

Switching From Claude to OpenAI for One Project

One thing I want to be specific about because I think it matters: when a client asked me to use OpenAI instead of Claude for their pipeline, I changed exactly three lines of code. The prompt builders and parsers stayed identical.

// Before: Anthropic Claude
const response = await anthropic.messages.create({
  model: 'claude-opus-4-6',
  system: prompt.system,
  messages: [{ role: 'user', content: prompt.user }],
  max_tokens: prompt.maxTokens,
});
const raw = response.content[0].type === 'text' ? response.content[0].text : '';

// After: OpenAI
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.system },
    { role: 'user', content: prompt.user },
  ],
  max_tokens: prompt.maxTokens,
});
const raw = response.choices[0].message.content ?? '';

// parseMetaDescriptionResponse(raw) — unchanged in both cases

That is the whole migration. If you have ever switched LLM providers and had to rewrite your prompt templates from scratch, you will understand why this matters.
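
If you switch providers often, it is worth hiding those changed lines behind the same complete(system, user, maxTokens) shape used in the earlier snippets. A sketch of that adapter pattern, assuming the request and response shapes shown above; the factory function names are mine:

```typescript
// A tiny provider-agnostic interface matching the yourLLMClient.complete(...)
// shape used earlier. Each adapter is a few lines; the prompt builders and
// parsers never change.
interface LLMClient {
  complete(system: string, user: string, maxTokens: number): Promise<string>;
}

function anthropicClient(anthropic: any, model: string): LLMClient {
  return {
    async complete(system, user, maxTokens) {
      const res = await anthropic.messages.create({
        model,
        system,
        messages: [{ role: 'user', content: user }],
        max_tokens: maxTokens,
      });
      return res.content[0].type === 'text' ? res.content[0].text : '';
    },
  };
}

function openaiClient(openai: any, model: string): LLMClient {
  return {
    async complete(system, user, maxTokens) {
      const res = await openai.chat.completions.create({
        model,
        messages: [
          { role: 'system', content: system },
          { role: 'user', content: user },
        ],
        max_tokens: maxTokens,
      });
      return res.choices[0].message.content ?? '';
    },
  };
}
```

With this in place, the provider swap becomes a one-line change at the call site instead of edits scattered through the pipeline.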

What I Would Do Differently

If I were starting this over, I would add analyzeSerpEligibility to CI on day one instead of week four. The LLM-powered features are valuable but they are async and cost money to run at scale. The deterministic eligibility check is free and instant and belongs in every deploy pipeline.

I would also run buildTitlePrompt before publishing instead of after. Fixing title tags retroactively across hundreds of pages is much less pleasant than getting five good options before you hit publish.

Installation

npm install @power-seo/ai

Zero runtime dependencies. Works in Node.js, Next.js, Remix, Vite, Cloudflare Workers, and Vercel Edge. Dual ESM and CJS so it drops into any project without bundler configuration.

Final Thought

Using JavaScript to automate SEO is not about removing human judgment from content. It is about removing the mechanical parts so human judgment can focus on what actually matters. Picking the right angle for a title. Deciding what a piece of content is really about. Building something worth linking to.

The spec work, the character counting, the schema validation, the gap analysis, that is what @power-seo/ai handles. The rest is still yours.
