Last year I broke SEO on three product pages and didn't find out for four days.
I'd done a layout refactor on a Thursday — nothing dramatic, just cleaning up some shared components. The canonical prop got dropped somewhere in the diff. Meta descriptions were quietly truncating. Nobody caught it in review because, honestly, who checks SEO tags in a layout PR? The TypeScript compiler didn't complain. The build passed. The tests went green.
Monday morning, Search Console. Rankings sliding. I spent the next two hours figuring out what went wrong, and the whole time I kept thinking: why don't we have a check for this in CI?
That question led me down a rabbit hole comparing Next SEO vs Power SEO Audit — and I want to share what I found, because I think a lot of teams are in the same spot I was.
Next SEO writes your meta tags and JSON-LD. Power SEO Audit scores and validates them in CI so broken SEO never reaches Google. They solve different problems. Running both is the setup I now use on every project.
The Mental Model I Was Missing
For a long time, I thought of SEO tooling as one thing: you install a library, you configure your tags, done. Next SEO fit perfectly into that model. Drop in the component, pass your props, move on.
But after that Thursday incident, I started thinking about it differently. There are actually two completely separate jobs:
- Writing the tags — putting the right meta tags, canonical links, and JSON-LD into the HTML
- Verifying the tags are correct — checking that what you wrote actually follows SEO best practices before Google sees it
Next SEO is genuinely excellent at job one. It does nothing at all for job two. And I'd spent two years treating it like it covered both.
The analogy that clicked for me: Next SEO is like writing code. Power SEO Audit is like running tests on that code. Nobody ships without tests. But almost everybody ships without SEO validation.
How I Actually Use Next SEO (And Where It Gets You Stuck)
I've used Next SEO on probably a dozen projects at this point. Pages Router, mostly. The API just makes sense — you think in React components, and <NextSeo /> fits that mental model perfectly.
One thing worth knowing upfront: Next SEO is a React component, which means in the App Router it needs 'use client' or it won't work the way you expect. For App Router projects I now reach for generateMetadata() directly — but for Pages Router, Next SEO is still my first choice.
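For App Router projects, here is roughly what the equivalent looks like with generateMetadata(). This is a sketch, not a drop-in: the slug-based values are placeholders, and I define a minimal Metadata-shaped type inline so the snippet stands alone (in a real app you'd import the Metadata type from 'next').

```typescript
// app/products/[slug]/page.tsx (sketch, no next-seo involved)
// Minimal inline stand-in for Next.js's Metadata type so this file is
// self-contained; in a real app: import type { Metadata } from 'next';
type Metadata = {
  title: string;
  description: string;
  alternates?: { canonical: string };
  openGraph?: {
    url: string;
    title: string;
    images: { url: string; width: number; height: number }[];
  };
};

type Props = { params: { slug: string } };

export async function generateMetadata({ params }: Props): Promise<Metadata> {
  // Placeholder values; you'd normally fetch these from your CMS by slug.
  const url = `https://example.com/products/${params.slug}`;
  return {
    title: `Product: ${params.slug} | Example`,
    description: 'Product description from your CMS.',
    alternates: { canonical: url }, // the piece that silently vanished in my refactor
    openGraph: {
      url,
      title: `Product: ${params.slug}`,
      images: [{ url: 'https://example.com/images/og.jpg', width: 1200, height: 630 }],
    },
  };
}
```

Because this runs on the server, the metadata lands in the initial HTML with no client bundle cost.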
Here's the setup I use on a product page. Nothing fancy, just the pieces that actually matter:
```tsx
// pages/products/[slug].tsx
import { NextSeo, ProductJsonLd } from 'next-seo';

type Props = {
  title: string;
  description: string;
  slug: string;
};

export default function ProductPage({ title, description, slug }: Props) {
  const url = `https://example.com/products/${slug}`;
  return (
    <>
      <NextSeo
        title={title}
        description={description}
        canonical={url}
        openGraph={{
          url,
          title,
          description,
          type: 'website',
          images: [
            {
              url: 'https://example.com/images/og.jpg',
              width: 1200,
              height: 630,
              alt: title,
            },
          ],
        }}
      />
      <ProductJsonLd
        productName={title}
        description={description}
        images={['https://example.com/images/og.jpg']}
        brand="Example Brand"
        offers={[
          {
            price: '79.99',
            priceCurrency: 'USD',
            availability: 'https://schema.org/InStock',
            url,
            seller: { name: 'Example Store' },
          },
        ]}
      />
      <main>{/* product content */}</main>
    </>
  );
}
```
This works. I like it. It's readable and it ships fast.
Here's the part that used to keep me up at night though: Next SEO renders exactly what you give it, no questions asked. Pass it an empty string for description? It renders an empty meta description. Forget the canonical prop entirely after a refactor? It quietly renders without one. No warning. No error. Nothing in the console.
That's not a bug in Next SEO — it's just how implementation-only tools work. But it means every SEO mistake you make is silent until Google's crawler finds it for you. And by then, you've already lost rankings.
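Before I found a proper audit tool, one cheap stopgap was a tiny guard function over the SEO props. To be clear, this helper is my own invention, not part of Next SEO; the point is just that missing values should fail loudly somewhere, even in a unit test:

```typescript
// A tiny, hypothetical guard over SEO props (not part of Next SEO).
// Run it in a unit test or a dev-only assert so empty or missing values
// fail loudly instead of rendering silently.
type SeoProps = {
  title: string;
  description: string;
  canonical?: string;
};

function checkSeoProps({ title, description, canonical }: SeoProps): string[] {
  const problems: string[] = [];
  if (!title.trim()) problems.push('title is empty');
  if (!description.trim()) problems.push('meta description is empty');
  if (!canonical) problems.push('canonical URL is missing');
  return problems;
}

// The exact failure mode from the refactor story, caught before render:
console.log(checkSeoProps({ title: 'Wireless Headphones', description: '' }));
// → [ 'meta description is empty', 'canonical URL is missing' ]
```

It's crude, but even this would have turned my four-day outage into a red test on the original PR.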
The Tool That Fixed the Blind Spot: Power SEO Audit
After that Thursday incident, I started looking for something that could catch these issues in CI — before a PR merges, not after Google re-crawls.
I found @power-seo/audit. It's a TypeScript-first audit engine that takes structured page data, runs it through a rule set, and gives back a 0–100 score with every issue tagged by severity. No network calls, no crawler, fully synchronous. Runs anywhere Node.js runs.
Install it alongside tsx so you can run TypeScript scripts directly in CI:
```shell
npm i @power-seo/audit
npm i -D tsx
```
The first time I ran it on a page, I immediately understood what I'd been missing. Here's the basic usage:
```typescript
// scripts/audit-page.ts
import { auditPage } from '@power-seo/audit';
import type { PageAuditInput, PageAuditResult } from '@power-seo/audit';

const input: PageAuditInput = {
  url: 'https://example.com/products/wireless-headphones',
  title: 'Best Wireless Headphones 2026 — Free Shipping | Example',
  metaDescription:
    'Shop noise-cancelling headphones with free shipping over $50. Trusted by 50,000+ customers.',
  canonical: 'https://example.com/products/wireless-headphones',
  robots: 'index, follow',
  openGraph: {
    title: 'Best Wireless Headphones 2026',
    description: 'Shop with free shipping and easy returns.',
    image: 'https://example.com/images/headphones-og.jpg',
  },
  content: '<h1>Best Wireless Headphones</h1><p>Our top-rated noise-cancelling headphones...</p>',
  headings: ['h1:Best Wireless Headphones', 'h2:Key Features', 'h2:Customer Reviews'],
  images: [
    { src: '/images/headphones.webp', alt: 'Wireless headphones product photo' },
    { src: '/images/headphones-side.jpg', alt: '' }, // intentionally missing — watch what happens
  ],
  focusKeyphrase: 'wireless headphones',
  wordCount: 820,
};

const result: PageAuditResult = auditPage(input);

console.log(`Score: ${result.score}/100`);
// Score: 76/100

console.log(result.categories);
// {
//   meta:        { score: 90, passed: 9, warnings: 1, errors: 0 },
//   content:     { score: 78, passed: 6, warnings: 2, errors: 0 },
//   structure:   { score: 65, passed: 5, warnings: 1, errors: 1 }, // ← alt text caught
//   performance: { score: 80, passed: 7, warnings: 1, errors: 0 },
// }

result.recommendations.forEach((rec) => console.log('→', rec));
// → Add alt text to all images (1 image missing)
// → Move focus keyphrase into the first paragraph
```
That missing alt text? Flagged immediately as an error. Not a quiet render, not a silent omission — an actual error, with a severity level and a human-readable recommendation.
That output made me realize how much I'd been flying blind.
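Since every rule result carries a severity, a small pure helper makes the output easy to slice in whatever report format you want. The { severity, title } shape below mirrors what the CI gate script later reads off report.pageResults[n].rules; treat the exact field names as an assumption about the library's output, not gospel:

```typescript
// Bucket rule results by severity. The shape of RuleResult is assumed
// from what the gate script reads off report.pageResults[n].rules.
type RuleResult = { severity: 'error' | 'warning' | 'pass'; title: string };

function bySeverity(rules: RuleResult[]): Record<RuleResult['severity'], string[]> {
  const out: Record<RuleResult['severity'], string[]> = { error: [], warning: [], pass: [] };
  for (const rule of rules) out[rule.severity].push(rule.title);
  return out;
}

// Usage with a hand-written sample (a real run would pass result.rules):
const sample: RuleResult[] = [
  { severity: 'error', title: 'Add alt text to all images' },
  { severity: 'warning', title: 'Move focus keyphrase into the first paragraph' },
  { severity: 'pass', title: 'Canonical URL present' },
];
console.log(bySeverity(sample).error);
// → [ 'Add alt text to all images' ]
```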
The CI Gate That Actually Changed How My Team Ships
Okay, this is the part I'm most excited to share — because it's the thing that directly solved the Thursday problem.
auditSite() takes an array of page inputs and returns an aggregated score plus per-page results. I run this on every PR. If the site score drops below 75 or any page has a critical error, the deploy gets blocked. Full stop.
Here's the script:
```typescript
// scripts/seo-gate.ts
import { auditSite } from '@power-seo/audit';
import type { PageAuditInput } from '@power-seo/audit';

// I pull this data from our CMS at build time — MDX frontmatter, Contentful, whatever you use
const pages: PageAuditInput[] = [
  {
    url: 'https://example.com/products/headphones',
    title: 'Best Wireless Headphones 2026 — Free Shipping | Example Store',
    metaDescription: 'Shop noise-cancelling headphones with free shipping over $50.',
    // canonical accidentally dropped during refactor — this gets caught now
    robots: 'index, follow',
    wordCount: 820,
  },
  {
    url: 'https://example.com/products/speakers',
    // roughly 80-character title — over the 60-char guideline, fires a warning
    title: 'Bluetooth Speakers For Home Studio Use — Best Picks For Audio Engineers In 2026',
    metaDescription: 'Find the best Bluetooth speakers for studio and home use.',
    canonical: 'https://example.com/products/speakers',
    robots: 'index, follow',
    wordCount: 740,
  },
];

const report = auditSite({ pages });
const SCORE_THRESHOLD = 75;

const totalErrors = report.pageResults
  .flatMap((p) => p.rules.filter((r) => r.severity === 'error'))
  .length;

if (report.score < SCORE_THRESHOLD || totalErrors > 0) {
  console.error(`\n[FAIL] SEO audit failed — deploy blocked`);
  console.error(`  Site score : ${report.score}/100 (minimum: ${SCORE_THRESHOLD})`);
  console.error(`  Errors     : ${totalErrors}`);
  report.topIssues.forEach((i) =>
    console.error(`  → [${i.severity}] ${i.title}`)
  );
  process.exit(1);
}

console.log(`\n[PASS] SEO audit passed — score: ${report.score}/100`);
```
And the GitHub Actions workflow — make sure tsx is in your devDependencies so npm ci picks it up:
```yaml
# .github/workflows/seo-check.yml
name: SEO Quality Gate

on: [pull_request]

jobs:
  seo-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npx tsx scripts/seo-gate.ts
```
The first time this ran on our repo and blocked a PR, a teammate messaged me: "Why is my PR failing?" I explained what it caught. His response was something like: "Oh. Yeah that would've been bad."
That's the moment I knew this was the right call.
So Which One Should You Use?
Honestly, this isn't really an either/or question — and I think framing it that way is where most comparisons go wrong.
If you're on the Pages Router and want metadata working in 15 minutes: Next SEO. It's mature, the community is enormous, and the component API just makes sense. I still use it.
If you're on the App Router: skip Next SEO and use generateMetadata(). It's the native approach, it's server-side by default, and it keeps SEO out of your client bundles.
If you want to stop discovering SEO regressions in Search Console: add @power-seo/audit to your CI pipeline regardless of which implementation approach you use. It takes maybe an hour to set up and catches the kind of silent mistakes that cost you rankings.
My personal setup right now: Next SEO for Pages Router projects, generateMetadata() for App Router, and @power-seo/audit running in GitHub Actions on both. One layer writes the tags. The other layer makes sure those tags are actually correct before anyone else sees them.
What I'd Tell Myself Before That Thursday
- Implementation is not the same as validation. I spent two years thinking Next SEO covered both. It covers one, very well.
- TypeScript won't save you here. Your types can be perfect and your SEO can still be silently broken. Types check shape. Audit tools check correctness.
- Silent failures are the worst kind. A tool that renders what you pass without checking it feels convenient until it isn't. You want something in your pipeline that actually pushes back.
- CI is the right place for this check. Not a manual audit every quarter. Not a Lighthouse run when you remember. A gate that runs on every PR and blocks mistakes before they ship.
I work on Next.js applications at CyberCraft Bangladesh. We've been burned by silent SEO regressions enough times that we built tooling to prevent them — it's fully open source if you want to dig in: Power SEO
Manual SEO review or automated CI gate — what's your team actually doing? I'm genuinely curious. Are you running Lighthouse? A third-party crawler? Something home-built? Or honestly just hoping nothing breaks? No judgment — that was me eight months ago. Drop it in the comments.