DEV Community

Alamin Sarker
Fixing Common SEO Mistakes with the Power SEO Toolkit (Developer Guide)

I spent 3 hours debugging why Google couldn't crawl my React app. No 404s, no console errors, nothing obviously broken. The culprit? A missing <title> tag, a robots meta set to noindex from a staging environment I'd forgotten to revert, and a canonical URL pointing to http:// instead of https://. Three separate mistakes. Four lines of code to fix them. Zero obvious symptoms while I was building.

If you've ever shipped a site and wondered why it isn't ranking — even when the content is solid — there's a good chance you're living with quiet SEO mistakes right now. This guide walks through the most common ones, how to detect them programmatically, and how to fix them before they cost you traffic.

Mistake #1: Your <head> Is a Disaster and You Don't Know It

The most common SEO mistake in SPAs (React, Next.js, Remix) is inconsistent or missing meta tags across dynamic routes. The homepage looks fine. The blog post at /blog/some-slug? No description, wrong canonical, og:title still says your app name.

You can audit this manually with a quick Node script:

// audit-meta.js — run with Node.js (https:// URLs only)
// Usage: node audit-meta.js https://yoursite.com/some-page
// Note: quick-and-dirty regex checks that assume attribute order
// (name before content). Fine for a spot check; use a real HTML
// parser for anything serious.

const https = require("https");
const url = process.argv[2];

if (!url) {
  console.error("Usage: node audit-meta.js <url>");
  process.exit(1);
}

https.get(url, (res) => {
  let data = "";
  res.on("data", (chunk) => (data += chunk));
  res.on("end", () => {
    const checks = {
      title: /<title>(.+?)<\/title>/i.exec(data)?.[1] || "❌ MISSING",
      description:
        /name="description" content="(.+?)"/i.exec(data)?.[1] || "❌ MISSING",
      canonical:
        /rel="canonical" href="(.+?)"/i.exec(data)?.[1] || "❌ MISSING",
      robots:
        /name="robots" content="(.+?)"/i.exec(data)?.[1] ||
        "✅ Not set (defaults to index)",
      ogTitle:
        /property="og:title" content="(.+?)"/i.exec(data)?.[1] || "❌ MISSING",
    };

    console.log("\n🔍 SEO Meta Audit for:", url);
    console.table(checks);
  });
}).on("error", (e) => console.error("Request failed:", e.message));

Or if you're on Next.js App Router and want a proper solution that scales, @power-seo/meta gives you SSR-safe meta helpers that work per-route without the boilerplate:

npm install @power-seo/meta @power-seo/react
// app/blog/[slug]/page.jsx
import { buildMeta } from "@power-seo/meta";

export async function generateMetadata({ params }) {
  const post = await getPost(params.slug);

  return buildMeta({
    title: post.title,
    description: post.excerpt,
    canonical: `https://yoursite.com/blog/${params.slug}`,
    openGraph: {
      title: post.title,
      description: post.excerpt,
      image: post.coverImage,
    },
  });
}

Every route now gets correct, complete metadata — and buildMeta handles the edge cases (trailing slashes, http→https, og fallbacks) that you'd otherwise hand-roll and eventually get wrong.
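Those edge cases are easy to underestimate. As a rough illustration — plain JavaScript, and explicitly *not* the @power-seo/meta internals, just the kind of normalization a canonical helper has to do — it looks something like this:

```javascript
// Hypothetical sketch of canonical-URL normalization.
// Uses the built-in URL API; no dependencies.
function normalizeCanonical(rawUrl) {
  const url = new URL(rawUrl);
  url.protocol = "https:"; // never emit an http:// canonical
  url.hash = "";           // fragments are invisible to crawlers
  url.search = "";         // strip query params (adjust if query matters to you)
  // Collapse trailing slashes, but keep the root "/"
  if (url.pathname !== "/" && url.pathname.endsWith("/")) {
    url.pathname = url.pathname.replace(/\/+$/, "");
  }
  return url.toString();
}

console.log(normalizeCanonical("http://yoursite.com/blog/post-1/"));
// → "https://yoursite.com/blog/post-1"
```

That's maybe ten lines, but every one of them is a decision you'd otherwise make inconsistently across routes — which is exactly how canonical mismatches happen.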

Mistake #2: Broken or Missing Structured Data

Structured data (JSON-LD) is what powers rich results — star ratings, FAQs, breadcrumbs, article publish dates. Most developers skip it entirely, or add it once to the homepage and forget it exists.

The brutal part: malformed JSON-LD doesn't throw an error. It silently fails. You won't know until you check Google Search Console weeks later and notice zero rich results.

Here's the correct raw JSON-LD for an Article page, which you can drop directly into your <head>:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Article Title Here",
  "author": {
    "@type": "Person",
    "name": "Your Name"
  },
  "datePublished": "2026-04-25",
  "dateModified": "2026-04-25",
  "publisher": {
    "@type": "Organization",
    "name": "Your Site",
    "logo": {
      "@type": "ImageObject",
      "url": "https://yoursite.com/logo.png"
    }
  },
  "image": "https://yoursite.com/og-image.jpg",
  "description": "A practical developer guide to diagnosing and fixing common SEO mistakes."
}
</script>

Always validate at schema.org/docs/validator before pushing. One missing comma means Google ignores the whole block.
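You can also catch pure syntax errors before they ever reach the validator. A minimal check — plain Node, no dependencies, a sketch I'd run against rendered HTML — is to extract every JSON-LD block and run it through JSON.parse:

```javascript
// Syntax check only: it catches JSON that won't parse (the trailing-comma
// class of bugs), not wrong @type values or missing required properties —
// the schema validator is still needed for those.
function checkJsonLd(html) {
  const blocks = [
    ...html.matchAll(
      /<script type="application\/ld\+json">([\s\S]*?)<\/script>/gi
    ),
  ];
  const errors = [];
  blocks.forEach((match, i) => {
    try {
      JSON.parse(match[1]);
    } catch (e) {
      errors.push(`Block #${i + 1}: ${e.message}`);
    }
  });
  return errors; // empty array means every block parsed
}

const html =
  '<script type="application/ld+json">{"@type": "Article",}</script>';
console.log(checkJsonLd(html)); // trailing comma → one error reported
```

Wire this into the same CI script as your meta audit and the "one missing comma" failure mode stops being silent.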

For apps with multiple schema types (Article, BreadcrumbList, FAQPage, Product), managing raw JSON-LD strings gets messy fast. @power-seo/schema has type-safe builders for 23 schema types with React components that inject the script tag server-side:

npm install @power-seo/schema
// app/blog/[slug]/page.jsx
import { ArticleSchema, BreadcrumbSchema } from "@power-seo/schema";

export default async function BlogPost({ params }) {
  const post = await getPost(params.slug);

  return (
    <>
      <ArticleSchema
        headline={post.title}
        description={post.excerpt}
        datePublished={post.publishedAt}
        dateModified={post.updatedAt}
        author={{ name: post.author.name }}
        image={post.coverImage}
      />
      <BreadcrumbSchema
        items={[
          { name: "Home", url: "https://yoursite.com" },
          { name: "Blog", url: "https://yoursite.com/blog" },
          { name: post.title, url: `https://yoursite.com/blog/${params.slug}` },
        ]}
      />
      {/* your page content */}
    </>
  );
}

TypeScript will catch invalid fields before they reach production. That alone has saved me from several silent failures.

Mistake #3: You're Not Auditing at the Route Level

Most SEO tools audit your homepage. Maybe a few pages you manually submit. They don't crawl every route of a dynamic app — which is exactly where the problems hide. Product pages, blog posts, user profiles. That's where your actual content lives, and that's where the gaps are.

The fix is running automated SEO audits in CI, so regressions get caught in PRs instead of weeks after deployment. @power-seo/audit is built for this — it checks meta completeness, content structure, performance rules, and more against a list of URLs:

npm install @power-seo/audit --save-dev
// scripts/seo-ci-check.js — add to your CI pipeline
const { auditPage } = require("@power-seo/audit");

const pagesToAudit = [
  "https://yoursite.com/",
  "https://yoursite.com/blog/post-1",
  "https://yoursite.com/products/widget",
  "https://yoursite.com/about",
];

(async () => {
  let failed = false;

  for (const url of pagesToAudit) {
    const result = await auditPage(url, {
      checks: ["meta", "content", "structure", "performance"],
      strict: false,
    });

    if (result.issues.length > 0) {
      console.error(`\n❌ Issues found on: ${url}`);
      result.issues.forEach((issue) =>
        console.error(`  [${issue.severity}] ${issue.message}`)
      );
      failed = true;
    } else {
      console.log(`✅ Clean: ${url}`);
    }
  }

  if (failed) process.exit(1);
})();

Add node scripts/seo-ci-check.js as a step in your GitHub Actions workflow. SEO regressions now get caught the same way lint errors do — before they ship.
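One refinement worth making: a hard-coded pagesToAudit list drifts out of sync as routes are added. A sketch of keeping it honest — assuming your site serves a standard flat sitemap.xml, and using Node 18+'s global fetch — is to derive the list from the sitemap at audit time:

```javascript
// Pull every <loc> entry out of sitemap XML (pure function, easy to test).
// Assumes a flat <urlset> sitemap; a sitemap-index file would need one
// extra level of fetching.
function parseSitemapUrls(xml) {
  return [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1].trim());
}

// Fetch the live sitemap and return its URLs for the audit loop.
async function getPagesToAudit(sitemapUrl) {
  const res = await fetch(sitemapUrl); // Node 18+ global fetch
  if (!res.ok) throw new Error(`Sitemap fetch failed: ${res.status}`);
  return parseSitemapUrls(await res.text());
}

module.exports = { parseSitemapUrls, getPagesToAudit };
```

Swap `pagesToAudit` for `await getPagesToAudit("https://yoursite.com/sitemap.xml")` and new routes get audited the moment they're published — no one has to remember to update the list.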

Mistake #4: Ignoring Core Web Vitals Until It's Too Late

Google has used Core Web Vitals as a ranking signal since 2021. Most developers check it once with Lighthouse during development, ship the site, and only look again when traffic drops — which is usually 6–8 weeks after the damage was done.

The fix is continuous monitoring, not one-time audits. @power-seo/audit includes performance rules that check LCP, CLS, and FID thresholds as part of your regular audit runs, so you get alerted when a deploy degrades your vitals. (Note: Google replaced FID with INP as a Core Web Vital in March 2024 — check which responsiveness metric your tooling reports.)

// scripts/seo-ci-check.js — extend your existing audit
// (wrapped in an async IIFE — top-level await isn't available in CommonJS)
const { auditPage } = require("@power-seo/audit");

(async () => {
  const result = await auditPage("https://yoursite.com/", {
    checks: ["performance"],
    performanceThresholds: {
      lcp: 2500, // ms — Google's "Good" threshold
      cls: 0.1,  // score — Google's "Good" threshold
      fid: 100,  // ms — Google's "Good" threshold
    },
  });

  result.issues.forEach((issue) => {
    if (issue.type === "performance") {
      console.error(`⚠️ Vitals issue: ${issue.message}`);
    }
  });
})();

Pair this with your CI pipeline from Mistake #3 and you've got a single script that audits both SEO correctness and performance on every build — no separate tools, no manual Lighthouse runs.

For a full breakdown of how I wired all of this into a single CI pipeline with GitHub Actions, I wrote it up in detail on ccbd.dev/blog/seo-mistakes-fixed-using-the-power-seo-toolkit.

What I Learned

  • Most SEO bugs are invisible — no error logs, no broken layouts, nothing. The only way to catch them is to actively audit every route, not wait for rankings to drop.
  • Treat SEO checks like lint rules — codify your requirements and run them in CI. A missing <title> caught in a PR is better than one discovered after 6 weeks of poor indexing.
  • Structured data is free money — it takes 20 minutes to add valid JSON-LD to your templates. Most devs skip it entirely and leave rich results on the table.
  • Core Web Vitals need continuous monitoring — a one-time Lighthouse run in dev doesn't reflect real-user conditions on mobile, on slow connections, or after a CDN config change.

What About You?

What SEO mistakes have quietly hurt your rankings — and how long did it take you to find them? Drop it in the comments. I'm especially curious whether anyone else has been burned by staging noindex tags making it to production (it happens more than people admit).
