DEV Community

Alamin

Posted on • Originally published at ccbd.dev

My JSON-LD schemas were silently broken — until I found @graph

I spent 3 hours debugging why Google wasn't showing rich results for my blog. My JSON-LD validated fine. The structured data was in the HTML. Google Search Console just... ignored two of my three schemas entirely. The fix turned out to be a single structural change to how the JSON is written. But the real lesson was understanding why multiple <script type="application/ld+json"> tags silently break — and what @graph actually does differently.

The Problem Nobody Warned Me About

Here's what I had. A Next.js App Router blog, three schemas per post page — Article, BreadcrumbList, and Organization. Standard setup. I validated all three with Google's Rich Results Test and everything passed.

Here's the code, nothing unusual:

// app/blog/[slug]/page.tsx
export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await fetchPost(params.slug);

  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{
          __html: JSON.stringify({
            "@context": "https://schema.org",
            "@type": "Article",
            headline: post.title,
            datePublished: post.publishedAt,
            author: { "@type": "Person", name: post.author.name },
          }),
        }}
      />
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{
          __html: JSON.stringify({
            "@context": "https://schema.org",
            "@type": "BreadcrumbList",
            itemListElement: [
              { "@type": "ListItem", position: 1, name: "Home", item: "https://example.com" },
              { "@type": "ListItem", position: 2, name: "Blog", item: "https://example.com/blog" },
              { "@type": "ListItem", position: 3, name: post.title },
            ],
          }),
        }}
      />
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{
          __html: JSON.stringify({
            "@context": "https://schema.org",
            "@type": "Organization",
            name: "My Blog",
            url: "https://example.com",
          }),
        }}
      />
      <article>{/* content */}</article>
    </>
  );
}

Looks fine. Validates fine. Breaks silently in production.

My Article schema showed up in Search Console. My BreadcrumbList and Organization schemas never appeared at all.

Why This Is Worse Than It Sounds

Google usually parses multiple <script type="application/ld+json"> blocks on a page. But "usually" is doing a lot of work in that sentence.

Google's own structured data documentation says this directly:

"You can place structured data on the same page using multiple scripts, but we recommend placing all of them in a single @graph."

That word — recommend — is Google politely pointing you toward the safer path. In practice, on pages with three or more separate JSON-LD blocks, Google can pick up the first schema it finds and stop there. Crawl budget is finite, and complex pages with many script blocks don't always get fully parsed.

But there's a second problem that's arguably worse: there's no validation anywhere in this flow.

Look at this schema:

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "My Post"
}

Spot what's missing? datePublished. image. Both are required fields for Google's Article rich result. This builds cleanly, ships to production, and quietly disqualifies every page that uses it. TypeScript doesn't catch it. Your linter doesn't catch it. You find out three weeks later when Search Console finally re-crawls your pages — if you're even checking.

There's no error, no warning, no 404. The schema just silently fails to qualify.
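
To make that gap concrete, here's a minimal sketch of the kind of runtime check TypeScript alone won't do for you. The `REQUIRED` map and `missingFields` helper are illustrative inventions, not part of any library, and the field list is just this article's example:

```typescript
// Hypothetical helper: TypeScript can't flag a missing key on a plain
// object literal, but a trivial runtime check can.
const REQUIRED: Record<string, string[]> = {
  Article: ["headline", "datePublished", "image"], // illustrative list
};

function missingFields(schema: Record<string, unknown>): string[] {
  const required = REQUIRED[schema["@type"] as string] ?? [];
  return required.filter((field) => !(field in schema));
}

// The incomplete schema from above:
const result = missingFields({
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "My Post",
});
// result: ["datePublished", "image"], exactly the fields that fail silently
```

A check like this is all the later CI approach really is, just with a maintained list of fields instead of a hand-rolled one.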

What @graph Actually Does

Instead of three separate <script> tags, @graph combines everything into a single JSON document:

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "headline": "My Post",
      "datePublished": "2026-01-15",
      "author": { "@type": "Person", "name": "Jane Doe" }
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com" },
        { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog" },
        { "@type": "ListItem", "position": 3, "name": "My Post" }
      ]
    },
    {
      "@type": "Organization",
      "name": "My Blog",
      "url": "https://example.com"
    }
  ]
}

One <script> tag. Google parses it as a single connected document. The schemas can even cross-reference each other using @id — something impossible across separate script blocks.
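
Here's roughly what that cross-referencing looks like. The `#org` fragment identifier is a common convention, not a requirement, and the URLs are placeholders:

```typescript
// The Organization declares an @id; the Article points at it instead of
// duplicating the whole object inline.
const graph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      name: "My Blog",
      url: "https://example.com",
    },
    {
      "@type": "Article",
      headline: "My Post",
      datePublished: "2026-01-15",
      publisher: { "@id": "https://example.com/#org" }, // a reference, not a copy
    },
  ],
};

const json = JSON.stringify(graph);
```

Consumers that understand JSON-LD merge the two nodes by `@id`, so the Article's `publisher` resolves to the full Organization without repeating it.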

After I switched to this format, all three schemas appeared in Search Console within two weeks.

How to Fix It in Next.js App Router

Option 1: Do it manually, no library needed

If you have a small number of page types, this is completely fine. Build the @graph object and render it in a Server Component:

// app/blog/[slug]/page.tsx
export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await fetchPost(params.slug);

  const graph = {
    "@context": "https://schema.org",
    "@graph": [
      {
        "@type": "Article",
        headline: post.title,
        datePublished: post.publishedAt,
        dateModified: post.updatedAt,
        author: { "@type": "Person", name: post.author.name },
        image: {
          "@type": "ImageObject",
          url: post.coverImage,
          width: 1200,
          height: 630,
        },
      },
      {
        "@type": "BreadcrumbList",
        itemListElement: [
          { "@type": "ListItem", position: 1, name: "Home", item: "https://example.com" },
          { "@type": "ListItem", position: 2, name: "Blog", item: "https://example.com/blog" },
          { "@type": "ListItem", position: 3, name: post.title },
        ],
      },
      {
        "@type": "Organization",
        name: "My Blog",
        url: "https://example.com",
      },
    ],
  };

  // Split on "</" and rejoin with "<\/" to block XSS — this is not optional
  const safeJson = JSON.stringify(graph).split("</").join("<\\/");

  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: safeJson }}
      />
      <article>{/* your content */}</article>
    </>
  );
}

The XSS escaping step matters. If a post title ever contains </script>, an unescaped string inside dangerouslySetInnerHTML lets the browser treat it as a real closing tag and break the page. Splitting on </ and rejoining with <\/ is the correct fix — it's also JSON-safe because \/ is a valid JSON escape for /.
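
As a sanity check, the escaping step can be pulled into a small function and exercised directly. This helper is just the article's one-liner given a name for testing:

```typescript
// Same escaping as in the page component: every "</" becomes "<\/",
// so a malicious title can't close the surrounding <script> tag.
function toSafeJsonLd(value: unknown): string {
  return JSON.stringify(value).split("</").join("<\\/");
}

const risky = { headline: "Oops </script><script>alert(1)</script>" };
const safe = toSafeJsonLd(risky);

// No literal "</" survives, and the JSON still round-trips,
// because \/ is a valid JSON escape for /.
console.log(safe.includes("</"));                          // false
console.log(JSON.parse(safe).headline === risky.headline); // true
```

The round-trip is the important part: the escaped string is still valid JSON, so nothing downstream has to un-escape it.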

Option 2: Use a typed schema builder

If you're managing schemas across many page types — articles, products, FAQs, events — writing this manually gets repetitive and hard to audit. The bigger risk is that TypeScript won't tell you when a required field is missing. You'll ship incomplete schemas and not know until Search Console catches it.

I landed on Power-SEO schema after trying a few options. It's a newer library — smaller community than next-seo — but it has built-in @graph support and a validateSchema() function that runs in plain Node.js with no React dependency. Here's the same page, rewritten:

npm install @power-seo/schema
// app/blog/[slug]/page.tsx
import {
  article,
  breadcrumbList,
  organization,
  schemaGraph,
  toJsonLdString,
  validateSchema,
} from "@power-seo/schema";

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await fetchPost(params.slug);

  const articleSchema = article({
    headline: post.title,
    datePublished: post.publishedAt,
    dateModified: post.updatedAt,
    author: { name: post.author.name, url: post.author.url },
    image: { url: post.coverImage, width: 1200, height: 630 },
  });

  // Catches missing required fields before the page renders
  const { valid, issues } = validateSchema(articleSchema);
  if (!valid) {
    console.warn("Schema issues:", issues.map((i) => i.message));
  }

  const graph = schemaGraph([
    articleSchema,
    breadcrumbList([
      { name: "Home", url: "https://example.com" },
      { name: "Blog", url: "https://example.com/blog" },
      { name: post.title },
    ]),
    organization({ name: "My Blog", url: "https://example.com" }),
  ]);

  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: toJsonLdString(graph) }}
      />
      <article>{/* your content */}</article>
    </>
  );
}

toJsonLdString() handles the XSS escaping internally. TypeScript catches wrong field types at build time. validateSchema() catches missing required fields at runtime — before the page renders, not after Google has already crawled it.

The Part That Actually Changed My Workflow

Because @power-seo/schema is pure TypeScript with zero React dependency, validateSchema() runs in a plain Node.js script. No browser. No render tree. That means you can validate schemas in CI before every deploy:

// scripts/validate-schemas.ts
import { article, faqPage, validateSchema } from "@power-seo/schema";

const schemas = [
  article({
    headline: "How to Brew Coffee",
    datePublished: "2026-01-15",
    author: { name: "Jane Doe" },
  }),
  article({
    headline: "Coffee Grinder Guide",
    // datePublished intentionally missing — caught below
  }),
  faqPage([
    { question: "What grind size for espresso?", answer: "Fine grind." },
  ]),
];

let exitCode = 0;

for (const schema of schemas) {
  const { valid, issues } = validateSchema(schema);
  if (!valid) {
    issues
      .filter((i) => i.severity === "error")
      .forEach((e) => console.error(`✗ [${e.field}] ${e.message}`));
    exitCode = 1;
  }
}

process.exit(exitCode);
// Output: ✗ [datePublished] datePublished is required for Article

Broken schemas don't reach production. You don't find out about missing datePublished fields two weeks after deploy when Search Console finally gets around to re-crawling your pages.
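
Wiring that script into CI is a single step. The `tsx` runner here is an assumption; `ts-node` or a precompiled build works the same way, since the script exits non-zero on any error:

```shell
# Run before deploy; fails the pipeline when any schema has an error.
# Assumes tsx is a devDependency of this project.
npx tsx scripts/validate-schemas.ts
```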

If you're also weighing this against next-seo — bundle size, tree-shaking, App Router vs Pages Router differences — I wrote up a full Power-SEO Schema vs Next-SEO comparison after going through the migration myself.

What I Learned

Multiple <script type="application/ld+json"> tags work most of the time — @graph is what Google actually recommends for multi-schema pages. Don't build on "usually works" for something you can only verify in Search Console weeks later.

Passing Google's Rich Results Test is not the same as Google using your schema. The test validates JSON-LD structure. It has no idea whether Googlebot parsed all your script blocks during an actual crawl.

Missing required fields fail silently. No error, no warning, no build failure. The schema just doesn't qualify for rich results, and you have no idea until you check Search Console — if you're even checking. Validation at build time is the only reliable safety net.

You don't need a library for @graph. Writing it manually works perfectly well. A typed builder just removes the chance of shipping a missing datePublished on page 47 of your blog and never noticing.

If you want to explore the typed approach, the repo is here: GitHub Power-SEO

Have you ever validated your JSON-LD and had Google still ignore it? Are you using @graph already — or still separate script tags per page?

And the more uncomfortable question: how long did your broken schema sit in production before you caught it? Was it you, or did Google Search Console tell you first?
