I spent 3 hours debugging why Google couldn't see my React app. The fix was 4 lines of code.
After checking Search Console, running a crawl, and nearly rage-quitting, I realized my beautifully crafted SPA was returning a blank <body> to Googlebot. All that content - loaded by JavaScript after the bot had already moved on. Classic. If you've hit this wall, this article will show you exactly how Next.js solves it, what to configure, and how to verify Google can actually read what you built.
Why Client-Side React Fails for SEO (And What's Really Happening)
Here's the uncomfortable truth: Google can render JavaScript - but pages that depend on it go into a separate render queue, and that second pass can take days to weeks. Meanwhile, your competitor's server-rendered page gets indexed in hours.
When a bot hits a plain Create React App page, it sees something like this:
<!DOCTYPE html>
<html>
<head><title>My App</title></head>
<body>
<div id="root"></div> <!-- 👈 Nothing here yet -->
</body>
</html>
No content. No metadata. No structured data. Just a div waiting for a JS bundle to hydrate it.
Next.js changes this by rendering the full HTML on the server before sending it to the client. The bot gets real content in the very first response - no waiting, no render queue.
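For contrast, here's roughly what the same page looks like once it's server-rendered - a simplified sketch with placeholder product content:

<!DOCTYPE html>
<html>
<head>
<title>Mechanical Keyboard | My Store</title>
<meta name="description" content="Hot-swappable switches, PBT keycaps..." />
</head>
<body>
<main>
<h1>Mechanical Keyboard</h1> <!-- 👈 Real content, no JS required -->
<p>Hot-swappable switches, PBT keycaps, south-facing RGB.</p>
</main>
</body>
</html>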
Setting Up Server-Side Rendering the Right Way
Next.js gives you three rendering strategies. For SEO-critical pages, you want either SSR for dynamic content or SSG for content that doesn't change per-request, with ISR bridging the two. In the Pages Router those map to getServerSideProps and getStaticProps; in the App Router, which the example below uses, pages are server components by default and you control caching with route segment config.
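In the App Router, here's a quick sketch of how those modes are expressed - these are real route segment config exports, but the values are illustrative:

// Exported from a page.tsx or layout.tsx - pick one per route
export const dynamic = 'force-dynamic'; // SSR: render on every request

// ...or, for SSG with ISR:
export const revalidate = 3600; // serve static HTML, regenerate at most hourly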
Here's a product page using SSR that Google will actually be able to read:
// app/products/[slug]/page.tsx (Next.js 14 App Router)
import { Metadata } from 'next';
type Props = {
params: { slug: string };
};
// This runs on the server - Google gets the full HTML
export async function generateMetadata({ params }: Props): Promise<Metadata> {
const product = await fetchProduct(params.slug);
return {
title: `${product.name} | My Store`,
description: product.description.slice(0, 155),
openGraph: {
title: product.name,
description: product.description,
images: [{ url: product.image, width: 1200, height: 630 }],
},
alternates: {
canonical: `https://yoursite.com/products/${params.slug}`,
},
};
}
export default async function ProductPage({ params }: Props) {
const product = await fetchProduct(params.slug);
return (
<main>
<h1>{product.name}</h1>
<p>{product.description}</p>
<span>Price: ${product.price}</span>
</main>
);
}
async function fetchProduct(slug: string) {
  // Next.js deduplicates identical fetches, so generateMetadata and the
  // page component share a single request
  const res = await fetch(`https://api.yourstore.com/products/${slug}`, {
    next: { revalidate: 3600 }, // ISR: revalidate every hour
  });
  if (!res.ok) throw new Error(`Failed to load product: ${slug}`); // fail fast instead of rendering a broken page
  return res.json();
}
Result: Google crawls this page and immediately sees the <title>, <meta description>, Open Graph tags, and the full product content - all in the initial HTML response.
Structured Data: The SEO Signal Most Devs Skip
Title tags and meta descriptions are table stakes. Structured data (JSON-LD) is where you unlock rich results - star ratings, breadcrumbs, FAQs - directly in the SERP. Almost nobody does this properly.
Here's a reusable component that injects JSON-LD without a library:
// components/JsonLd.tsx
type JsonLdProps = {
data: Record<string, unknown>;
};
export function JsonLd({ data }: JsonLdProps) {
  // Escape "<" so a string in the data can't break out of the script tag
  const json = JSON.stringify(data).replace(/</g, '\\u003c');
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: json }}
    />
  );
}
// Usage in your product page:
import { JsonLd } from '@/components/JsonLd';
export default async function ProductPage({ params }: Props) {
const product = await fetchProduct(params.slug);
const structuredData = {
'@context': 'https://schema.org',
'@type': 'Product',
name: product.name,
description: product.description,
image: product.image,
offers: {
'@type': 'Offer',
price: product.price,
priceCurrency: 'USD',
availability: 'https://schema.org/InStock',
},
};
return (
<main>
<JsonLd data={structuredData} />
<h1>{product.name}</h1>
<p>{product.description}</p>
</main>
);
}
Deploy this and validate your schema with Google's Rich Results Test. If it passes, you're eligible for enhanced listings.
Measuring What Actually Matters: Core Web Vitals + Crawlability
Writing good code is step one. Knowing whether Google agrees is step two. Most devs skip the verification loop and wonder why rankings don't move.
A few things worth automating:
1. Verify your pages are actually being server-rendered
curl -A "Googlebot" https://yoursite.com/products/your-slug | grep "<h1>"
If you see your <h1> content in the terminal output, you're good. If you see nothing - SSR isn't working.
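If you'd rather run that check in CI than in a terminal, here's a minimal Node/TypeScript sketch of the same idea - the default URL and the <h1> marker are placeholders for your own:

// scripts/check-ssr.ts - exits non-zero if the server HTML has no <h1>
const url = process.argv[2] ?? 'https://yoursite.com/products/your-slug';

async function main() {
  // Node 18+ ships a global fetch, so no extra dependencies are needed
  const res = await fetch(url, { headers: { 'User-Agent': 'Googlebot' } });
  const html = await res.text();
  if (!html.includes('<h1>')) {
    console.error(`No <h1> in server HTML for ${url} - SSR may be broken`);
    process.exit(1);
  }
  console.log(`SSR check passed for ${url}`);
}

main();

Run it with npx tsx scripts/check-ssr.ts and it'll fail your pipeline the moment a page stops server-rendering.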
2. Generate a sitemap automatically
// app/sitemap.ts
import { MetadataRoute } from 'next';
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const products = await fetchAllProducts(); // your own data loader - should return slugs and update dates
return products.map((product) => ({
url: `https://yoursite.com/products/${product.slug}`,
lastModified: new Date(product.updatedAt),
changeFrequency: 'weekly',
priority: 0.8,
}));
}
Next.js will serve this at /sitemap.xml automatically. Submit it in Google Search Console.
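The App Router generates robots.txt the same way, and it's worth pointing crawlers at the sitemap there too - a minimal sketch, assuming you want everything crawlable:

// app/robots.ts - served at /robots.txt automatically
import { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: { userAgent: '*', allow: '/' },
    sitemap: 'https://yoursite.com/sitemap.xml',
  };
}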
3. Audit your metadata at scale
When you have dozens of pages, manually checking each one breaks down fast. For this, I've been using @power-seo - an npm package that crawls your Next.js routes and flags missing titles, duplicate descriptions, broken canonical tags, and missing structured data in one pass.
npx @power-seo audit --site https://yoursite.com --output report.json
It's not magic - it's the same checks you'd do manually, but automated. The output is a JSON report you can pipe into CI to fail builds when critical SEO tags are missing. For a deeper walkthrough of the full configuration, the Next.js SEO complete guide covers integrating it with GitHub Actions.
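As a sketch of that CI gate - the report shape below is my assumption, not @power-seo's documented format, so adapt the field names to what the tool actually emits:

// scripts/seo-gate.ts - fail the build on critical audit findings
import { readFileSync } from 'node:fs';

// Assumed report shape; check your actual report.json
type Issue = { severity: string; page: string; message: string };

const report = JSON.parse(readFileSync('report.json', 'utf8')) as { issues: Issue[] };
const critical = report.issues.filter((issue) => issue.severity === 'critical');

if (critical.length > 0) {
  for (const issue of critical) console.error(`${issue.page}: ${issue.message}`);
  process.exit(1); // non-zero exit fails the CI job
}
console.log('SEO gate passed');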
What I Learned (After Getting It Wrong First)
SSR alone isn't enough. You can server-render perfectly and still tank on Core Web Vitals. LCP, CLS, and INP (which replaced FID as a Core Web Vital in 2024) are ranking signals - use the Next.js Image component, font optimization, and avoid layout shifts from async data loads (see the sketch after this list).
Canonical tags matter more than you think. If your product pages are accessible at multiple URLs (with/without trailing slash, with query params), Google splits link equity across them. One canonical tag fixes this.
generateMetadata is not optional for dynamic routes. Static pages can get away with global layout metadata. Dynamic routes need per-page metadata, or you'll have dozens of pages sharing the same title.
Automate your SEO checks before they become surprises. A missing <title> on a high-traffic page can sit unnoticed for weeks. Build the audit into your pipeline - whether that's @power-seo, a Lighthouse CI run, or a custom script.
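For the Core Web Vitals point above, here's the kind of fix I mean - a minimal sketch using next/image and next/font (the image path and font choice are placeholders):

// components/Hero.tsx
import Image from 'next/image';
import { Inter } from 'next/font/google';

// Self-hosted font with swap display: no invisible text, no late-swap shift
const inter = Inter({ subsets: ['latin'], display: 'swap' });

export function Hero() {
  return (
    <section className={inter.className}>
      {/* Explicit dimensions reserve space (no CLS); priority preloads the likely LCP image */}
      <Image src="/hero.jpg" alt="Product hero" width={1200} height={630} priority />
    </section>
  );
}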
Let's Talk About This
Here's something worth discussing: Why is SEO analysis for JavaScript websites fundamentally different from traditional HTML site analysis?
Most SEO tools were built assuming HTML is what the server sends. But with JS frameworks, the "real" HTML only exists after execution. That gap changes everything - from how you audit, to how you structure data, to how you think about crawl budget.
Drop your thoughts in the comments. Have you hit crawlability issues with React or Next.js? What solved it for you? I'm especially curious whether anyone's had luck with dynamic rendering as a fallback versus going full SSR.
Top comments (1)
I had the same issue once—everything looked perfect in the browser but Google saw nothing. Switching just one page to SSR in Next.js fixed indexing within days. Biggest lesson: always “view source,” not just DevTools.