I spent 5 hours debugging why Google couldn't see my React app. The fix was 4 lines of code.
Not a webpack config. Not a server migration. Four. Lines. That afternoon taught me more about JavaScript SEO than any blog post I'd read. If you're building modern frontends with React, understanding React SEO is no longer optional. If you're shipping SPAs, Next.js apps, or anything that leans on client-side rendering and you care about organic traffic, this checklist is for you. Let's go through the real issues and the actual fixes.
1. Understand How Googlebot Actually Renders Your JS
Here's the counterintuitive part most developers miss: Googlebot renders JavaScript, but not instantly.
Google uses a two-wave indexing model. The first wave crawls your raw HTML. The second wave, which may come days or even weeks later, renders the JavaScript and indexes the dynamic content. If your critical content only exists after JS executes, you're invisible to Google's first pass.
Run this test right now:
# Fetch your page the way Googlebot sees it (raw HTML, no JS)
curl -A "Googlebot" https://your-site.com/your-page | grep -i "your key content"
If that grep returns nothing, you have a rendering problem. Your users see the page fine. Googlebot's first wave does not.
The fix: Move critical content (headings, body text, product descriptions) into the server-rendered HTML, not into a useEffect that fires after mount.
// Bad: Content only exists after JS runs
function ProductPage() {
  const [product, setProduct] = React.useState(null);

  React.useEffect(() => {
    fetch('/api/product/123').then(r => r.json()).then(setProduct);
  }, []);

  return <h1>{product?.name ?? ''}</h1>; // Empty on first Googlebot wave
}
// Good: Content in the initial HTML payload (Next.js example)
export async function getServerSideProps() {
  const product = await fetchProduct(123);
  return { props: { product } };
}

export default function ProductPage({ product }) {
  return <h1>{product.name}</h1>; // Present in raw HTML
}
2. Fix Your Meta Tags (They're Probably Broken in SPAs)
Client-side routing is the silent killer of meta tags. When a user navigates from / to /about in a React SPA, the <title> and <meta name="description"> in your index.html don't update unless you explicitly tell them to.
Check this in DevTools: navigate between routes and watch the <title> tag in the Elements panel. If it's not changing, every page on your site is competing for the same keyword in Google's index.
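The quick-and-dirty patch is setting document.title in an effect on each route change. That fixes what users (and the Elements panel) see, but remember section 1: anything set inside useEffect is absent from the raw HTML, so it only helps after Google's render wave. A minimal sketch of the idea; the route-to-title map and fallback are hypothetical:

```javascript
// Hypothetical per-route title map for illustration
const TITLES = {
  '/': 'Home | My Blog',
  '/about': 'About | My Blog',
};

// Resolve a title for the current pathname, falling back to a site-wide default
function titleForRoute(pathname, titles, fallback = 'My Blog') {
  return titles[pathname] ?? fallback;
}

// In a React component you would wire it up roughly like:
// useEffect(() => { document.title = titleForRoute(location.pathname, TITLES); }, [location]);
```

Fine as a stopgap, but it does nothing for descriptions, Open Graph tags, or the first crawl wave.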
The manual fix with react-helmet:
import { Helmet } from 'react-helmet-async';

function BlogPost({ post }) {
  return (
    <>
      <Helmet>
        <title>{post.title} | My Blog</title>
        <meta name="description" content={post.excerpt} />
        <meta property="og:title" content={post.title} />
        <meta property="og:description" content={post.excerpt} />
        <link rel="canonical" href={`https://mysite.com/blog/${post.slug}`} />
      </Helmet>
      <article>
        <h1>{post.title}</h1>
        {/* ... */}
      </article>
    </>
  );
}
This works, but it's manual. For every new page type you create, you need to remember to wire up all five or six meta tags correctly. Miss one and you've got a page with no Open Graph image and a generic description.
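Before reaching for a library, you can at least make the omissions loud. A tiny validator sketch that warns when a page's meta config is incomplete; the required-field list is my own assumption of what "complete" means, adjust it for your site:

```javascript
// Fields every page should define — this list is an assumption, not a standard
const REQUIRED_META = ['title', 'description', 'canonical', 'ogTitle', 'ogDescription', 'ogImage'];

// Returns the names of any required meta fields missing from a page's config
function missingMeta(pageMeta, required = REQUIRED_META) {
  return required.filter((key) => !pageMeta[key]);
}

// Example: a page that forgot its OG description and image
const gaps = missingMeta({
  title: 'My Post',
  description: 'A post about things',
  canonical: 'https://mysite.com/blog/my-post',
  ogTitle: 'My Post',
});
// gaps → ['ogDescription', 'ogImage']
```

Run it over all page configs in a build step or unit test so a missing OG image fails CI instead of shipping.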
If you're looking to automate this at scale across dozens of page templates, I've been using power-seo on a Next.js App Router project. It handles the tag generation from a config object so you're not copy-pasting Helmet blocks everywhere. Setup looks like this:
npm install power-seo
// app/blog/[slug]/page.jsx
import { generateMetadata as powerSeoMeta } from 'power-seo';

export async function generateMetadata({ params }) {
  const post = await getPost(params.slug);
  return powerSeoMeta({
    title: post.title,
    description: post.excerpt,
    canonical: `https://mysite.com/blog/${post.slug}`,
    openGraph: { image: post.coverImage },
  });
}
Four lines instead of fifteen, and you get consistent output every time. The full checklist approach is documented here if you want to see how this fits into a broader SEO pipeline.
3. Structured Data: The Checklist Item Nobody Does
Most developers get meta tags right eventually. Almost nobody adds structured data, and that's a real missed opportunity, especially for blogs, e-commerce, and local businesses.
Structured data (JSON-LD) tells Google exactly what type of content it's looking at. It's what gets you rich results: star ratings, FAQ dropdowns, breadcrumbs in the SERP. Here's what a correct Article schema looks like:
// components/ArticleSchema.jsx
export function ArticleSchema({ post, authorName, siteUrl }) {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": post.title,
    "description": post.excerpt,
    "author": {
      "@type": "Person",
      "name": authorName,
    },
    "datePublished": post.publishedAt,
    "dateModified": post.updatedAt,
    "image": post.coverImage,
    "url": `${siteUrl}/blog/${post.slug}`,
  };

  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
    />
  );
}
Drop this in your page component and validate it at Google's Rich Results Test. You should see it parsed correctly within seconds.
Common mistake: Putting JSON-LD in a useEffect and appending it to the DOM dynamically. Don't. Inject it server-side or via the <head> so it's present in the raw HTML on that critical first Googlebot wave.
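Before pasting into the Rich Results Test, a quick local sanity check catches the usual failures: unparseable JSON and missing fields. The required-field list below is a minimal assumption for Article, not Google's full spec:

```javascript
// Minimal sanity check for an Article JSON-LD string.
// The required-field list is an assumption; see schema.org/Article
// and Google's structured data docs for the authoritative rules.
function checkArticleJsonLd(raw) {
  let schema;
  try {
    schema = JSON.parse(raw); // must be valid JSON before anything else
  } catch {
    return { ok: false, errors: ['invalid JSON'] };
  }
  const errors = [];
  if (schema['@context'] !== 'https://schema.org') errors.push('wrong @context');
  if (schema['@type'] !== 'Article') errors.push('wrong @type');
  for (const field of ['headline', 'datePublished', 'image', 'author']) {
    if (!schema[field]) errors.push(`missing ${field}`);
  }
  return { ok: errors.length === 0, errors };
}
```

Wire it into a unit test that renders each page template and runs the check over every JSON-LD script it emits.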
4. Canonical Tags and Duplicate Content Are Silently Killing Your Rankings
This is the one developers argue about most. "We don't have duplicate content." You probably do.
Any of these create duplicate pages in Google's index:

- `https://mysite.com/page` vs `https://mysite.com/page/`
- `http://` vs `https://`
- `www.mysite.com` vs `mysite.com`
- `/products?sort=price` vs `/products?sort=rating` (faceted navigation)
- Paginated routes without proper canonicals
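In Next.js, several of these variants can be normalized at the framework level with redirects, so only one URL form ever gets served. A sketch; mysite.com is a placeholder, and your host or CDN may already handle the http-to-https hop:

```javascript
// next.config.js — sketch; mysite.com is a placeholder domain
module.exports = {
  // Redirects /page/ → /page so the trailing-slash variant never gets indexed
  trailingSlash: false,
  async redirects() {
    return [
      {
        // Redirect www to the apex domain so only one host exists in the index
        source: '/:path*',
        has: [{ type: 'host', value: 'www.mysite.com' }],
        destination: 'https://mysite.com/:path*',
        permanent: true, // 308 — tells crawlers the move is definitive
      },
    ];
  },
};
```

Redirects handle the host and slash variants; query-string variants still need canonicals.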
The fix is a canonical tag on every page, pointing to the definitive URL:
// In Next.js App Router (app/layout.jsx or per-page)
export const metadata = {
  alternates: {
    canonical: 'https://mysite.com/products', // Always the clean, definitive URL
  },
};
For paginated content, page 2 should self-canonicalize, not point back to page 1. It can additionally use rel="next" / rel="prev" (Google has deprecated these, but they still help some other crawlers):
<!-- On /products?page=2 -->
<link rel="canonical" href="https://mysite.com/products?page=2" />
<!-- Don't point page 2's canonical back to page 1: that tells Google page 2 is a duplicate -->
Run a crawl with Screaming Frog or a similar tool once a month. Filter for pages with missing or non-self-referencing canonicals. Fix them before Google deindexes them.
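You can automate a lightweight version of that check yourself. A sketch of the core logic; the regex parsing is deliberately naive and fine for a spot check, but a real crawler should use a proper HTML parser:

```javascript
// Extract the canonical href from raw HTML; returns null if none found.
// Naive regex (assumes rel appears before href) — a spot-check tool, not a crawler.
function extractCanonical(html) {
  const match = html.match(/<link[^>]*rel=["']canonical["'][^>]*href=["']([^"']+)["']/i);
  return match ? match[1] : null;
}

// Flag a page whose canonical is missing or points somewhere else
function canonicalProblem(pageUrl, html) {
  const canonical = extractCanonical(html);
  if (!canonical) return 'missing canonical';
  if (canonical !== pageUrl) return `non-self-referencing: ${canonical}`;
  return null; // healthy
}
```

Feed it every URL from your sitemap (fetched with curl or node's fetch) and any non-null result is a page to fix.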
What I Learned / Key Takeaways
- Crawl your own site like a bot. Use `curl -A "Googlebot"` and Google's URL Inspection tool regularly. Your browser always shows the JS-rendered version. Googlebot sometimes doesn't.
- Server-side render critical content. If a heading, product name, or article body only exists after a `useEffect`, it's at risk of not being indexed on the first wave.
- Meta tags are per-route, not per-app. Every unique URL needs a unique title and description. Build this into your page template once, correctly, and never think about it again.
- Structured data is low-hanging fruit. An hour of JSON-LD setup can unlock rich results that boost CTR without any change in rankings.
If you want to try the automated meta tag generation approach, here's the repo: https://github.com/CyberCraftBD/power-seo
Over to You
Have you tried a Next SEO alternative for App Router yet? The original next-seo package has some rough edges with the new app/ directory, and I'm curious what patterns the community has settled on.
Drop your setup in the comments especially if you've found a clean way to handle dynamic OG images at scale. That's the next problem on my list.