Here's the thing nobody tells you when you start building SPAs: React renders in the browser, but Googlebot often doesn't wait long enough to see what you rendered. Your <title> tag? Empty on first load. Your meta descriptions? Missing. Your beautiful content? Invisible to crawlers.
In this article, I'll walk through the exact problems I ran into, how I diagnosed them, and the tools I used to fix them — including a lightweight npm package that saved me hours of boilerplate.
Why React Apps Struggle with SEO (The Real Reason)
Most React tutorials skip this part: when a crawler hits your React app, it receives something like this:
<!DOCTYPE html>
<html>
  <head>
    <title></title>
    <meta name="description" content="" />
  </head>
  <body>
    <div id="root"></div>
    <script src="/static/js/main.chunk.js"></script>
  </body>
</html>
That's it. An empty shell. Your JavaScript hasn't executed yet, so Google sees nothing meaningful. Googlebot does eventually render your JS in a deferred second pass, but the timing is unreliable — and you have zero control over it.
There are three layers to this problem:
- Dynamic <head> tags — your <title> and <meta> don't update per route
- Missing structured data — no JSON-LD for rich search results
- No way to audit the problem — you're flying blind without tooling
Let's fix each one.
Step 1: Make Your <head> Tags Dynamic Per Route
The first thing I did was stop assuming React Router alone would handle my SEO. It won't. You need a way to inject page-specific metadata into <head> on every route change.
The classic solution is react-helmet-async. Here's a reusable SEOHead component I now drop into every project:
// components/SEOHead.jsx
import { Helmet } from 'react-helmet-async';

export function SEOHead({ title, description, canonical, ogImage }) {
  return (
    <Helmet>
      <title>{title}</title>
      <meta name="description" content={description} />
      <link rel="canonical" href={canonical} />

      {/* Open Graph */}
      <meta property="og:title" content={title} />
      <meta property="og:description" content={description} />
      <meta property="og:url" content={canonical} />
      {ogImage && <meta property="og:image" content={ogImage} />}

      {/* Twitter Card */}
      <meta name="twitter:card" content="summary_large_image" />
      <meta name="twitter:title" content={title} />
      <meta name="twitter:description" content={description} />
    </Helmet>
  );
}
Use it in any page component like this:
// pages/BlogPost.jsx
import { SEOHead } from '../components/SEOHead';

export function BlogPost({ post }) {
  return (
    <>
      <SEOHead
        title={`${post.title} | My Blog`}
        description={post.excerpt}
        canonical={`https://mysite.com/blog/${post.slug}`}
        ogImage={post.coverImage}
      />
      <article>
        <h1>{post.title}</h1>
        {/* rest of content */}
      </article>
    </>
  );
}
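One detail worth guarding here: search engines typically truncate meta descriptions somewhere around 155–160 characters, so a raw post.excerpt can get clipped mid-sentence in results. A small helper (hypothetical, not part of react-helmet-async — the 160-character default is a rule of thumb, not an official limit) can trim the text before it reaches SEOHead:

```javascript
// utils/truncateDescription.js
// Trim a description to a maximum length without cutting a word in half,
// appending an ellipsis when truncation happens. maxLength = 160 is a
// common SEO rule of thumb, not an official Google limit.
function truncateDescription(text, maxLength = 160) {
  if (!text || text.length <= maxLength) return text || '';
  // Reserve one character for the ellipsis, then back up to the last space
  // so we never end on a half-word.
  const slice = text.slice(0, maxLength - 1);
  const lastSpace = slice.lastIndexOf(' ');
  const cut = lastSpace > 0 ? slice.slice(0, lastSpace) : slice;
  return cut + '…';
}
```

Then pass description={truncateDescription(post.excerpt)} into SEOHead instead of the raw excerpt.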
Wrap your app with HelmetProvider at the root:
// index.jsx
import { HelmetProvider } from 'react-helmet-async';

root.render(
  <HelmetProvider>
    <App />
  </HelmetProvider>
);
Result: Every route now has an accurate, unique <title> and meta description. Crawlers can read them.
Step 2: Add Structured Data Without Losing Your Mind
This is where most developers give up. Structured data (JSON-LD) unlocks rich results in Google — star ratings, breadcrumbs, article dates — but the spec is verbose and easy to get wrong.
I started using @power-seo here, not because it does magic, but because it generates validated JSON-LD schemas with a clean API instead of me copy-pasting blobs from schema.org.
npm install @power-seo
Here's how I added Article structured data to blog posts:
import { ArticleSchema } from '@power-seo';

export function BlogPost({ post }) {
  return (
    <>
      <SEOHead ... />
      <ArticleSchema
        headline={post.title}
        description={post.excerpt}
        author={{ name: post.author.name, url: post.author.profileUrl }}
        datePublished={post.publishedAt}
        dateModified={post.updatedAt}
        image={post.coverImage}
      />
      <article>...</article>
    </>
  );
}
This generates a properly formatted <script type="application/ld+json"> block in your <head>. No manual JSON wrangling, no forgetting required fields. You can validate the output instantly with Google's Rich Results Test.
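If you'd rather see what's inside that script block (or skip the dependency entirely), the same object can be built by hand. This sketch follows schema.org's Article type directly — the field names come from the spec, not from any particular library's API:

```javascript
// Build an Article JSON-LD object by hand. Property names follow
// schema.org's Article type; optional fields are only included when provided,
// so the emitted JSON stays minimal.
function buildArticleJsonLd({ headline, description, author, datePublished, dateModified, image }) {
  const schema = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline,
    description,
    datePublished,
  };
  if (author) {
    schema.author = { '@type': 'Person', name: author.name, url: author.url };
  }
  if (dateModified) schema.dateModified = dateModified;
  if (image) schema.image = image;
  return schema;
}
```

In a React component you would then render JSON.stringify(schema) inside a <script type="application/ld+json"> tag via Helmet.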
For product pages, it's the same pattern with ProductSchema. For FAQs, FAQSchema. The schemas match what Google actually expects.
Step 3: Audit What Crawlers Actually See
Here's the workflow I now use before shipping any React page:
1. Google's URL Inspection Tool (Search Console) — paste your URL and click "Test Live URL." This shows you exactly what Googlebot renders, including whether your dynamic <head> tags are populated.
2. curl the raw HTML — simulates a crawler that doesn't execute JS:
curl -s https://yoursite.com/blog/some-post | grep -iE '<title|<meta name="description'
If this returns empty or generic values, you have a problem regardless of what React renders in the browser.
3. Run Lighthouse in CI — add this to your GitHub Actions workflow:
- name: Lighthouse SEO Audit
  uses: treosh/lighthouse-ci-action@v10
  with:
    urls: |
      https://yoursite.com/
      https://yoursite.com/blog/
    configPath: ./lighthouserc.json
    uploadArtifacts: true
Your lighthouserc.json can enforce a minimum SEO score via Lighthouse CI assertions:
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:seo": ["error", { "minScore": 0.9 }]
      }
    }
  }
}
Result: Your CI pipeline now fails if SEO score drops below 90. No more shipping broken metadata.
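The raw-HTML check from step 2 can also be scripted so it runs inside the same CI job. Here's a sketch — it uses a naive regex parse, which is fine for a smoke test but deliberately not a real HTML parser (it assumes the name attribute comes before content, as react-helmet-async emits it):

```javascript
// Extract <title> and meta description from raw (pre-JavaScript) HTML,
// the way a non-rendering crawler would see it. Returns null for a
// missing or empty tag so callers can fail the build on null.
function extractCrawlerMeta(html) {
  const titleMatch = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  // Naive: assumes name="description" appears before content="...".
  const descMatch = html.match(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i);
  const title = titleMatch && titleMatch[1].trim() ? titleMatch[1].trim() : null;
  const description = descMatch && descMatch[1].trim() ? descMatch[1].trim() : null;
  return { title, description };
}
```

Feed it the body of a fetch() against your production URL and exit non-zero when either field comes back null.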
What I Learned: Key Takeaways
- Never trust the browser view for SEO. Always test with curl or a crawler simulator. What you see in Chrome DevTools and what Googlebot sees can be completely different.
- Dynamic <head> management is non-negotiable for SPAs. react-helmet-async is the minimum viable solution — set it up before you write your first route, not after you notice you're missing from search results.
- Structured data is a multiplier, not a nice-to-have. Rich results get significantly higher click-through rates. If you're writing articles, listing products, or building FAQs, add JSON-LD from day one.
- Automate the audit. Manual checks are for debugging. Lighthouse in CI is for prevention. The two hours you spend setting up automated SEO checks will save you from ranking drops you'd never notice until they hurt.
If you want to explore the structured data approach further, I wrote a more detailed breakdown here: React developer tool for SEO
Let's Talk About It
How are you handling SEO in your React apps? Are you using SSR (Next.js, Remix), pre-rendering, or making client-side rendering work with tools like these?
I'm especially curious about teams who've made pure CSR work well for SEO: what does your stack look like? Drop it in the comments; I'm genuinely interested to compare notes.