In 2026, Google CAN render your SPA. The question is WHEN.
"Google renders JavaScript" is true. Here's the part they don't tell you:
Google renders JavaScript in two waves. And the gap between those waves is where your rankings live or die.
Wave 1 — The Crawler Pass:
Googlebot visits your URL. It downloads the HTML. If the HTML has content — actual text, headings, meta tags — it indexes that content immediately. Fast. Reliable. This is what happens with server-rendered pages.
Wave 2 — The Renderer Queue:
If your HTML is an empty shell with a <div id="root"></div> and a script tag, Googlebot puts your URL in a separate queue for JavaScript rendering. This queue is processed by a headless Chrome instance — but not immediately. Not even that day. It could be hours. It could be days. For large sites, it could be weeks.
During the time between Wave 1 and Wave 2, your page exists in Google's index with whatever was in that initial HTML. Which, for a React SPA, is essentially nothing.
Here's what makes this genuinely dangerous: the process looks like it's working. Google Search Console shows the page as "Discovered — currently not indexed" or "Crawled — currently not indexed," and then eventually "Indexed." You watch it move through the states and assume everything is fine. But the content captured in Wave 1 was thin or empty, and that initial signal can affect how the page is treated in rankings even after Wave 2 processes it.
The SPA indexing timeline:
Day 1: Googlebot visits → Sees empty HTML → Queues for JS rendering
Day 1-7: JS rendering queue processes → Content indexed
(During this window: page either not indexed or indexed with thin content)
Day 7+: Full content indexed
(But the initial quality signal was already recorded)
vs. Server-Rendered:
Day 1: Googlebot visits → Sees full HTML content → Indexed immediately
LCP: fast → positive quality signal
Content: complete → correct ranking signals from day one
I'm not making this up. Google has documented the two-wave process. What they've been less clear about is the timing — which varies by site size, crawl budget, and how frequently you publish new content.
For a small site with fifty pages, this might be manageable. For a product with thousands of pages — a catalog, a marketplace, a documentation site — the crawl budget implications are significant. Google allocates a limited number of crawl requests per site per day. If each page requires two crawl passes (Wave 1 HTML + Wave 2 JavaScript rendering), you've effectively halved your crawl budget efficiency.
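A quick way to see what Wave 1 sees is to inspect the raw HTML without executing any JavaScript. Here's a minimal sketch; the 200-character threshold is an arbitrary assumption of mine, not a Google number:

```typescript
// Strip tags, scripts, and styles; return the length of the visible text.
export function visibleTextLength(html: string): number {
  const withoutScripts = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '');
  const text = withoutScripts
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
  return text.length;
}

// A page that is just <div id="root"></div> plus a script tag fails this check.
// The 200-character minimum is an illustrative assumption; tune it for your site.
export function looksLikeEmptyShell(html: string, minChars = 200): boolean {
  return visibleTextLength(html) < minChars;
}
```

Run it against `await fetch(url).then(r => r.text())` and compare with what you see in the browser. If the raw HTML fails the check but the browser shows a full page, you're depending on Wave 2.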
The Core Web Vitals Problem Nobody Talks About
Here's the second way client-side rendering hurts SEO even after your content is indexed:
Core Web Vitals are ranking factors. And client-side rendering has a structural disadvantage on the most important one.
LCP — Largest Contentful Paint measures how long it takes for the largest visible element on the page to render. For most pages, that's the hero image, the product photo, or the main heading.
In a client-side rendered React app, the sequence for the LCP element looks like this:
1. Browser receives empty HTML
2. Browser downloads JavaScript bundle (200KB? 500KB? More?)
3. React initializes
4. Component renders
5. useEffect fires, API call goes out
6. Data returns
7. Component re-renders with actual content
8. LCP element finally appears
Steps 2 through 8 all happen before the LCP clock stops. On a fast connection, this might be 1.5 seconds. On a 4G connection dropping to 3G in a moving car, this is 4-6 seconds. Google's threshold for "Good" LCP is under 2.5 seconds. "Poor" is over 4 seconds.
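Those thresholds are simple enough to encode. A small helper using Google's published cutoffs of 2.5s and 4s:

```typescript
type LcpRating = 'good' | 'needs-improvement' | 'poor';

// Google's published LCP thresholds: "Good" at or under 2500ms,
// "Poor" above 4000ms, "Needs Improvement" in between.
export function rateLcp(lcpMs: number): LcpRating {
  if (lcpMs <= 2500) return 'good';
  if (lcpMs <= 4000) return 'needs-improvement';
  return 'poor';
}
```

The 1.45s fast-fiber example below rates 'good'; triple it for mobile 4G and the same page lands in 'poor'.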
Client-Side Rendering LCP waterfall:
0ms HTML received (empty)
|
200ms JS bundle starts downloading
|
800ms JS bundle finished downloading
|
850ms React initializes
|
900ms Component renders (shows loading spinner)
|
950ms useEffect fires, API request sent
|
1400ms API response received
|
1450ms Component re-renders with content
| ← LCP recorded here
1450ms [LCP: 1.45s on fast fiber. Multiply by 3x on mobile 4G.]
Server-Side Rendering LCP waterfall:
0ms HTML request sent
|
300ms Server fetches data + renders HTML
|
300ms Browser receives full HTML with content already inside
| ← LCP recorded here (the heading/image was in the HTML)
300ms [LCP: ~300ms. JS loads in background for interactivity.]
The difference isn't a performance trick. It's architectural. The server did the work before the bytes left the data center. The browser got a complete page, not a promise of a page.
The product I currently work on has pages where LCP improvements directly correlated with conversion rate improvements. A 400ms improvement in LCP on the support portal's main landing pages measurably reduced bounce rate. The SEO and the business outcome are the same metric wearing different clothes.
Which Pages You Cannot Afford to Get Wrong
Not every page in your app carries the same SEO risk. The mistake is treating them all the same.
Pages that absolutely need server rendering:
These are the pages where a search engine discovering thin content, slow LCP, or missing metadata will cost you rankings that are hard to recover.
- Landing pages — your homepage, product pages, category pages. These are where acquisition happens. Get them wrong and you're invisible to people who haven't heard of you.
- Blog and content pages — the long-tail keywords live here. If the content isn't in the initial HTML, you're competing at a disadvantage.
- Product detail pages — especially if you have thousands of them. Crawl budget matters at scale.
- Any page you want to rank for a specific keyword — if ranking matters, server rendering matters.
Pages where client-side rendering is absolutely fine:
- Authenticated dashboards — nobody Googles their way into your admin panel. SEO is irrelevant here.
- Real-time features — live data, WebSocket-driven UIs, collaborative tools. Server rendering adds complexity for zero SEO benefit.
- Highly interactive tools — calculators, configurators, complex form flows. The interactivity IS the product.
- User-generated content behind login — private by design, no crawlers should see it.
The architectural decision: know which pages are acquisition pages and which are retention pages. Acquisition pages need server rendering. Retention pages can be client-side. Most apps contain both — and treating them all the same is leaving rankings on the table.
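One way to make that decision explicit is to write it down as data the whole team can review in one place. This is an illustrative sketch, not a Next.js API; the page-type names are my assumptions:

```typescript
type RenderingStrategy = 'static' | 'server' | 'client';

// Acquisition pages get server or static rendering; retention pages stay client-side.
const strategyByPageType: Record<string, RenderingStrategy> = {
  landing: 'server',        // acquisition: must be in the Wave 1 HTML
  blog: 'static',           // acquisition: build-time HTML, CDN-served
  productDetail: 'server',  // acquisition: crawl budget matters at scale
  dashboard: 'client',      // retention: behind login, no SEO value
  realtimeTool: 'client',   // retention: the interactivity is the product
};

export function strategyFor(pageType: string): RenderingStrategy {
  // Default to the safe side: an unnecessarily server-rendered page costs a
  // little server time; an unnecessarily client-rendered page costs rankings.
  return strategyByPageType[pageType] ?? 'server';
}
```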
Building the Next.js SEO Stack — Complete Implementation
Let's build this properly. Four layers: rendering strategy, metadata, structured data, performance.
Layer 1: Rendering Strategy Per Page Type
The App Router makes this natural — each page's data fetching strategy determines its rendering behavior.
// app/products/[id]/page.tsx
// Server Component — rendered on the server, HTML sent to browser
// Googlebot receives full product content in Wave 1. No Wave 2 needed.
import { AddToCartButton } from '@/components/AddToCartButton'; // path assumes a "@/" alias; adjust to your layout
interface Product {
id: string;
name: string;
description: string;
price: number;
images: string[];
category: string;
inStock: boolean;
}
interface ProductPageProps {
params: { id: string };
}
async function ProductPage({ params }: ProductPageProps) {
// This fetch runs on the server during the request.
// Product data is baked into the HTML before it leaves your server.
// No useEffect. No loading state for the initial content.
const product = await fetch(`https://api.yourstore.com/products/${params.id}`, {
// revalidate: 3600 = cache this for 1 hour, then revalidate in background.
// ISR behavior — static-site speed, fresh-enough data.
next: { revalidate: 3600 }
}).then(res => {
if (!res.ok) throw new Error('Product not found');
return res.json() as Promise<Product>;
});
return (
<main>
{/* This H1 is in the HTML. Googlebot reads it in Wave 1.
It's the most important on-page SEO signal. */}
<h1>{product.name}</h1>
{/* Product description — in the HTML, indexed immediately */}
<p>{product.description}</p>
{/* Price — in the HTML, used for rich snippets with structured data */}
<p aria-label={`Price: $${product.price}`}>${product.price}</p>
{/* The add-to-cart interaction is client-side — that's fine.
The CONTENT is server-rendered. The INTERACTION is client-side.
This is the correct split. */}
<AddToCartButton productId={product.id} inStock={product.inStock} />
</main>
);
}
export default ProductPage;
// components/AddToCartButton.tsx
// 'use client' — this component runs in the browser
// It handles interaction. It doesn't need to be server-rendered.
'use client';
import { useState } from 'react';
// addToCart is assumed to live elsewhere in your codebase (e.g. a POST to /api/cart).
declare function addToCart(productId: string): Promise<void>;
interface AddToCartButtonProps {
productId: string;
inStock: boolean;
}
export function AddToCartButton({ productId, inStock }: AddToCartButtonProps) {
const [adding, setAdding] = useState(false);
const [added, setAdded] = useState(false);
if (!inStock) {
return <button disabled>Out of Stock</button>;
}
const handleClick = async () => {
setAdding(true);
await addToCart(productId);
setAdding(false);
setAdded(true);
};
return (
<button onClick={handleClick} disabled={adding}>
{added ? '✓ Added to Cart' : adding ? 'Adding...' : 'Add to Cart'}
</button>
);
}
For static content like blog posts — build-time generation:
// app/blog/[slug]/page.tsx
// generateStaticParams: generate HTML for all posts at build time.
// Result: CDN-served static HTML. Zero server computation per request.
// Googlebot gets full content instantly on Wave 1. Always.
export async function generateStaticParams() {
const posts = await fetch('https://cms.yoursite.com/posts?fields=slug')
.then(res => res.json());
return posts.map((post: { slug: string }) => ({
slug: post.slug,
}));
}
export default async function BlogPost({ params }: { params: { slug: string } }) {
const post = await fetch(`https://cms.yoursite.com/posts/${params.slug}`)
.then(res => res.json());
return (
<article>
<h1>{post.title}</h1>
<div dangerouslySetInnerHTML={{ __html: post.htmlContent }} />
</article>
);
}
Layer 2: Metadata That Actually Works
// app/layout.tsx — Root layout metadata, the defaults every page inherits
import type { Metadata } from 'next';
export const metadata: Metadata = {
title: {
template: '%s | YourBrand',
default: 'YourBrand — What You Do in One Line',
},
description: 'Your default meta description — 150-160 characters, compelling, includes your main keyword.',
openGraph: {
type: 'website',
locale: 'en_US',
url: 'https://yourbrand.com',
siteName: 'YourBrand',
images: [
{
url: 'https://yourbrand.com/og-image.jpg',
width: 1200,
height: 630,
alt: 'YourBrand — brief description',
},
],
},
twitter: {
card: 'summary_large_image',
site: '@yourbrandhandle',
},
alternates: {
canonical: 'https://yourbrand.com',
},
robots: {
index: true,
follow: true,
googleBot: {
index: true,
follow: true,
'max-video-preview': -1,
'max-image-preview': 'large',
'max-snippet': -1,
},
},
};
// app/products/[id]/page.tsx — page-level metadata override
export async function generateMetadata(
{ params }: { params: { id: string } }
): Promise<Metadata> {
const product = await fetch(`https://api.yourstore.com/products/${params.id}`)
.then(res => res.json() as Promise<Product>);
const description = product.description.slice(0, 155);
const canonicalUrl = `https://yourbrand.com/products/${params.id}`;
const imageUrl = product.images[0] ?? 'https://yourbrand.com/og-default.jpg';
return {
title: product.name, // Renders as "Product Name | YourBrand"
description,
openGraph: {
type: 'website',
title: product.name,
description,
url: canonicalUrl,
images: [{ url: imageUrl, width: 1200, height: 630, alt: product.name }],
},
// Canonical URL — critical for products accessible via multiple URLs
alternates: {
canonical: canonicalUrl,
},
};
}
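One small refinement: `slice(0, 155)` can cut a word in half mid-snippet. A word-boundary truncation reads better; the 155-character budget mirrors the example above and is a convention, not a hard Google limit:

```typescript
// Truncate a meta description at the last full word within the budget.
export function truncateDescription(text: string, max = 155): string {
  if (text.length <= max) return text;
  const cut = text.slice(0, max);
  const lastSpace = cut.lastIndexOf(' ');
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut).trimEnd() + '…';
}
```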
Layer 3: Structured Data — The Part Everyone Skips
Structured data is the difference between appearing in search results and appearing in rich search results. Product prices, ratings, availability, article publish dates, FAQ dropdowns — these all come from structured data. They improve click-through rates, which improves rankings. The loop is real.
// components/ProductStructuredData.tsx
interface ProductSDProps {
product: {
id: string;
name: string;
description: string;
price: number;
currency: string;
inStock: boolean;
images: string[];
brand: string;
sku: string;
rating?: { average: number; count: number };
};
url: string;
}
export function ProductStructuredData({ product, url }: ProductSDProps) {
const structuredData = {
'@context': 'https://schema.org',
'@type': 'Product',
name: product.name,
description: product.description,
image: product.images,
brand: { '@type': 'Brand', name: product.brand },
sku: product.sku,
offers: {
'@type': 'Offer',
url,
priceCurrency: product.currency,
price: product.price,
// Use Schema.org availability values — not custom strings
availability: product.inStock
? 'https://schema.org/InStock'
: 'https://schema.org/OutOfStock',
seller: { '@type': 'Organization', name: product.brand },
},
// Only include aggregateRating if you have REAL ratings visible on the page.
// Google validates structured data against visible content.
// Fake ratings get you penalized.
...(product.rating && {
aggregateRating: {
'@type': 'AggregateRating',
ratingValue: product.rating.average,
reviewCount: product.rating.count,
bestRating: 5,
worstRating: 1,
},
}),
};
return (
<script
type="application/ld+json"
dangerouslySetInnerHTML={{ __html: JSON.stringify(structuredData) }}
/>
);
}
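One hardening step worth considering: the `JSON.stringify` output lands inside a `<script>` tag, so a product description containing `</script>` could terminate the tag early. Escaping `<` avoids that. This is a common defensive pattern, not something Next.js does for you here:

```typescript
// Serialize JSON-LD so it cannot break out of its <script> tag.
// "<" becomes the unicode escape \u003c, which JSON parsers read back as "<".
export function safeJsonLd(data: unknown): string {
  return JSON.stringify(data).replace(/</g, '\\u003c');
}
```

You would then pass `safeJsonLd(structuredData)` to `dangerouslySetInnerHTML` instead of raw `JSON.stringify(structuredData)`.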
// For blog posts — Article structured data
interface ArticleSDProps {
  post: {
    title: string;
    excerpt: string;
    featuredImage: string;
    authorSlug: string;
    publishedAt: string;
    updatedAt: string;
  };
  url: string;
  authorName: string;
}
export function ArticleStructuredData({ post, url, authorName }: ArticleSDProps) {
const structuredData = {
'@context': 'https://schema.org',
'@type': 'Article',
headline: post.title,
description: post.excerpt,
image: post.featuredImage,
author: {
'@type': 'Person',
name: authorName,
url: `https://yourbrand.com/authors/${post.authorSlug}`,
},
publisher: {
'@type': 'Organization',
name: 'YourBrand',
logo: {
'@type': 'ImageObject',
url: 'https://yourbrand.com/logo.png',
},
},
datePublished: post.publishedAt,
dateModified: post.updatedAt,
mainEntityOfPage: {
'@type': 'WebPage',
'@id': url,
},
};
return (
<script
type="application/ld+json"
dangerouslySetInnerHTML={{ __html: JSON.stringify(structuredData) }}
/>
);
}
Layer 4: The next/image Non-Negotiable
Red card for any developer using <img> instead of Next.js Image on content pages.
import Image from 'next/image';
// 🚫 The LCP killer
function BadProductImage({ src, alt }: { src: string; alt: string }) {
return (
<img
src={src}
alt={alt}
// No width/height = layout shift = bad CLS score
// No lazy loading = all images load at once = slow LCP
// No format optimization = 800KB JPEG instead of 80KB WebP
/>
);
}
// ✅ The correct implementation
function ProductImage({ src, alt }: { src: string; alt: string }) {
return (
<Image
src={src}
alt={alt}
width={800}
height={600}
// priority = "this is the LCP element — preload it"
// Use on the main product image, hero images, above-the-fold content.
// Don't use on every image — that defeats the purpose.
priority
// sizes = which image size to download at which viewport.
// Without this, mobile downloads the full 800px image unnecessarily.
sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 800px"
style={{ objectFit: 'cover' }}
/>
);
}
The priority prop is the one developers most consistently miss. Without it, even with Next.js image optimization, the LCP image might be lazy-loaded — which means it doesn't start loading until the user's viewport encounters it. For the main product image that's always above the fold, that's wrong. priority preloads the image, which is often the difference between a Good and a Poor LCP score.
The Sitemap You're Probably Not Generating Correctly
// app/sitemap.ts — generates /sitemap.xml dynamically
import { MetadataRoute } from 'next';
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
const [products, posts] = await Promise.all([
fetch('https://api.yourstore.com/products?fields=id,updatedAt').then(r => r.json()),
fetch('https://cms.yoursite.com/posts?fields=slug,updatedAt').then(r => r.json()),
]);
const productUrls: MetadataRoute.Sitemap = products.map(
(product: { id: string; updatedAt: string }) => ({
url: `https://yourbrand.com/products/${product.id}`,
lastModified: new Date(product.updatedAt),
changeFrequency: 'weekly',
priority: 0.8,
})
);
const postUrls: MetadataRoute.Sitemap = posts.map(
(post: { slug: string; updatedAt: string }) => ({
url: `https://yourbrand.com/blog/${post.slug}`,
lastModified: new Date(post.updatedAt),
changeFrequency: 'monthly',
priority: 0.6,
})
);
return [
{
url: 'https://yourbrand.com',
lastModified: new Date(),
changeFrequency: 'daily',
priority: 1.0,
},
...productUrls,
...postUrls,
];
}
// app/robots.ts — generates /robots.txt
import { MetadataRoute } from 'next';
export default function robots(): MetadataRoute.Robots {
return {
rules: [
{
userAgent: '*',
allow: '/',
// Don't waste crawl budget on authenticated routes
disallow: ['/dashboard/', '/account/', '/api/'],
},
],
sitemap: 'https://yourbrand.com/sitemap.xml',
};
}
Here's what I want to say:
Technical SEO is necessary but not sufficient. You can implement everything in this article perfectly — server rendering, metadata, structured data, image optimization, sitemaps — and still rank poorly if your content is thin, your site has no backlinks, and your pages don't answer the questions people are actually searching for.
The technical foundation is the floor. It's the thing that stops technical mistakes from being the reason you don't rank. But it doesn't create rankings by itself.
The developers who understand this stop treating SEO as a technical checklist and start asking: "Does this page genuinely answer the question someone typed into Google?" That question is harder than any of the implementation work. It requires talking to the product team, the content team, the users. It requires humility about whether what you've built is actually useful to a stranger.
Technical excellence in service of thin content is a well-optimized empty shelf.
That said — the technical floor matters enormously. The page that answers the question perfectly AND has proper server rendering AND fast LCP AND correct metadata beats the page that answers the question perfectly but delivers it as a client-side spinner. Every time.
Build the floor. Then fill the shelf.
But let's address the obvious objections.
"Google has gotten better at rendering JavaScript. Isn't this all outdated advice?"
Google has genuinely improved JavaScript rendering. For small sites with limited pages, client-side rendering might work fine. But "works fine" and "optimal" are different things. The two-wave indexing delay is real and documented. The LCP disadvantage of client-side rendering is architectural, not a rendering quality issue. And for sites at scale — thousands of pages — crawl budget efficiency still strongly favors server rendering. The advice isn't outdated; the risk profile has changed. It's lower risk than 2018. It's not zero risk.
"I can fix LCP with client-side rendering by optimizing my bundle and using React.lazy."
Bundle optimization and code splitting reduce JavaScript parsing time, which helps. But they don't change the fundamental sequence: the browser still needs to download JS, execute JS, and fetch data before the LCP element appears. Server rendering moves the data fetch to the server, before the first byte reaches the browser. These are different optimizations solving different bottlenecks. You can optimize a client-rendered app to good LCP scores — but you're working harder to reach the same place a server-rendered app starts at.
"This is all Next.js specific. What about Remix, Astro, or other frameworks?"
The principles — server rendering acquisition pages, proper metadata, structured data, image optimization — apply regardless of framework. Remix has excellent SSR. Astro's island architecture naturally server-renders content with minimal JavaScript. The Next.js implementation details in this article are specific, but the mental model transfers. Pick the framework that fits your team; implement the principles regardless.
📚 For more, you can check:
- 📄 "Rendering on the Web" by Jason Miller & Addy Osmani on web.dev
- 📄 "How Google Search indexes JavaScript sites" on Google Search Central
- 🎥 "Core Web Vitals Explained" by Google Chrome Developers on YouTube
✨ Let's keep the conversation going!
If you found this interesting, I'd love for you to check out more of my work or just drop in to say hello.
✍️ Read more on my blog: bishoy-bishai.github.io
☕ Let's chat on LinkedIn: linkedin.com/in/bishoybishai
📘 Curious about AI?: You can also check out my book: Surrounded by AI