DEV Community

INOXXAI
How I fixed Googlebot indexing for a React + Leaflet SPA (without Next.js or Puppeteer)

I built PlaceLabels — a crowd-sourced map
where locals drop honest reviews about their neighborhoods. Safety
ratings, cost of living, vibe. Real people, not algorithms.

About a week after launch I checked Google.

site:placelabels.com — 1 result. Just the homepage.

I had 35 city pages live. London, Mumbai, Tokyo, New York.
All rendering beautifully in the browser. All completely invisible
to Google.

Classic SPA problem. Here's exactly how I fixed it.


The setup

  • React 18 + TypeScript
  • Vite (not Next.js)
  • Leaflet + OpenStreetMap for the map
  • Deployed as static files + Express backend

Every page body was just the standard Vite React shell:

<div id="root"></div>

Googlebot visits /mumbai. Waits for JavaScript. Gets impatient.
Leaves. No content indexed.


Why I didn't migrate to Next.js

Honestly? Time and complexity.

React-Leaflet has real SSR complications (Leaflet touches window at
import time, which breaks server rendering). The map is the core of
the product. I didn't want to spend two weeks migrating and
debugging hydration issues.

I needed something faster.


The fix: a dead simple Node.js prerender script

No Puppeteer. No headless browser. No 300MB Chromium download.

Just a Node.js script that runs after every Vite build and
manipulates the HTML output directly.

For each city it does five things:

  1. Sets a unique <title>
  2. Sets a unique <meta description>
  3. Injects a real visible <h1> into the page body
  4. Sets the correct canonical URL
  5. Adds BreadcrumbList schema markup

Then writes the result to /dist/{city}/index.html.

// scripts/prerender.mjs
import fs from 'node:fs';
import path from 'node:path';

const baseHTML = fs.readFileSync('dist/index.html', 'utf8');
const cities = ['mumbai', 'london', 'new-york' /* 35 total */];

for (const city of cities) {
  // 'new-york' -> 'New York'
  const cityName = city.replace(/-/g, ' ')
    .replace(/\b\w/g, l => l.toUpperCase());

  const html = baseHTML
    // unique <title>
    .replace('<title>PlaceLabels</title>',
      `<title>${cityName} Neighborhoods — Real Local Reviews | PlaceLabels</title>`)
    // real, visible <h1> inside the otherwise-empty root div
    .replace('<div id="root"></div>',
      `<div id="root">
        <h1>${cityName} Neighborhoods</h1>
        <p>Honest neighborhood reviews for ${cityName} from
        locals who actually live there.</p>
      </div>`);

  fs.mkdirSync(path.join('dist', city), { recursive: true });
  fs.writeFileSync(path.join('dist', city, 'index.html'), html);
}
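The loop above handles the title and the h1. The remaining steps (meta description, canonical URL, BreadcrumbList schema) can be sketched the same way. This is a minimal illustration, not the repo's actual code; buildSeoTags and SITE_URL are names I'm inventing here:

```javascript
// Hypothetical helper covering the meta description, canonical
// URL, and BreadcrumbList steps for one city page.
const SITE_URL = 'https://placelabels.com';

function buildSeoTags(city, cityName) {
  const canonical = `${SITE_URL}/${city}`;
  // BreadcrumbList: Home -> City
  const breadcrumbs = {
    '@context': 'https://schema.org',
    '@type': 'BreadcrumbList',
    itemListElement: [
      { '@type': 'ListItem', position: 1, name: 'Home', item: SITE_URL },
      { '@type': 'ListItem', position: 2, name: cityName, item: canonical },
    ],
  };
  return [
    `<meta name="description" content="Honest neighborhood reviews for ${cityName} from locals who actually live there.">`,
    `<link rel="canonical" href="${canonical}">`,
    `<script type="application/ld+json">${JSON.stringify(breadcrumbs)}</script>`,
  ].join('\n');
}
```

One more .replace() in the loop can splice the returned string in just before the closing head tag.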

Build script:

vite build && node scripts/prerender.mjs

Each city page is now a real 16KB HTML file. Googlebot reads it
instantly, no JavaScript required.


The 404 problem I almost missed

While digging into Google Search Console crawl stats I found
something alarming — 18% of all crawl requests were returning 404.

Nearly 1 in 5 URLs Googlebot tried to visit didn't exist.

I traced it to three gaps in the Express server:

Gap 1 — Label pages were 404ing

When users drop a pin on the map, it creates a URL like:
/bangalore/cubbon-park-morning-jogs-peaceful

Express had no handler for these. Every one returned 404. Fixed by
adding an SSR handler that looks up the label slug in Postgres and
generates a full page with schema markup on the fly.
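A minimal sketch of that handler, assuming a Postgres lookup helper called getLabelBySlug and label fields named title and review (all hypothetical names; the real implementation lives in the repo):

```javascript
// Sketch of the Gap 1 fix: render a full, crawlable HTML page for a
// label on the fly. Field names (title, review) are assumptions.
function renderLabelPage(label) {
  const schema = {
    '@context': 'https://schema.org',
    '@type': 'Place',
    name: label.title,
    description: label.review,
  };
  return `<!doctype html>
<html><head>
  <title>${label.title} | PlaceLabels</title>
  <meta name="description" content="${label.review}">
  <script type="application/ld+json">${JSON.stringify(schema)}</script>
</head>
<body><h1>${label.title}</h1><p>${label.review}</p></body></html>`;
}

// Route wiring (sketch), before the SPA catch-all:
// app.get('/:city/:labelSlug', async (req, res) => {
//   const label = await getLabelBySlug(req.params.city, req.params.labelSlug);
//   if (!label) return res.status(404).send('Not found');
//   res.send(renderLabelPage(label));
// });
```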

Gap 2 — No catch-all route

Express 5 doesn't accept bare * wildcards — it crashes on
startup. Added /{*wildcard} to serve the React SPA for any
unmatched route.

Gap 3 — express.static() was in the wrong place

This one was subtle. The catch-all was catching /sitemap.xml
and serving index.html instead of the actual XML file.

Google Search Console showed "Couldn't fetch" on the sitemap.

Fix was simple — move express.static() to before any route
handlers:

// CORRECT order
app.use(express.static(path.join(__dirname, 'dist'))); // FIRST
app.get('/city/:slug', cityHandler);
app.get('/{*wildcard}', spaHandler); // LAST

The unexpected bonus: user-generated SEO pages

Here's something I didn't plan for.

Every label a user drops on the map now has its own URL, its own
page, its own schema markup. Real content. Real data. Unique title
and description.
/bangalore/cubbon-park-morning-jogs-peaceful
/mumbai/bandra-safe-family-friendly
/london/shoreditch-expensive-vibrant-nightlife

Each one targets a long-tail keyword nobody else is competing for.
And every new user who drops a label creates another one
automatically.
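For reference, slugs like the ones above can be derived from user text in a few lines. This is a generic sketch, not the repo's actual slugifier:

```javascript
// Turn free-form label text into a URL path like
// /bangalore/cubbon-park-morning-jogs-peaceful (illustrative only).
function labelSlug(city, text) {
  const slug = text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumeric runs to hyphens
    .replace(/^-+|-+$/g, '');     // trim stray leading/trailing hyphens
  return `/${city}/${slug}`;
}

labelSlug('bangalore', 'Cubbon Park: morning jogs, peaceful');
// -> '/bangalore/cubbon-park-morning-jogs-peaceful'
```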

445 label pages live right now. Growing daily.


Results

Before: 1 page indexed by Google

After one day of fixes:

974 pages discovered. Sitemap status: Success.

PageSpeed SEO score: 100/100 on both mobile and desktop.


The three things worth remembering

1. You don't need Next.js to fix SPA indexing.
A Node.js script that writes static HTML files is faster to
implement and just as effective for most use cases.

2. Express middleware order matters more than you think.
express.static() before route handlers. Always.

3. User actions can be SEO assets.
Every label drop is a new indexed page. Design your UGC with
this in mind from the start.


The full project is open source:
👉 github.com/InoxxAIsource/neighborhoodtruth-map

Live map: placelabels.com

Happy to answer questions about any part of the implementation.
