Building a modern Single Page Application (SPA) with Vite and Vue is great for user experience, but it's a minefield for SEO. We faced three major hurdles:
- Aggressive Bot Protection: Our `.htaccess` was so tight it was blocking crawlers that we actually wanted.
- The "SPA Meta Trap": Social media bots (Facebook, WhatsApp) couldn't read our dynamic recipe titles or images because they don't execute JavaScript.
- The Scale Problem: We have access to millions of recipes via the Spoonacular API, but we don't own the full database. How do you tell Google about millions of pages you don't physically store?
The Tech Stack
- Frontend: Vite + Vue 3 (Hosted on Apache)
- Backend: Node.js + Express (Hosted on Firebase Functions)
- Database: MongoDB
- Provider: Spoonacular API
The Solution: A 3-Step "Self-Healing" Architecture
1. Solving the Social Preview (The Meta Injection)
Since our frontend is on a standard Apache host, we couldn't use edge functions easily. Instead, we optimized our URL structure to include SEO-friendly slugs:
recipe-finder.org/recipe/644488-german-rhubarb-cake-with-meringue
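A slug like this is easy to derive from the recipe title when building links. A minimal sketch (the `recipePath` helper is an illustrative name, not our exact code):

```js
// Build an SEO-friendly recipe path from the Spoonacular id and title.
function recipePath(id, title) {
  const slug = title
    .toLowerCase()
    .normalize('NFKD')                 // split accented chars into base + diacritic
    .replace(/[\u0300-\u036f]/g, '')   // drop the diacritics
    .replace(/[^a-z0-9]+/g, '-')       // collapse everything else into hyphens
    .replace(/^-+|-+$/g, '');          // trim leading/trailing hyphens
  return `/recipe/${id}-${slug}`;
}

// recipePath(644488, 'German Rhubarb Cake with Meringue')
// => '/recipe/644488-german-rhubarb-cake-with-meringue'
```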
We then implemented a backend-driven meta-injection strategy. When a recipe is requested, our Express server pre-fills the Open Graph tags (`og:title`, `og:image`, `og:description`) using the recipe summary, ensuring beautiful previews on social media.
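Here is a minimal sketch of that idea: an Express route serving the built SPA shell with real Open Graph values swapped into placeholder tags. The placeholder markers (`__OG_TITLE__` etc.) and the `getRecipeSummary` helper are illustrative assumptions, not our exact code:

```js
const fs = require('fs/promises');
const path = require('path');
const express = require('express');

const app = express();

// Escape API data before dropping it into HTML attributes.
const escapeHtml = (s = '') =>
  s.replace(/[&<>"]/g, (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));

// Hypothetical wrapper around Spoonacular's recipe information endpoint.
async function getRecipeSummary(id) {
  const resp = await fetch(
    `https://api.spoonacular.com/recipes/${id}/information?apiKey=${process.env.SPOONACULAR_KEY}`
  );
  const { title, image, summary } = await resp.json();
  return { title, image, summary };
}

app.get('/recipe/:idSlug', async (req, res) => {
  const id = parseInt(req.params.idSlug, 10); // the leading digits are the recipe id
  const recipe = await getRecipeSummary(id);

  // Read the built SPA shell and replace the placeholder meta values.
  let html = await fs.readFile(path.join(__dirname, 'dist', 'index.html'), 'utf8');
  html = html
    .replace('__OG_TITLE__', escapeHtml(recipe.title))
    .replace('__OG_IMAGE__', escapeHtml(recipe.image))
    .replace('__OG_DESCRIPTION__', escapeHtml(recipe.summary));

  res.send(html); // bots see real tags; browsers still boot the Vue app
});
```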
2. The "Self-Building" Database
We didn't want to scrape millions of recipes (and get banned). Instead, we created an Organic Growth Engine.
Every time a user (guest or authenticated) clicks a recipe, our Express backend performs an upsert into MongoDB. If it's a new recipe, it's added to our "SEO Index." If it's an existing one, we update the `lastViewed` timestamp.
```js
// Remove any stale entry for this recipe, then push it back to the
// front of the user's history with a fresh timestamp.
await recipeViewedModel.findOneAndUpdate(
  { auth0Id },
  { $pull: { recipes: { id: recipe.id } } }
);
await recipeViewedModel.findOneAndUpdate(
  { auth0Id },
  { $push: { recipes: { $each: [{ ...recipe, viewedAt: new Date() }], $position: 0 } } },
  { upsert: true, new: true }
);
```
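The snippet above maintains the per-user history; the SEO index itself is a second collection keyed by recipe id. A minimal sketch of that upsert, assuming a `recipeIndexModel` Mongoose model (hypothetical name):

```js
// Add the recipe to the global SEO index, or just refresh lastViewed
// if it is already there. One document per recipe id.
await recipeIndexModel.findOneAndUpdate(
  { id: recipe.id },
  {
    $set: { lastViewed: new Date() },
    $setOnInsert: { title: recipe.title, image: recipe.image },
  },
  { upsert: true }
);
```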
This ensures our database only grows with high-quality, relevant content that users actually care about.
3. The Dynamic Hybrid Sitemap
A static `sitemap.xml` was impossible for millions of potential links, so we built a Dynamic Sitemap Index:
- `sitemap-main.xml` — A static file on our hosting server for core pages (Home, Tools, About).
- `sitemap-recipes-[n].xml` — Dynamic routes on Express that query MongoDB and generate the XML on the fly in chunks of 50,000 URLs (the sitemap protocol's per-file cap).
- The Master Index — A central `sitemap.xml` that bridges the two, served via a silent proxy in `.htaccess`:
```apache
# Silently proxy sitemap requests to the Express API (requires mod_proxy)
RewriteRule ^sitemap\.xml$ https://your-region-your-project.cloudfunctions.net/api/sitemap.xml [P,L]
RewriteRule ^sitemap-recipes-([0-9]+)\.xml$ https://your-region-your-project.cloudfunctions.net/api/sitemap-recipes-$1.xml [P,L]
```
Any request for a sitemap is silently routed to the Express API, which assembles the XML from MongoDB on the fly — no static file maintenance required.
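A minimal sketch of those Express routes, reusing the `recipeIndexModel` from step 2 and the `recipePath` helper from earlier (both hypothetical names):

```js
const CHUNK_SIZE = 50000; // the sitemap protocol's per-file limit
const BASE = 'https://recipe-finder.org';

// Master index: the static main sitemap plus one entry per recipe chunk.
app.get('/api/sitemap.xml', async (req, res) => {
  const total = await recipeIndexModel.countDocuments();
  const chunks = Math.max(1, Math.ceil(total / CHUNK_SIZE));

  const entries = [`<sitemap><loc>${BASE}/sitemap-main.xml</loc></sitemap>`];
  for (let n = 0; n < chunks; n++) {
    entries.push(`<sitemap><loc>${BASE}/sitemap-recipes-${n}.xml</loc></sitemap>`);
  }

  res.type('application/xml').send(
    '<?xml version="1.0" encoding="UTF-8"?>' +
    `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${entries.join('')}</sitemapindex>`
  );
});

// One chunk of up to 50,000 recipe URLs, generated straight from MongoDB.
app.get('/api/sitemap-recipes-:n.xml', async (req, res) => {
  const n = parseInt(req.params.n, 10);
  const recipes = await recipeIndexModel
    .find({}, { id: 1, title: 1 })
    .sort({ id: 1 })
    .skip(n * CHUNK_SIZE)
    .limit(CHUNK_SIZE)
    .lean();

  const urls = recipes
    .map((r) => `<url><loc>${BASE}${recipePath(r.id, r.title)}</loc></url>`)
    .join('');

  res.type('application/xml').send(
    '<?xml version="1.0" encoding="UTF-8"?>' +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}</urlset>`
  );
});
```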
The Results
- Google Search Console Verified: Live URL testing shows Google successfully rendering the SPA and reading the dynamic content.
- Automated SEO: The more our users cook, the larger our sitemap grows. We don't have to manually add a single link.
- Zero-Maintenance Scaling: The system handles 10 recipes or 10 million with the same memory footprint, thanks to MongoDB's `$group` and `$limit` aggregations (see the sketch after this list).
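An illustrative pipeline (not our exact one, and reusing `CHUNK_SIZE` and `n` from the sitemap sketch) showing how `$group` and `$limit` keep the work inside MongoDB so the Node process's memory stays flat:

```js
// $group counts documents server-side; Node only receives one number.
const [{ total = 0 } = {}] = await recipeIndexModel.aggregate([
  { $group: { _id: null, total: { $sum: 1 } } },
]);
const chunkCount = Math.ceil(total / CHUNK_SIZE);

// Each chunk query streams at most CHUNK_SIZE documents, never the whole index.
const chunk = await recipeIndexModel.aggregate([
  { $sort: { id: 1 } },
  { $skip: n * CHUNK_SIZE },
  { $limit: CHUNK_SIZE },
  { $project: { _id: 0, id: 1, title: 1 } },
]);
```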
Key Takeaway for CoderLegion
Don't build for millions of pages on Day 1. Build a system that lets your users' activity grow your SEO footprint for you. Work with the bots, not against them.
Author: Rusu Ionut
Project: recipe-finder.org