Most developers hit this wall and never figure out why their Express API feels slow even after deployment.
It wasn't a server issue. It wasn't a database issue.
It was a missing query index plus zero response caching in a MongoDB + Express stack.
🔴 The Problem
Every /api/products call was doing a full collection scan on 50,000+ documents — no index, no cache.
| Metric | Value |
|---|---|
| Average response time | 1800–2400ms |
| Documents scanned per call | 50,000+ |
| Index used | ❌ None |
| Cache | ❌ None |
I only discovered this after opening the MongoDB Atlas Query Profiler — something most devs never check until something breaks in production.
🐌 Before — Slow Query (No Index)
Here's what my Mongoose schema looked like:
```js
// mongoose model — no index defined
const productSchema = new Schema({
  name: String,
  category: String,
  price: Number,
  stock: Number,
  createdAt: Date,
});

const Product = mongoose.model('Product', productSchema);
```
And the Express route:
```js
// ❌ Slow route — full collection scan every time
router.get('/products', async (req, res) => {
  try {
    const products = await Product.find({}).sort({ price: -1 });
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
Every single request was:
- Scanning all 50,000+ documents in the collection
- Sorting them in memory
- Returning the full Mongoose document objects (heavy!)
- Doing this again on the very next request — no memory of the last call
✅ After — Indexed + Cached (5 Lines Added)
Step 1: Add a Compound Index to the Mongoose Schema
```js
const productSchema = new Schema({
  name: String,
  category: String,
  price: Number,
  stock: Number,
  createdAt: Date,
});

// ✅ Add compound index for frequent query pattern
productSchema.index({ category: 1, price: -1 });

const Product = mongoose.model('Product', productSchema);
```
This tells MongoDB to maintain a sorted index on category (ascending) and price (descending), so queries that filter by category and sort by price can walk the index instead of scanning the collection. One caveat: a compound index only serves queries that use a prefix of its keys, so the plain `find({}).sort({ price: -1 })` route above (no category filter) needs a single-field `{ price: -1 }` index to avoid the in-memory sort.
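To make "queries matching this pattern" concrete: MongoDB can use a compound index only when the queried fields form a prefix of the index keys, in order. Here's a toy checker that illustrates the rule (`canUseIndexPrefix` is my own helper for illustration, not a MongoDB API):

```javascript
// Toy illustration of MongoDB's index-prefix rule (not a real MongoDB API).
// A compound index { category: 1, price: -1 } can serve queries whose
// filter/sort fields form a prefix of the index keys, in order.
function canUseIndexPrefix(indexKeys, queryFields) {
  // indexKeys: field names in index order, e.g. ['category', 'price']
  // queryFields: fields the query filters/sorts on, in the order used
  return queryFields.every((field, i) => indexKeys[i] === field);
}

const index = ['category', 'price'];

canUseIndexPrefix(index, ['category']);          // true  — prefix match
canUseIndexPrefix(index, ['category', 'price']); // true  — full match
canUseIndexPrefix(index, ['price']);             // false — skips 'category'
```

This is why a `{ category: 1, price: -1 }` index speeds up category pages but does nothing for a sort on price alone.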
Step 2: Add In-Memory Caching with node-cache
```bash
npm install node-cache
```
```js
import NodeCache from 'node-cache';

// ✅ Cache with 60-second TTL
const cache = new NodeCache({ stdTTL: 60 });

router.get('/products', async (req, res) => {
  try {
    // Check cache first
    const hit = cache.get('all_products');
    if (hit) return res.json(hit);

    // Cache miss — query DB with .lean() for speed
    const products = await Product
      .find({})
      .sort({ price: -1 })
      .lean(); // ✅ Returns plain JS objects, not Mongoose docs

    // Store in cache for next 60 seconds
    cache.set('all_products', products);
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
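One thing the cached route above doesn't handle is invalidation: if a product is created or updated inside the 60-second window, readers see stale data until the TTL expires. A minimal sketch of write-path invalidation follows. The `TtlCache` class is an illustrative Map-based stand-in so the snippet is self-contained, and `createProduct` is a hypothetical write handler; with node-cache itself you'd call `cache.del('all_products')` the same way:

```javascript
// Illustrative stand-in for node-cache: a Map with per-entry TTL.
class TtlCache {
  constructor(defaultTtlSeconds = 60) {
    this.defaultTtl = defaultTtlSeconds;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired or absent
      return undefined;
    }
    return entry.value;
  }
  set(key, value, ttlSeconds = this.defaultTtl) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
  del(key) {
    this.store.delete(key);
  }
}

const cache = new TtlCache(60);

// ✅ Drop the cached listing whenever a write changes it, so the next
// GET /products repopulates the cache with fresh data.
// (createProduct is a hypothetical write handler, not from the route above.)
async function createProduct(ProductModel, body) {
  const product = await ProductModel.create(body);
  cache.del('all_products');
  return product;
}
```

Deleting on write means the worst-case staleness drops from "up to 60 seconds" to "zero", at the cost of one extra DB query after each write.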
📉 The Result
| Metric | Before | After |
|---|---|---|
| First request | ~2100ms | ~180ms |
| Repeated requests | ~2100ms | ~60ms |
| DB documents scanned | 50,000+ | Index only |
| Memory overhead | High (Mongoose docs) | Low (plain objects) |
Response time dropped from ~2100ms → 60ms on repeated calls. MongoDB Atlas query profiler confirmed full index usage. Zero code refactor needed.
💡 3 Things I Now Do in Every MERN Project
1. Add Compound Indexes for Frequent Query Patterns
Think about how your data is queried most often. If you always filter by category and sort by price, that's your index:
```js
schema.index({ category: 1, price: -1 });
```
You can check if your index is being used in MongoDB Atlas → Performance Advisor or by running:
```js
// In MongoDB shell or Compass
db.products.find({ category: 'electronics' })
  .sort({ price: -1 })
  .explain('executionStats')
```
Look for `IXSCAN` in the `winningPlan` — that means your index is being used. `COLLSCAN` means a full collection scan (bad ❌).
2. Always Use .lean() on Read-Only Queries
```js
// ❌ Without .lean() — returns full Mongoose documents
const docs = await Product.find({});

// ✅ With .lean() — returns plain JS objects (~40% faster in my tests)
const products = await Product.find({}).lean();
```
Mongoose documents come with a lot of overhead — getters, setters, validation methods, etc. If you're only reading data and sending it as JSON, .lean() strips all of that away.
⚠️ Don't use `.lean()` if you need Mongoose document methods like `.save()`, `.populate()`, or virtuals on the result.
3. Cache Data That Doesn't Change Every Second
Not everything needs to be fresh from the database on every request. Product listings, category data, blog posts, config settings — these can safely be cached for 30–120 seconds.
```js
const cache = new NodeCache({ stdTTL: 60 }); // 60 seconds default

// Generic cache wrapper utility
const getCached = async (key, fetchFn, ttl = 60) => {
  const hit = cache.get(key);
  if (hit) return hit;
  const data = await fetchFn();
  cache.set(key, data, ttl);
  return data;
};

// Usage
router.get('/categories', async (req, res) => {
  const categories = await getCached(
    'all_categories',
    () => Category.find({}).lean(),
    120 // cache for 2 minutes
  );
  res.json(categories);
});
```
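One gap in a wrapper like this: if the cache is cold and fifty requests arrive at once, all fifty miss and all fifty hit the database (a "cache stampede"). A common fix is to track in-flight fetches so concurrent misses share a single DB call. A sketch, using plain Maps for illustration (`getCachedOnce` is my own name, not part of node-cache; TTL handling is omitted to keep it short):

```javascript
// Sketch: dedupe concurrent cache misses so only the first one
// actually runs the fetch; the rest await the same pending promise.
const values = new Map();   // key -> cached value
const inFlight = new Map(); // key -> pending fetch promise

async function getCachedOnce(key, fetchFn) {
  if (values.has(key)) return values.get(key);       // warm cache hit
  if (inFlight.has(key)) return inFlight.get(key);   // join pending fetch
  const promise = fetchFn()
    .then((data) => {
      values.set(key, data);
      return data;
    })
    .finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

With this in place, a burst of cold-cache requests costs one database query instead of one per request.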
🛠️ Bonus: How to Profile Your MongoDB Queries
If you're not sure where your slowdowns are, here's how to find them:
Option 1 — MongoDB Atlas Query Profiler
- Go to your cluster → Performance Advisor
- Look for queries with high execution time and `COLLSCAN`
- Atlas will even suggest indexes for you
Option 2 — Mongoose Debug Mode (local dev)
```js
// Add this in your app.js / server.js
mongoose.set('debug', true);
```
This logs every query to the console — you can see exactly what's being executed.
Option 3 — .explain() on any query
```js
const result = await Product.find({ category: 'electronics' })
  .sort({ price: -1 })
  .explain('executionStats');

console.log(result.executionStats);
// Look for: totalDocsExamined vs totalDocsReturned
// If they're very different → you need an index
```
🧠 Quick Summary
❌ No index + No cache → 2100ms (full scan every time)
✅ Index + .lean() → 180ms (first request)
✅ Index + Cache → 60ms (repeated requests)
Three small changes. Massive difference in production.
🙋 What's Your Biggest MERN Bottleneck?
Have you run into slow queries in your MERN app? Have a different caching strategy you prefer (Redis, etc.)?
Drop it in the comments — I read every one. 👇
If this helped you, consider following for more MERN performance tips, Node.js patterns, and full-stack deep dives every week.