MongoDB Query Optimization: How I Reduced Response Time from 2 Seconds to 200ms

When I joined a startup managing 500+ node devices, our MongoDB
queries were taking 2 seconds to return results. For a real-time
system, this was unacceptable. Users were experiencing delays, and
our API was struggling under load.

Three months later, those same queries were returning in 200ms -
a 90% improvement. Here's exactly how I did it.
The Situation:

  • 500+ node devices sending real-time sensor data
  • 10,000+ API requests per month
  • MongoDB database growing rapidly
  • Query response times: 2-3 seconds
  • Users experiencing lag in dashboard updates

Why It Mattered:
For an IoT system, real-time means real-time. A 2-second delay in
showing sensor data could mean:

  • Temperature alerts arriving too late
  • User frustration and churn
  • System appearing "broken"
  • Unable to scale to more devices

I started by profiling our slowest queries using MongoDB's built-in tools:

  1. Enable the MongoDB profiler:
    // Level 2 profiles every operation; level 1 with a slowms
    // threshold is lighter if you only want slow queries logged
    db.setProfilingLevel(2)

  2. Analyze slow queries:
    // Operations slower than 1 second, newest first
    db.system.profile.find({ millis: { $gt: 1000 } }).sort({ ts: -1 })
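
For individual queries, explain() is a quick complement to the profiler.
A minimal sketch (deviceId and startDate are placeholders matching the
query shown in the indexing section below):

// Run the query with execution stats to see how the server resolves it
db.sensorData.find({
  deviceId: "ABC123",
  timestamp: { $gte: startDate }
}).explain("executionStats")

// Before indexing, the winning plan reported a COLLSCAN (full
// collection scan); after indexing it reports an IXSCAN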

What I Found:

  • Missing indexes on frequently queried fields
  • N+1 query problems in our API
  • Large documents being fetched when we only needed specific fields
  • No pagination on collection scans

The Solution:

  1. Strategic Indexing

// BEFORE: Full collection scan
db.sensorData.find({
  deviceId: "ABC123",
  timestamp: { $gte: startDate }
})
// Query took 2000ms scanning 50,000+ documents

// AFTER: Compound index
db.sensorData.createIndex({ deviceId: 1, timestamp: -1 })
// Query now takes 45ms
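
If you manage indexes from application code rather than the shell, the
same compound index can be declared on the Mongoose schema. A minimal
sketch (the schema fields are assumptions based on the queries in this
post):

const mongoose = require("mongoose")

const sensorDataSchema = new mongoose.Schema({
  deviceId: String,
  temperature: Number,
  humidity: Number,
  timestamp: Date
})

// Same compound index as above; Mongoose builds it on startup
// unless autoIndex is disabled
sensorDataSchema.index({ deviceId: 1, timestamp: -1 })

const SensorData = mongoose.model("SensorData", sensorDataSchema)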

  2. Projection (Fetch Only What You Need)

// BEFORE: Fetching entire document (5KB average)
const data = await SensorData.find({ deviceId: id })

// AFTER: Project only required fields (0.5KB)
const data = await SensorData.find(
  { deviceId: id },
  { temperature: 1, humidity: 1, timestamp: 1, _id: 0 }
)
// Result: 60% less data transfer, 40% faster queries
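
In Mongoose specifically, the same projection can also be written with
.select(), and .lean() skips hydrating full Mongoose documents when you
only need plain objects. A minimal equivalent sketch:

// Equivalent projection via the query builder; .lean() returns
// plain JavaScript objects instead of Mongoose documents
const data = await SensorData.find({ deviceId: id })
  .select("temperature humidity timestamp -_id")
  .lean()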

  3. Aggregation Pipeline Optimization

// BEFORE: Multiple queries + application-level processing
const devices = await Device.find({ userId })
const readings = await Promise.all(
  devices.map(d => SensorData.find({ deviceId: d.id }))
)
// Total: 500ms + N queries

// AFTER: Single aggregation pipeline
const result = await Device.aggregate([
  { $match: { userId } },
  { $lookup: {
      from: "sensordata",
      localField: "deviceId",
      foreignField: "deviceId",
      as: "readings"
  }},
  { $project: { /* only needed fields */ } }
])
// Total: 120ms for single query
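
If the joined collection is large, the pipeline form of $lookup
(MongoDB 3.6+) lets you sort, limit, and project inside the join, so
each device pulls back only what the dashboard needs. A hedged variant
of the pipeline above (the 100-reading cap is an illustrative choice,
not from the original system):

const result = await Device.aggregate([
  { $match: { userId } },
  { $lookup: {
      from: "sensordata",
      let: { devId: "$deviceId" },
      pipeline: [
        { $match: { $expr: { $eq: ["$deviceId", "$$devId"] } } },
        { $sort: { timestamp: -1 } },
        { $limit: 100 }, // illustrative per-device cap
        { $project: { temperature: 1, humidity: 1, timestamp: 1, _id: 0 } }
      ],
      as: "readings"
  }}
])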

  4. Connection Pooling

// Proper MongoDB connection configuration
mongoose.connect(uri, {
  maxPoolSize: 50,                // cap on concurrent connections
  minPoolSize: 10,                // keep warm connections ready
  maxIdleTimeMS: 30000,           // close connections idle for 30s
  serverSelectionTimeoutMS: 5000  // fail fast if no server responds
})

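The "no pagination" finding above can be fixed with range-based
(cursor) pagination that rides the same deviceId + timestamp compound
index. A minimal sketch, not the original implementation (the page
size of 50 and the lastTimestamp parameter are illustrative
assumptions):

// First page: newest 50 readings for a device
const page = await SensorData.find({ deviceId: id })
  .sort({ timestamp: -1 })
  .limit(50)
  .lean()

// Next page: pass the oldest timestamp from the previous page as
// lastTimestamp; this seeks within the index instead of skipping,
// so it stays fast no matter how deep you page
const nextPage = await SensorData.find({
  deviceId: id,
  timestamp: { $lt: lastTimestamp }
})
  .sort({ timestamp: -1 })
  .limit(50)
  .lean()
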
The Results

Before vs After:

| Metric            | Before | After | Improvement |
|-------------------|--------|-------|-------------|
| Avg Query Time    | 2000ms | 200ms | 90%         |
| API Response Time | 2500ms | 350ms | 86%         |
| Database Load     | High   | Low   | 70% ↓       |
| Concurrent Users  | 50     | 200+  | 4x          |

Business Impact:

  • User satisfaction increased (feedback from support tickets)
  • System could now handle 200+ concurrent users
  • Ready to scale to 1000+ IoT devices
  • Reduced cloud costs (less CPU/memory usage)

Key Takeaways

  1. Always profile first - Don't optimize blindly

  2. Indexes are crucial - But don't over-index

    • Index your query patterns, not random fields
    • Monitor index usage with the $indexStats aggregation stage (see the snippet after this list)
  3. Fetch less data - Use projection religiously

  4. Aggregate > Multiple queries - Push logic to database

  5. Monitor continuously - Set up alerts for slow queries

  6. Test with production-like data - 100 records perform
    differently than 100,000
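
A minimal sketch of the index-usage check from takeaway 2
($indexStats reports how often each index has been used since the
server last started):

// Per-index usage counters for the sensorData collection
db.sensorData.aggregate([{ $indexStats: {} }])

// Indexes whose accesses.ops count stays at 0 under real traffic
// are candidates for removal - unused indexes still slow down writes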

Tools I Used

  • MongoDB Compass (visual query profiler)
  • MongoDB Atlas Performance Advisor (if using Atlas)
  • Node.js mongoose query profiling
  • Custom logging middleware for API timing (sketch below)
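
A minimal sketch of what such timing middleware might look like in
Express (the 500ms threshold and console logging are assumptions, not
the original code):

const express = require("express")
const app = express()

// Logs the duration of every API request and flags slow ones
app.use((req, res, next) => {
  const start = Date.now()
  res.on("finish", () => {
    const ms = Date.now() - start
    const line = `${req.method} ${req.originalUrl} - ${ms}ms`
    // Assumed threshold: warn on anything slower than 500ms
    if (ms > 500) console.warn(`SLOW ${line}`)
    else console.log(line)
  })
  next()
})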

Common Pitfalls to Avoid

❌ Adding indexes without understanding query patterns
❌ Over-indexing (slows down writes)
❌ Not using connection pooling
❌ Fetching entire documents when you need 2 fields
❌ Running aggregations on application side instead of database
❌ Not monitoring query performance over time

Conclusion

Optimizing MongoDB queries isn't magic - it's about understanding
your data access patterns and using the right tools. The 90%
improvement we achieved came from:

  • Strategic indexing (40% improvement)
  • Proper projections (25% improvement)
  • Aggregation optimization (20% improvement)
  • Connection pooling (5% improvement)

What MongoDB optimization challenges are you facing? Drop your
questions in the comments!
Happy Learning!!
