## 🚨 The Wake-Up Call
It was 2 AM when my phone rang. The CEO's voice was tense. "The app is crawling. Users are complaining. Fix it."
I still remember that night vividly. Our NodeJS app that handled customer orders was buckling under traffic. Response times had gone from snappy to sluggish: 2-3 seconds per request. 😱
Sound familiar? If you're a junior developer working with NodeJS, you might have faced (or will face) this scenario. Let me share how I transformed our application from a tortoise to a cheetah! ⚡
## 🕵️ Diagnosing the Problem
The next morning, bleary-eyed but determined, I started investigating. Our stack looked pretty standard:
| Technology | Purpose |
| --- | --- |
| NodeJS/Express | API server |
| MongoDB | Database |
| Redis | Caching |
| AWS EC2 | Hosting |
The app was structured like most Express applications, with routes handling requests and controllers processing business logic. But something was clearly wrong.
## 🔍 The Bottlenecks
I discovered three main issues that were throttling our performance:
- Database queries were inefficient 🐢
- No caching strategy 💾
- Blocking operations in the event loop ⚙️
## 🛠️ The Transformation Journey
### Step 1: Optimizing MongoDB Queries ⚡
I found queries like this throughout our codebase:
```javascript
const users = await User.find({});
```
This was loading ALL users into memory before filtering on the application side!
I replaced them with properly indexed, filtered queries:
```javascript
const users = await User.find({ active: true }).select('name email').limit(10);
```
**Quick Win 🎉:** Adding appropriate indexes reduced query times by 80%!
| Query Type | Before | After | Improvement |
| --- | --- | --- | --- |
| User Search | 1200ms | 95ms | 92% faster |
| Product Listing | 1800ms | 120ms | 93% faster |
| Order History | 2100ms | 150ms | 93% faster |
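For context, indexes like these can be created from the MongoDB shell. The field names below follow the hypothetical `users` schema from the snippet above; a compound index covering both the filtered field (`active`) and the projected fields (`name`, `email`) even lets MongoDB answer the query from the index alone (a "covered query") if the projection also excludes `_id`:

```javascript
// Run in mongosh — a compound index matching the filter (active)
// and the projected fields (name, email) of the query above
db.users.createIndex({ active: 1, name: 1, email: 1 })
```

The order of fields matters: equality-filtered fields like `active` should come first so the index prefix matches the query's filter.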
### Step 2: Implementing Redis Caching 🚀
For frequently accessed data that rarely changes (like product categories), I implemented Redis:
```javascript
const data = await redisClient.get('categories') || await fetchAndCacheCategories();
```
This simple pattern eliminated repetitive database queries for common data. Think of Redis as your refrigerator - why go grocery shopping (query the database) every time you want a snack? 🍕
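That one-liner compresses the cache-aside pattern. Here is a runnable sketch with an in-memory `Map` standing in for Redis, so it runs anywhere; in production the `get`/`set` calls would be `redisClient.get`/`redisClient.set` with a TTL, plus `JSON.parse`/`JSON.stringify` since Redis stores strings. `fetchCategoriesFromDb` and its data are hypothetical:

```javascript
// Cache-aside: check the cache first, fall back to the source, then populate the cache.
const cache = new Map(); // stands in for Redis in this sketch

let dbCalls = 0; // counts trips to the "database" so the win is visible
async function fetchCategoriesFromDb() {
  dbCalls++;
  return ['electronics', 'books', 'toys']; // hypothetical data
}

async function getCategories() {
  const hit = cache.get('categories');
  if (hit !== undefined) return hit;           // cache hit: no DB round-trip
  const fresh = await fetchCategoriesFromDb(); // cache miss: go to the source
  cache.set('categories', fresh);              // ...and remember it for next time
  return fresh;
}
```

One subtlety: an empty-but-valid cached value (like `''` or `[]` serialized) is falsy, which is why this sketch checks `!== undefined` instead of leaning on `||` like the one-liner above.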
### Step 3: Non-Blocking Operations 🏎️
I discovered we were reading files synchronously in some routes:
```javascript
const fs = require('fs');

// Before: blocking the event loop 🐌
const template = fs.readFileSync('./templates/email.html');
```

```javascript
const fs = require('fs');

// After: non-blocking 🚀
const template = await fs.promises.readFile('./templates/email.html');
```
Remember: NodeJS is like a single-lane highway. One slow truck (blocking operation) causes traffic for everyone!
## 🎯 The Memory Leak Mystery
Our server would occasionally crash after running for a few days. Using `node --inspect` and Chrome DevTools, I discovered we were accumulating references to large objects.
The culprit? Event listeners that weren't being properly removed:
```javascript
// Fixed by adding proper cleanup
emitter.removeListener('data', handleData);
```
## ⚡ Pro Tip: Async/Await Patterns
One pattern that significantly improved our code readability and performance:
```javascript
// Use Promise.all for parallel operations
const [users, products, categories] = await Promise.all([
  fetchUsers(),
  fetchProducts(),
  fetchCategories()
]);
```
This runs all three operations concurrently rather than sequentially, so the total wait shrinks to the slowest single call - which cut this endpoint's response time by ~66%!
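One caveat worth knowing (general `Promise` behavior, not specific to our app): `Promise.all` rejects as soon as any input rejects and discards the other results. When partial results are acceptable, `Promise.allSettled` reports every outcome instead. The fetchers below are hypothetical stand-ins:

```javascript
// Promise.allSettled never rejects: each input settles to
// { status: 'fulfilled', value } or { status: 'rejected', reason }
async function fetchAll() {
  const results = await Promise.allSettled([
    Promise.resolve(['alice', 'bob']),    // stands in for fetchUsers()
    Promise.reject(new Error('db down')), // stands in for a failing fetchProducts()
    Promise.resolve(['toys'])             // stands in for fetchCategories()
  ]);
  // Keep successes, mark failures as null so callers can degrade gracefully
  return results.map(r => (r.status === 'fulfilled' ? r.value : null));
}
```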
## 📊 The Results
| Metric | Before | After |
| --- | --- | --- |
| Avg Response Time | 2300ms | 95ms |
| Server Memory | 1.2GB | 450MB |
| Crash Frequency | Every 2-3 days | None in 3 months |
## 🎯 Key Takeaways
- Index everything you query frequently in MongoDB 🔍
- Cache aggressively with Redis for data that changes infrequently 💾
- Avoid blocking the event loop - use async operations wherever possible ⚙️
- Monitor memory usage to catch leaks before they become problems 📈
- Use Promise.all for concurrent operations ⚡
- Profile regularly - what's fast today might be slow tomorrow 📊
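On the monitoring point, a minimal sketch of the kind of heap watcher that makes a leak visible as a steady climb long before the process crashes. The threshold and interval are illustrative, not what we ran in production:

```javascript
// Log a warning when heap usage crosses a threshold (default ~800 MB)
function checkMemory(warnBytes = 800 * 1024 * 1024) {
  const { heapUsed, rss } = process.memoryUsage();
  if (heapUsed > warnBytes) {
    console.warn(`High heap usage: ${(heapUsed / 1024 / 1024).toFixed(1)} MB`);
  }
  return { heapUsed, rss };
}

// In a real app: setInterval(checkMemory, 60_000).unref();
```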
The CEO called me a week later: "Whatever you did, it worked. The app feels lightning fast."
Sometimes the biggest performance gains come from the simplest changes. Happy optimizing! 🚀