Our API was slow. Like, really slow. The kind of slow where you click something and then wonder if you actually clicked it.
For months, I told myself "it's fine, it works." Then I looked at the analytics and saw 28% of users were rage-refreshing during page loads. Oof.
So I spent a week actually fixing it. Went from 1.9 seconds to 200ms. 89% improvement. No rewrites, no new servers, just fixing dumb things I should've fixed months ago.
Here's the whole story with all the code.
The "Oh Crap" Moment
Before I did anything, I needed to know where the time was actually going. I assumed it was the database (it wasn't). I assumed our code was the problem (it wasn't really).
Added some quick logging:
const perfLog = {};
async function trackPerformance(label, fn) {
  const start = Date.now();
  const result = await fn();
  const duration = Date.now() - start;
  perfLog[label] = duration;
  console.log(`${label}: ${duration}ms`);
  return result;
}
// Then wrapped everything
const user = await trackPerformance('fetchUser', () => getUserData(userId));
const posts = await trackPerformance('fetchPosts', () => getPosts(userId));
After 24 hours of logging, here's what I found:
External API calls:  1,340ms (71%)  <- the culprit
Database queries:      280ms (15%)
Data processing:       180ms (9%)
Everything else:       100ms (5%)
I'd wasted two whole days optimizing database queries when 71% of our time was waiting on external APIs. Classic developer move: optimizing the wrong thing.
Lesson learned: Always measure. Your gut is probably wrong.
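By the way, if you want per-request timing without hand-wrapping every call, a few lines of Express middleware will log it for you. A minimal sketch, assuming the usual Express app object (the 500ms threshold is just an example, not what I actually shipped):

// Logs how long each request took once the response has been sent
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    if (duration > 500) {
      // Flag anything slower than 500ms so it stands out in the logs
      console.warn(`SLOW ${req.method} ${req.originalUrl}: ${duration}ms`);
    }
  });
  next();
});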
Fix #1: Parallel Everything (The 20-Minute Win)
Here's what I was doing like an absolute noob:
// Making API calls like it's 1999
const userProfile = await externalAPI.getProfile(userId);      // 240ms
const userPosts = await externalAPI.getPosts(userId);          // 290ms
const analytics = await externalAPI.getAnalytics(userId);      // 380ms
const settings = await externalAPI.getSettings(userId);        // 210ms
const notifications = await externalAPI.getNotifications(userId); // 220ms
// Total time: 1,340ms of just... waiting
These calls were 100% independent. None of them needed data from another. Why was I making each one wait?!
Fixed it:
// Run 'em all at once
const [userProfile, userPosts, analytics, settings, notifications] = 
  await Promise.all([
    externalAPI.getProfile(userId),
    externalAPI.getPosts(userId),
    externalAPI.getAnalytics(userId),
    externalAPI.getSettings(userId),
    externalAPI.getNotifications(userId)
  ]);
// Total time: 380ms (just the slowest one)
Result: 1,340ms → 380ms
That's it. That's the fix. Took me 20 minutes. Gave me a 50% improvement in overall response time. Sometimes the biggest wins are the simplest.
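One caveat before you Promise.all everything: it rejects as soon as any single call fails, so one flaky upstream can fail the whole response. If some of that data is optional, Promise.allSettled is worth a look: it never rejects, you just check each result yourself. A quick sketch (the fallback values are only examples):

// Promise.allSettled resolves with { status, value } or { status, reason } per call
const [profileResult, analyticsResult] = await Promise.allSettled([
  externalAPI.getProfile(userId),
  externalAPI.getAnalytics(userId)
]);

// Profile is required, so bail out if it failed
if (profileResult.status === 'rejected') {
  throw profileResult.reason;
}
const userProfile = profileResult.value;

// Analytics is nice-to-have, so fall back to an empty summary
const analytics = analyticsResult.status === 'fulfilled'
  ? analyticsResult.value
  : { summary: null };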
Fix #2: Cache All The Things
With APIs running in parallel, I started looking at what we were calling them for.
User settings? Check.
Analytics summaries? Check.
Recommendation lists? Check.
Then it hit me: this stuff barely changes. User settings might change once a week. Analytics update hourly. Recommendations refresh daily.
Yet here I was, fetching this data fresh on every. single. request. Like we're serving stock prices or something.
Time for Redis:
const CACHE_TTL = {
  userProfile: 300,      // 5 min - changes occasionally
  userSettings: 3600,    // 1 hour - rarely changes
  analytics: 300,        // 5 min - needs to be freshish
  posts: 180,           // 3 min - fairly dynamic
  recommendations: 86400 // 24 hours - updates once daily
};
// Redis client - this assumes ioredis, which gives you promise-based get/setex
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

async function getCached(key, fetchFn, ttl) {
  // Check cache first
  const cached = await redis.get(key);
  if (cached) {
    console.log(`Cache HIT for ${key}`);
    return JSON.parse(cached);
  }
  // Cache miss - fetch, store with a TTL, then return
  console.log(`Cache MISS for ${key}`);
  const data = await fetchFn();
  await redis.setex(key, ttl, JSON.stringify(data));
  return data;
}
// Usage
const settings = await getCached(
  `settings:${userId}`,
  () => externalAPI.getSettings(userId),
  CACHE_TTL.userSettings
);
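One gotcha with TTL-only caching: if a user changes their settings, they can see stale data for up to an hour. The simple fix is to delete the key on every write. A rough sketch (saveSettings is a stand-in for whatever your actual write path is):

async function updateSettings(userId, newSettings) {
  // Hypothetical write call - replace with your real update path
  await externalAPI.saveSettings(userId, newSettings);
  // Bust the cache so the next read fetches fresh data
  await redis.del(`settings:${userId}`);
}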
After a few hours, cache hit rate: 68%.
That means we went from 12,000+ external API calls per day to about 3,800. Not only faster, but also:
- Way cheaper (fewer API calls)
- More reliable (less dependent on external services)
- Scales better (Redis is fast)
Result: 900ms → 340ms average response time
Fix #3: The N+1 Query Nightmare
Even though DB wasn't the main problem, I found something that made me cringe:
// First, get all posts (1 query)
const posts = await db.query(
  'SELECT * FROM posts WHERE user_id = ?', 
  [userId]
);
// Then loop through and get comments for each post (N queries)
for (const post of posts) {
  post.comments = await db.query(
    'SELECT * FROM comments WHERE post_id = ?',
    [post.id]
  );
}
// User has 30 posts? That's 1 + 30 = 31 database queries!
Classic N+1 query problem. Every backend dev's enemy.
Fixed with a single query:
const postsWithComments = await db.query(`
  SELECT 
    p.id, 
    p.title, 
    p.content, 
    p.created_at,
    JSON_ARRAYAGG(
      JSON_OBJECT(
        'id', c.id,
        'content', c.content,
        'author', c.author
      )
    ) as comments
  FROM posts p
  LEFT JOIN comments c ON p.id = c.post_id
  WHERE p.user_id = ?
  GROUP BY p.id
`, [userId]);
// ONE query, no matter how many posts
Result: 280ms β 120ms database time
Not huge by itself, but way better under load. 50 concurrent users means 50 queries instead of 1,550.
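If your database doesn't have JSON_ARRAYAGG (or you'd rather not deal with JSON columns), two queries plus an in-memory group-by gets you the same win: still a constant number of queries no matter how many posts there are. A rough sketch using the same db.query helper as above (mysql2 expands an array passed to IN (?); other drivers may need the placeholders built by hand):

// Query 1: the posts
const posts = await db.query(
  'SELECT * FROM posts WHERE user_id = ?',
  [userId]
);

// Query 2: all comments for those posts in one shot
// (skip it entirely if there are no posts - IN () is invalid SQL)
const postIds = posts.map(p => p.id);
const comments = postIds.length
  ? await db.query('SELECT * FROM comments WHERE post_id IN (?)', [postIds])
  : [];

// Group comments by post in memory
const commentsByPost = new Map();
for (const c of comments) {
  if (!commentsByPost.has(c.post_id)) commentsByPost.set(c.post_id, []);
  commentsByPost.get(c.post_id).push(c);
}
for (const post of posts) {
  post.comments = commentsByPost.get(post.id) || [];
}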
Fix #4: Stop Sending So Much Garbage
I exported one of our API responses and... 890 KB. For a dashboard.
Scrolled through the JSON:
- User object with 30+ fields (frontend used 5)
- Every post with full content (frontend showed previews)
- Internal IDs that shouldn't leave the server
- Debug fields in production
- Three different timestamp formats (???)
Here's what I did:
// Before: send EVERYTHING
return {
  user: fullUserObject,      // All 30 fields
  posts: allPostsWithAllData, // Everything!
  settings: completeSettings, // Kitchen sink!
  debug: debugInfo           // Why is this here?!
};
// After: send what's needed
return {
  user: {
    id: user.id,
    name: user.name,
    avatarUrl: user.avatarUrl
  },
  posts: posts.map(p => ({
    id: p.id,
    title: p.title,
    preview: p.content.substring(0, 150) + '...',
    date: p.created_at
  })),
  settings: {
    theme: settings.theme,
    language: settings.language
  }
};
Also added gzip (literally one line in Express):
const compression = require('compression');
app.use(compression());
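One thing worth knowing about the middleware: by default it skips responses smaller than about 1KB, and you can tune that cutoff with the threshold option if you want something different:

// Only compress responses larger than 1KB (roughly the default behavior anyway)
app.use(compression({ threshold: 1024 }));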
Payload size: 890KB → 125KB (86% smaller!)
Especially important for mobile users and people with slower connections.
Fix #5: Connection Pooling (The Boring One That Matters)
We were doing this:
// (this is mysql2/promise - the plain callback API wouldn't work with await)
async function handleRequest(req, res) {
  // Create a brand-new connection every time
  const connection = await mysql.createConnection(dbConfig);
  const [rows] = await connection.query(sql);
  await connection.end(); // Close it
  return rows;
}
Every. Single. Request.
That's 15-20ms just for the TCP handshake + SSL + auth. Wasteful.
Changed to connection pool:
// Create pool once at startup (mysql2/promise)
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  connectionLimit: 10,
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: 'myapp',
  waitForConnections: true,
  queueLimit: 0
});

// Reuse connections - the pool checks one out and returns it after each query
async function handleRequest(req, res) {
  const [rows] = await pool.query(sql);
  return rows;
}
Saved 15-20ms per request. Not huge, but it adds up. Plus way more stable under load.
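One small addition if you go this route: close the pool when the process shuts down, so in-flight queries can finish and MySQL isn't left holding dead connections. Something like:

// Drain the pool on shutdown (e.g. when the orchestrator sends SIGTERM)
process.on('SIGTERM', async () => {
  await pool.end();
  process.exit(0);
});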
The Final Numbers
Before:
Response time:      1,900ms
Users refreshing:   28%
API calls/day:      12,000+
DB queries/request: 30-40
Response size:      890KB
After:
Response time:      200ms (89% faster!)
Users refreshing:   4% (86% drop!)
API calls/day:      ~3,800 (68% reduction!)
DB queries/request: 3-5 (88% fewer!)
Response size:      125KB (86% smaller!)
What changed:
- Session completion: +23%
- Time on site: +18%
- "Slow loading" support tickets: basically gone
Tools I Used
Here's my stack for this (but principles work everywhere):
Backend:
- Node.js + Express
- MySQL with mysql2
- Redis for caching
- compression middleware

Monitoring:
- New Relic (free tier is solid)
- Custom performance logging
- Chrome DevTools Network tab

Testing:
Key Takeaways
If I had to do this again, here's what matters:
1. Measure First, Optimize Later
I wasted two days on the wrong thing. Add logging. Look at data. Fix what's actually slow.
2. Parallel > Sequential
If operations are independent, run them in parallel. Free performance.
3. Cache Intelligently
Most data doesn't need to be real-time. Figure out appropriate TTLs.
4. Watch for N+1 Queries
Loop with a query inside = usually bad. Consolidate when possible.
5. Audit Your Payloads
You're probably sending way more than needed. Trim it down.
6. Use Connection Pools
Creating connections is expensive. Reuse them.
Discussion
I'm curious about your experiences:
What's been your biggest API performance win?
Any optimization techniques I missed?
Tools you'd recommend for performance monitoring?
Drop your thoughts below! Always learning from this community.
Useful Resources
- Promise.all() documentation
- Redis caching best practices
- N+1 query problem explained
- Connection pooling guide
- Web performance optimization
What's Next?
Thinking about writing follow-ups on:
- Advanced caching strategies
- Database indexing deep dive
- Load testing best practices
- Monitoring setup guide
Let me know what you'd find most useful!
If this helped, drop a ❤️ and follow for more backend optimization content!
*Have questions about any of the code? Drop them in the comments and I'll help out!*
    