Picture an e-commerce app making 2.3 million API calls per month to check exchange rates. A currency converter on every product page. Every page view, fresh API call. Same USD-to-EUR rate, fetched over and over and over.
Exchange rates update a few times per day. This app was checking 2.3 million times. At roughly $0.003 per call, that's $6,900/month for data that a one-hour cache could serve with about 24 calls per day per currency pair.
With a one-hour cache: roughly $2/month per pair.
This is the cheapest performance win in all of software development.
The One Question That Matters
Before caching anything, ask yourself: How often does this data actually change?
The answer determines your strategy:
| Data Type | Reality | Sensible Cache |
|---|---|---|
| Exchange rates | Updates a few times daily | 1-4 hours |
| Weather forecasts | Updates every 15-30 min | 15-30 minutes |
| Email validation | Result never changes for that email | Days/forever |
| IP geolocation | Rarely changes | Hours to days |
| Random jokes/quotes | Supposed to be different every time | Don't cache |
| User profile lookups | Changes when user edits it | Minutes, invalidate on update |
Caching a random joke defeats the purpose. But caching an exchange rate for an hour? Nobody notices. Nobody cares if EUR/USD was 1.0823 vs 1.0825. The directional accuracy is what matters.
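The table maps naturally onto a small TTL lookup. Here's a minimal sketch; the type names and exact values are illustrative choices, not from any library:

```javascript
// Illustrative TTLs in seconds, mirroring the table above
const TTL_SECONDS = {
  exchange_rate: 4 * 3600,      // updates a few times daily
  weather: 15 * 60,             // updates every 15-30 minutes
  email_validation: 7 * 86400,  // result never changes for that email
  ip_geolocation: 86400,        // rarely changes
  user_profile: 5 * 60          // short TTL, plus invalidate on update
};

// Returns a TTL for a data type, or null for "don't cache"
// (random jokes/quotes fall through to null)
function ttlFor(dataType) {
  return TTL_SECONDS[dataType] ?? null;
}
```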
The Simplest Cache
You don't need Redis. You don't need Memcached. You need a Map with a timestamp:
```javascript
const cache = new Map();

function get(key) {
  const item = cache.get(key);
  if (!item) return null;

  if (Date.now() > item.expiresAt) {
    cache.delete(key);
    return null;
  }

  return item.value;
}

function set(key, value, ttlSeconds) {
  cache.set(key, {
    value,
    expiresAt: Date.now() + (ttlSeconds * 1000)
  });
}
```
That's it. Roughly fifteen lines. Works for single-server apps, scripts, and serverless functions that reuse warm containers.
Using It
```javascript
async function getExchangeRate(from, to) {
  const cacheKey = `exchange:${from}:${to}`;

  // Try cache first
  const cached = get(cacheKey);
  if (cached) return cached;

  // Miss - fetch from API
  const response = await fetch(
    `https://api.apiverve.com/v1/exchangerate?currency1=${from}&currency2=${to}`,
    { headers: { 'x-api-key': process.env.APIVERVE_KEY } }
  );
  const { data } = await response.json();

  // Cache for 1 hour
  set(cacheKey, data.exchangeRate, 3600);

  return data.exchangeRate;
}
```
First call hits the API. Every subsequent call for the same currency pair in that hour returns instantly from memory. Zero latency, zero cost.
Cache Keys: Get Them Right
Your cache key determines what's considered "the same request." Get it wrong and you either:
- Return the wrong data (key too generic)
- Never hit the cache (key too specific)
Too generic:
```javascript
const key = 'exchange_rate'; // Which currency pair?
```
Too specific:
```javascript
const key = `exchange:${from}:${to}:${Date.now()}`; // Will never match
```
Just right:
```javascript
const key = `exchange:${from}:${to}`;
```
For APIs with multiple parameters, include all the ones that affect the response:
```javascript
function buildCacheKey(endpoint, params) {
  const sorted = Object.keys(params)
    .sort()
    .map(k => `${k}=${params[k]}`)
    .join('&');
  return `${endpoint}?${sorted}`;
}

// buildCacheKey('currencyconverter', { from: 'USD', to: 'EUR', value: 100 })
// => "currencyconverter?from=USD&to=EUR&value=100"
```
Stale-While-Revalidate
Here's a clever pattern: return stale data immediately, then refresh in the background.
```javascript
async function getWithSWR(key, fetcher, ttlSeconds) {
  const cached = cache.get(key);
  const now = Date.now();

  // Fresh hit - just return it
  if (cached && now < cached.expiresAt) {
    return cached.value;
  }

  // Stale but exists - return it and refresh in background
  if (cached && now < cached.staleAt) {
    // Fire and forget the refresh; swallow errors so a failed
    // background fetch doesn't surface as an unhandled rejection
    fetcher().then(value => {
      cache.set(key, {
        value,
        expiresAt: now + (ttlSeconds * 1000),
        staleAt: now + (ttlSeconds * 2 * 1000)
      });
    }).catch(() => {});
    return cached.value; // Return stale data immediately
  }

  // Nothing usable - must wait for fresh data
  const value = await fetcher();
  cache.set(key, {
    value,
    expiresAt: now + (ttlSeconds * 1000),
    staleAt: now + (ttlSeconds * 2 * 1000)
  });
  return value;
}
```
User gets instant response. Data refreshes behind the scenes. They might see slightly stale data, but they never wait.
This is perfect for data where freshness matters but not that much—weather, exchange rates, trending content.
When to Use Redis
In-memory caches have two problems:
- They die when your server restarts
- They don't share between multiple servers
If either matters, use Redis:
```javascript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

async function getExchangeRate(from, to) {
  const key = `exchange:${from}:${to}`;

  // Check Redis
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Fetch and cache
  const response = await fetch(
    `https://api.apiverve.com/v1/exchangerate?currency1=${from}&currency2=${to}`,
    { headers: { 'x-api-key': process.env.APIVERVE_KEY } }
  );
  const { data } = await response.json();

  // Cache with TTL - Redis handles expiration
  await redis.setex(key, 3600, JSON.stringify(data.exchangeRate));

  return data.exchangeRate;
}
```
Same pattern, different storage backend. Redis handles TTL expiration automatically.
When to stick with in-memory:
- Single server
- Data can be rebuilt on restart
- Serverless (each invocation might be fresh anyway)
When to use Redis:
- Multiple servers need to share cache
- Cache is expensive to rebuild
- You need cache to survive deployments
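If Redis does go down, you probably don't want every request to turn into an API call. One middle ground, sketched here as an assumption rather than a standard recipe, is to fall back to a per-server Map when the Redis client errors. The `client` parameter stands in for any object with a promise-based `get` (such as an ioredis instance) and is injected so the fallback path can run without a live server:

```javascript
// Hypothetical two-tier read: Redis first, local Map as a fallback.
// An outage degrades to per-server caching instead of hammering the API.
async function cachedGet(client, localCache, key) {
  try {
    const hit = await client.get(key);
    if (hit != null) return JSON.parse(hit);
  } catch {
    // Redis unreachable - fall through to the local Map
  }
  const item = localCache.get(key);
  if (item && Date.now() < item.expiresAt) return item.value;
  return null; // true miss - caller fetches from the API
}
```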
Different APIs, Different TTLs
Not everything deserves the same caching strategy.
Cache Aggressively (Hours to Days)
Exchange rates, country data, timezone info
This data changes slowly. Cache it for hours or days.
```javascript
// Exchange rates - 4 hours
set(`exchange:${from}:${to}`, rate, 14400);

// Country info - 24 hours
set(`country:${code}`, countryData, 86400);

// Timezone data - 24 hours
set(`timezone:${zone}`, tzData, 86400);
```
Cache Moderately (Minutes)
Weather, stock prices, dynamic content
Changes throughout the day but not every second.
```javascript
// Weather - 15 minutes
set(`weather:${lat}:${lon}`, forecast, 900);

// Market data - 5 minutes during trading hours
set(`market:${symbol}`, data, 300);
```
Cache by Input (Long TTL, Keyed by Input)
Email validation, phone validation, IP lookup
The result for a specific input won't change. Cache per unique input.
```javascript
// Email validation - cache per email, 7 days
set(`email:${sha256(email)}`, result, 604800);

// IP geolocation - cache per IP, 24 hours
set(`ip:${ip}`, location, 86400);
```
Why hash the email? Two reasons: shorter keys, and you're not storing raw PII in your cache.
Don't Cache
Random generators, time-sensitive data
The Random Quote API should return something different each time. Caching defeats the purpose.
```javascript
// Don't do this
set('random_quote', quote, 3600); // Now you get the same quote for an hour

// Do this
const quote = await fetchRandomQuote(); // Fresh every time
```
Similarly, don't cache anything where staleness causes real problems—stock trading, real-time availability, live scores.
Cache Warming
For data you know you'll need, fetch it before users ask:
```javascript
// Small helper so we can pause between requests
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function warmCurrencyCache() {
  const popularPairs = [
    ['USD', 'EUR'], ['USD', 'GBP'], ['EUR', 'GBP'],
    ['USD', 'JPY'], ['USD', 'CAD'], ['USD', 'AUD']
  ];

  for (const [from, to] of popularPairs) {
    await getExchangeRate(from, to);
    await sleep(100); // Don't hammer the API
  }

  console.log('Currency cache warmed');
}

// Run on server start
warmCurrencyCache();
```
First user to check USD/EUR gets an instant response instead of waiting for the API.
Run cache warming:
- On server/container start
- On a schedule (every hour, before your cache expires)
- After deployments
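The schedule itself can be a plain `setInterval`. This sketch (the scheduler is my addition, not from the article) re-warms at 90% of the TTL so entries are refreshed just before they expire:

```javascript
// Warm immediately, then re-warm shortly before entries expire.
// Returns the interval handle so callers can clearInterval on shutdown.
function scheduleWarming(warmFn, ttlSeconds) {
  warmFn(); // warm once on start
  const intervalMs = ttlSeconds * 0.9 * 1000; // e.g. ~54 min for a 1-hour TTL
  return setInterval(warmFn, intervalMs);
}

// scheduleWarming(warmCurrencyCache, 3600);
```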
Measuring Cache Effectiveness
Track your hit rate. If it's low, something's wrong:
```javascript
let hits = 0;
let misses = 0;

function get(key) {
  const item = cache.get(key);
  if (!item || Date.now() > item.expiresAt) {
    misses++;
    return null;
  }
  hits++;
  return item.value;
}

function getStats() {
  const total = hits + misses;
  return {
    hits,
    misses,
    hitRate: total > 0 ? ((hits / total) * 100).toFixed(1) + '%' : 'N/A'
  };
}
```
Healthy caches see 80-95% hit rates. If you're below 50%:
- TTLs might be too short
- Keys might be too specific
- Traffic pattern might not repeat (different users, different data)
The Math
Let's make this concrete.
Scenario: E-commerce site showing prices in multiple currencies.
- 50,000 page views/day
- Each view needs USD→EUR rate
- No caching: 50,000 API calls/day = 1.5M/month
With 1-hour cache:
- Max 24 calls/day for that currency pair
- 720 calls/month
- 99.95% reduction in API calls
Cost comparison (on APIVerve):
- Without caching: 1.5M credits (you'd need the Mega plan)
- With caching: 720 credits (covered by Free tier with room to spare)
Same functionality. Faster responses. Fraction of the cost.
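Those figures are easy to sanity-check:

```javascript
// Monthly call volumes from the scenario above
const uncachedCalls = 50000 * 30;  // 1,500,000 calls/month
const cachedCalls = 24 * 30;       // 720 calls/month with a 1-hour TTL
const reduction = (1 - cachedCalls / uncachedCalls) * 100;

console.log(`${reduction.toFixed(2)}% fewer API calls`); // → 99.95% fewer API calls
```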
Common Mistakes
Setting TTL too short. If your cache expires every 60 seconds and users check every 30 seconds, you're not really caching. Match TTL to how often the data actually changes.
Not caching error responses. If an API is down, you probably don't want to hammer it every request. Cache the error for a short time (30-60 seconds) to give it a break.
```javascript
try {
  const data = await fetchAPI();
  set(key, { success: true, data }, 3600);
} catch (err) {
  set(key, { success: false, error: err.message }, 60); // Cache errors briefly
}
```
Forgetting cache invalidation. If you're caching user data that users can edit, invalidate the cache when they edit it:
```javascript
async function updateUserProfile(userId, newData) {
  await db.update('users', userId, newData);
  cache.delete(`user:${userId}`); // Invalidate cache
}
```
Over-engineering. You don't need a distributed cache system for a side project. Start with a Map. Upgrade when you need to.
Caching is the rare optimization that makes things faster and cheaper. Every duplicate call you eliminate is latency you remove and money you save.
Start simple: a Map with TTL. Measure your hit rate. Adjust TTLs based on actual data change patterns. Graduate to Redis when you need shared state.
Get your API key and watch how far your credits go when you're caching properly. You'll be surprised.
Originally published at APIVerve Blog