Why Caching?
Every database query takes time. If 1000 users
request the same categories list every minute,
that's 1000 database queries for identical data.
Caching stores the result in memory after the
first query. Everyone else gets it instantly.
The Cache Logic

```python
import time

cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key: str):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            print(" CACHE HIT")
            return data
        else:
            del cache[key]
            print(" CACHE EXPIRED")
    print(" CACHE MISS")
    return None

def set_cache(key: str, data):
    cache[key] = (data, time.time() + CACHE_TTL)
```
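A quick sketch of these two functions in action (the category names here are made-up sample data): the first lookup misses, and after `set_cache` the stored value comes back on the next lookup.

```python
import time

# Same in-memory cache as above.
cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key: str):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data   # cache hit
        del cache[key]    # entry expired, evict it
    return None           # cache miss

def set_cache(key: str, data):
    cache[key] = (data, time.time() + CACHE_TTL)

assert get_from_cache("all_categories") is None           # miss
set_cache("all_categories", ["books", "games"])
assert get_from_cache("all_categories") == ["books", "games"]  # hit
```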
Cache Miss vs Cache Hit

```python
@app.get("/categories")
def get_categories():
    cached = get_from_cache("all_categories")
    if cached is not None:  # explicit check, so an empty list still counts as a hit
        return {"source": "cache", "data": cached}
    # Hit the database only on cache miss
    categories = db.query(Category).all()
    set_cache("all_categories", categories)
    return {"source": "database", "data": categories}
```
First request -> source: "database" (Cache Miss)
Second request -> source: "cache" (Cache Hit)
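The miss-then-hit behaviour can be simulated without FastAPI or a real database. In this sketch, `query_categories` is a stand-in for `db.query(Category).all()` that counts how often the "database" is hit:

```python
import time

cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data
        del cache[key]
    return None

def set_cache(key, data):
    cache[key] = (data, time.time() + CACHE_TTL)

db_queries = 0  # counts trips to the stand-in "database"

def query_categories():
    global db_queries
    db_queries += 1
    return ["books", "games", "music"]

def get_categories():
    cached = get_from_cache("all_categories")
    if cached is not None:
        return {"source": "cache", "data": cached}
    categories = query_categories()
    set_cache("all_categories", categories)
    return {"source": "database", "data": categories}

assert get_categories()["source"] == "database"  # first request: miss
assert get_categories()["source"] == "cache"     # second request: hit
assert db_queries == 1  # two requests, one database query
```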
Postman Proof
First request - Cache Miss
Second request - Cache Hit
Cache TTL
After 60 seconds the cache expires automatically.
The next request hits the database again and
refreshes the cache. This ensures data stays
reasonably fresh without hammering the database.
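Expiry is easy to observe by shrinking the TTL for a demo (0.05 seconds here, purely so the test runs fast):

```python
import time

cache = {}
CACHE_TTL = 0.05  # tiny TTL so expiry is observable in a demo

def set_cache(key, data):
    cache[key] = (data, time.time() + CACHE_TTL)

def get_from_cache(key):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data
        del cache[key]  # expired: evict so the next caller refreshes it
    return None

set_cache("all_categories", ["books"])
assert get_from_cache("all_categories") == ["books"]  # still fresh
time.sleep(0.1)                                       # wait past the TTL
assert get_from_cache("all_categories") is None       # expired, back to the DB
```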
Lessons Learned
Caching is one of the highest ROI optimizations
in backend development. Identify routes that
don't change often and cache them.
For production, use Redis instead of an in-memory
dict: it persists across server restarts
and is shared across multiple instances.
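The same two functions translate almost directly to Redis. This is a minimal sketch assuming the `redis` package and a Redis server on localhost; `SETEX` stores a value with a TTL, so Redis handles expiry for us (no manual `del`):

```python
import json
import redis  # pip install redis; assumes a server on localhost:6379

r = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL = 60  # seconds

def get_from_cache(key: str):
    raw = r.get(key)  # returns None on a miss or after the TTL expires
    return json.loads(raw) if raw is not None else None

def set_cache(key: str, data):
    # SETEX = SET with EXpiry; Redis evicts the key automatically.
    r.setex(key, CACHE_TTL, json.dumps(data))
```

Values are JSON-encoded because Redis stores bytes, not Python objects.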
Day 24 done. 6 more to go.