DEV Community

Fiyinfoluwa Ojo
Caching Intro: Making Your API Lightning Fast with In-Memory Cache

Why Caching?

Every database query takes time. If 1000 users
request the same categories list every minute,
that's 1000 database queries for identical data.

Caching stores the result in memory after the
first query. Everyone else gets it instantly.

The Cache Logic

import time

cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key: str):
    """Return cached data if present and not expired, else None."""
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            print("CACHE HIT")
            return data
        # Entry is stale: evict it and fall through to a miss
        del cache[key]
        print("CACHE EXPIRED")
    print("CACHE MISS")
    return None

def set_cache(key: str, data):
    """Store data alongside its expiry timestamp."""
    cache[key] = (data, time.time() + CACHE_TTL)

Cache Miss vs Cache Hit

@app.get("/categories")
def get_categories():
    cached = get_from_cache("all_categories")
    if cached is not None:  # an empty list is still a valid cached value
        return {"source": "cache", "data": cached}

    # Hit the database only on cache miss
    categories = db.query(Category).all()
    set_cache("all_categories", categories)
    return {"source": "database", "data": categories}

First request -> source: "database" (Cache Miss)
Second request -> source: "cache" (Cache Hit)
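You can simulate this two-request flow without FastAPI or a real database. A minimal sketch, where `fake_db_query` is a stand-in for the real `db.query(Category).all()` call (the name is illustrative, not from the original code):

```python
import time

cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data
        del cache[key]  # expired: evict and fall through to a miss
    return None

def set_cache(key, data):
    cache[key] = (data, time.time() + CACHE_TTL)

def fake_db_query():
    # Stand-in for the real database query
    return ["books", "electronics", "toys"]

def get_categories():
    cached = get_from_cache("all_categories")
    if cached is not None:
        return {"source": "cache", "data": cached}
    categories = fake_db_query()
    set_cache("all_categories", categories)
    return {"source": "database", "data": categories}

print(get_categories()["source"])  # first call  -> database
print(get_categories()["source"])  # second call -> cache
```

The second call returns the same data but never touches `fake_db_query`, which is exactly the saving the endpoint gets.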

Postman Proof

First request - Cache Miss

Second request - Cache Hit

Cache TTL

After 60 seconds the cache expires automatically.
The next request hits the database again and
refreshes the cache. This ensures data stays
reasonably fresh without hammering the database.
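The expiry is easy to observe if you shrink the TTL. A small sketch with the TTL cut to 0.1 seconds purely for demonstration:

```python
import time

cache = {}
CACHE_TTL = 0.1  # shortened from 60 s so expiry is quick to observe

def set_cache(key, data):
    cache[key] = (data, time.time() + CACHE_TTL)

def get_from_cache(key):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data
        del cache[key]  # stale entry: evict it
    return None

set_cache("all_categories", ["books", "games"])
print(get_from_cache("all_categories"))  # within TTL -> ['books', 'games']
time.sleep(0.2)
print(get_from_cache("all_categories"))  # after TTL  -> None
```

After expiry the next caller gets `None`, queries the database, and re-populates the cache, so staleness is bounded by the TTL.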

Lessons Learned

Caching is one of the highest-ROI optimizations
in backend development. Identify routes whose
data rarely changes and cache them.
For production, use Redis instead of an
in-memory dict: a Redis cache survives app
server restarts and is shared across
multiple instances.

Day 24 done. 6 more to go.

GDGoCBowen30dayChallenge
