Ajit Kumar
Speed Up Your Django App: A Beginner's Guide to Redis Caching

You’ve built a Django app. It works. But as your database grows, those once-snappy pages are starting to feel... sluggish. You check your logs and see a query taking 2 seconds to load.

Your first thought might be: "I need a bigger server."
Your second thought (the correct one) should be: "I need a cache."

In this guide, we’ll walk through why caching matters, how to set it up with Redis, and—most importantly—how to prove it’s actually working.


What is Caching?

Imagine you are a librarian. Someone asks you for a complex report on 19th-century architecture. You spend 20 minutes walking to the back of the library, climbing a ladder, and finding the book.

If ten more people ask for that same report in the next hour, would you walk back every time? No. You’d keep a copy on your desk. That desk is your cache.

Why Redis?

While Django supports several cache backends (database, file system, local memory), Redis is the industry standard. It’s an in-memory data store, so reads and writes happen in RAM, which is significantly faster than hitting a traditional disk-based database like PostgreSQL or MySQL.


The Setup: Before Caching

Let’s look at a standard Django view that calculates "Trending Products" for an e-commerce dashboard. This query is "expensive" because it involves aggregates and filters over thousands of rows.

# views.py
from django.shortcuts import render
from .models import Order
from django.db.models import Count

def trending_products_view(request):
    # This query runs every single time the page is refreshed!
    trending_data = (
        Order.objects.values('product__name')
        .annotate(total_sales=Count('id'))
        .order_by('-total_sales')[:10]
    )
    return render(request, 'dashboard.html', {'products': trending_data})


The "Before" Performance:

  • Database Hits: 1 per request.
  • Response Time: ~500ms - 1.5s (depending on DB size).
  • Scalability: Poor. 100 concurrent users = 100 heavy DB queries.
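Before adding the cache, it helps to measure that baseline yourself rather than trusting gut feeling. Here is a minimal sketch (not part of the app above) of a plain timing decorator you could wrap around any view or function:

```python
import time
from functools import wraps

def timed(fn):
    """Print how long a function call takes, in milliseconds."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{fn.__name__} took {elapsed_ms:.1f}ms")
        return result
    return wrapper

@timed
def slow_report():
    time.sleep(0.05)  # stand-in for the expensive DB query
    return "done"
```

Decorate `trending_products_view` with `@timed`, hit the page a few times, and note the numbers. They become your "before" measurement.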

Step 1: Install and Configure Redis

First, you need the Redis server running on your machine, plus the Python client library for Django:

pip install django-redis

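Note that the pip package is only the client; the Redis server itself is a separate install. Typical commands (package names may vary slightly by platform):

```shell
# Debian/Ubuntu
sudo apt-get install redis-server

# macOS (Homebrew)
brew install redis

# Verify the server is up
redis-cli ping   # should reply: PONG
```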

Update your settings.py to point Django toward Redis:

# settings.py
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1", # /1 is the database index
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}

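Before wiring up any views, it's worth confirming Django can actually reach Redis. One quick check from your project root (assuming a standard `manage.py` setup):

```shell
python manage.py shell -c "from django.core.cache import cache; cache.set('ping', 'pong'); print(cache.get('ping'))"
# should print: pong
```

If this raises a connection error instead, Redis isn't running or the `LOCATION` URL is wrong.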

Step 2: Implementing the "Cache-First" Pattern

The most common way to cache is the "Cache-First" pattern. You check the cache; if it's there (a Hit), return it. If not (a Miss), fetch from the DB and save it to the cache for next time.

Here is our updated view:

from django.core.cache import cache
from django.db.models import Count
from django.shortcuts import render
from .models import Order

def trending_products_view(request):
    cache_key = "trending_products_list"

    # 1. Try to get data from Redis
    result = cache.get(cache_key)

    if result is None:
        # 2. Cache MISS: Run the expensive DB query
        print("Fetching from Database...")
        result = list(
            Order.objects.values('product__name')
            .annotate(total_sales=Count('id'))
            .order_by('-total_sales')[:10]
        )

        # 3. Store in Redis for 24 hours (86400 seconds)
        cache.set(cache_key, result, timeout=86400)
    else:
        print("Fetching from Cache!")

    return render(request, 'dashboard.html', {'products': result})


Step 3: Verifying and Monitoring

This is where many developers trip up. How do you know it's working?

1. The Terminal Monitor

Open your terminal and run the Redis monitor tool. This shows every command hitting your Redis server in real-time.

redis-cli monitor


Now, refresh your browser.

  • First Load: You will see a GET (which misses) followed by a SET command in the monitor.
  • Second Load: You will see only a GET command. If every load shows GET followed by another SET, your cache never hits and your logic is broken (every request is a Miss).

2. Inspecting Keys

To see exactly what is stored in your Redis database:

redis-cli keys "*"
# Output will look like: ":1:trending_products_list"


Note: django-redis automatically prefixes your keys with the cache version (usually :1:), plus any KEY_PREFIX you configure.
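You can also ask Redis how long a key has left before it expires:

```shell
redis-cli ttl ":1:trending_products_list"
# returns the remaining seconds; -1 means no expiry, -2 means the key does not exist
```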

3. Response Headers

You can also use your browser's "Network" tab. Compare the Time column.

  • Before: 1.2s
  • After (Cache Hit): 15ms

Step 4: Pro-Tip - Cache Warming

What if that first user of the day is a VIP? They shouldn't have to wait for the "Cache Miss." We use a Warming Script—a custom Django management command that runs on a schedule (Cron job) to pre-fill the cache.

# management/commands/warm_cache.py
from django.core.management.base import BaseCommand
from django.core.cache import cache
from django.db.models import Count
from yourapp.models import Order  # replace 'yourapp' with your app's name

class Command(BaseCommand):
    def handle(self, *args, **options):
        # Run the same heavy query the view uses
        data = list(
            Order.objects.values('product__name')
            .annotate(total_sales=Count('id'))
            .order_by('-total_sales')[:10]
        )
        cache.set("trending_products_list", data, timeout=86400)
        self.stdout.write("Cache warmed successfully!")

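With the command in place, scheduling it is a one-line cron entry (the paths below are placeholders; adjust for your virtualenv and project):

```shell
# Run it by hand first to confirm it works
python manage.py warm_cache

# crontab -e: warm the cache every day at 06:00
0 6 * * * /path/to/venv/bin/python /path/to/project/manage.py warm_cache
```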

Common Pitfalls to Avoid ⚠️

1. Stale Data

If you cache product prices for 24 hours and then change a price in the admin panel, users will keep seeing the old price until the cache expires.
Solution: Use Django signals to clear the relevant cache keys whenever the model is saved or deleted.

2. The Gunicorn Trap

Your Django code is loaded into memory by your application server (Gunicorn/Uvicorn). If you rename a cache_key in your code, you must restart Gunicorn so the running workers pick up the change. Otherwise, your server keeps looking for the old key while your warming script populates the new one.

sudo systemctl restart gunicorn


3. Caching Everything

Don't cache things that are unique to every user (like a shopping cart) unless you include the user_id in the cache key.


Conclusion

Caching is the bridge between a "hobby project" and a "production-ready app." By moving your most frequent, expensive queries into Redis, you reduce the load on your database and provide a lightning-fast experience for your users.

Next Step for you: Try implementing caching on your slowest API endpoint and use redis-cli monitor to watch the magic happen!


Have questions about cache invalidation? Drop a comment below!
