DEV Community

fenixkit

Why your .NET 8 API needs a cache layer — and how to build it right with Redis/Valkey and tag invalidation

Caching is one of those things that sounds optional until your database starts getting hammered at scale, your response times creep up, and you realise you've been querying the same data hundreds of times per minute. This article covers why a cache layer matters, how to implement cache-aside properly with tag-based invalidation in .NET 8, how to handle Redis outages gracefully, and why Valkey is worth knowing about.


Why bother with cache at all?

The short answer: your database doesn't need to answer the same question twice.

A typical read-heavy API hits the database for the same product list, the same user profile, the same category results — on every request. Each one is a network round trip, a query execution, and serialisation overhead. At low traffic it's fine. At scale it isn't.

A cache layer puts the answer in Redis the first time, and returns it directly on every subsequent request — milliseconds, no database involved.

The reasons people avoid it:

  • "It adds complexity" — only if you build it badly
  • "Cache invalidation is hard" — it is, but it doesn't have to be unpredictable
  • "Redis going down takes my API down" — only if you don't handle it properly

All three are solvable.


The cache-aside pattern

Cache-aside is the simplest correct approach:

  1. On read — check Redis first. Hit → return. Miss → query the database, populate Redis, return.
  2. On write — invalidate the relevant cache entries, then write to the database.

GET /api/products/abc123

  1. Check Redis ──▶ HIT  ──▶ return cached JSON ✓
                └──▶ MISS ──▶ query database
                               └──▶ populate Redis ──▶ return ✓

PUT /api/products/abc123

  → invalidate cache entries for this product
  → write to database
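The read path above can be sketched in a few lines. This is a minimal illustration, not a production implementation: `ICache`, `InMemoryCache`, and `CacheAside.GetOrSet` are assumed names, and the in-memory dictionary stands in for Redis so the sketch is self-contained (in a real app the implementation would wrap StackExchange.Redis).

```csharp
using System;
using System.Collections.Concurrent;

// Stand-in cache abstraction — in production this would wrap StackExchange.Redis.
public interface ICache
{
    string? Get(string key);
    void Set(string key, string value, TimeSpan ttl);
}

public sealed class InMemoryCache : ICache
{
    private readonly ConcurrentDictionary<string, string> _store = new();
    public string? Get(string key) => _store.TryGetValue(key, out var v) ? v : null;
    public void Set(string key, string value, TimeSpan ttl) => _store[key] = value; // TTL elided in the sketch
}

public static class CacheAside
{
    // Hit → return cached value; miss → load from the database, populate, return.
    public static string GetOrSet(ICache cache, string key, Func<string> loadFromDb, TimeSpan ttl)
    {
        var cached = cache.Get(key);
        if (cached is not null) return cached;   // cache hit — no database involved

        var fresh = loadFromDb();                // cache miss — fall through to the database
        cache.Set(key, fresh, ttl);              // populate for subsequent reads
        return fresh;
    }
}
```

With this helper, two consecutive reads of the same key cost exactly one database query.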

Simple in theory. The problem is step 2 — which cache entries do you invalidate?


The invalidation problem

If you cache by key only (product:abc123), that's easy — delete that key on update. But most APIs cache more than that:

  • Paged lists — product:paged:p1:s20
  • Cursor pages — product:cursor:start:20:fwd
  • Filtered results — product:category:Gaming

When you update a product, all of those might be stale. You can't just delete one key.

The naive solution is to expire everything with a short TTL. It works, but it means serving stale data for up to N minutes after every write, and it doesn't scale — at high write rates your cache is constantly cold.


Tag-based invalidation

A better approach: every cached entry is registered under one or more tags. When you write, you invalidate by tag — wiping all entries associated with that tag at once.

In Redis, a tag is a Set that holds the keys registered under it:

product:abc123              STRING   cached product JSON          TTL 5 min
product:paged:p1:s20        STRING   cached page JSON             TTL 5 min
product:category:Gaming     STRING   cached category list         TTL 5 min

tag:product                 SET      { paged keys, cursor keys }    no TTL
tag:product:abc123          SET      { "product:abc123" }            no TTL
tag:product:category:Gaming SET      { "product:category:..." }      no TTL

Tag sets have no TTL — they are deleted when InvalidateByTagAsync runs, leaving no orphaned entries.

On every write, the repository wipes all matching tags.

The update case is worth calling out: when a product moves from Electronics to Gaming, you need to invalidate both the old and new category cache. The solution is to union the tags from the original and the updated entity before invalidating — both category caches get wiped, no extra logic needed in your handler.
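The mechanics above can be sketched with an in-memory stand-in. In Redis the tag would be a SET (`SADD` on write, `SMEMBERS` + `DEL` on invalidate); here `TagIndex` and `InvalidateByTag` are illustrative names, not a real library API.

```csharp
using System;
using System.Collections.Generic;

public sealed class TagIndex
{
    private readonly Dictionary<string, string> _cache = new();          // key → cached JSON
    private readonly Dictionary<string, HashSet<string>> _tags = new();  // tag → registered keys

    public void Set(string key, string value, IEnumerable<string> tags)
    {
        _cache[key] = value;
        foreach (var tag in tags)
        {
            if (!_tags.TryGetValue(tag, out var keys)) _tags[tag] = keys = new();
            keys.Add(key); // in Redis: SADD tag key
        }
    }

    public bool Contains(string key) => _cache.ContainsKey(key);

    // Delete every key registered under the tag, then drop the tag set itself —
    // no orphaned entries remain.
    public void InvalidateByTag(string tag)
    {
        if (!_tags.Remove(tag, out var keys)) return;
        foreach (var key in keys) _cache.Remove(key); // in Redis: DEL key…
    }
}
```

For the update case, the caller unions the tags of the old and new entity and invalidates each tag in the union, so both the Electronics and Gaming category caches are wiped in one pass.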


Three levels of control

Not everything needs automatic invalidation. A well-designed cache layer gives you three levels:

| Level | Mechanism | Use for |
| --- | --- | --- |
| Automatic | Base repository calls `GetInvalidationTags` on every write | Standard CRUD — always on |
| Tag-based | `_cache.InvalidateByTagAsync("product:category:Gaming")` | Custom domain queries |
| Manual | `_cache.InvalidateAsync("product:abc123")` | Surgical single-key removal |

You pick the right level per operation. Most of the time the automatic level handles everything.
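The automatic level could look something like the following sketch. `CachedRepository<T>`, `GetInvalidationTags`, and `TagsFor` are assumed names used for illustration; the point is that each entity type declares its own tags once, and the base repository derives the invalidation set on every write.

```csharp
using System.Collections.Generic;
using System.Linq;

public abstract class CachedRepository<T>
{
    // Each entity type declares which tags its cached entries live under.
    protected abstract IEnumerable<string> GetInvalidationTags(T entity);

    // For updates: union the tags of the original and updated entity, so a
    // category change invalidates both the old and the new category cache.
    public IReadOnlyList<string> TagsFor(T oldEntity, T newEntity) =>
        GetInvalidationTags(oldEntity).Union(GetInvalidationTags(newEntity)).ToList();
}

public sealed record Product(string Id, string Category);

public sealed class ProductRepository : CachedRepository<Product>
{
    protected override IEnumerable<string> GetInvalidationTags(Product p) =>
        new[] { "product", $"product:{p.Id}", $"product:category:{p.Category}" };
}
```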


Handling Redis outages — FailOpen vs FailClosed

This is where most cache implementations go wrong. If Redis throws an exception and you let it propagate, your API returns 500s whenever the cache is unavailable — even though your data is perfectly fine in the database.

FailOpen (recommended default): treat any Redis error as a cache miss. The request falls through to the database, succeeds, and returns normally. Redis being down is a performance degradation, not an outage.

FailClosed: return an error when Redis is unavailable. Use this only when cache correctness is a hard requirement.

For most APIs, FailOpen is the right default. Redis is a performance layer, not a source of truth.
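FailOpen reduces to one try/catch around the cache read. A minimal sketch, assuming a `GetOrLoad` helper (the name is mine, not a library API):

```csharp
using System;

public static class FailOpenCache
{
    // Any cache exception is treated as a miss: the request falls through to
    // the database and succeeds, so Redis being down degrades latency, not availability.
    public static string GetOrLoad(Func<string?> tryGetFromCache, Func<string> loadFromDb)
    {
        string? cached = null;
        try { cached = tryGetFromCache(); }
        catch { /* FailOpen: log and treat as a miss — never surface a 500 */ }
        return cached ?? loadFromDb();
    }
}
```

FailClosed would simply rethrow in the catch block instead.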


Making cache optional

There are scenarios where you want to run without Redis entirely — local development or environments where you haven't provisioned a cache server yet.

The clean solution is a no-op implementation of your cache interface that can be swapped in via config:

// appsettings.json / .env
Cache__Enabled=false

When disabled: the cache interface resolves to a no-op, IConnectionMultiplexer is never registered, and the Redis health check is omitted automatically. No code changes required anywhere else.
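A no-op implementation is a handful of lines. `ICacheService`, `NoOpCache`, and `RedisCacheService` are illustrative names mirroring the article's `Cache__Enabled` flag, not a specific library's API:

```csharp
using System;
using System.Threading.Tasks;

public interface ICacheService
{
    Task<string?> GetAsync(string key);
    Task SetAsync(string key, string value, TimeSpan ttl);
}

public sealed class NoOpCache : ICacheService
{
    public Task<string?> GetAsync(string key) => Task.FromResult<string?>(null); // always a miss
    public Task SetAsync(string key, string value, TimeSpan ttl) => Task.CompletedTask; // discard
}

// In Program.cs the config flag picks the implementation (sketch):
// if (builder.Configuration.GetValue<bool>("Cache:Enabled"))
//     builder.Services.AddSingleton<ICacheService, RedisCacheService>();
// else
//     builder.Services.AddSingleton<ICacheService, NoOpCache>();
```

Because every read through the no-op is a miss, the code path is identical to a cold cache — callers never know the difference.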


Valkey — the Redis fork worth knowing about

In 2024, Redis moved from the BSD licence to a dual RSALv2/SSPLv1 model, which is no longer open source. In response, the Linux Foundation backed a fork of Redis 7.2 called Valkey — an open-source, community-maintained drop-in replacement.

Valkey is wire-protocol compatible with Redis. StackExchange.Redis connects to it transparently — no client changes, no code changes needed.

# docker-compose.valkey.yml
valkey:
  image: valkey/valkey:7.2-alpine
  command: valkey-server --requirepass ${CACHE_PASSWORD}
  ports:
    - "6379:6379"

The only client-side change is the connection string:

valkey:6379,password=yourpassword,protocol=2

If you're happy with Redis 8, nothing changes. If you prefer a fully open-source stack, Valkey 7.2 is a transparent swap.


Putting it together

The full pattern in a .NET 8 Minimal API:

  1. Read — check Redis, miss falls through to the database, result populates Redis on return
  2. Write — union tags from old + new entity, invalidate, write to database
  3. FailOpen by default — Redis errors never surface as 500s
  4. Optional — disable via config, no-op swaps in automatically

If you'd rather not wire all of this from scratch, I've packaged the full implementation into FenixKit — .NET 8 Minimal API starter kits with the cache layer, tag invalidation, FailOpen, Valkey support, and health checks all included and pre-configured.
