Ahsan Nabi Dar

Multi-Layered Caching with Decorators in Elixir: Optimizing Performance and Scalability

Phil Karlton's famous quote aptly captures the essence of caching in computer science: "There are only two hard things in Computer Science: cache invalidation and naming things." Caching is a powerful technique to improve application performance by storing frequently accessed data in a faster, more readily accessible location. However, effective cache invalidation remains a challenge.

Caching is a difficult problem and invalidating a cache is even more difficult. Elixir, with its powerful in-memory caching options such as ETS and DETS, provides a robust solution that often eliminates the need for external caching systems like Memcached or Redis. For a deep dive, Dashbit’s blog post on why you may not need Redis with Elixir is an excellent resource. But for globally distributed, dynamically scaling serverless environments, these in-memory caches become less suitable.

Scaling a cache across multiple global regions in a serverless environment introduces new challenges, however: nodes come and go dynamically, and an in-memory cache disappears along with the node that held it.

Cachex, the most popular Elixir caching library, offers a clustered cache option but lacks support for dynamic node configuration. This limitation becomes evident in environments like Fly.io, where nodes scale dynamically and their addresses aren’t known at startup. Engaging with the Fly.io community led me to an open issue on Cachex’s GitHub regarding dynamic node configuration. This search introduced me to Nebulex, a feature-rich library supporting multiple cache stores, including Cachex and Redis.

Nebulex also supports various caching patterns out of the box. While setting up the Redis adapter for Nebulex, I encountered a blocker issue with Upstash Redis, revealing another limitation. Even so, the exploration showed how to construct a layered caching solution using decorators, the same approach Nebulex itself takes.

With Cachex limited in dynamic environments and the Nebulex Redis adapter not meeting expectations, I drew on the strengths of both to build a custom layered caching solution with decorators in Elixir. The approach combines the speed of a local Cachex cache as L1 (short TTL) with the scalability of an external Redis as L2 (longer TTL). Because the layers are wired in through decorators, the application code stays free of repetitive get, set, and delete calls, which keeps it readable and maintainable.
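
All the building blocks are available on Hex. A minimal dependency list might look like the sketch below (versions are illustrative, and appsignal only matters if you want the instrumentation decorators shown later):

# mix.exs (illustrative versions, adjust to your project)
defp deps do
  [
    {:cachex, "~> 3.6"},    # local in-memory L1 cache
    {:redix, "~> 1.2"},     # Redis client for the L2 cache
    {:decorator, "~> 1.4"}, # function decorators
    {:appsignal, "~> 2.0"}  # optional: instrumentation decorators
  ]
end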

Advantages of Multi-Layered Caching

  • Faster Responses: Cachex (L1) provides microsecond latency for frequently accessed data.
  • Reduced External Hits: Redis (L2) serves as a fallback, significantly reducing the number of requests to external services.
  • Cost Efficiency: Fewer requests to Redis minimize cloud costs.
  • Graceful Degradation: The system can serve stale data from L1 within a threshold, ensuring continuity while fetching fresh data from L2.

Here's an outline of the approach:

  • Define the Decorator: Create a decorator function that accepts the actual function to be wrapped and the cache configuration options (e.g., cache type, TTLs for L1 and L2).
  • Check L1 Cache: Inside the decorator, first check the L1 cache (Cachex) for the requested data using the key derived from the function arguments.
  • Retrieve from L2 Cache: If the data is not found in L1, retrieve it from the L2 cache (Redis) using the same key.
  • Fetch from Source: If both L1 and L2 caches miss, call the wrapped function to fetch the data from the original source.
  • Cache the Result: Store the fetched data in both L1 and L2 caches with their respective TTLs.
  • Return the Data: Finally, return the retrieved or fetched data.

By wrapping functions with this decorator, you can transparently introduce caching without altering the core logic of your application.

Using Elixir's decorator library, this is as simple as the following.

The decorator module lets you annotate your functions like this:
@decorate lx_fetch(["key", args0])
@decorate lx_evict(["key", args0])

defmodule Maverick.Cache.Decorator do
  @moduledoc """
  Maverick.Cache.Decorator
  """

  use Decorator.Define, lx_fetch: 1, lx_evict: 1

  alias Maverick.Cache.L1, as: CacheL1
  alias Maverick.Cache.L2, as: CacheL2

  def lx_fetch(attrs, body, _context) do
    quote do
      case CacheL1.get(unquote(attrs)) do
        {:ok, nil} ->
          case CacheL2.get(unquote(attrs)) do
            {:ok, nil} ->
              # Full miss: run the wrapped function and write the
              # result through to both layers.
              data = unquote(body)
              CacheL2.set(unquote(attrs), data)
              CacheL1.set(unquote(attrs), data)
              data

            {:ok, data} ->
              # L2 hit: warm L1 before returning.
              CacheL1.set(unquote(attrs), data)
              data

            {:error, _reason} ->
              unquote(body)
          end

        {:ok, data} ->
          # L1 hit.
          data

        {:error, _reason} ->
          unquote(body)
      end
    end
  end

  def lx_evict(attrs, body, _context) do
    quote do
      CacheL1.del(unquote(attrs))
      CacheL2.del(unquote(attrs))
      unquote(body)
    end
  end
end
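
Any module can then pull the decorators in with use Maverick.Cache.Decorator and annotate its functions. A hypothetical data-access module might look like this (Maverick.Users, User, Repo and the function names are illustrative, not from the original code):

defmodule Maverick.Users do
  use Maverick.Cache.Decorator

  alias Maverick.Repo
  alias Maverick.Users.User

  # Reads check L1, then L2, and only hit the database on a full miss.
  @decorate lx_fetch(["user", id])
  def get_user(id) do
    Repo.get(User, id)
  end

  # Writes evict the key from both layers before running the function body.
  @decorate lx_evict(["user", id])
  def delete_user(id) do
    Repo.delete(%User{id: id})
  end
end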

L1 cache

defmodule Maverick.Cache.L1 do
  @moduledoc """
  Maverick.Cache.L1
  """
  require Logger

  use Appsignal.Instrumentation.Decorators

  alias Maverick.Utility.Helper

  @cachex :maverick_cachex
  @ttl 1800

  @decorate transaction_event()
  def get(key) do
    cache_key = Helper.generate_cache_key(key)
    Logger.info("Get from L1 Cache for #{cache_key}")
    Cachex.get(@cachex, cache_key)
  end

  @decorate transaction_event()
  def set(key, value, ttl \\ @ttl) do
    cache_key = Helper.generate_cache_key(key)
    Logger.info("Set in L1 Cache for #{cache_key}")
    Cachex.put(@cachex, cache_key, value, ttl: :timer.seconds(ttl))
  end

  @decorate transaction_event()
  def del(key) do
    cache_key = Helper.generate_cache_key(key)
    Logger.info("Delete from L1 Cache for #{cache_key}")
    Cachex.del(@cachex, cache_key)
  end
end
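
The :maverick_cachex cache has to be started before Maverick.Cache.L1 can use it. A minimal sketch of how it might be added to the application's supervision tree (the supervisor layout is an assumption, it is not shown in the original post):

# in Maverick.Application.start/2 (assumed): start the L1 cache
children = [
  %{
    id: :maverick_cachex,
    start: {Cachex, :start_link, [:maverick_cachex, []]}
  }
  # ...the rest of the supervision tree
]

Supervisor.start_link(children, strategy: :one_for_one, name: Maverick.Supervisor)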

L2 cache

defmodule Maverick.Cache.L2 do
  @moduledoc """
  Maverick.Cache.L2
  """
  require Logger

  use Appsignal.Instrumentation.Decorators

  alias Maverick.Utility.Helper

  @redis :maverick_redix
  @ttl 7_776_000

  @decorate transaction_event()
  def get(key) do
    cache_key = Helper.generate_cache_key(key)

    Logger.info("Get from L2 Cache for #{cache_key}")

    case Redix.command(@redis, ["GET", cache_key]) do
      {:ok, nil} -> {:ok, nil}
      {:ok, data} -> {:ok, :erlang.binary_to_term(data)}
      {:error, reason} -> {:error, reason}
    end
  end

  @decorate transaction_event()
  def set(key, value, ttl \\ @ttl) do
    cache_key = Helper.generate_cache_key(key)
    Logger.info("Set in L2 Cache for #{cache_key}")
    Redix.command(@redis, ["SET", cache_key, :erlang.term_to_binary(value), "EX", ttl])
  end

  @decorate transaction_event()
  def del(key) do
    cache_key = Helper.generate_cache_key(key)
    Logger.info("Delete from L2 Cache for #{cache_key}")
    Redix.command(@redis, ["DEL", cache_key])
  end

  def flush_all() do
    Redix.command(@redis, ["FLUSHALL"])
    |> case do
      {:ok, status} -> status
      {:error, reason} -> reason
    end
  end
end
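
Under the same assumption, the :maverick_redix connection used by Maverick.Cache.L2 is just another named child in that supervision tree (host and port are illustrative):

# started next to the Cachex child shown earlier
{Redix, name: :maverick_redix, host: "localhost", port: 6379}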

Cache keys are generated the same way Nebulex does it, using Erlang's :erlang.phash2/1:

defmodule Maverick.Utility.Helper do
  # Hash any key term into a short, deterministic string.
  def generate_cache_key(key) do
    :erlang.phash2(key) |> Integer.to_string()
  end
end

And my favourite version of Phil Karlton's quote is:

"There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors."
-- Leon Bambrick
