Mạnh Vũ

SuperCache - An in-house cache we built for our product

SuperCache Intro

To build a realtime, low-latency application with Phoenix & Elixir, we made a library for caching data in memory. When we started building our product we couldn't find a cache library that fit our needs, so we decided to build a new one in Elixir, named SuperCache.

SuperCache is a simple library based on Ets tables (from Erlang), with one table per partition to improve performance when many processes access the cache at the same time.
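To make the partitioning idea concrete, here is a minimal sketch of hashing a key to pick one of several Ets tables. This is illustration only, not SuperCache's internal code:

num_partitions = 4

# One Ets table per partition; :public so any process can read/write it.
tables =
  for i <- 0..(num_partitions - 1) do
    :ets.new(:"cache_partition_#{i}", [:set, :public, read_concurrency: true, write_concurrency: true])
  end

put = fn key, value ->
  # Hash the key to choose a partition, then insert into that table.
  index = :erlang.phash2(key, num_partitions)
  :ets.insert(Enum.at(tables, index), {key, value})
end

get = fn key ->
  index = :erlang.phash2(key, num_partitions)
  case :ets.lookup(Enum.at(tables, index), key) do
    [{^key, value}] -> value
    [] -> nil
  end
end

put.(:hello, "world")
get.(:hello) # => "world"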

Most functions follow the same style as the Ets functions, so they are easy to pick up if you have worked with Ets before.

Why did we build a cache based on Ets? Early in our product we were already using tuples a lot, so an Ets-style cache was a perfect fit: we can query in key/value style or by pattern matching. Another reason is that Ets has been around for a long time and is stable and fast enough for us.
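For example, plain Ets already offers both querying styles. The snippet below uses :ets directly, just to illustrate the style SuperCache follows:

table = :ets.new(:demo, [:bag, :public])
:ets.insert(table, {:user, 1, "alice"})
:ets.insert(table, {:user, 2, "bob"})

# Key/value style: look up by key.
:ets.lookup(table, :user)
# => [{:user, 1, "alice"}, {:user, 2, "bob"}]

# Pattern-matching style: match any tuple shaped like {:user, _, _}.
:ets.match_object(table, {:user, :_, :_})
# => [{:user, 1, "alice"}, {:user, 2, "bob"}]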

With SuperCache it is easy to share data between processes or between applications (cluster support will be added in the future). This helps us a lot in reducing development time and the complexity of our system.
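As a small sketch of that sharing, using the SuperCache calls shown later in this post (after SuperCache.start!/0 has been called), a value written by one process can be read from another:

SuperCache.put!({:config, :timeout, 5_000})

Task.async(fn ->
  # A different process on the same node reads the same cached entry.
  SuperCache.get_by_key_partition!(:config, :timeout)
end)
|> Task.await()
# returns the cached entry for the {:config, :timeout, ...} tuple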

Along with the Ets-style cache, we added Key/Value, Stack, and Queue structures, each supporting multiple instances, for convenience when working with shared global data.

Features

Ets-style cache with partitions that can be configured at runtime. The cache is global data on the node, so any process from any application running on the node can access it. You can choose the Ets table type if needed.

Key/Value in-memory store for simple tasks. Multiple Key/Value instances are supported, they are easy to use, and they can be accessed from any process on the node. A Key/Value instance spreads its data across all partitions by key (calculated with Erlang's hash).

Queue (FIFO): multiple global queues are supported and very simple to use. Keep in mind that a queue stores all of its data in a single partition, so you can also access it with match/match_object if you need to.

Stack (FILO): same as Queue, but last in, first out.

Note: for Key/Value, Queue, and Stack, the get functions can take a default value (nil if you don't pass one) which is returned when there is no data.
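For instance, with a Key/Value instance (the exact position of the default argument is my assumption, inferred from the Stack example later in this post):

alias SuperCache.KeyValue, as: KV

KV.get("my_kv", :missing_key)             # => nil when the key has no data
KV.get("my_kv", :missing_key, :not_found) # => :not_found (assumed optional default argument)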

Guide

It's very easy to use.

First, add it to your deps:

def deps do
  [
    {:super_cache, "~> 0.6.0"}
  ]
end

Second, start the cache:

SuperCache.start!()

By default, the number of partitions is the number of online schedulers of the Erlang VM.
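You can check that number on your node with a standard Elixir call (not part of SuperCache):

System.schedulers_online()
# => e.g. 8 on an 8-core machine, so SuperCache would create 8 partitions by default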

You can run with options like:

opts = [key_pos: 0, partition_pos: 1, table_type: :bag, num_partition: 3]
SuperCache.start!(opts)

Now you can use the library's APIs.


SuperCache.put!({:hello, :world, "hello world!"})

SuperCache.get_by_key_partition!(:hello, :world)

SuperCache.delete_by_key_partition!(:hello, :world)

Key/Value example (after the cache has been started):

alias SuperCache.KeyValue, as: KV
KV.add("my_kv", :key_a, "Hello")
data = KV.get("my_kv", :key_a)
IO.puts "#{inspect data}"

Queue example (after the cache has been started):

alias SuperCache.Queue, as: Q
Q.add("my_queue", "new_task")
task = Q.out("my_queue")
IO.puts "new task: #{inspect task}"

Stack example (after the cache has been started):

alias SuperCache.Stack, as: S
S.push("my_stack", :hello)
data = S.pop("my_stack", :no_data)
IO.puts "data: #{inspect data}"

The library is still being developed and battle-tested before going into production, but you can already try it!

The source is available on GitHub.
