Arnold Acho

Posted on • Originally published at
Least Frequently Used (LFU) Cache with Constant Time Complexity

In this blog post, I will explain how to implement an in-memory least frequently used (LFU) cache in Go. The implementation has constant time complexity, O(1), for Get, Set, and for evicting the least frequently used item when the cache is full.

The LFU cache algorithm presented here is based on this paper written by some very very smart people (hey, they even work at FAANG).

Data Structure

Source: An O(1) algorithm for implementing the LFU cache eviction scheme

The data structure for this cache is a struct with 3 fields.

type Cache struct {
    cap           int
    frequencyList *list.List
    lookupTable   map[interface{}]*lookupTableNode
}


cap

This represents the capacity of the cache, i.e. the maximum number of items the cache can contain. If the cache already holds cap items and you attempt to add a new item, the least frequently used item will be evicted.


frequencyList

This is a doubly linked list in which each node holds:

  • A doubly linked list of all the cache items that have been accessed n times
  • n, the frequency weight for the items in that node

This list is represented by [head] -> [1] -> [2] -> [5] -> [9] in the image above. All the items which have been accessed once are stored under the node with value [1], so the nodes [x] and [y] have been accessed once, while the node [b] has been accessed 5 times since it's under the node [5].

So as not to re-invent the wheel, I used the container/list package for the doubly linked list.


lookupTable

This is a hashmap, i.e. a basic key => value store. The key in this case is the key of the item in the cache, and the value is a pointer to the item's node in the frequencyList doubly linked list.

Cache Operations


Set

Set adds an item to the LFU cache with a key and value. It returns ErrInvalidCap if the cache has not been initialized with a valid capacity.

func (cache *Cache) Set(key interface{}, value interface{}) (err error) {
    // check if cache has been initialized.
    if cache.Cap() <= 0 {
        return ErrInvalidCap
    }

    // if the key already exists, change the value and return.
    if node, ok := cache.lookupTable[key]; ok {
        node.value = value
        return err
    }

    // create a lookup table node.
    lookupTableNode := &lookupTableNode{value: value}

    // check if the lookupTable has enough space. If it doesn't, pop the least frequently used item from the Cache.
    if cache.IsFull() {
        cache.pop()
    }

    // set the lookup table node.
    cache.lookupTable[key] = lookupTableNode

    // if the frequency list is empty or the first item in the list doesn't have weight 1, create a new node with weight 1.
    if cache.frequencyList.Len() == 0 || cache.frequencyList.Front().Value.(*frequencyListNode).weight != minFrequencyWeight {
        freqListNode := &frequencyListNode{weight: minFrequencyWeight, list: list.New()}
        freqListNode.element = cache.frequencyList.PushFront(freqListNode)
    }

    // get the first item in the frequency list. We're sure the item has weight 1.
    freqListNode := cache.frequencyList.Front().Value.(*frequencyListNode)

    // set the parent node of the newly set item to the weight-1 frequency list node.
    freqListNodeListNode := &frequencyListNodeListNode{parent: freqListNode, key: key}

    // set the frequencyListNodeListNode in the lookup table node.
    lookupTableNode.frequencyListNodeListNode = freqListNodeListNode

    // add the newly created frequencyListNodeListNode to the frequencyListNode of weight 1.
    freqListNodeListNode.element = freqListNode.list.PushBack(freqListNodeListNode)

    return err
}


Get

Get returns an item from the LFU cache by key. It returns ErrCacheMiss if there's a cache miss.

func (cache *Cache) Get(key interface{}) (value interface{}, err error) {
    // check if the key exists; if it doesn't, return a cache miss error.
    node, ok := cache.lookupTable[key]
    if !ok {
        return value, ErrCacheMiss
    }

    freqListNode := node.frequencyListNodeListNode.parent

    // check if the next node's weight is equal to the current weight + 1.
    // if not, create a new node with weight = current weight + 1 and insert it after the current node.
    if freqListNode.element.Next() == nil || (freqListNode.element.Next().Value.(*frequencyListNode).weight != freqListNode.weight+1) {
        newFreqListNode := &frequencyListNode{
            weight:  freqListNode.weight + 1,
            element: nil,
            list:    list.New(),
        }
        newFreqListNode.element = cache.frequencyList.InsertAfter(newFreqListNode, freqListNode.element)
    }

    // get the list with weight = node weight + 1. This node MUST exist because it was created above.
    nextFreqListNode := freqListNode.element.Next().Value.(*frequencyListNode)
    node.frequencyListNodeListNode.parent = nextFreqListNode

    // remove the node from the current frequency list node.
    freqListNode.list.Remove(node.frequencyListNodeListNode.element)

    // remove the freq list node from the cache's freq list if its list has NO items left.
    if freqListNode.list.Len() == 0 {
        cache.frequencyList.Remove(freqListNode.element)
    }

    // set the element of the node in its new list.
    node.frequencyListNodeListNode.element = nextFreqListNode.list.PushBack(node.frequencyListNodeListNode)

    return node.value, err
}


pop

The pop method removes the least frequently used item from the LFU cache.

func (cache *Cache) pop() {
    // The front frequency list node MUST exist, because the cache is at cap.
    frontFreqListNode := cache.frequencyList.Front().Value.(*frequencyListNode)
    freqListNodeListNode := frontFreqListNode.list.Front().Value.(*frequencyListNodeListNode)

    // Remove key from lookup table.
    delete(cache.lookupTable, freqListNodeListNode.key)

    // remove the node from the frequency list node's list.
    frontFreqListNode.list.Remove(freqListNodeListNode.element)

    // if the frequency list node's list is now empty, remove the frequency list node from the Cache's frequency list.
    if frontFreqListNode.list.Len() == 0 {
        cache.frequencyList.Remove(frontFreqListNode.element)
    }
}


The code for this LFU cache is on GitHub. One improvement I'm thinking of is to make the cache generic so it can work in cases where the key is a string, int, or some other comparable struct/pointer. The generics implementation can be found in the 2.0-dev branch on GitHub.
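For a sense of what that generic API could look like, here is a minimal sketch of the type signatures. This is purely hypothetical — it is not the 2.0-dev code, and it omits the frequency list entirely to focus on the type parameters:

```go
package main

import "fmt"

// Cache sketches a generic LFU cache API: K must be comparable so it can
// be a map key; V can be anything. (Hypothetical — the real 2.0-dev branch
// may differ, and a real implementation would keep the frequency list too.)
type Cache[K comparable, V any] struct {
	cap   int
	items map[K]V
}

func New[K comparable, V any](cap int) *Cache[K, V] {
	return &Cache[K, V]{cap: cap, items: make(map[K]V)}
}

func (c *Cache[K, V]) Set(key K, value V) { c.items[key] = value }

func (c *Cache[K, V]) Get(key K) (V, bool) {
	v, ok := c.items[key]
	return v, ok
}

func main() {
	// key and value types are checked at compile time — no interface{} assertions.
	c := New[string, int](10)
	c.Set("hits", 42)
	v, ok := c.Get("hits")
	fmt.Println(v, ok) // prints 42 true
}
```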
