
ElshadHu

Savior: Low-Level Design

Grinding Go: Low-Level Design

I went back to the drawing board for interview preparation and to sharpen my problem-solving skills. Software development is in a weird stage. Two weeks ago, when I saw my friend doing low-level design, I thought it was meaningless at this point. How wrong I was. My friend started asking me about problem solving and critical thinking, and I realized I was killing my skills by focusing only on abstraction and on learning new tools that I may never use in a new job. So I did some research and saw that a good number of devs are complaining about how inline IDE suggestions and AI tools kill their problem-solving skills. I watched this video by NeetCode about the current interview style and realized that pretty much nothing has changed: problem-solving is still the strongest skill. That's why I created a new repository: grinding-go.

Self Reflection

My friend and I have started interviewing each other on low-level design and system design. I noticed that my critical thinking had gotten softer. During our first mock interviews I was mumbling and skipping details because my ideas were half-formed. So I planned a strategy for this stage of my coding skills: do low-level design bare-handed, then ask an LLM to find articles about the design and pose Socratic follow-up questions about it, to sharpen my problem solving. In other words, I went back to the drawing board to design, at a low level, the problems that are usually abstracted away behind big buzzwords. I was also reminded that algorithms and computational thinking don't go anywhere; they just sit under the abstractions in large codebases, and if I don't keep those skills up to date, I'll struggle a lot.

The Main Resource

I created the grinding-go project in Go because I love this language for its simplicity and speed. I started reading a summary of 100 Go Mistakes and its code snippets, which are in this repo, via this website. I realized that as a developer I need to come back to my basics and make them as strong as possible, even by doing the boring stuff. After that book, I can definitely say I stopped treating Go like other languages: it has its own style, especially when it comes to error handling and concurrency.

First Technical Challenge: Implement LRU Cache

For my core structure, I used an LRUCache struct which contains a map, a linked list, and a mutex for thread safety and concurrency:

type LRUCache struct {
    mu       sync.RWMutex  // guards cache and list
    capacity int           // maximum number of entries
    cache    map[int]*Node // O(1) key -> node lookup
    list     *LinkedList   // recency order: head = most recent
}

Here, the map gives us O(1) look-ups, and the doubly linked list lets us move an item to the front in O(1) when it is used, instead of O(n). When adding a new item pushes the cache past its capacity, we evict the least recently used entry by removing the tail of the list, which is also an O(1) operation:

if len(lru.cache) > lru.capacity {
    tailKey, err := lru.list.removeTail()
    // ...
}

While solving this problem, I forgot to nil out the removed element's pointers in the linked list. In Go this is not a "dangling pointer" like in C/C++, since Go has a garbage collector, but it still matters: a removed node that keeps references to other nodes can prevent the GC from freeing memory that should be freed, and it can cause logical bugs if you accidentally traverse through stale references:

// clear stale pointers so the GC can collect the neighbours
n.prev = nil
n.next = nil

While handling the neighbours of linked list elements, I noticed I was making mistakes and losing references to elements. I wrote everything in one file, which reminded me of solving problems in a single file rather than abstracting everything away. I first heard this approach from ThePrimeagen: solve it in one file first, then split later. LRU caches are everywhere in production anyway. What I don't get is people saying that solving these problems is dead. It isn't. Production codebases use the same patterns and data structures, just buried under abstraction, and if you don't know them at a low level, everything gets much harder.

Rate Limiter

When I started this problem, I immediately thought of the sliding window technique, but I had already used it in other projects, so I chose the token bucket instead. At the end of the day, we as developers have to challenge ourselves and see how it goes. I had the basic idea already, and while solving it I researched the main differences between token bucket and sliding window to check the trade-offs again. I watched this video and started implementing. I implemented a RateLimiter struct which has a composition relationship with TokenBucket:

type RateLimiter struct {
    buckets   map[string]*TokenBucket // maps a user/key to their bucket
    mu        sync.Mutex              // protects concurrent map access
    rate      float64                 // default rate for new buckets
    maxTokens float64                 // default burst size for new buckets
}

I also gave the TokenBucket struct its own mutex, because each bucket has to handle its operations concurrently, and I implemented an Allow method which controls how many requests a client can make during a specific time frame:

func (tb *TokenBucket) Allow() bool {
    tb.mu.Lock()
    defer tb.mu.Unlock()

    now := time.Now()
    tb.lastSeen = now
    newTokens := now.Sub(tb.lastRefill).Seconds() * tb.refillRate
    tb.tokens += newTokens
    if tb.tokens > tb.maxTokens {
        tb.tokens = tb.maxTokens
    }
    tb.lastRefill = now
    if tb.tokens >= 1 {
        tb.tokens--
        return true
    }
    return false
}


After doing that I was happy and thought I was done, but one of the Socratic questions showed me I wasn't cleaning up idle buckets in RateLimiter. Without cleanup, if millions of unique keys hit your limiter, those buckets just sit in memory forever. So I added a cleanUpLoop that periodically evicts stale entries:

func (rl *RateLimiter) cleanUpLoop(interval, maxIdle time.Duration) {
    ticker := time.NewTicker(interval)

    for range ticker.C {
        rl.mu.Lock()
        for key, tb := range rl.buckets {
            if time.Since(tb.lastSeen) > maxIdle {
                delete(rl.buckets, key)
            }
        }
        rl.mu.Unlock()
    }
}

Since I mentioned the trade-offs between token bucket and sliding window, here is a short comparison:

Token Bucket:

  • Tokens refill at a fixed rate, each request costs one token
  • Allows bursts up to bucket size which is good for APIs like payment or webhook endpoints
  • Low memory, just a counter and a timestamp

Sliding Window:

  • Tracks requests in a time window and counts them
  • Smoother and stricter rate control, no big bursts
  • Higher memory since you need to store timestamps or counters per sub-window

Pub/Sub

As you already know, Pub/Sub comes up a lot in system design interviews, especially around microservices architecture. Because of its asynchronous nature, it makes sense for reducing tight coupling between services that would otherwise hammer each other's REST APIs under heavy load, while keeping performance good. But I had never thought about its implementation details, and when it comes to learning anything, my strategy is to write a naive solution first and research while building it. I watched this video by Hussein Nasser, which really helped me a lot. For those who want to prioritize concepts over tools, I highly recommend his videos.

Types

There are 4 core structs:

  • Topic: groups related messages
  • Subscriber: receives messages
  • Broker: knows which subscribers care about which topics and routes messages to them
  • Message: just the data

The Broker plays a middleman role; its methods delegate to Topic methods and return their errors directly. When I first wrote the Unsubscribe method, I held the Broker's mutex via defer, which was a poor decision because the lock was only needed for the topic lookup, not for topic-level operations. Holding the broker lock while doing topic operations would block all other broker methods unnecessarily. Then I fixed it:

func (b *Broker) Unsubscribe(topicName string, sub *Subscriber) error {
    b.mu.Lock()
    t, ok := b.topics[topicName]
    b.mu.Unlock() // release broker lock before touching topic
    if !ok {
        return ErrTopicNotFound
    }
    return t.RemoveSubscriber(sub.id)
}


Also, in the Broadcast method I made sure to pass the subscriber as a parameter to the goroutine. Without this, you get the classic closure bug where all goroutines can end up referencing the last value of the loop variable. Since Go 1.22, loop variable scoping changed and each iteration gets its own copy, so this specific bug is fixed in Go 1.22+. But passing the variable as a parameter is still a good habit: it makes intent clear and keeps the code compatible with older versions. You can read about it in the official Go blog post.

for _, sub := range t.subscribers {
    // pass sub as a parameter: before Go 1.22, by the time a goroutine
    // ran, the loop may have moved on and sub would point at the last subscriber
    go func(s *Subscriber) {
        select {
        case s.ch <- msg: // deliver without blocking
        default: // subscriber's buffer is full: drop the message
        }
    }(sub)
}

For this problem, I opened a PR against my own project and asked my friend to review it to get a different perspective, and he gave me feedback. At the end of the day, these are just tools, and I should focus more on architecture than on fancy words about tools.

Lastly

I will keep this going as a series: after every 3 low-level designs, I will come back with my naive solutions to these big buzzwords. Going back to the drawing board, thinking about problems at a low level, and weighing trade-offs while implementing them reminded me of my engineering skills. Nowadays my friend and I are busy job-hunting, and we are planning to write about our project verdis. In my previous blog I mentioned our first podcast, which looked like we filmed it with a fridge camera; we have already recorded a second one, and this time it is way better :). Besides building in the open and contributing, I have added a new technique to my formula: solving problems with low-level design. If you are curious about it, or if you see anything off in my solutions, feel free to make suggestions, and if you know any technical challenges to recommend, feel free to create issues in grinding-go.
