Arpit Mohan

Best tech things I read this week - #5

Every weekday, I share TL;DR versions of software engineering articles I read through my newsletter - in.snippets(). Sign up here to receive these in your inbox.


Our weekly reading revolved around software architecture & tooling. It's imperative to get some of these principles right early in product development. Here's a brief list of the top articles.

#go #devdecisions

Why Go and not Rust?

Though this post compares Go & Rust, the underlying points succinctly explain how some of Go’s characteristics make it a fantastic fit for developing enterprise software. Enterprise software projects take time to build & tend to become large and complex. Many people (sometimes whole teams) change over, leaving behind a lot of code written with widely varying levels of quality. Multiple teams working in parallel produce redundant code. The domain shifts over time, inevitably invalidating older assumptions and consequently causing abstractions to leak. Enterprise software development is all about scale & dealing with complexity, and Go is all about suppressing complexity at scale. Here are a few Go traits that help tame the beast of enterprise software development:

  • Its concurrency model makes it a good fit for server-side applications that must primarily handle multiple independent requests (see the sketch after this list).
  • Great support for HTTP & related protocols.
  • Great tooling to diagnose concurrency and performance problems.
  • Cross-compilation makes deployment on any platform easy.
  • It offers a limited set of features built into the language, making it easy to learn. Go forces people to “do more with less”, which ensures that Go projects remain understandable even when they scale tremendously.
  • Go enforces a single, universal style via go fmt.
  • There are only two levels of visibility for variables, and the only concurrency model is CSP. It’s way harder to fall into incomprehensible pitfalls when writing in Go than in other languages.
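
To make the first two bullets concrete, here is a minimal sketch (mine, not the author’s) of a Go HTTP server. The standard library’s net/http package runs each request handler on its own goroutine, so independent requests are served concurrently without any extra machinery:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// handler simulates some per-request work. net/http invokes it on a
// separate goroutine for every incoming request, so a slow request
// does not block independent ones.
func handler(w http.ResponseWriter, r *http.Request) {
	time.Sleep(100 * time.Millisecond) // stand-in for real work
	fmt.Fprintf(w, "handled %s %s\n", r.Method, r.URL.Path)
}

func main() {
	http.HandleFunc("/", handler)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```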

Read the full post here, and I am sure you will love the author’s pragmatic conclusion as much as I did:

“Let’s avoid tying our identity to a single language and embrace practicality first and foremost.” 

9 mins read


#architecture #softwaredesign

Software Architecture is Overrated, Clear and Simple Design is Underrated

“As an engineer, your goal should be more about solving solutions and learning through them rather than picking a shiny architecture pattern, in hopes that that will solve your problem.”

Gergely Orosz is an engineering lead at Uber. He has contributed to rewriting Uber's distributed payment systems, designing and shipping Skype on Xbox One, and open-sourcing RIBs, Uber's mobile architecture framework. In this post, he argues that creating software systems is all about simple, clear design, not about any particular software architecture pattern. None of the system design processes at leading tech startups use standard software architecture planning tools, no project has one person owning the architecture or design, and nobody talks in terms of the common architecture patterns while trying to design a software system.

The process of designing software is a fairly simple and open one. It encourages input from everyone on the team, irrespective of their years of experience. Almost everyone starts by addressing the business problem, brainstorming an approach to solve it, and putting this approach on a whiteboard while discussing & refining it. This whiteboard exercise clearly describes the architecture of the system. It then gets written up in a simple document with the simplest diagrams possible. The document also states why certain choices are being made based on the context of the problem at hand, and what the tradeoffs for & alternatives to these choices are. In the end, the document is sent to the engineering team for feedback from everyone.

Nobody follows software architecture patterns; the team just solves the problem in the simplest way possible and makes sure that even the junior-most engineer on the team understands everything that’s going on.

“The last thing you want to do is taking one or more architecture pattern, using it as a hammer, looking for nails to use it on.”

Full post with more details on the process here.
10 mins read


#networking #performance

An alternative approach to rate limiting

If you are building web applications at consumer scale, you may find Figma’s approach to building its rate limiter quite helpful. A rate limiter caps how many requests a sender (a user or an IP address) can issue in a specific window of time. Nikrad Mahdi considered existing implementations such as token bucket, fixed window counters, and sliding window log, but none of them effectively addressed two factors that were important for the team:

  • accurately limit excessive use of their web application
  • use as little memory as possible

The token bucket algorithm’s Redis operations aren’t atomic: in a distributed environment, the “read-and-then-write” behavior creates a race condition, which means the rate limiter can at times be too lenient. Redis operations are atomic for a fixed window approach, but it can sometimes let through twice the number of allowed requests per minute. For example, if the rate limit is 5 requests per minute and a user makes 5 requests at 11:00:59, they could make 5 more requests at 11:01:00 because a new counter begins at the start of each minute. And while the precision of the sliding window log approach is useful, it leaves a considerably large memory footprint because it stores a value for every request.
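
To see the fixed window problem in code, here is a naive in-memory sketch in Go (illustrative, not from the post). Because the counter resets at the top of every minute, a sender can squeeze in up to twice the limit across a window boundary, exactly as in the 11:00:59 / 11:01:00 example above:

```go
package main

import (
	"fmt"
	"time"
)

// fixedWindowLimiter allows at most `limit` requests per wall-clock minute.
// The counter resets at the start of each minute, which is what lets a
// sender fire `limit` requests at 11:00:59 and `limit` more at 11:01:00.
type fixedWindowLimiter struct {
	limit  int
	window time.Time // start of the current minute
	count  int
}

func (l *fixedWindowLimiter) allow(now time.Time) bool {
	windowStart := now.Truncate(time.Minute)
	if windowStart.After(l.window) {
		l.window = windowStart
		l.count = 0 // new minute, counter starts over
	}
	if l.count >= l.limit {
		return false
	}
	l.count++
	return true
}

func main() {
	l := &fixedWindowLimiter{limit: 5}
	t := time.Date(2019, 10, 1, 11, 0, 59, 0, time.UTC)
	for i := 0; i < 5; i++ {
		fmt.Println("11:00:59 ->", l.allow(t)) // all 5 allowed
	}
	t = t.Add(time.Second) // 11:01:00, a new fixed window begins
	for i := 0; i < 5; i++ {
		fmt.Println("11:01:00 ->", l.allow(t)) // 5 more allowed: 10 requests in 1 second
	}
}
```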

As none of these met the team’s requirements, Nikrad implemented an alternative approach by combining fixed window counters and the sliding window log. Requests from each sender are counted using multiple fixed time windows, each 1/60th the size of the rate limit’s time window. To reduce the memory footprint, the counters are stored in a Redis hash, which offers efficient storage when it has fewer than 100 keys. When each request increments a counter in the hash, it also sets the hash to expire an hour later. With a rate limit of 500 requests per day per user on a website with 10,000 active users, they would at most need to store ~600,000 values in Redis (one counter per sub-window per user). That comes out to a memory footprint of ~2.4 MB.
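
Below is a rough in-memory Go sketch of the combined idea; Figma’s actual implementation keeps these counters in a Redis hash, and the names, sub-window size, and limits here are purely illustrative. Each sender’s requests land in fixed sub-windows 1/60th the size of the rate limit’s window, and a request is allowed only while the sum of counters inside the sliding window stays under the limit:

```go
package main

import (
	"fmt"
	"time"
)

// slidingWindowCounter approximates a sliding window by summing counters
// kept in fixed sub-windows that are 1/60th the size of the rate limit's
// window (e.g. 1-minute buckets for a 1-hour limit).
type slidingWindowCounter struct {
	limit   int
	window  time.Duration // full rate-limit window, e.g. time.Hour
	buckets map[int64]int // sub-window start (unix seconds) -> request count
}

func newLimiter(limit int, window time.Duration) *slidingWindowCounter {
	return &slidingWindowCounter{limit: limit, window: window, buckets: map[int64]int{}}
}

func (l *slidingWindowCounter) allow(now time.Time) bool {
	sub := l.window / 60 // sub-window size
	key := now.Truncate(sub).Unix()

	// Sum counts over the sliding window and drop expired sub-windows
	// (in the Redis version, expiring the whole hash handles cleanup).
	cutoff := now.Add(-l.window).Unix()
	total := 0
	for start, c := range l.buckets {
		if start <= cutoff {
			delete(l.buckets, start)
			continue
		}
		total += c
	}
	if total >= l.limit {
		return false
	}
	l.buckets[key]++
	return true
}

func main() {
	// Illustrative: 500 requests per hour for a single sender.
	l := newLimiter(500, time.Hour)
	fmt.Println(l.allow(time.Now())) // true until the sliding-window sum reaches 500
}
```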

Check out the post to understand the practical implications of implementing a rate-limiting algorithm.
8 mins read


P.S. - This is a list of the best tech articles that my team at Appsmith reads through the week. I share it here every Saturday. Would love to hear your feedback on this.
