<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manas Trivedi</title>
    <description>The latest articles on DEV Community by Manas Trivedi (@kettlesteam).</description>
    <link>https://dev.to/kettlesteam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3719879%2F6a03efec-42c7-4498-b6bb-c45980bb6016.webp</url>
      <title>DEV Community: Manas Trivedi</title>
      <link>https://dev.to/kettlesteam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kettlesteam"/>
    <language>en</language>
    <item>
      <title>Blueis-h 4: Intrusive thoughts (and tables)</title>
      <dc:creator>Manas Trivedi</dc:creator>
      <pubDate>Sun, 29 Mar 2026 11:50:27 +0000</pubDate>
      <link>https://dev.to/kettlesteam/blueis-h-4-intrusive-thoughts-and-tables-4h8i</link>
      <guid>https://dev.to/kettlesteam/blueis-h-4-intrusive-thoughts-and-tables-4h8i</guid>
      <description>&lt;p&gt;Well this entry is late (again 😭) but at least I have a reason this time and the delay wasn't THAT long. So firstly, I got rank 1 in my college's competitive coding competition after our team solved all problems with 10 minutes to spare so YAY!&lt;/p&gt;

&lt;p&gt;Now, coming to the task at hand: this section of the project was truly brutal, to the point where I wanted to give up and simply keep using the STL's unordered map.&lt;/p&gt;

&lt;p&gt;The concept of an intrusive data structure is simple to internalize: a way to give structure to your data without embedding the data within the structure itself.&lt;/p&gt;
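&lt;p&gt;To make that concrete, here's a minimal sketch of the idea (the names HNode, Entry, and the helper entry_of() are mine, not the project's): the node struct knows nothing about the payload, and the payload struct embeds the node.&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Minimal sketch (HNode/Entry are illustrative names, not the project's).
// The node carries no payload; the data struct embeds the node instead,
// so the table only ever links nodes together.
struct HNode {
    HNode *next = nullptr;
    uint64_t hcode = 0;      // cached hash of the key
};

struct Entry {
    HNode node;              // the intrusive hook
    uint64_t key = 0;
    uint64_t value = 0;
};

// The container_of() idea: recover the enclosing Entry from its node.
static Entry *entry_of(HNode *node) {
    return reinterpret_cast<Entry *>(
        reinterpret_cast<char *>(node) - offsetof(Entry, node));
}
```

&lt;p&gt;The pointer subtraction is exactly the container_of() trick: since we know where the node sits inside an Entry, a node pointer is enough to recover the whole Entry.&lt;/p&gt;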

&lt;p&gt;The implementation, on the other hand, is... something else. The function definitions were the simplest part, but then came the structs. Apparently there's a reason incremental resizing isn't widely used unless necessary: I had to keep track of two tables simultaneously and juggle between them. While this sounds obvious (because, duh, some entries will be left in the older table until rehashing is complete, even though a newer and bigger one exists), in practice it meant implementing two levels of functions: one for the nodes themselves and one for the table.&lt;/p&gt;
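&lt;p&gt;A rough sketch of what that two-table juggling looks like (all names here are my own illustration, not the actual implementation): lookups consult both tables, and each operation migrates a bounded number of nodes from the old table to the new one, so no single request pays for the whole resize.&lt;/p&gt;

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Hypothetical sketch of incremental rehashing (names are mine).
// `newer` is the live table; `older` still holds unmigrated nodes.
struct Node { Node *next; uint64_t hcode; };

struct Table {
    Node **slots = nullptr;
    size_t mask = 0;              // capacity - 1, capacity a power of two
    size_t size = 0;
};

static void table_init(Table &t, size_t cap) {
    t.slots = (Node **)calloc(cap, sizeof(Node *));
    t.mask = cap - 1;
    t.size = 0;
}

static void table_insert(Table &t, Node *n) {
    size_t i = n->hcode & t.mask;
    n->next = t.slots[i];
    t.slots[i] = n;
    t.size++;
}

static Node *table_lookup(Table &t, uint64_t hcode) {
    if (!t.slots) return nullptr;
    for (Node *n = t.slots[hcode & t.mask]; n; n = n->next)
        if (n->hcode == hcode) return n;  // real code would compare keys too
    return nullptr;
}

struct HMap { Table newer, older; size_t migrate_pos = 0; };

// Move at most k nodes from `older` to `newer` per call, so the cost of a
// resize is spread over many requests instead of hitting one of them.
static void hm_migrate_some(HMap &m, size_t k) {
    while (k > 0 && m.older.size > 0) {
        Node *&slot = m.older.slots[m.migrate_pos & m.older.mask];
        if (!slot) { m.migrate_pos++; continue; }
        Node *n = slot;
        slot = n->next;           // unlink from the old table
        m.older.size--;
        table_insert(m.newer, n); // relink into the new one
        k--;
    }
}

// Lookups must consult both tables until migration finishes.
static Node *hm_lookup(HMap &m, uint64_t hcode) {
    Node *n = table_lookup(m.newer, hcode);
    return n ? n : table_lookup(m.older, hcode);
}
```

&lt;p&gt;Those are the "two levels of functions" in miniature: the table-level helpers only ever see nodes, and the map-level helpers juggle the two tables.&lt;/p&gt;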

&lt;p&gt;Choosing the hash function was the simplest part, but reading about it was fun. Apparently the choice is not much of a deal breaker as long as we use a non-cryptographic hash, since we want better distribution and faster speeds and do not care much about the hash's security.&lt;/p&gt;

&lt;p&gt;I went with MurmurHash. Most projects I've seen use FNV, and while that would've been fine for this project too, I just wanted to use Murmur for that better distribution (I also broke my project trying to write a test script that compared chain lengths between FNV and Murmur, so there's that).&lt;/p&gt;
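&lt;p&gt;For reference, this is roughly what the FNV alternative looks like; it's tiny, which is a big part of why hobby projects reach for it. The constants are the standard 64-bit FNV-1a offset basis and prime:&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// 64-bit FNV-1a: xor a byte in, multiply by the FNV prime, repeat.
static uint64_t fnv1a(const uint8_t *data, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;      // FNV-1a offset basis
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;               // FNV prime
    }
    return h;
}
```

&lt;p&gt;Non-cryptographic, a handful of lines, and good enough distribution for most chaining tables, which is exactly the "not a deal breaker" point above.&lt;/p&gt;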

&lt;h3&gt;
  
  
  Conclusion(?)
&lt;/h3&gt;

&lt;p&gt;This felt more like a rant than a technical blog, tbh, but I was genuinely able to learn more from this addition alone than from all the past ones combined. Intrusive data structures; the fact that they can add structure and yet never touch the data itself (thanks to the nifty container_of() definition, which tbh took me a while to even make heads or tails of); handling equality checks in two places by comparing hcodes in the nodes and actual data in the server: all of it was a bit too much to take in at once, and I believe I'll keep coming back every once in a while to review it, but it was a lot of fun.&lt;/p&gt;

&lt;p&gt;Next, I think I'll start working on sorted sets, since I heard they're pretty useful for getting "hot searches" and similar features, sooo let's see how long that takes.&lt;/p&gt;

&lt;p&gt;Byee!!&lt;/p&gt;

</description>
      <category>hashmap</category>
      <category>hashtable</category>
      <category>redis</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Blueis-h 3: Me v/s the need to over-engineer</title>
      <dc:creator>Manas Trivedi</dc:creator>
      <pubDate>Sun, 15 Mar 2026 12:21:18 +0000</pubDate>
      <link>https://dev.to/kettlesteam/blueis-h-3-me-vs-the-need-to-over-engineer-5gj6</link>
      <guid>https://dev.to/kettlesteam/blueis-h-3-me-vs-the-need-to-over-engineer-5gj6</guid>
      <description>&lt;p&gt;Who could've guessed this update would be even more delayed 😭. Not a good enough excuse this time, I just took time to actually read into my next set of topics instead of starting directly with implementing.&lt;br&gt;
..... But let's backtrack, my last post mentioned that the next thing on our agenda was implementing our own hash tables, and like the reason for it is pretty simple actually.&lt;br&gt;
Our architecture is single threaded event-loop based concurrency, so, if we use the &lt;code&gt;unordered_map&lt;/code&gt; a few issues arise, the most notable one being tail latency.&lt;br&gt;
Our table will be resized when our &lt;code&gt;load_factor&lt;/code&gt; is exceeded, this resize when triggered, is typically optimized by stl for throughput and is performed in one go, what this does is cause a latency spike for that one request but overall the average is minimized. We do not want this.&lt;br&gt;
If our system encounters a latency spike, the entire pipeline is blocked for that duration which is a serious concern and the solution is to spread out the the resize over multiple incoming requests and while this &lt;em&gt;would&lt;/em&gt; increase the time taken by those requests it would avoid the spike and instead spread it out.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Over-engineering hits
&lt;/h2&gt;

&lt;p&gt;The above is not at all a trivial implementation, but had it been worked on since my last post, it would've long been done.&lt;br&gt;
Sadly, however, that was unacceptable to me.&lt;br&gt;
A friend pointed out a flaw in the buffers I'm maintaining for each socket: for those I was using STL &lt;code&gt;uint8_t&lt;/code&gt; vectors, and while that seems fine in principle, it was bad design.&lt;br&gt;
All consumption in our vector occurs from the front, and this causes the rest of the data to be shifted forward, costing us &lt;code&gt;O(N)&lt;/code&gt; time, which must be mitigated.&lt;br&gt;
The way I decided to go about this is pointer arithmetic.&lt;br&gt;
We could simply have a struct like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;Buffer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;uint8_t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;buffer_begin&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;uint8_t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;buffer_end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;uint8_t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;data_begin&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;uint8_t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;data_end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For consumption we simply move the &lt;code&gt;data_begin&lt;/code&gt; pointer forward, and all it costs is &lt;code&gt;O(1)&lt;/code&gt;; for appending we similarly &lt;code&gt;memcpy()&lt;/code&gt; the new bytes at &lt;code&gt;data_end&lt;/code&gt; and then advance that pointer.&lt;/p&gt;
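&lt;p&gt;The two O(1) fast paths could look something like this (helper names are mine; the compaction/growth path is left out here and handled below):&lt;/p&gt;

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

struct Buffer {
    uint8_t *buffer_begin, *buffer_end;  // the allocation
    uint8_t *data_begin, *data_end;      // the live bytes inside it
};

// Consume n bytes from the front: just advance data_begin, no shifting.
static void buf_consume(Buffer &b, size_t n) {
    b.data_begin += n;
    if (b.data_begin == b.data_end)      // empty: reset to reclaim front space
        b.data_begin = b.data_end = b.buffer_begin;
}

// Append when there is room at the back: memcpy, then advance data_end.
static void buf_append(Buffer &b, const uint8_t *src, size_t n) {
    assert((size_t)(b.buffer_end - b.data_end) >= n);  // growth path omitted
    memcpy(b.data_end, src, n);
    b.data_end += n;
}
```

&lt;p&gt;Resetting the pointers when the buffer empties is a cheap extra: it reclaims the consumed front space for free in the common "request fully processed" case.&lt;/p&gt;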

&lt;h3&gt;
  
  
  Issue in this
&lt;/h3&gt;

&lt;p&gt;This was not so much an issue with the approach as with my understanding of pointers and their arithmetic, but it still took me a full day to actually understand and implement the resizing functionality of this approach.&lt;/p&gt;

&lt;p&gt;We have 2 scenarios:&lt;/p&gt;

&lt;h4&gt;
  
  
  A. enough unused space in buffer to compact it and append
&lt;/h4&gt;

&lt;p&gt;In this scenario we can simply perform a &lt;code&gt;memmove()&lt;/code&gt; and shift the data from &lt;code&gt;data_begin&lt;/code&gt; down to &lt;code&gt;buffer_begin&lt;/code&gt;.&lt;br&gt;
We can then adjust the pointers accordingly and proceed with copying in the new data.&lt;/p&gt;

&lt;h4&gt;
  
  
  B. Not enough space in buffer
&lt;/h4&gt;

&lt;p&gt;This is the scenario where we must allocate more space, copy over the current buffer's contents first, and then the new data. I decided to be traditional and double the current buffer's size until it was large enough to accommodate the new data.&lt;br&gt;
Here we make use of &lt;code&gt;malloc()&lt;/code&gt; followed by a &lt;code&gt;memcpy()&lt;/code&gt;, since &lt;code&gt;calloc()&lt;/code&gt; is unnecessary here: we do not need zero-initialization and can simply overwrite the garbage values (this is different from what we'll do in the hash table, but that will come later (sooner rather than later, I hope 😭)).&lt;/p&gt;




&lt;p&gt;I guess this concludes this post. I'll challenge myself to finish up hash tables asap and write the next blog soon.&lt;/p&gt;

&lt;p&gt;Bye! 👋&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>redis</category>
      <category>pubsub</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Blueis-h 2: Making Sense of the Byte Soup</title>
      <dc:creator>Manas Trivedi</dc:creator>
      <pubDate>Mon, 23 Feb 2026 19:56:06 +0000</pubDate>
      <link>https://dev.to/kettlesteam/blueis-h-2-making-sense-of-the-byte-soup-gc8</link>
      <guid>https://dev.to/kettlesteam/blueis-h-2-making-sense-of-the-byte-soup-gc8</guid>
      <description>&lt;p&gt;Been a while since the last post, but who could've known that knowing what an event loop is and finally writing to actually implement it are two different beasts (and one is so much more scary 😭).&lt;/p&gt;

&lt;p&gt;Once I was done with it, though, the time came to decide on the protocols I'd be using, and I went the simplest route available, given that this is my first system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Framing Protocol
&lt;/h2&gt;

&lt;p&gt;For the framing protocol, going the tried-and-tested route of a fixed-length byte prefix followed by a variable-length string seemed easier to implement and, more importantly, to understand; it also fit well with my choice for the next protocol and my project's needs.&lt;/p&gt;
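&lt;p&gt;As a sketch, the framing check boils down to "do we have the full prefix, and do we have that many payload bytes after it". I'm assuming a 4-byte little-endian prefix here, since the post doesn't pin down the width:&lt;/p&gt;

```cpp
#include <cstdint>
#include <cstring>

// Returns the payload length if `buf` holds a complete frame, else -1.
// Assumes a 4-byte little-endian length prefix (my assumption, not the
// post's stated wire format).
static long frame_ready(const uint8_t *buf, size_t have) {
    if (have < 4) return -1;             // prefix not fully received yet
    uint32_t len = 0;
    memcpy(&len, buf, 4);                // memcpy avoids unaligned loads
    if (have < 4 + (size_t)len) return -1;
    return (long)len;
}
```

&lt;p&gt;The nice property of this scheme is that a partial read never corrupts anything: we just leave the bytes in the buffer and try again after the next read.&lt;/p&gt;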

&lt;h2&gt;
  
  
  Command Protocol
&lt;/h2&gt;

&lt;p&gt;This is the protocol used to actually parse commands once we have enough bytes in the buffer to constitute a full request. For this I decided to use a fixed-length prefix, &lt;code&gt;nstr&lt;/code&gt;, for the number of strings in the command.&lt;/p&gt;

&lt;p&gt;This is followed by another fixed-length prefix for the length of the first string, then the string itself; this repeats for however many strings constitute the command.&lt;/p&gt;
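&lt;p&gt;Sketching that parse (again assuming 4-byte little-endian prefixes for both &lt;code&gt;nstr&lt;/code&gt; and each string length, since the exact widths aren't stated here):&lt;/p&gt;

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Parse [nstr][len1][str1][len2][str2]... into `out`.
// Returns false if the buffer doesn't hold a complete command.
static bool parse_cmd(const uint8_t *buf, size_t len,
                      std::vector<std::string> &out) {
    if (len < 4) return false;
    uint32_t nstr = 0;
    memcpy(&nstr, buf, 4);
    size_t pos = 4;
    for (uint32_t i = 0; i < nstr; i++) {
        uint32_t slen = 0;
        if (len - pos < 4) return false;       // length prefix truncated
        memcpy(&slen, buf + pos, 4);
        pos += 4;
        if (len - pos < slen) return false;    // string body truncated
        out.emplace_back((const char *)buf + pos, slen);
        pos += slen;
    }
    return true;
}
```

&lt;p&gt;Returning false on a truncated buffer plays well with the framing layer: we just wait for more bytes and retry.&lt;/p&gt;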




&lt;p&gt;This works in theory but feels kind of trivial, so I can't decide if this is the route I should commit to, but unless I find something more optimal I'll stick with it.&lt;/p&gt;

&lt;p&gt;Another bit of progress: I just slapped an unordered_map onto my server to finally get the KV store working. This was less a final feature and more my own personal wish to see the project in action, so I believe my next step is to construct my own hash table.&lt;/p&gt;

&lt;p&gt;This update took a while because Fate didn't like that I was finally being a bit productive and decided to make me sick 😭. Hopefully the next update comes much sooner.&lt;/p&gt;

&lt;p&gt;Thanks for the read!&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>redis</category>
      <category>pubsub</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Blueis-h 1: Thread Pools and Event Loops</title>
      <dc:creator>Manas Trivedi</dc:creator>
      <pubDate>Tue, 10 Feb 2026 05:40:13 +0000</pubDate>
      <link>https://dev.to/kettlesteam/blueis-h-1-thread-pools-and-event-loops-1ohl</link>
      <guid>https://dev.to/kettlesteam/blueis-h-1-thread-pools-and-event-loops-1ohl</guid>
      <description>&lt;p&gt;Welcome to the first post of this series which also just happens to the first design tradeoff I encountered in this project: whether to achieve concurrency by the use of threads, or use the event loop.&lt;br&gt;
Threads sounded appealing at first mainly because I'm familiar with them because of mainstream usage of the term but they came with heavy drawbacks such as, race conditions and then having to manage those conflicts through the usage of lock systems. Another significant drawback is that if we rely solely on threads for achieving concurrency we'll only ever be able to handle a linearly increasing number of clients that scales with the number of available threads.&lt;br&gt;
Event loop was a new topic to me but it offered a single threaded way to tackle these issues. The event loop works with the realization that the main cause for delay in a serialized workflow is the active communication channel waiting for I/O from the socket buffer, once we realize that, we can tackle this by using something call &lt;strong&gt;non-blocking I/O&lt;/strong&gt;.&lt;br&gt;
In non-blocking I/O we &lt;em&gt;poll&lt;/em&gt; the socket's file descriptor at every iteration on whether it is ready to read/write or drop the connection.&lt;br&gt;
This ensures we never on a socket if it is not ready for I/O allowing us to handle multiple connections at once, at every turn when a read/write is performed we can then check if the data in the socket's (program maintained) buffer is enough to form a full request or response.&lt;/p&gt;
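&lt;p&gt;The core of that loop is just the readiness check; here's a stripped-down sketch of it using POSIX poll() (accepting connections and the per-socket buffers are omitted, and the helper name is mine):&lt;/p&gt;

```cpp
#include <poll.h>
#include <unistd.h>
#include <vector>

// Ask poll() which of the given fds have data to read, and only return
// those, so the caller never blocks on an unready socket.
static std::vector<int> ready_to_read(const std::vector<int> &fds) {
    std::vector<pollfd> pfds;
    for (int fd : fds)
        pfds.push_back({fd, POLLIN, 0});
    std::vector<int> ready;
    if (poll(pfds.data(), pfds.size(), 0) > 0)   // timeout 0: don't block
        for (const pollfd &p : pfds)
            if (p.revents & POLLIN)
                ready.push_back(p.fd);
    return ready;
}
```

&lt;p&gt;The real loop would also watch for POLLOUT on sockets with pending writes and for errors, but the shape is the same: poll, then service only the fds that reported readiness.&lt;/p&gt;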

</description>
      <category>systemdesign</category>
      <category>redis</category>
      <category>pubsub</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Blueis-h 0: The Journey Begins</title>
      <dc:creator>Manas Trivedi</dc:creator>
      <pubDate>Mon, 19 Jan 2026 15:14:26 +0000</pubDate>
      <link>https://dev.to/kettlesteam/blueis-h-0-the-journey-begins-2be8</link>
      <guid>https://dev.to/kettlesteam/blueis-h-0-the-journey-begins-2be8</guid>
      <description>&lt;p&gt;This is my first post here and it's a precursor to a series of blog posts which I will henceforth use to document my descent into the crazy but enticing world of systems design.&lt;/p&gt;

&lt;p&gt;I'll be making my own version of Redis, but before I can even think about embarking on that journey I need to learn so much: everything from how to implement concurrency to what kind of "magic" Redis even performs under the hood to do what it does.&lt;/p&gt;

&lt;p&gt;This series is not about speed. It’s about understanding slowly, painfully, and properly.&lt;/p&gt;

&lt;p&gt;I've not even properly begun, and yet I can see looming in the distance: wrong assumptions, code rewrites, and so much staring at the screen that it becomes a blur.&lt;/p&gt;

&lt;p&gt;So please, wish me luck.&lt;/p&gt;

</description>
      <category>0</category>
      <category>systemdesign</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
