<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shivam Chauhan</title>
    <description>The latest articles on DEV Community by Shivam Chauhan (@shivam_chauhan).</description>
    <link>https://dev.to/shivam_chauhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1290547%2Fff723a09-2c2e-4cb5-ae6a-fc2f88ee6086.jpeg</url>
      <title>DEV Community: Shivam Chauhan</title>
      <link>https://dev.to/shivam_chauhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shivam_chauhan"/>
    <language>en</language>
    <item>
      <title>WTF is Low-Level Design?</title>
      <dc:creator>Shivam Chauhan</dc:creator>
      <pubDate>Thu, 17 Oct 2024 18:57:19 +0000</pubDate>
      <link>https://dev.to/shivam_chauhan/wtf-is-low-level-design-58b3</link>
      <guid>https://dev.to/shivam_chauhan/wtf-is-low-level-design-58b3</guid>
      <description>&lt;p&gt;Imagine trying to build a house without a detailed floor plan. You might know you need walls, a roof, and windows, but without specifics, you'll end up with a disaster. Similarly, Low-Level Design is the detailed blueprint of a software system. It delves into the specifics of class diagrams, object interactions, and the minute details that High-Level Design (HLD) overlooks.&lt;br&gt;
LLD is where we decide how to implement the components defined in the HLD. It's the step where abstract ideas become concrete plans, ready to be transformed into code.&lt;/p&gt;
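&lt;p&gt;As a hypothetical illustration (the names below are made up, not from any real project): an HLD might simply say "we need a notification service", while the LLD pins down the exact classes, methods, and interactions:&lt;/p&gt;

```ruby
# Hypothetical LLD sketch: the HLD box "NotificationService" becomes
# concrete classes with defined responsibilities and interactions.
class EmailNotifier
  def deliver(user, text)
    "email to #{user}: #{text}"
  end
end

class NotificationService
  def initialize(notifier)
    @notifier = notifier # injected, so the channel is swappable (SMS, push, ...)
  end

  def notify(user, text)
    @notifier.deliver(user, text)
  end
end

service = NotificationService.new(EmailNotifier.new)
puts service.notify("riya", "build passed")
```

&lt;p&gt;Decisions like "inject the notifier so channels stay swappable" are exactly the kind of detail LLD captures before any production code is written.&lt;/p&gt;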

&lt;p&gt;&lt;strong&gt;Why is LLD Important?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's in the name: going LOW means getting into the details&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;High-Level Design gives you the 10,000-foot view of the system architecture. But it's the Low-Level Design that maps out the journey from point A to point B, ensuring every component works together seamlessly.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Enhancing Code Quality and Maintainability&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A well-thought-out LLD leads to cleaner code that's easier to understand, maintain, and extend. It helps in identifying potential bottlenecks and design flaws before they make their way into the codebase.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Facilitating Effective Communication&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;LLD serves as a common language among team members, bridging gaps between developers, testers, and stakeholders. It provides a clear picture of the system's workings, making collaboration smoother.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Do Companies Expect LLD Skills?&lt;/strong&gt;&lt;br&gt;
In the competitive tech landscape, companies aren't just looking for code monkeys who can churn out lines of code. They seek engineers who can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Design Robust Systems:&lt;/strong&gt; Craft solutions that are scalable, efficient, and resilient.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Think Critically:&lt;/strong&gt; Anticipate challenges and design systems that can adapt to changing requirements.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Collaborate Effectively:&lt;/strong&gt; Communicate ideas clearly and work seamlessly in a team setting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Strong LLD skills showcase your ability to think deeply about problems and design solutions that stand the test of time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLD in Real Life&lt;/strong&gt;&lt;br&gt;
We've all been there: diving headfirst into coding, only to hit a wall when the pieces don't fit together. I remember a project where we skipped the detailed design phase in our rush to meet deadlines. The result? A tangled mess of code that was nearly impossible to debug or extend.&lt;br&gt;
It's like trying to assemble IKEA furniture without the manual (and trust me, that's a challenge I'd rather avoid). LLD is that manual: it guides us through the assembly process, ensuring each part fits perfectly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Mastering LLD&lt;/strong&gt;&lt;br&gt;
Despite its importance, many developers find LLD daunting. It requires:&lt;br&gt;
&lt;strong&gt;Deep Understanding:&lt;/strong&gt; Grasping not just the requirements but the best ways to implement them.&lt;br&gt;
&lt;strong&gt;Attention to Detail:&lt;/strong&gt; Considering all possible interactions and edge cases.&lt;br&gt;
&lt;strong&gt;Experience:&lt;/strong&gt; Knowing design patterns and principles that lead to effective solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Coudo AI: Your Ally in Mastering LLD&lt;/strong&gt;&lt;br&gt;
What if there was a way to simplify this complex process? A tool that could guide you through the intricacies of LLD, offering insights and suggestions tailored to your project?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introducing Coudo AI&lt;/strong&gt;&lt;br&gt;
Coudo AI (&lt;a href="https://www.coudo.ai"&gt;www.coudo.ai&lt;/a&gt;) is an intelligent assistant designed to help developers like us navigate the challenges of Low-Level Design. It's like having a seasoned mentor by your side, offering guidance and support when you need it most.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Coudo AI Empowers You&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Personalized Design Assistance&lt;br&gt;
Coudo AI analyzes your project requirements and suggests design patterns and structures that fit your specific needs. It helps you create detailed class diagrams and sequence diagrams with ease.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhancing Learning and Growth&lt;br&gt;
As you work, Coudo AI provides explanations and insights, reinforcing your understanding of LLD principles. It's not just about getting the job done—it's about becoming a better engineer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Saving Time and Reducing Errors&lt;br&gt;
By catching potential issues early in the design phase, Coudo AI helps you avoid costly mistakes down the line. It streamlines the design process, so you can focus on what you do best—building amazing software.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;My Experience with Coudo AI&lt;/strong&gt;&lt;br&gt;
I recently used Coudo AI on a project that initially seemed overwhelming. With its guidance, I was able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Map Out Complex Interactions:&lt;/strong&gt; Visualize how different components would interact before writing any code.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose the Right Patterns:&lt;/strong&gt; Implement design patterns that improved scalability and maintainability.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Improve Team Collaboration:&lt;/strong&gt; Share clear and detailed designs with my team, making development smoother.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It transformed a potentially stressful project into an enjoyable and educational experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why You Should Give Coudo AI a Try&lt;/strong&gt;&lt;br&gt;
In an industry that's constantly evolving, tools like Coudo AI offer a competitive edge. They not only help you deliver better software but also accelerate your professional growth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits at a Glance:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Confidence in Your Designs:&lt;/strong&gt; Know that your solutions are sound and well-structured.&lt;br&gt;
&lt;strong&gt;Continuous Learning:&lt;/strong&gt; Stay updated with best practices and modern design principles.&lt;br&gt;
&lt;strong&gt;Efficiency:&lt;/strong&gt; Spend less time wrestling with design challenges and more time coding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping Up&lt;/strong&gt;&lt;br&gt;
Low-Level Design is more than just a step in the development process—it's the foundation that supports robust and effective software solutions. By investing time in mastering LLD, we position ourselves to create systems that are not only functional but exceptional.&lt;br&gt;
And with tools like Coudo AI, the journey becomes less daunting and more rewarding. So why not take the leap? Dive deep into LLD, embrace the details, and watch your projects—and your career—soar to new heights.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Up&lt;/strong&gt;&lt;br&gt;
But how do you master it?&lt;br&gt;
Follow so you don't miss the next post.&lt;/p&gt;

&lt;p&gt;Happy designing and coding!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>systemdesign</category>
      <category>lld</category>
      <category>programming</category>
    </item>
    <item>
      <title>Breaking the 300 barrier</title>
      <dc:creator>Shivam Chauhan</dc:creator>
      <pubDate>Wed, 21 Feb 2024 18:35:01 +0000</pubDate>
      <link>https://dev.to/shivam_chauhan/breaking-the-300-barrier-3jbb</link>
      <guid>https://dev.to/shivam_chauhan/breaking-the-300-barrier-3jbb</guid>
      <description>&lt;p&gt;Everything has it’s limit and we Humans are known to &lt;strong&gt;break&lt;/strong&gt; them!!&lt;/p&gt;

&lt;p&gt;The speed of sound, 343.2 m/s, seemed &lt;strong&gt;unbreakable&lt;/strong&gt; to our species until 1947, when the Bell X-1, piloted by U.S. Air Force Captain Chuck Yeager, &lt;strong&gt;&lt;code&gt;broke the sound barrier&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why is this relevant to us? Well, in my current org, our servers were limited to serving 300 requests per minute per server, on a monolithic Rails application fronted by the Puma web server. This limit seemed unbreakable to us, and we lived our days by it.&lt;/p&gt;

&lt;p&gt;Today, I will take you through how we pushed this limit to &lt;strong&gt;2200 RPM&lt;/strong&gt;, and what the journey taught us budding engineers about understanding our servers and listening for their choke points.&lt;/p&gt;

&lt;p&gt;Let’s discuss what happened and how we achieved it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backstory
&lt;/h2&gt;

&lt;p&gt;Recently, we had been experiencing latency issues, caused by a growing user base as more companies adopted our platform to run their procurement efficiently. Our core product is a monolithic Rails application hosted on AWS. We use an AWS Application Load Balancer to spread the load across multiple servers, adding more when needed.&lt;br&gt;
These sudden latency spikes caused our infrastructure to aggressively scale out by adding more servers. This worked, bringing latency back under an acceptable 200ms limit. The catch: this approach is not cost-effective enough to rely on.&lt;br&gt;
Looking at the data from &lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;, we found that a single server topped out at a &lt;strong&gt;300 RPM limit&lt;/strong&gt;, and we assumed that was the most a server could achieve.&lt;/p&gt;
&lt;h3&gt;
  
  
  Then what happened next?
&lt;/h3&gt;

&lt;p&gt;Adding more servers works (up to a limit), but before spinning up more instances, shouldn't we check whether we've already squeezed out the last bit of compute our current machines can offer?&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;t2.2xlarge&lt;/code&gt; machines provided by &lt;strong&gt;AWS&lt;/strong&gt;. These are powerful machines, which made me wonder: "&lt;code&gt;how come these damn machines with 8 vCPUs and 32 GB of RAM can't handle more than 5 requests a second?&lt;/code&gt;" We had to be doing something wrong.&lt;/p&gt;

&lt;p&gt;This curiosity made me delve into the server instance: I SSHed in and installed htop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;htop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, I launched the htop interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;htop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and closely monitored the CPU and memory usage of the instance. Then I copied some curl requests from our dashboard into &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;, made a collection out of them, and ran the collection in parallel with the &lt;a href="https://github.com/postmanlabs/newman" rel="noopener noreferrer"&gt;Newman CLI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's a screenshot of htop while the server was under high load.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25f1e6w49cs9qa5sce0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25f1e6w49cs9qa5sce0d.png" alt="htop command for 1 worker" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you see anything wrong in this image?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Despite all that load on the system, only &lt;strong&gt;one particular vCPU&lt;/strong&gt; (#6) reaches about 40% utilisation, while the rest lie idle at 1-2%. Our huge 32 GB of RAM also seems to be taking the day off, with only 1.42 GB in use.&lt;/p&gt;

&lt;p&gt;This instantly made me realise that we are severely underutilising our resources. &lt;strong&gt;But how to fully utilise them?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;As we use &lt;a href="https://puma.io/" rel="noopener noreferrer"&gt;Puma&lt;/a&gt; as the web server for our Rails application, I quickly opened Puma's config file, which typically resides in &lt;code&gt;config/puma.rb&lt;/code&gt;. The config was set as&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;threads_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"RAILS_MAX_THREADS"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;}.&lt;/span&gt;&lt;span class="nf"&gt;to_i&lt;/span&gt;
&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="n"&gt;threads_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;threads_count&lt;/span&gt;
&lt;span class="c1"&gt;# workers ENV.fetch("WEB_CONCURRENCY") { 2 }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;PS: I have removed the unimportant stuff&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is mostly Puma's default config template, untouched since some initial tweaks at the beginning of the project long ago.&lt;br&gt;
Here we see terms such as &lt;code&gt;threads_count&lt;/code&gt; and &lt;code&gt;workers&lt;/code&gt;. These caught my eye, and after some Googling I found that Puma can utilise multiple cores of a machine by running multiple processes of the Rails application, which Puma calls workers. We can also define the number of threads each worker spawns, which helps when requests block on I/O, such as database calls.&lt;/p&gt;
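&lt;p&gt;A minimal sketch of what that looks like in &lt;code&gt;config/puma.rb&lt;/code&gt; (the numbers here are illustrative, not a recommendation):&lt;/p&gt;

```ruby
# config/puma.rb (sketch): each "worker" is a separate OS process,
# so workers can run on separate CPU cores in parallel.
workers ENV.fetch("WEB_CONCURRENCY") { 5 }.to_i

# Each worker also runs a pool of threads, useful while requests wait on I/O.
max_threads = ENV.fetch("RAILS_MAX_THREADS") { 16 }.to_i
threads 1, max_threads

# Load the app once in the master process, then fork workers (copy-on-write).
preload_app!
```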

&lt;p&gt;With our current config, we spin up 5 threads on a single Puma worker. Each thread handles one request at a time, so a single server can serve at most 5 requests concurrently. In practice it's worse: on MRI, the Global VM Lock lets only one thread execute Ruby code at a time, so for CPU-bound work one worker processes roughly one request at a time. Assuming a 200ms average response time, that gives&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;1/0.200&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; 60 &lt;span class="o"&gt;=&lt;/span&gt; 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, in theory, we can handle 300 requests per minute on a single server which was the assumption we started with.&lt;br&gt;
After this, I decided to play with this configuration and see what we could achieve. &lt;br&gt;
But to go ahead, I needed a way to measure metrics during load testing, so I quickly set up &lt;a href="https://locust.io/" rel="noopener noreferrer"&gt;Locust&lt;/a&gt; on my machine. Locust is an open-source, easy-to-set-up load-testing framework.&lt;/p&gt;

&lt;p&gt;Let's tweak the config&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;RAILS_MIN_THREADS &lt;span class="o"&gt;=&lt;/span&gt; 1
RAILS_MAX_THREADS &lt;span class="o"&gt;=&lt;/span&gt; 16
WEB_CONCURRENCY &lt;span class="o"&gt;=&lt;/span&gt; 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the test, these are the htop results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpz1gn8xfcxjoisytdk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpz1gn8xfcxjoisytdk5.png" alt="htop command for 5 workers" width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;
Kudos! Now all our CPU cores are being utilised, though not fully: they hover around 35% on average.&lt;/p&gt;

&lt;p&gt;A new question arises, &lt;code&gt;What is the limit after which we can say we should stop burning more CPU?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Locust showed a steady 29 RPS, which translates to roughly 1700 RPM.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzce8ajmvk1q3dj1svh73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzce8ajmvk1q3dj1svh73.png" alt="Locust results on 5 workers" width="502" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this little tweak, we went from 300 to 1700 RPM. But what if we add even more workers?&lt;/p&gt;

&lt;p&gt;Now, let's try increasing the worker count to 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50nwfs1iq4x3njp2o278.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50nwfs1iq4x3njp2o278.png" alt="htop command for 7 workers" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here in htop, we can see that our vCPUs are now running close to their maximum.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzwsn1aovnj7fxnx99pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzwsn1aovnj7fxnx99pp.png" alt="Locust results on 7 workers" width="538" height="140"&gt;&lt;/a&gt;&lt;br&gt;
This gave us a steady RPS of around 37, which translates to a whopping &lt;strong&gt;2200 RPM&lt;/strong&gt;.&lt;/p&gt;
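&lt;p&gt;These measurements line up with a back-of-the-envelope model (my own rough estimate, using the 200ms average response time from earlier): on MRI, the GVL means each worker handles roughly one CPU-bound request at a time, so worker count dominates throughput.&lt;/p&gt;

```ruby
# Rough capacity model: treat each Puma worker as serving one CPU-bound
# request at a time; threads mainly help overlap I/O waits.
avg_response_s = 0.200
rpm_per_worker = (1 / avg_response_s) * 60 # 300.0 RPM per worker

puts 5 * rpm_per_worker # 5 workers: 1500.0 (we measured ~1700 RPM)
puts 7 * rpm_per_worker # 7 workers: 2100.0 (we measured ~2200 RPM)
```

&lt;p&gt;The measured numbers sitting slightly above this CPU-only estimate makes sense: the threads inside each worker keep serving requests while others wait on the database.&lt;/p&gt;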

&lt;p&gt;I ran many other tests with different worker counts and found 5 to be a suitable number for our use case, though the best value can vary drastically for your application.&lt;br&gt;
Here's the Puma config that works well for GitLab: &lt;a href="https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/puma.rb.example?ref_type=heads" rel="noopener noreferrer"&gt;https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/puma.rb.example?ref_type=heads&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping it up
&lt;/h3&gt;

&lt;p&gt;This was a great learning experience for me: diving deep into the system helped me understand how our code actually runs and serves our users.&lt;/p&gt;

&lt;p&gt;Happy coding and learning!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visit my website at &lt;a href="https://shivam.fyi/" rel="noopener noreferrer"&gt;shivam.fyi&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>rails</category>
      <category>performance</category>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
