<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rahul</title>
    <description>The latest articles on DEV Community by Rahul (@rahul_ramfort).</description>
    <link>https://dev.to/rahul_ramfort</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F264515%2F1f682ccd-bb79-4468-b747-063747d6289e.jpg</url>
      <title>DEV Community: Rahul</title>
      <link>https://dev.to/rahul_ramfort</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rahul_ramfort"/>
    <language>en</language>
    <item>
      <title>Never assume that patch updates are always non-breaking!</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Wed, 14 Apr 2021 13:45:48 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/never-assume-that-patch-updates-are-always-non-breaking-nkb</link>
      <guid>https://dev.to/rahul_ramfort/never-assume-that-patch-updates-are-always-non-breaking-nkb</guid>
      <description>&lt;p&gt;We learnt it the hard way. 👇&lt;/p&gt;

&lt;p&gt;Last Friday afternoon, we bumped the &lt;code&gt;Rails&lt;/code&gt; version from 5.2.3 to 5.2.5. We had to, because one of Rails' dependent gems had been yanked very recently and Rails quickly released a patch update to address the issue. &lt;/p&gt;

&lt;p&gt;We meticulously went through the changelog because we didn't want this to break our system, especially since it was a Friday afternoon. (We have a history of Friday deployments causing outages :D )&lt;/p&gt;

&lt;p&gt;The changelog said: good to go.&lt;/p&gt;

&lt;p&gt;We went ahead and prepared for the release. &lt;/p&gt;

&lt;p&gt;Things looked alright in our staging (test) environment, so we deployed to prod. Our production deployment pipeline failed. (Eyebrows were raised at this moment.) &lt;/p&gt;

&lt;p&gt;Until that point, deployments had been going through without any hassle because Docker had been using a cached version of the yanked gem. Now that there was a failure, the cache was gone and we could no longer push any further deployments without fixing this. 🤦‍♂️&lt;/p&gt;

&lt;h4&gt;
  
  
  So how did it pass the sanity check on our staging?
&lt;/h4&gt;

&lt;p&gt;Funnily, the release also added a &lt;code&gt;log&lt;/code&gt; line to one of our background processes, and I just checked that the latest code was there on the new pod. What I didn't notice was that the application pods were crashing. The sanity check had effectively been done on the previous release 🤨&lt;/p&gt;

&lt;p&gt;I made a small change to the buggy release, pushed it to staging, and we finally caught the issue there. &lt;/p&gt;

&lt;p&gt;The application pods weren't coming up because their health checks were failing. (We did not have any health checks for our background job pods.)&lt;/p&gt;

&lt;h4&gt;
  
  
  So what's the big deal?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;No further deployments could be pushed to production.&lt;/li&gt;
&lt;li&gt;Our staging was down.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Shoot! Almost everyone was blocked in one way or the other. &lt;/p&gt;

&lt;p&gt;At first, I thought this was an infra issue. But soon I realised the health check request wasn't even going through. The API was broken. 🤯&lt;/p&gt;

&lt;p&gt;We were also using a gem called &lt;code&gt;grape&lt;/code&gt; for APIs, and the health checks went through a Grape API. &lt;br&gt;
Yes, Grape broke! &lt;/p&gt;

&lt;p&gt;Wait, a patch update of Rails that had almost nothing in the changelog broke Grape? YES!!!&lt;/p&gt;

&lt;h4&gt;
  
  
  So who's the culprit?
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;Rack&lt;/code&gt; - it was bumped from 2.0.7 to 2.2.3. (We missed this because a lot of dependent gems got updated at the same time.)&lt;/p&gt;

&lt;p&gt;Rack is the middleware that forwards requests to either the Grape or the Rails API. The response it passes along had changed (god knows why) and Grape wasn't yet ready for this. The cascading effect was that all the Grape APIs were failing, including the health checks, and our system was down.&lt;/p&gt;

&lt;p&gt;We now had no option but to update Grape to the latest version and hope that it would fix the issue!&lt;br&gt;
Thankfully, it did.&lt;/p&gt;

&lt;h4&gt;
  
  
  What other option did we have?
&lt;/h4&gt;

&lt;p&gt;Had it not fixed the issue, we would have been forced to move all the APIs out of Grape into the Rails API. &lt;br&gt;
Just the thought of this made me claustrophobic, because it would not only have ruined my Friday night but also consumed my weekend! &lt;/p&gt;

&lt;p&gt;Lucky escape indeed!&lt;/p&gt;

&lt;p&gt;Though it was pretty scary when it happened, I would take this learning any day. &lt;/p&gt;

&lt;p&gt;Lessons learnt ✅ fortunately without any major outage. 🤞&lt;/p&gt;

&lt;p&gt;PS: This post was originally tweeted as a thread &lt;a href="https://twitter.com/rahul_ramfort/status/1381598665467293699?s=20"&gt;here&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
    </item>
    <item>
      <title>Understanding Database Connection Pool</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Wed, 31 Mar 2021 13:31:03 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/understanding-database-connection-pool-5b9j</link>
      <guid>https://dev.to/rahul_ramfort/understanding-database-connection-pool-5b9j</guid>
      <description>&lt;p&gt;Recently we faced an issue on our production where the CPU of our Postgres instance was hitting 99%. It was struggling to serve requests and as a result, our APIs were failing.&lt;/p&gt;

&lt;p&gt;To start with, the issue happened in one of our Golang based microservice. This is a very thin &lt;em&gt;legacy&lt;/em&gt; microservice that serves two lightweight APIs, so this service was never touched often and nobody cared.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdatabase%2F3_connection_pool%2Fcpu100.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdatabase%2F3_connection_pool%2Fcpu100.png%3Fraw%3Dtrue" alt="Alt text of image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1674 active connections was ridiculously high for this service.&lt;/p&gt;

&lt;p&gt;Looking at this image, it was obvious that the number of active database connections was constantly increasing, and a quick look at the code told us we were not making use of a connection pool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database connection is costly
&lt;/h3&gt;

&lt;p&gt;When the application needs to query something, it creates a new connection to the remote database, uses it to run the query, and then discards the connection.&lt;/p&gt;

&lt;p&gt;Creating a new connection is costly because of everything it involves: the application has to read the database connection string, make the TCP/IP call to the DB instance, the database then has to authenticate/authorize the request, and only after that is the new connection established. The application can now make the actual query over the new connection.&lt;/p&gt;

&lt;p&gt;Consider this happening every time the application needs a DB connection: time and resources are wasted on each request, which is why creating a new connection is termed &lt;code&gt;costly&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And this is the problem that the connection pool solves.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is a connection pool?
&lt;/h3&gt;

&lt;p&gt;As the name indicates, a connection pool creates a set of connections and caches them. These cached connections are handed to the application on demand and used to perform database operations. Once the application is done with a connection, it returns the connection to the pool so it can be reused, thereby eliminating the creation of new connections. Ultimately, this improves the performance of your application.&lt;/p&gt;

&lt;p&gt;There are three connection pool variables that the Go &lt;code&gt;database/sql&lt;/code&gt; package supports.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MaxOpenConns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This specifies the maximum number of open connections that can be made to the database. In our case, since it was not defined, the default of unlimited connections applied. This is the reason the DB connections were ever-increasing.&lt;/p&gt;

&lt;p&gt;Consider that &lt;code&gt;MaxOpenConns&lt;/code&gt; is set to &lt;code&gt;5&lt;/code&gt;; if all five connections are in use and the application needs another one, it has to wait until one of those 5 connections is returned to the pool. &lt;/p&gt;

&lt;p&gt;You have to estimate two things before setting this value: one, the number of connections your DB can handle; two, the number of parallel workers you expect your application to run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;We have set the `MaxOpenConns` at 30 for our instances.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;MaxIdleConns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This specifies the maximum number of connections that can sit idle in the pool (open but not currently used by the application). By default, the number is 2. &lt;/p&gt;

&lt;p&gt;Consider that &lt;code&gt;MaxOpenConns&lt;/code&gt; is set to 30 and &lt;code&gt;MaxIdleConns&lt;/code&gt; is set to 10. If 30 connections have been opened but the application is currently using only 12 of them, then 18 connections are idle. Since only 10 idle connections are allowed, the remaining 8 will be closed and returned to the database.&lt;/p&gt;

&lt;p&gt;If your service gets a sudden increase in traffic every now and then, consider a higher value for &lt;code&gt;MaxIdleConns&lt;/code&gt; so that a spike doesn't force a lot of new connections to be created. So, based on your load pattern, you can set the number of idle connections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;We have set the `MaxIdleConns` at 20 for our instances.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;ConnMaxLifetime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This specifies how long a connection in the pool remains valid. Once expired, a connection is closed and has to be re-established. By default, connections are valid forever. Unless you have your database behind a load balancer and swap databases, this value can be set quite high.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;We have set the `ConnMaxLifetime` at 30 minutes for our instances.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
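&lt;p&gt;Putting the three settings together, here is a minimal, self-contained Go sketch. The &lt;code&gt;stubDriver&lt;/code&gt; is an illustrative stand-in for a real driver such as &lt;code&gt;lib/pq&lt;/code&gt;, so the snippet runs without a live database; the limits are the values quoted above.&lt;/p&gt;

```go
package main

import (
	"database/sql"
	"database/sql/driver"
	"errors"
	"fmt"
	"sync"
	"time"
)

// stubDriver stands in for a real driver (e.g. github.com/lib/pq) so this
// sketch runs without a live database; only the pool settings matter here.
type stubDriver struct{}
type stubConn struct{}

func (stubDriver) Open(string) (driver.Conn, error)  { return stubConn{}, nil }
func (stubConn) Prepare(string) (driver.Stmt, error) { return nil, errors.New("stub") }
func (stubConn) Close() error                        { return nil }
func (stubConn) Begin() (driver.Tx, error)           { return nil, errors.New("stub") }

var registerOnce sync.Once

// newPool opens a DB handle and applies the pool limits from the post.
func newPool() (*sql.DB, error) {
	registerOnce.Do(func() { sql.Register("stub", stubDriver{}) })
	// sql.Open only validates its arguments; connections are made lazily.
	db, err := sql.Open("stub", "ignored-dsn")
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(30)                  // hard cap on concurrent connections
	db.SetMaxIdleConns(20)                  // warm connections kept for traffic spikes
	db.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically
	return db, nil
}

func main() {
	db, err := newPool()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	fmt.Println(db.Stats().MaxOpenConnections) // the pool reports the cap back
}
```

&lt;p&gt;With a real driver, only the &lt;code&gt;sql.Open&lt;/code&gt; line changes; the three pool settings apply to any &lt;code&gt;*sql.DB&lt;/code&gt;.&lt;/p&gt;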



&lt;p&gt;After setting these variables, the number of connections came down to 180 (we run 6 pods), but the CPU was still at 99%. It turned out that some queries were consuming more time than they should. We had to index a column because the data had grown enormously over time, which had left those queries taking forever to complete.&lt;/p&gt;

&lt;p&gt;The lesson: &lt;em&gt;Always write code that works at scale!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>postgres</category>
      <category>database</category>
    </item>
    <item>
      <title>Taking baby steps to protect privacy!</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Fri, 27 Nov 2020 12:46:07 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/taking-baby-steps-to-protect-privacy-ko3</link>
      <guid>https://dev.to/rahul_ramfort/taking-baby-steps-to-protect-privacy-ko3</guid>
      <description>&lt;p&gt;You would have already read a number of articles related to privacy and said this to yourself, "I don't have anything to hide. Why should I be concerned?". This &lt;a href="https://en.wikipedia.org/wiki/Nothing_to_hide_argument"&gt;argument&lt;/a&gt; is totally wrong. Allow me to explain.&lt;/p&gt;

&lt;p&gt;Look at these for instance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You search Google for something like "Which one should you buy, MacBook Air or Pro?", and the next day you receive an email digest from Quora:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Still curious about &lt;code&gt;MacBook Air vs MacBook Pro&lt;/code&gt;? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to check out a product that your friend has bought, so you look it up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You start getting ads for that product from different e-commerce platforms at a lower price. (No, I don't want to buy that thing.)&lt;/p&gt;

&lt;p&gt;There are a lot of instances like these where a third party slyly sends you something based on your usage and history. This is because of cross-site trackers that monitor you all the time.&lt;br&gt;
You might still feel this is genuinely helpful and that there's no harm in someone capturing all this data.&lt;/p&gt;

&lt;p&gt;Let me explain in a way that makes you feel it's wrong.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You talk with your friend and someone is listening to everything you say. Would you be okay with this? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Someone observes you and takes notes on you all day, from the morning till you hit the bed: what you like, what you don't like, whom you talk to often, which apps you use frequently, what you want, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You would definitely want to run away from this person, even if he assures you that he is only going to use all this data to help you. Right? This is what happens currently when you do anything online. Every activity of yours gets recorded. &lt;/p&gt;

&lt;h3&gt;
  
  
  Data is Power
&lt;/h3&gt;

&lt;p&gt;You have given them (read: Google) unsupervised access to your everyday activities, usage patterns, preferences, photos, locations, and anything and everything that can be tracked online. The amount of data they possess on you is huge. Even you wouldn't be sure of all the insights they can derive from it. &lt;/p&gt;

&lt;p&gt;If you're still not convinced about protecting your privacy, read &lt;a href="https://teachprivacy.com/10-reasons-privacy-matters"&gt;this&lt;/a&gt; and &lt;a href="https://www.amnesty.org/en/latest/campaigns/2015/04/7-reasons-why-ive-got-nothing-to-hide-is-the-wrong-response-to-mass-surveillance/"&gt;this&lt;/a&gt;. They will help you get started.&lt;/p&gt;

&lt;p&gt;Okay, But why now?&lt;/p&gt;

&lt;p&gt;Recently, Netflix released a docudrama, &lt;a href="https://en.wikipedia.org/wiki/The_Social_Dilemma"&gt;The Social Dilemma&lt;/a&gt;, that covers a lot of essential points: the impact of social media on mental health, how social media exploits and manipulates its users, how it segments each profile and serves addictive content one piece after another to keep the user engaged, how it fine-tunes notifications to bring the user back to the app, and a lot more.&lt;/p&gt;

&lt;p&gt;The one thing that made me sad after watching it is how few actionable items it offers. After explaining all of this, the show just tells you to &lt;em&gt;Turn off Notifications&lt;/em&gt; and nothing else.&lt;/p&gt;

&lt;p&gt;I decided to explore this further and take baby steps towards protecting my privacy.&lt;br&gt;
I wanted to take one step at a time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chrome - the browser
&lt;/h3&gt;

&lt;p&gt;Moving away from Chrome has forever been on my todo list. Having got used to several extensions, I didn't want to leave the ecosystem, despite all the complaints about Chrome's battery drain and high RAM consumption.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the alternatives?
&lt;/h4&gt;

&lt;p&gt;There are a lot of alternatives, but I chose &lt;a href="https://brave.com/"&gt;Brave&lt;/a&gt; because with it I wouldn't miss the ecosystem. Brave, like Chrome, is a Chromium-based web browser, so all the extensions work here as well. It has also received a lot of recommendations because it blocks ads and website trackers by default.&lt;/p&gt;

&lt;p&gt;The migration was pretty smooth. Loving it so far.&lt;/p&gt;

&lt;h3&gt;
  
  
  Google as a search engine
&lt;/h3&gt;

&lt;p&gt;Hands down, Google is the undisputed winner here. It is the best search engine we have ever had. But the fear with Google as a search engine comes from the amount of data it keeps collecting. If you haven't noticed it yet, spend some time &lt;a href="https://myactivity.google.com/myactivity"&gt;here&lt;/a&gt;. It shows your entire online history. Scary, ain’t it?&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the alternatives?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://duckduckgo.com/"&gt;Duckduckgo&lt;/a&gt; and &lt;a href="https://www.qwant.com/"&gt;Qwant&lt;/a&gt; were my options as they don't profile users and respect user's privacy with the downside of not getting personalised results.&lt;/p&gt;

&lt;p&gt;I tried them both but came back to Google instantly. Google has made giant leaps in terms of user experience, showing better results, showing the results in the search page itself for a lot of queries.&lt;/p&gt;

&lt;p&gt;For instance, if you search for &lt;code&gt;EPL table&lt;/code&gt; it would show the table in the search page itself. Being used to this, it is unthinkable to switch to other search engines that do not provide this luxury.&lt;/p&gt;

&lt;p&gt;So, let's accept the fact that moving away from Google as the search engine is difficult and not everyone would do it. If you prefer to continue with Google, this is the least that you can do to protect your privacy.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete your old web activity.&lt;/li&gt;
&lt;li&gt;Turn off Web and App Activity.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  What did I do?
&lt;/h4&gt;

&lt;p&gt;Having said all that, I still want the best results when I work; I didn't want to end up spending more time just because I had moved to a different search engine. So I chose to continue with Google as my search engine, but with a small change. &lt;/p&gt;

&lt;p&gt;As of today, I use two combinations of browser and search engine.&lt;/p&gt;

&lt;h4&gt;
  
  
  Brave + Duckduckgo:
&lt;/h4&gt;

&lt;p&gt;This takes care of all personal usage. I'm logged in with my personal mail, and all personal stuff goes here. For example, I am writing this post on Brave + DuckDuckGo.&lt;/p&gt;

&lt;h4&gt;
  
  
  Chrome + Google Search Engine:
&lt;/h4&gt;

&lt;p&gt;This is for all dev tasks and professional usage. I also do the kind of searches that benefit from in-page results here. I don't care if I get tracked here, because it never affects my personal usage. I use my office mail to log in, but you could even use a secondary/dummy account for this. I have also installed the &lt;a href="https://chrome.google.com/webstore/detail/privacy-badger/pkehgijcmpdhfbdbbnkijodmdjhbjlgp"&gt;Privacy Badger&lt;/a&gt; extension, which blocks trackers.&lt;/p&gt;

&lt;p&gt;Both browsers are always active, and I pick one based on the type of usage. The switching was hard at first, but having done it for over a month now, the muscle memory has started to kick in and the switch happens spontaneously.&lt;/p&gt;

&lt;p&gt;If you are as heavy a YouTube consumer as I am, you might regret turning off your activity, as it affects personalisation. My workaround was to create a secondary Google account for this. IMHO, it only took a week or two to get a similar kind of feed in the new account too.&lt;/p&gt;

&lt;p&gt;I don't really know how effective the steps I have taken are going to be. But I am happy that I have done something to protect my privacy. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;* This is my first step. *&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Check out these &lt;a href="https://www.thesocialdilemma.com/take-action/"&gt;actionables&lt;/a&gt; in case you're interested. There is so much that we can do to take back our lost privacy!&lt;/p&gt;

</description>
      <category>privacy</category>
    </item>
    <item>
      <title>Thoughts on Google Photos discontinuing free unlimited storage</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Thu, 12 Nov 2020 18:59:52 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/my-thoughts-on-google-photos-discontinuing-free-unlimited-storage-1ip</link>
      <guid>https://dev.to/rahul_ramfort/my-thoughts-on-google-photos-discontinuing-free-unlimited-storage-1ip</guid>
      <description>&lt;p&gt;If you have missed the &lt;a href="https://twitter.com/dflieb/status/1326586058289471491"&gt;news&lt;/a&gt;, here's a quick overview&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Currently, Google Photos lets you store unlimited high-quality photos and videos in the cloud. &lt;/li&gt;
&lt;li&gt;Only original-quality media counts toward the 15GB of free storage.&lt;/li&gt;
&lt;li&gt;Beginning June 2021, high-quality photos and videos will also start counting towards your storage quota.&lt;/li&gt;
&lt;li&gt;Media uploaded before June 2021 will remain available for free.&lt;/li&gt;
&lt;li&gt;Pixel users will not be affected by this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We knew this was coming, primarily because of the volume of photos and videos that get uploaded daily. As Google claims, 28 billion photos and videos are uploaded every week. Google had to make this call; the question was just when.&lt;/p&gt;

&lt;h4&gt;
  
  
  Google Photos wasn't a charity, it had its purpose
&lt;/h4&gt;

&lt;p&gt;IMO, the purpose of keeping Google Photos free was to train Google's AI and computer vision on real-world photos and videos, and Google Photos is their most dependable source for these datasets. If you have observed keenly, Google Photos as a product has improved significantly over the years. It's been 5 years; even if the models aren't perfect yet, they already have ample data to train them better.&lt;/p&gt;

&lt;p&gt;This reminds me of a famous quote,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If you're not paying for it, you're the product
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we aren't the product anymore, the focus shifts to the actual product: &lt;em&gt;Google One&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Promoting Google One
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://one.google.com/about"&gt;Google one&lt;/a&gt; - the subscription service was waiting for this move. Over these years, a substantial number of people would have been so used to the ecosystem that they would rather subscribe for the service instead of finding an alternative and migrate everything. This is a potential business model. Providing 100GB for less than 2$ is a pretty neat deal too especially with google's family sharing features. Most of the users would not even have a second thought over subscribing it right away.&lt;/p&gt;

&lt;h4&gt;
  
  
  Yet another typical product
&lt;/h4&gt;

&lt;p&gt;In hindsight, Google could easily have done this years ago, but as usual it followed that typical product philosophy: first, get the user into the ecosystem; iterate and provide a quality service; keep them happy; make it difficult for the user to leave; and only then charge for the service. I feel Google Photos has passed the test with flying colours. Now is the time for it to reap the rewards for all that hard work.&lt;/p&gt;

&lt;p&gt;Btw, ICYMI, Amazon Photos provides unlimited storage for its Prime users. Heard that somewhere before? 😉 &lt;/p&gt;

&lt;p&gt;I'd love to hear your views on this too. &lt;/p&gt;

</description>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Understanding Offset vs Cursor based pagination</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Sat, 31 Oct 2020 18:27:01 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/understanding-offset-vs-cursor-based-pagination-1582</link>
      <guid>https://dev.to/rahul_ramfort/understanding-offset-vs-cursor-based-pagination-1582</guid>
      <description>&lt;h4&gt;
  
  
  What is Pagination?
&lt;/h4&gt;

&lt;p&gt;Pagination comes into the picture when you have a large dataset to send to the user and you choose to send it in chunks instead of all in a single response. You will find it implemented in most of the sites you visit on a daily basis.&lt;/p&gt;

&lt;p&gt;There are two types of pagination that commonly get implemented.&lt;/p&gt;

&lt;h3&gt;
  
  
  Offset based pagination
&lt;/h3&gt;

&lt;p&gt;Let's take dev.to as the example for the first type.&lt;/p&gt;

&lt;p&gt;When you open &lt;code&gt;dev.to&lt;/code&gt;, you see a feed of posts sorted by some filter. This is the request that fetches the posts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

https://dev.to/search/feed_content?per_page=15&amp;amp;page=1&amp;amp;sort_by=hotness_score&amp;amp;sort_direction=desc&amp;amp;approved=&amp;amp;class_name=Article


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are no page numbers on &lt;code&gt;dev.to&lt;/code&gt; like the ones you find on an Amazon product listing page or a Google search results page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fmysql%2F2_pagination%2Famazon_pagination.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fmysql%2F2_pagination%2Famazon_pagination.png%3Fraw%3Dtrue" alt="Multiple requests from dev.to"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason is that &lt;code&gt;dev.to&lt;/code&gt; has implemented infinite scrolling, wherein the next set of items is fetched in the background periodically to give the user a seamless experience.&lt;/p&gt;

&lt;p&gt;When you scroll the feed continuously, these are the requests that are sent in the background.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fmysql%2F2_pagination%2Fdev_to_pagination.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fmysql%2F2_pagination%2Fdev_to_pagination.png%3Fraw%3Dtrue" alt="Multiple requests from dev.to"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Only two parameters change between requests in this type of pagination:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;per_page - number of items required per page&lt;/li&gt;
&lt;li&gt;page - current page number&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the backend, the query would look something like this (simplified to make it easier to understand):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 select * from posts order by rank limit 15 offset 0;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is for the first page.&lt;br&gt;
For any other page, the variable that changes is the offset.&lt;/p&gt;

&lt;p&gt;offset = (current_page_number - 1) * number of items per page&lt;/p&gt;

&lt;p&gt;The query for different pages would look like this,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 select * from posts order by rank limit 15 offset 735; #page50
 select * from posts order by rank limit 15 offset 750; #page51
 select * from posts order by rank limit 15 offset 1485; #page100



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
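&lt;p&gt;The offset arithmetic can be captured in a tiny Go helper (an illustrative sketch, not from the original request; pages are 1-indexed so that page 1 maps to offset 0):&lt;/p&gt;

```go
package main

import "fmt"

// offsetFor returns the SQL OFFSET for a 1-indexed page number,
// so page 1 starts at offset 0.
func offsetFor(page, perPage int) int {
	return (page - 1) * perPage
}

func main() {
	for _, page := range []int{1, 50, 650} {
		fmt.Printf("page %d -> limit 15 offset %d\n", page, offsetFor(page, 15))
	}
}
```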

&lt;p&gt;There are a few important caveats to this approach.&lt;/p&gt;

&lt;h4&gt;
  
  
  Performance Hit:
&lt;/h4&gt;

&lt;p&gt;Before the explanation, hit these APIs directly in the browser and observe the time taken by each.&lt;/p&gt;

&lt;p&gt;Request for page 1&lt;br&gt;
&lt;a href="https://dev.to/search/feed_content?per_page=15&amp;amp;page=1&amp;amp;sort_by=hotness_score&amp;amp;sort_direction=desc&amp;amp;approved=&amp;amp;class_name=Article"&gt;https://dev.to/search/feed_content?per_page=15&amp;amp;page=1&amp;amp;sort_by=hotness_score&amp;amp;sort_direction=desc&amp;amp;approved=&amp;amp;class_name=Article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Request for page 650&lt;br&gt;
&lt;a href="https://dev.to/search/feed_content?per_page=15&amp;amp;page=650&amp;amp;sort_by=hotness_score&amp;amp;sort_direction=desc&amp;amp;approved=&amp;amp;class_name=Article"&gt;https://dev.to/search/feed_content?per_page=15&amp;amp;page=650&amp;amp;sort_by=hotness_score&amp;amp;sort_direction=desc&amp;amp;approved=&amp;amp;class_name=Article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should notice a slight delay in the second response. &lt;br&gt;
This also gives you an idea of why some paginated APIs take more time as the page number increases.&lt;/p&gt;

&lt;h5&gt;
  
  
  Explanation:
&lt;/h5&gt;

&lt;p&gt;If you look at the offset for the 650th page, it is 9735. &lt;br&gt;
What this essentially means is that to pick 15 items for the 650th page, MySQL has to scan 9750 records (offset + limit) and discard the first 9735 of them one by one. &lt;/p&gt;

&lt;p&gt;Imagine a use case where the offset is even higher, say 100,000. In that case, MySQL has to discard 100,000 records just to pick 15, and this happens on every request.&lt;/p&gt;

&lt;p&gt;There has to be a more efficient way to pick these 15 records, right?&lt;/p&gt;

&lt;h4&gt;
  
  
  Blind Pagination
&lt;/h4&gt;

&lt;p&gt;Since it relies on limit and offset, this approach can show duplicate data or skip some data. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skipping some data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's say the user has seen the first two pages (30 posts). Now suppose I delete my post, which was on the first page. &lt;/p&gt;

&lt;p&gt;When the posts are fetched for the third page, the user wouldn't notice any change, but behind the scenes: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

select * from posts order by rank limit 15 offset 30;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The post that was at the 31st position has now moved up to the 30th position, since I deleted my post that was at the 7th position.&lt;/p&gt;

&lt;p&gt;As a result, the user will never see that post unless he refreshes the page.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Duplicate data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The same applies in reverse: when a post moves from the 31st position to the 30th, the post that was at the 30th position, which the user has already seen, gets shown again at the 31st position on the next page.&lt;/p&gt;

&lt;p&gt;Having said that, offset-based pagination is easy to implement and gives the user the flexibility to jump to any specific page.&lt;/p&gt;

&lt;p&gt;If you're using Ruby, there are a lot of gems that let you add offset-based pagination within a few minutes. Here are a few: &lt;a href="https://github.com/kaminari/kaminari" rel="noopener noreferrer"&gt;kaminari&lt;/a&gt;, &lt;a href="https://github.com/mislav/will_paginate/" rel="noopener noreferrer"&gt;will_paginate&lt;/a&gt;, &lt;a href="https://github.com/ddnexus/pagy" rel="noopener noreferrer"&gt;pagy&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cursor based pagination
&lt;/h3&gt;

&lt;p&gt;Let's take the Twitter feed as the example.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

https://twitter.com/i/api/2/timeline/home.json?&amp;amp;cursor=HBbM%2Fv%2FnttbR2iQAAA%3D%3D&amp;amp;lca=true&amp;amp;ext=mediaStats%2ChighlightedLabel&amp;amp;pc=0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The cursor is the key parameter in this approach. The client receives a variable called &lt;code&gt;cursor&lt;/code&gt; with each response. It is a pointer to a specific item, and the client sends it back with the following request; the server then uses it to fetch the next set of items.&lt;/p&gt;

&lt;p&gt;Here's an example of the response.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "posts": [...],
    "next_cursor": "123456"  # the post id of the first item on the next page
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For example, this cursor could be the id of the first element in the next dataset. The simplified query,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

select * from posts where id &amp;gt;= 123456 order by id limit 15;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Furthermore, if the current page is the last page, the &lt;code&gt;next_cursor&lt;/code&gt; will be empty.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "posts": [...],
    "next_cursor": ""
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The advantage of this approach is that MySQL picks only the required records (15 records) from the database. &lt;/p&gt;

&lt;p&gt;Performance-wise, this approach returns the result in O(limit) time, irrespective of the page number.&lt;/p&gt;

&lt;p&gt;Consider a request with a 10-million-record offset:&lt;br&gt;&lt;br&gt;
offset-based approach - O(limit + offset) = O(15 + 10M): MySQL scans 10M records and discards nearly everything.&lt;br&gt;
cursor-based approach - O(limit) = O(15): MySQL picks only the 15 records needed.&lt;/p&gt;

&lt;p&gt;Look at the performance uplift. The cursor-based approach is mighty impressive, but there are a few things that stop developers from using it without a second thought.&lt;/p&gt;

&lt;p&gt;It is not possible to jump to a random page with this approach, as the server relies on the cursor to fetch the records.&lt;/p&gt;

&lt;p&gt;This approach is not as straightforward as offset and can be trickier to implement.&lt;/p&gt;
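&lt;p&gt;To make the idea concrete, here is a minimal sketch of cursor (keyset) pagination in plain Ruby. The in-memory array and the helper name are made up for illustration; in a real app the filtering would happen in SQL.&lt;/p&gt;

```ruby
# Keyset (cursor) pagination sketched over an in-memory dataset.
# In a real app the filtering happens in SQL:
#   SELECT * FROM posts WHERE id >= ? ORDER BY id LIMIT ?
Post = Struct.new(:id, :title)

# Fetch one page starting at `cursor`; return the page plus the
# cursor for the next page ("" when there is no next page).
def paginate(posts, cursor:, limit: 15)
  sorted = posts.sort_by { |p| p.id }
  window = sorted.select { |p| p.id >= cursor }.first(limit + 1)
  next_cursor = window.size > limit ? window.last.id.to_s : ""
  { posts: window.first(limit), next_cursor: next_cursor }
end

posts = (1..40).map { |i| Post.new(i, "Post #{i}") }

page = paginate(posts, cursor: 1)
page[:next_cursor]           # id of the first item on the next page
paginate(posts, cursor: 31)  # last page, so next_cursor is ""
```

Fetching limit + 1 rows is a common trick to know whether a next page exists without issuing a separate count query.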

&lt;p&gt;The cursor must be based on a unique (ideally sequential) column in the table, and pagination goes for a toss if the edge cases aren't tested.&lt;/p&gt;

&lt;p&gt;There aren't many gems/libraries that support the cursor-based approach compared to the offset-based one, so development time increases as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Uysim/pagy-cursor" rel="noopener noreferrer"&gt;pagy-cursor&lt;/a&gt;, &lt;a href="https://github.com/barkbox/paging_cursor" rel="noopener noreferrer"&gt;paging-cursor&lt;/a&gt; support cursor based pagination.&lt;/p&gt;

&lt;p&gt;Cursor-based pagination is an outright better option than offset-based pagination in terms of performance, but the choice depends on the use case and the kind of impact pagination will have on the product itself. For simpler paginations over comparatively small datasets, one might still prefer offset-based pagination.&lt;/p&gt;

&lt;p&gt;What matters more is to not choose a particular pagination approach blindly. Instead, understand the trade-offs between the two and choose the one that suits your case best.&lt;/p&gt;

</description>
      <category>database</category>
      <category>performance</category>
      <category>pagination</category>
    </item>
    <item>
      <title>Don't be proud of pulling off an all-nighter</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Sat, 24 Oct 2020 18:22:38 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/don-t-be-proud-of-pulling-off-an-all-nighter-2c29</link>
      <guid>https://dev.to/rahul_ramfort/don-t-be-proud-of-pulling-off-an-all-nighter-2c29</guid>
      <description>&lt;p&gt;When I decided to start writing, I consciously told myself to write on other topics too, apart from code. Like things that I wish someone told me when I was a fresher.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sleep is one of them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I have done so many all-nighters, during my college days and during the early years at my job. I used to feel so proud of pulling them off, for some unknown reason. Maybe I felt that being a &lt;em&gt;techie&lt;/em&gt; requires you to do all-nighters. Maybe it was to show off to my friends that I could do them. Interestingly, I did not even get comp-offs for those all-nighters, yet I impulsively did them.&lt;/p&gt;

&lt;p&gt;In hindsight, I regret those decisions; pulling off all-nighters isn't good for you after all. Depriving oneself of sleep is one of the worst things we can do to ourselves. Today I would prioritise good sleep over an all-nighter any day. &lt;/p&gt;

&lt;p&gt;Here are some of my observations after quite a few all-nighters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decision Making/ Quality of Code:
&lt;/h3&gt;

&lt;p&gt;Decision making is poor during all-nighters. Many a time, I have had to redesign database schemas and code structures that I put together during an all-nighter. I have found myself unable to justify some of those decisions when questioned the following day, or a few days later. &lt;/p&gt;

&lt;p&gt;To overcome this, I decided that all the decision making had to be completed beforehand, so that I could just code during the all-nighter. But the quality of the code you write during an all-nighter is never the best code you could write. More often than not, the bugs I find in my code turn out to have been written during an all-nighter.&lt;/p&gt;

&lt;h4&gt;
  
  
  How productive are you the day after you pull off an all-nighter?
&lt;/h4&gt;

&lt;p&gt;For me, the next day has always been a zero productivity day. I have looked back at those days, thinking I could have completed the same job without doing an all-nighter.&lt;/p&gt;

&lt;p&gt;Apart from these productivity issues, sleep deprivation is bad for your health which is the primary reason to not do an all-nighter ever.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sleep Deprivation is injurious to health:
&lt;/h3&gt;

&lt;p&gt;There are a lot of ill effects of sleep deprivation. It drains your mental abilities and puts your physical health at risk. Scientifically, sleep deprivation has been linked to many problems, from weight gain to a weakened immune system, from heart attacks to memory issues and impaired thinking.&lt;/p&gt;

&lt;p&gt;You will never know the effects when you're young because you're full of energy and your thought process goes something like this,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;To compensate for the lost sleep during the weekday,
I will sleep for more hours during the weekend. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is wrong on so many levels.&lt;/p&gt;

&lt;p&gt;Sleep once lost is lost forever. It is not like a bank where you can deposit more sleep later to compensate for the loss. It doesn't work like that. Moreover, oversleeping will only make you unproductive that day as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Regularity is the King:
&lt;/h3&gt;

&lt;p&gt;Always remember this and make sure you have a good sleep every single day. The number of hours required for good sleep is subjective. It varies from 6-9 hours depending on the individual. So find what's comfortable for you and hit that number of hours every single day.&lt;/p&gt;

&lt;p&gt;The regularity of sleep is the secret behind good mental health and being productive during the day. Try to regularise your sleep by sleeping at the same time every day. For instance, from 10:30 pm to 6:30 am. Over time, your mind learns to respect sleep time, and you fall asleep effortlessly. &lt;/p&gt;

&lt;p&gt;Sleep is vital for the human body to function properly, akin to how we eat, how we breathe. Never take it for granted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The hard part isn't getting your body in shape.
The hard part is getting your mind in shape. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Poor sleep is an underrated reason behind people not being in a good mental space.&lt;/p&gt;

&lt;h3&gt;
  
  
  Respect sleep time:
&lt;/h3&gt;

&lt;p&gt;The reason we stay up late is mostly that we're not able to get up early. But the focus should not be on getting up early; it has to be &lt;em&gt;to sleep on time&lt;/em&gt;. Once you regularise your sleep time and sleep on time, you will see yourself getting up on time as well. &lt;/p&gt;

&lt;p&gt;As a practice, you should always have alarms, not to wake you up early but to &lt;em&gt;remind you to sleep on time&lt;/em&gt;. If you ask the people who start their day early, the reason they are consistent is that they sleep on time consistently, no matter what.&lt;/p&gt;

&lt;p&gt;Consistency means sleeping on time every single day, not just on weekdays while being a night owl on the weekends. Make it a habit: keep an alarm 5-10 minutes before your bedtime, then turn everything off and hit the bed at that time. This has to be your P0 task.&lt;/p&gt;

&lt;p&gt;To sleep faster/better, avoid using mobile/laptop at least 15-30 minutes before your sleep time. &lt;/p&gt;

&lt;p&gt;Instead, try to&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read books,&lt;/li&gt;
&lt;li&gt;spend time with your family, &lt;/li&gt;
&lt;li&gt;walk around, &lt;/li&gt;
&lt;li&gt;meditate, &lt;/li&gt;
&lt;li&gt;reflect on your day,&lt;/li&gt;
&lt;li&gt;jot down tasks for the next day, &lt;/li&gt;
&lt;li&gt;play a musical instrument&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;or anything that calms your mind. A healthy mind is one that has a healthy sleep.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Again, never lose your sleep for anything.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mentalhealth</category>
      <category>productivity</category>
    </item>
    <item>
      <title>A silly bug that sneaked into production</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Fri, 16 Oct 2020 11:15:55 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/a-silly-bug-that-sneaked-into-production-o4p</link>
      <guid>https://dev.to/rahul_ramfort/a-silly-bug-that-sneaked-into-production-o4p</guid>
      <description>&lt;p&gt;So, this is what happened.&lt;/p&gt;

&lt;p&gt;I was in the middle of some task and suddenly had to switch context to a different task that I had pushed earlier. I had forgotten to handle a failure case, and the code was already in production. But the change was pretty simple.&lt;/p&gt;

&lt;p&gt;I had to add a condition to some existing code that looked similar to this,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Fno_error.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Fno_error.png%3Fraw%3Dtrue" alt="Good code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Consider this is the condition that I had to add.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Fthe_bug.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Fthe_bug.png%3Fraw%3Dtrue" alt="Buggy code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this done, I asked myself if I should test it before pushing, and the genius in me replied that it was a simple change and I could go ahead without any testing. I also had this &lt;a href="https://dev.to/rahul_ramfort/a-simple-pre-commit-hook-that-saves-you-big-time-5g9e"&gt;pre-commit hook&lt;/a&gt; to save me from any syntax errors. So I felt I was pretty much covered and good to go.&lt;/p&gt;

&lt;p&gt;I pushed it, the pre-commit hook passed, shipped to prod as well 🚀 and I went back to my other task.&lt;/p&gt;

&lt;p&gt;Within a few minutes, our error monitoring tool started showing even more errors. This time, 100% of requests were failing, compared to the negligible 1% that were failing earlier.&lt;/p&gt;

&lt;p&gt;Guessed what could have gone wrong already? Kudos to you.👏&lt;/p&gt;

&lt;p&gt;If not, here's the error that will help you find it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TypeError (class or module required)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you think that &lt;code&gt;MyObject&lt;/code&gt; is not a class or module, &lt;em&gt;you're wrong&lt;/em&gt;, because that piece of code was already in production and is syntactically fine.&lt;/p&gt;

&lt;p&gt;So what could have gone wrong? Give it a moment to look into it again. &lt;/p&gt;

&lt;p&gt;Okay, here it goes: I didn't tell Ruby explicitly about the precedence of evaluation, and as a result, it treated &lt;code&gt;MyObject &amp;amp;&amp;amp; (self.some_validation? || self.other_validation?)&lt;/code&gt; as the argument to the method &lt;code&gt;self.is_a?&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;So the result of &lt;code&gt;MyObject &amp;amp;&amp;amp; (self.some_validation? || self.other_validation?)&lt;/code&gt; is one of these.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MyObject &amp;amp;&amp;amp; true #true
MyObject &amp;amp;&amp;amp; false #false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, the argument to &lt;code&gt;self.is_a?&lt;/code&gt; is now a boolean,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;self.is_a? true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which throws the error, &lt;em&gt;TypeError (class or module required)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The fix is pretty simple though: just add parentheses to let Ruby know the precedence of the evaluation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Ffix_bug.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F2_til_precedence%2Ffix_bug.png%3Fraw%3Dtrue" alt="Fix for the bug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Days like these are bound to happen; they make you more humble and teach you the nuances of the language. &lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>ruby</category>
    </item>
    <item>
      <title>Automating my zoom stand-up meeting</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Sat, 10 Oct 2020 08:32:19 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/automating-my-zoom-stand-up-meeting-4ll2</link>
      <guid>https://dev.to/rahul_ramfort/automating-my-zoom-stand-up-meeting-4ll2</guid>
      <description>&lt;p&gt;WFH has become the new normal for all of us and so are the daily stand-up meetings. &lt;/p&gt;

&lt;p&gt;As standups happen at the same time every working day, finding the Zoom link and opening it each morning became a boring task for me. More than that, I also had to keep an eye on the clock so that I didn't make my team wait.&lt;/p&gt;

&lt;p&gt;Being a lazy guy, I didn't like doing this daily. I just wanted it automated once and for all. 🤷🏻‍♂️&lt;/p&gt;

&lt;h1&gt;
  
  
  Cron Jobs 😎
&lt;/h1&gt;

&lt;p&gt;Cron jobs are specifically designed to run a job at a particular time. For example, you can configure a job to run every hour or every 10th minute or every Tuesday and more.&lt;/p&gt;

&lt;p&gt;Now that you know what cron jobs are: Unix operating systems ship with a daemon, &lt;code&gt;cron&lt;/code&gt;, that monitors the list of scheduled jobs and runs them at the specified times, along with a utility, &lt;code&gt;crontab&lt;/code&gt;, for editing that list. Jobs can be configured by firing,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crontab -e 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The config is a one-liner with two values, &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The time to run&lt;/li&gt;
&lt;li&gt;An executable script that has to be run at that time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each line in this file represents a job and there can be &lt;code&gt;n&lt;/code&gt; number of jobs.&lt;/p&gt;

&lt;p&gt;The job that I had to do was straightforward. A simple script that can open the zoom app and join my standup meeting.&lt;br&gt;
Here's the code for doing that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#file_name: join_zoom_standup.sh
open 'zoommtg://us04web.zoom.us/join?action=join&amp;amp;confno=&amp;lt;conf_id&amp;gt;&amp;amp;pwd=&amp;lt;pwd&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Make sure the script is executable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x join_zoom_standup.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The only thing left is to configure the cron job to run from Monday to Friday at 10.30 am. (If you're not comfortable with cron expressions, head over &lt;a href="https://crontab.guru"&gt;here&lt;/a&gt;, a neat editor that helps you generate the expressions.)&lt;/p&gt;

&lt;p&gt;Finally, this is the configuration for the same.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;30 10 * * 1-5 ./custom_jobs/join_zoom_standup.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
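&lt;p&gt;For quick reference, the five time fields of a cron expression are minute, hour, day of month, month, and day of week, so the expression above reads like this:&lt;/p&gt;

```
# field:   minute  hour  day-of-month  month  day-of-week
# value:     30     10        *          *        1-5
# meaning: 10:30 am, any day of month, any month, Monday(1) to Friday(5)
```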



&lt;p&gt;All set.&lt;/p&gt;

&lt;p&gt;From now on, you can just do your work and all of a sudden, the Zoom app pops up, you hear people talking and realise it's time for the standup call.😉&lt;br&gt;
That is worth spending five minutes on, isn't it!&lt;/p&gt;

&lt;p&gt;You can view the list of scheduled jobs using &lt;code&gt;crontab -l&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Apart from this, there are a lot of tasks that you can force yourself to do just by automating them. I have done a few of them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;30 22 * * Wed ./custom_jobs/reading_list.sh
15 15 * * * ./custom_jobs/open_duo.sh
30 15 * * * ./custom_jobs/da_chore.sh
45 10 * * Fri ./custom_jobs/jira_board.sh
35 10 * * Mon ./custom_jobs/dev_scripts.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Quick info on what these are,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open my reading list from multiple places. Eg: twitter bookmarks page, &lt;a href="https://www.one-tab.com"&gt;onetab&lt;/a&gt; page, dev.to reading list page and more.&lt;/li&gt;
&lt;li&gt;Making sure that I have updated my JIRA board every Friday.&lt;/li&gt;
&lt;li&gt;Booting all applications that I work with, running all the dev scripts on Monday mornings. Eg: Starting dev servers, Starting Docker containers and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As developers, we should always try to automate the tasks that can be automated and save our focus for the tasks that truly demand it.💪 &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Happy Automating!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>showdev</category>
      <category>cron</category>
    </item>
    <item>
      <title>Why do microservices need an API Gateway?</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Fri, 02 Oct 2020 13:22:39 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/why-do-microservices-need-an-api-gateway-503i</link>
      <guid>https://dev.to/rahul_ramfort/why-do-microservices-need-an-api-gateway-503i</guid>
      <description>&lt;p&gt;API Gateways are becoming increasingly popular with the microservice architecture. Recently, Google announced its own &lt;a href="https://cloud.google.com/api-gateway" rel="noopener noreferrer"&gt;api-gateway&lt;/a&gt;. The time is ripe to take a look at why microservice architecture needs them and how they currently look without the api gateway in place.&lt;/p&gt;

&lt;p&gt;Let's look at a microservice architecture without an api gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F3_api_gateway%2Fdark_without_gateway.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F3_api_gateway%2Fdark_without_gateway.png%3Fraw%3Dtrue" alt="Architecture without api gateway"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each microservice, apart from its core functionality, has traditionally been handling these as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication of requests based on OAuth or JWT token or a simple key-based authentication. (This authentication is to verify if other services or users have access to this service, not the typical user authentication that happens based on the application data.) &lt;/li&gt;
&lt;li&gt;Allow &lt;a href="https://dev.to/rahul_ramfort/cors-preflight-request-oii"&gt;CORS requests&lt;/a&gt; from other microservices&lt;/li&gt;
&lt;li&gt;Allow/Deny requests based on their IPs.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rate Limiting - Allow only a certain number of requests. Requests over the specified limit will be answered with status code 429 (Too Many Requests). These rules also protect the microservice from &lt;a href="https://dev.to/rahul_ramfort/what-is-a-ddos-attack-anyway-9kj"&gt;DDOS attacks&lt;/a&gt; that happen at the application layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring -  Collecting metrics from the requests/response to gain valuable insights. Eg: number of requests per minute, number of requests per API, number of requests a particular user has hit above the rate limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alerting - This is a subset of monitoring, where alerts are generated for specific events. Eg: Generating an alert when the response time is over 500ms for 1000 requests. Using an observability tool like Prometheus helps in both monitoring and alerting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logging - Logging all the requests that are made to the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Request Termination - Temporarily disable the request for some APIs or disable the service during downtime.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
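&lt;p&gt;To give a flavour of one of these concerns, here is a sketch of rate limiting at the nginx layer; the zone name, numbers, and upstream are illustrative, not a recommended configuration. Every service carrying its own copy of config like this is exactly the duplication an API gateway removes.&lt;/p&gt;

```nginx
# At most 10 requests/second per client IP, tracked in a 10 MB zone;
# short bursts of up to 20 extra requests are allowed, anything beyond
# is answered with HTTP 429.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        limit_req zone=per_ip burst=20 nodelay;
        limit_req_status 429;               # Too Many Requests
        proxy_pass http://my_microservice;  # illustrative upstream
    }
}
```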

&lt;p&gt;Alright, these are a few of them; there might well be more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Major Disadvantages of this approach:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When a new microservice comes up, all these functionalities need to be replicated.&lt;/li&gt;
&lt;li&gt;Any change to one functionality should be repeated across all services. Eg: Moving the logging of requests from Loggly to StatsD.&lt;/li&gt;
&lt;li&gt;Logically speaking, none of these functionalities is specific to the underlying application; they can be decoupled from the application itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  API Gateway:
&lt;/h3&gt;

&lt;p&gt;API Gateway doesn't need any introduction now. An API gateway can be considered as yet another microservice in your architecture that does all the aforementioned functionalities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F3_api_gateway%2Fwith_gateway_dark.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F3_api_gateway%2Fwith_gateway_dark.png%3Fraw%3Dtrue" alt="Architecture with API gateway"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is the entry point for your microservices and acts as a gatekeeper doing all the basic functionalities before passing the request to the respective microservice. &lt;/li&gt;
&lt;li&gt;All the functionalities now reside at a centralised place, making it easy to maintain and analyse them. &lt;/li&gt;
&lt;li&gt;When a new microservice comes up, all it has to do is to process the requests and send the response back to the gateway. API gateway takes care of the rest.&lt;/li&gt;
&lt;li&gt;With an API gateway in place, functionalities like request/response transformation and rolling out canary deployments become possible as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have jotted down some of the problems that an API gateway solves. Having said that, it's really up to the architecture to decide whether an API gateway is a must-have or a good-to-have. It becomes a must-have especially when there are a lot of microservices in the architecture. &lt;/p&gt;

&lt;p&gt;Irrespective of whether there's an absolute need for an API gateway or not, just by looking closely at the design before the API gateway existed, it's evident that it violated the &lt;a href="https://en.wikipedia.org/wiki/Single-responsibility_principle" rel="noopener noreferrer"&gt;Single Responsibility Principle&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="noopener noreferrer"&gt;Don't Repeat Yourself&lt;/a&gt;, and &lt;a href="https://stackoverflow.com/questions/14000762/what-does-low-in-coupling-and-high-in-cohesion-mean" rel="noopener noreferrer"&gt;high cohesion and low coupling&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>CORS &amp; Preflight Request!</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Fri, 25 Sep 2020 11:28:44 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/cors-preflight-request-oii</link>
      <guid>https://dev.to/rahul_ramfort/cors-preflight-request-oii</guid>
      <description>&lt;h3&gt;
  
  
  What is CORS?
&lt;/h3&gt;

&lt;p&gt;CORS - Cross-Origin Resource Sharing. &lt;br&gt;
In simple terms, when you want to allow requests from a different domain (read: origin) to your server, CORS comes into the picture. CORS is a policy that is enforced by the browser.&lt;/p&gt;
&lt;h3&gt;
  
  
  But what is Cross-Origin? 🤔
&lt;/h3&gt;

&lt;p&gt;Let's say you're reading this post on &lt;code&gt;Dev.to&lt;/code&gt;. &lt;code&gt;Dev.to&lt;/code&gt; is the origin here, and it's allowed to request resources (make HTTP calls) only from its own origin. If it makes calls to any other origin, even to one of its sub-domains, the request is termed a cross-origin request.&lt;/p&gt;
&lt;h3&gt;
  
  
  Show me a use case!
&lt;/h3&gt;

&lt;p&gt;In the world of microservices, even within your own architecture, you might have different services talking to multiple servers. CORS is a mechanism to let only the trusted origins make cross-origin HTTP requests to your server.&lt;/p&gt;

&lt;p&gt;Consider this naive example where there's an application running at &lt;code&gt;rahul.dev.to&lt;/code&gt; and there's a functionality to edit my posts. Once the post is edited, I have to update the post across all my blogging sites - dev.to, medium.com, blogger.com&lt;/p&gt;

&lt;p&gt;For this hypothetical case to work, I would need to hit this patch API on &lt;code&gt;dev.to&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PATCH https://dev.to/rahul_ramfort/post/20
Request-Headers: User-Agent, api_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(for brevity, ignoring medium and blogger API calls)&lt;/p&gt;

&lt;p&gt;I am trying to post the data from my server (rahul.dev.to) to another server (dev.to) and I might or might not be allowed to actually make this request on &lt;code&gt;dev.to&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;This is the problem at hand. Browsers do not know if it's safe to make this request.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enter Preflight Requests! ✈️
&lt;/h3&gt;

&lt;p&gt;To solve this, browsers, for security reasons, do not directly let these cross-origin requests go through. Before firing the actual &lt;code&gt;PATCH&lt;/code&gt; request, the browser fires an OPTIONS request to the cross-origin (dev.to) with all the details of the CORS request.&lt;/p&gt;

&lt;p&gt;The details include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Origin of the requested server - rahul.dev.to
Method rahul.dev.to is trying to hit - PATCH
Headers rahul.dev.to is trying to send - User-Agent, api_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dev.to, the cross-origin receives the OPTIONS request and can deny or allow the origin (rahul.dev.to) to make requests.&lt;/p&gt;

&lt;p&gt;In this case, &lt;code&gt;dev.to&lt;/code&gt; would have configured a list of &lt;em&gt;trusted&lt;/em&gt; origins that can make the CORS requests, at its web-server or application layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#sample configuration at the nginx layer (dev.to)
'Access-Control-Allow-Origin' "https://api.dev.to, https://rahul.dev.to"
'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT, PATCH'
'Access-Control-Allow-Headers' 'User-Agent,Content-Type,access-key,api-key'
'Access-Control-Max-Age' 86400
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;rahul.dev.to&lt;/code&gt; is listed as one of the trusted origins, the browser receives a successful 204. Now the browser understands that it is safe to allow the CORS request and fires the actual &lt;code&gt;PATCH&lt;/code&gt; request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F2_cors%2Fcors_request_flow.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fdevops%2F2_cors%2Fcors_request_flow.png%3Fraw%3Dtrue" alt="CORS request"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;rahul.dev.to&lt;/code&gt; is not listed in the allow-origin, the server denies the OPTIONS request.&lt;/p&gt;

&lt;p&gt;The browser, considering this a potential threat, will not fire the actual &lt;code&gt;PATCH&lt;/code&gt; request and throws an error,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Preflight response is not successful
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding the CORS response headers:
&lt;/h3&gt;

&lt;p&gt;These are the headers received for the preflight request. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access-Control-Allow-Origin - echoes the requesting origin if it is allowed access.&lt;/li&gt;
&lt;li&gt;Access-Control-Allow-Methods - specifies which HTTP methods are allowed for CORS requests.&lt;/li&gt;
&lt;li&gt;Access-Control-Allow-Headers - specifies which headers can be used in the actual CORS request.&lt;/li&gt;
&lt;li&gt;Access-Control-Max-Age - specifies how long (in seconds) the preflight response can be cached. During that period, the browser skips further preflight requests and fires the actual request directly.&lt;/li&gt;
&lt;/ul&gt;
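&lt;p&gt;To make the server side concrete, here's a minimal, illustrative Rack-style sketch of how an application layer might answer preflight requests. The class name and the &lt;code&gt;TRUSTED_ORIGINS&lt;/code&gt; list are made up for this example; this is not dev.to's actual configuration.&lt;/p&gt;

```ruby
# Illustrative Rack-style middleware sketch; names here are examples only.
class CorsPreflight
  TRUSTED_ORIGINS = ["https://rahul.dev.to", "https://api.dev.to"].freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    origin = env["HTTP_ORIGIN"]

    if env["REQUEST_METHOD"] == "OPTIONS"
      # Untrusted origins get denied; the browser then blocks the real request.
      return [403, {}, []] unless TRUSTED_ORIGINS.include?(origin)

      # A 204 with these headers tells the browser the actual request is safe.
      return [204, {
        "Access-Control-Allow-Origin"  => origin,
        "Access-Control-Allow-Methods" => "GET, POST, OPTIONS, DELETE, PUT, PATCH",
        "Access-Control-Allow-Headers" => "User-Agent,Content-Type,access-key,api-key",
        "Access-Control-Max-Age"       => "86400"
      }, []]
    end

    # Non-preflight requests pass through to the application.
    @app.call(env)
  end
end
```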

&lt;h4&gt;
  
  
  Note:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Access-Control-Allow-Origin: '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is pretty common to see people configure it like this as a workaround to allow CORS requests.&lt;/p&gt;

&lt;p&gt;What this essentially means is that your server allows every origin to make CORS requests. Do not do this. 🙅🏻‍♂️ Allow only trusted origins here; using &lt;code&gt;'*'&lt;/code&gt; should be avoided. &lt;/p&gt;

&lt;p&gt;Further, if you want to reduce the frequency of preflight requests for your trusted origins, you can set the &lt;code&gt;Access-Control-Max-Age&lt;/code&gt; header to a higher value.&lt;/p&gt;

&lt;p&gt;To read more: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="noopener noreferrer"&gt;📖&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>security</category>
    </item>
    <item>
      <title>Recent Favorite Rails Tip!</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Fri, 18 Sep 2020 04:09:34 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/recent-favorite-rails-tip-4ml9</link>
      <guid>https://dev.to/rahul_ramfort/recent-favorite-rails-tip-4ml9</guid>
<description>&lt;p&gt;Rails is a powerful framework known for boosting productivity and being developer-friendly. Many of its traits help you stay productive. Let's talk about one I found very recently, and how difficult it was to get the job done without it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Rails migrations&lt;/em&gt;. If you haven't heard this term, in short, migrations help in changing the schema of your database. Read more &lt;a href="https://edgeguides.rubyonrails.org/active_record_migrations.html" rel="noopener noreferrer"&gt;here&lt;/a&gt; in case you're not familiar with it.&lt;/p&gt;

&lt;p&gt;Rails provides scaffolding to create migration files. You need not create a migration file from scratch nor copy from the existing files. This is pretty standard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rails generate migration add_email_to_users email:string
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;generates the following file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class AddEmailToUsers &amp;lt; ActiveRecord::Migration[5.0]
 def change
   add_column :users, :email, :string
 end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you just run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rake db:migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Making a change to the database hardly takes a minute!&lt;/p&gt;

&lt;p&gt;But consider this scenario where you created a table and started working on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Freference_type.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Freference_type.png%3Fraw%3Dtrue" alt="source code 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, for some reason, you want to rename the columns and also add another column. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Flogger_type.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Flogger_type.png%3Fraw%3Dtrue" alt="source code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the migration has already run, these are the steps you'd have to follow to run it again.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the migration version from the file name &lt;code&gt;20200827072540&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Remove the entry from the &lt;code&gt;schema_migrations&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Drop the table &lt;code&gt;transaction_dumps&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the migration &lt;code&gt;rake db:migrate&lt;/code&gt; again&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This set of steps gets really frustrating when you want to play around in development or you're not really sure of the final schema.&lt;/p&gt;

&lt;p&gt;A better way to do this is,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rake db:rollback #rolls back the latest migration change
rake db:migrate #runs all the pending migrations again
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Neat, right?&lt;/p&gt;

&lt;p&gt;Hang on. This post ain't about that.&lt;/p&gt;

&lt;p&gt;If you want to rollback multiple migration files and run them again, the headache is back. You have to get the migration versions and do,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rake db:migrate:down VERSION=20200827072540
rake db:migrate:down VERSION=20200825121103
rake db:migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a &lt;em&gt;Rails&lt;/em&gt; way to do this, and it's my recent favourite go-to option when I'm playing with migrations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rake db:migrate:redo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rolls back the latest migration and runs it again. Here's the elegant part: if you want to redo the last &lt;code&gt;n&lt;/code&gt; migration files, you can just pass that as the &lt;code&gt;STEP&lt;/code&gt; variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rake db:migrate:redo STEP=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Foutput.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frahulramfort%2Fblog%2Fblob%2Fmaster%2Fruby%2F1_migrate_redo%2Fredo%2Foutput.png%3Fraw%3Dtrue" alt="output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Easy and painless! &lt;/p&gt;

&lt;p&gt;Always remember this, if you're doing something that takes too much time and makes you feel that Rails isn't fun to work with, you might be doing it the wrong way 😉&lt;/p&gt;

&lt;p&gt;Thanks &lt;a href="https://dev.to/prathamesh"&gt;@Prathamesh&lt;/a&gt; for this tip.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
      <category>productivity</category>
    </item>
    <item>
      <title>What is a DDoS attack anyway?</title>
      <dc:creator>Rahul</dc:creator>
      <pubDate>Sat, 12 Sep 2020 16:32:45 +0000</pubDate>
      <link>https://dev.to/rahul_ramfort/what-is-a-ddos-attack-anyway-9kj</link>
      <guid>https://dev.to/rahul_ramfort/what-is-a-ddos-attack-anyway-9kj</guid>
<description>&lt;p&gt;Recently in my organization, we decided to adopt an API gateway and were wondering if we could also remove the Cloudflare WAF layer, which had very recently averted a few DDoS attacks on our website.&lt;/p&gt;

&lt;p&gt;I had previously read an &lt;a href="https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/"&gt;nginx article&lt;/a&gt; and was under the impression that DDoS attacks were all about the application layer, and that adding rate-limiting at the Nginx layer could prevent them.&lt;/p&gt;

&lt;p&gt;I was totally wrong!&lt;/p&gt;

&lt;p&gt;Here's a brief overview of the types and varieties of DDoS attacks.&lt;/p&gt;

&lt;p&gt;Before jumping in: DoS stands for Denial of Service, an attack that aims to crash the target server or disrupt its normal functioning by congesting its network. &lt;br&gt;
DDoS is a &lt;em&gt;distributed DoS&lt;/em&gt; attack, where multiple systems attack the target server.&lt;/p&gt;

&lt;p&gt;There are three categories of DDoS attacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protocol Attacks&lt;/li&gt;
&lt;li&gt;Application Layer Attacks&lt;/li&gt;
&lt;li&gt;Volumetric Attacks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Protocol Attacks:
&lt;/h3&gt;

&lt;p&gt;Protocol attacks happen at layer 3 (Network Layer) or layer 4 (Transport Layer) of the OSI model. These attacks aim to create congestion or exhaust server resources by attacking the intermediaries between the client and the server, such as firewalls and load balancers.&lt;/p&gt;

&lt;p&gt;Before going into individual attacks, let's revisit how the TCP handshake works. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SYN - the client requests a new connection by sending a SYN with its random value, RANDOM_A&lt;/li&gt;
&lt;li&gt;SYN-ACK - the server keeps a port ready and replies with a SYN-ACK, acknowledging the client's SYN and sending back its own random value, RANDOM_B&lt;/li&gt;
&lt;li&gt;ACK - the client acknowledges the server's SYN and sends back an ACK, and the connection is successfully established.&lt;/li&gt;
&lt;/ol&gt;
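&lt;p&gt;The three steps above can be sketched as a tiny state machine (plain Ruby, purely illustrative; the class and method names are made up and this is not a real TCP implementation):&lt;/p&gt;

```ruby
# Illustrative state machine for the TCP three-way handshake.
class Handshake
  attr_reader :state

  def initialize
    @state = :closed
  end

  # Step 1: client sends SYN with RANDOM_A. The server now holds a
  # half-open connection -- the state that SYN floods pile up.
  def client_syn(random_a)
    @client_seq = random_a
    @state = :syn_received
  end

  # Step 2: server replies SYN-ACK with its own RANDOM_B.
  def server_syn_ack(random_b)
    @server_seq = random_b
    @state = :syn_ack_sent
  end

  # Step 3: client ACKs; only now is the connection usable.
  def client_ack
    @state = :established
  end
end
```

&lt;p&gt;The half-open state after step 1 is where the server has already committed resources, which is exactly what flood attacks exploit.&lt;/p&gt;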

&lt;p&gt;Technically, an ACK/SYN packet is a TCP packet with the "ACK" or "SYN" flag set in the header.&lt;/p&gt;

&lt;h4&gt;
  
  
  SYN Flood
&lt;/h4&gt;

&lt;p&gt;This attack exploits the TCP handshake by sending&lt;br&gt;
the server a large number of SYN packets, normally from spoofed IP addresses. The server sends a SYN-ACK to each spoofed IP address and waits for the final ACK forever. The purpose of the attack is to exhaust the server's resources by opening a useless half-open connection every time, preventing it from accepting connections from actual users.&lt;/p&gt;

&lt;h4&gt;
  
  
  ACK Flood
&lt;/h4&gt;

&lt;p&gt;This attack sends an enormous number of ACK packets to the server. The server has to process each packet to determine whether it is legitimate. The idea is to use up the server's resources with junk ACK packets that are of no use to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  SYN ACK Flood
&lt;/h4&gt;

&lt;p&gt;This attack is slightly different, since clients never send SYN-ACK packets to the server during a legitimate three-way handshake. The objective remains the same: to waste the server's resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  QUIC Flood
&lt;/h4&gt;

&lt;p&gt;QUIC is a newer transport protocol and a faster, more secure way to send data. It does not use TCP, so there is no three-way handshake; it uses UDP instead. Since UDP is less reliable, QUIC sends multiple streams of data to make up for any packet loss. Over TCP, data has to be sent via HTTPS to be encrypted, whereas in QUIC, data is encrypted by default using TLS. &lt;/p&gt;

&lt;p&gt;That is briefly how the QUIC protocol works. A QUIC flood aims to overwhelm the server with data over QUIC. Since all QUIC data is encrypted, the server has to spend even more resources to authenticate the legitimacy of each request.&lt;/p&gt;

&lt;p&gt;Unlike other floods where the server sends back comparatively little data (an ACK or SYN), here the server has to send its TLS certificate in the very first response. Spoofing IPs and flooding with QUIC results in the server sending large amounts of unwanted data to the spoofed IPs.&lt;/p&gt;

&lt;h4&gt;
  
  
  UDP Flood
&lt;/h4&gt;

&lt;p&gt;When a server receives a UDP packet, it has to check whether any process is listening on the specified port, and if it doesn't find one, it replies to the client that the destination was unreachable. &lt;br&gt;
As this process has to be followed for each packet, a UDP flood is effective at depleting server resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  DNS Flood
&lt;/h4&gt;

&lt;p&gt;The Domain Name System translates website names to their IP addresses. This attack uses devices with high-bandwidth connections to fire a huge number of requests at a DNS server so as to prevent legitimate users from accessing it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ping Flood
&lt;/h4&gt;

&lt;p&gt;Normally, PING (ICMP) is used to test the health and connectivity of a server. This attack sends out an overwhelming number of ping requests to consume server resources. It is the least effective mode of attack, as the attacker has to send a very large number of requests to bring the server down; the cost of sending a ping reply is pretty low.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volumetric Attacks:
&lt;/h3&gt;

&lt;p&gt;These are the most common types of DDoS attacks. These attacks are reflection-based in nature where the attacker spoofs the IP and makes the server respond to the spoofed IP.&lt;/p&gt;

&lt;h4&gt;
  
  
  Amplification Technique:
&lt;/h4&gt;

&lt;p&gt;Volumetric attacks use an amplification technique, where a small, cheap request from the attacker causes the server to send back a much larger response. Because it amplifies the effect while using fewer resources, it is called an amplification attack.&lt;/p&gt;

&lt;p&gt;As Cloudflare neatly puts it,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amplification can be thought of in the context of a malicious teenager calling a restaurant and saying “I’ll have one of everything, please call me back and tell me my whole order.” When the restaurant asks for a callback number, the number given is the targeted victim’s phone number. The target then receives a call from the restaurant with a lot of information that they didn’t request.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To create a large amount of traffic, the attacker structures the request in a way that generates as large a response from the server as possible.&lt;/p&gt;

&lt;h4&gt;
  
  
  DNS Amplification
&lt;/h4&gt;

&lt;p&gt;This attack differs from a DNS flood in that it primarily uses amplification: many small requests for very large DNS records. As it's reflective in nature, the server sends the responses to the spoofed IPs.&lt;/p&gt;

&lt;h4&gt;
  
  
  NTP Amplification
&lt;/h4&gt;

&lt;p&gt;Network Time Protocol servers have a functionality &lt;code&gt;Get Monlist&lt;/code&gt; that sends out the history of last 600 IP addresses that hit the NTP server. &lt;/p&gt;

&lt;p&gt;This creates a massive request-to-response size ratio: the server sends back roughly 200 times more data than the request size. This effectively means that with just 1 GB of bandwidth, the attacker can mount a 200 GB attack on the server.&lt;/p&gt;
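&lt;p&gt;The arithmetic behind that claim, as a quick sketch (the ~200x factor is the estimate above, not a measured value):&lt;/p&gt;

```ruby
# Back-of-the-envelope amplification math.
amplification_factor  = 200   # assumed request-to-response size ratio
attacker_bandwidth_gb = 1.0   # what the attacker actually spends

traffic_at_target_gb = attacker_bandwidth_gb * amplification_factor
puts traffic_at_target_gb  # prints 200.0
```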

&lt;h3&gt;
  
  
  Application layer Attacks:
&lt;/h3&gt;

&lt;p&gt;These are the attacks that happen at layer 7 (Application Layer) which involves sending the HTTP requests (GET, POST) and receiving the data.&lt;/p&gt;

&lt;p&gt;The idea is the same here, by consuming the server CPU resources, the server will not be able to process the genuine requests.&lt;/p&gt;

&lt;p&gt;Two frequently used techniques here are&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Bombarding the server with a huge number of requests that it cannot handle, so it goes down in the process. Considering the computing power of a normal application, it is very difficult to withstand these attacks. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Slow attacks - these keep the connection open by sending partial headers, or by delaying the body of a POST request, making the server wait for long periods. These attacks are generally difficult to identify, and the server might drop genuine slow requests as well if it restricts slow requests altogether.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are some of the common DDoS attacks, and a server should be equipped to protect itself from them. &lt;/p&gt;

&lt;p&gt;For instance, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layer 7 attacks can be mitigated by adding rate-limiting that prevents an IP from making more than a specified number of requests. &lt;/li&gt;
&lt;li&gt;NTP amplification via &lt;code&gt;Get Monlist&lt;/code&gt; can be averted by disabling the functionality itself.&lt;/li&gt;
&lt;li&gt;A Web Application Firewall (WAF) like Cloudflare WAF or AWS Shield protects us from a variety of these attacks.&lt;/li&gt;
&lt;/ul&gt;
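&lt;p&gt;As a sketch of the first point, here's a hypothetical nginx rate-limiting configuration (the zone name and limits are examples, not recommendations):&lt;/p&gt;

```nginx
# hypothetical rate-limiting sketch; zone name and limits are examples
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location / {
        # allow short bursts; sustained floods get rejected
        limit_req zone=per_ip burst=20 nodelay;
    }
}
```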

</description>
      <category>security</category>
      <category>ddos</category>
    </item>
  </channel>
</rss>
