<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nixon Islam</title>
    <description>The latest articles on DEV Community by Nixon Islam (@nixon1333).</description>
    <link>https://dev.to/nixon1333</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F582627%2Fb868ed0d-6596-46bf-89db-4ef5a59af75b.jpg</url>
      <title>DEV Community: Nixon Islam</title>
      <link>https://dev.to/nixon1333</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nixon1333"/>
    <language>en</language>
    <item>
      <title>Little GRPC History</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Fri, 19 Mar 2021 06:41:28 +0000</pubDate>
      <link>https://dev.to/nixon1333/little-grpc-history-3271</link>
      <guid>https://dev.to/nixon1333/little-grpc-history-3271</guid>
      <description>&lt;p&gt;This post is based on another YouTube video; I am writing my summary of the video's content.&lt;/p&gt;

&lt;h3&gt;Learning:&lt;/h3&gt;

&lt;p&gt;Here the author explains why gRPC was created: what type of problems it is going to solve, and what the problems are with the many client-server communication protocols.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Problem:&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;There are so many client-server protocols out there: HTTP/1, HTTP/2, TCP/IP, etc. These are the basic protocols used for communication between server and server, server and client, and client and client. But they are not language agnostic: their implementations differ greatly from one programming language to another, and there are so many libraries to support. If a protocol changes, every supporting library needs to adopt that change and be updated, and the consumers need to update their libraries too.&lt;/p&gt;

&lt;h3&gt;Solution:&lt;/h3&gt;

&lt;p&gt;So gRPC removes the need for loads of client libraries for different protocols and different languages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One client-side library for all languages. No need to change all the libraries for every new change like before: it has a generator that generates client-side libraries for all major languages.&lt;/li&gt;
&lt;li&gt;It uses HTTP/2 under the hood, so there is no need to write new code to adapt from HTTP/1 to HTTP/2. And if the protocol moves to HTTP/3 in the future, it will be the same story for the client side.&lt;/li&gt;
&lt;li&gt;The messaging format is protobuf (Protocol Buffers), so any client with a supported gRPC client library can communicate with any other.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The original video can be found here: &lt;a href="https://www.youtube.com/watch?v=u4LWEXDP7_M"&gt;This is why gRPC was invented&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>grpc</category>
      <category>http2</category>
      <category>http</category>
      <category>tcp</category>
    </item>
    <item>
      <title>Redis Based Lightweight Microservices</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Fri, 19 Mar 2021 05:41:10 +0000</pubDate>
      <link>https://dev.to/nixon1333/redis-based-lightweight-microservices-327f</link>
      <guid>https://dev.to/nixon1333/redis-based-lightweight-microservices-327f</guid>
      <description>&lt;p&gt;This is another blog post based on a YouTube video I watched. This whole post is a summarised version of the video along with my understanding of the use-cases. :) &lt;/p&gt;

&lt;h3&gt;Summary:&lt;/h3&gt;

&lt;p&gt;The main discussion is about how to use a single Redis instance to build a system of microservices.&lt;/p&gt;

&lt;h3&gt;Learning:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To build microservices we are going to need these things:

&lt;ul&gt;
&lt;li&gt;Service discovery&lt;/li&gt;
&lt;li&gt;Messaging&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;Health&lt;/li&gt;
&lt;li&gt;Presence&lt;/li&gt;
&lt;li&gt;Logging queuing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;How to &lt;strong&gt;share the key space&lt;/strong&gt; with all of the services. To do that we need to maintain a specific pattern when saving/retrieving keys in Redis. In the video the author used a pattern like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prefix:service_name:instance_unique_id:type&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hydra:service:order-svc:289347928374982:health:log&lt;/li&gt;
&lt;li&gt;hydra:service:order-svc:service&lt;/li&gt;
&lt;li&gt;hydra:service:order-svc:2893479223474982:presence&lt;/li&gt;
&lt;li&gt;hydra:service:order-svc:routes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way all of the information can be stored in a shared Redis, and all the other services can access/filter it without any hassle. For example, &lt;code&gt;service 1&lt;/code&gt; can create the routes of the order service, and &lt;code&gt;service 2&lt;/code&gt; can get those routes by accessing the &lt;strong&gt;order-svc routes&lt;/strong&gt; keys, without extra API calls or other machinery.&lt;/p&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Check whether a service is present. This is needed for load balancing, service discovery, routing, etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;So the idea is: when a service comes up, it creates a presence key with its instance id in the Redis key store, with a TTL. After the TTL the key expires. It is the service's responsibility to refresh the TTL or regenerate the presence key as needed.&lt;/li&gt;
&lt;li&gt;Using this, other services can find a service instance just by filtering the proper keys. For example, filtering on &lt;strong&gt;hydra:service:order-svc&lt;/strong&gt; will provide the list of instances.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service health check: a shared key, similar to presence, can be used to store a service's health every 5 seconds. This way the key can be checked from anywhere for per-service health and other information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service discovery: some customised queries can help you find service instances, instance presence, and routes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service routes: again, this can be achieved easily by querying the proper way. We can find all services' routes with a generic routes-type query, or get service-specific routes with a service routes query. We can easily store a list of routes under each service's keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load balancing: this can be built on top of the service discovery, presence, and routing features. Redis keys, lists, strings, etc. can be used for it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Messaging: we can use Redis pub/sub for messaging between microservices. Each service listens on two channels: one named after its own instance id and service name, and another named after the service alone. A normal message is published to the global service channel; a message meant for an individual instance is sent to the instance-id-specific channel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Queues: a Redis message queue can be a very easy solution for us. We can use Redis LPUSH and RPOPLPUSH along with service-specific queues. Publishers push data/messages into service-specific queues with LPUSH, then listeners consume them with the pop commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logging: Redis can be used as a distributed logging system, the same way as the queuing process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Config management: Redis can serve as shared config management across the whole system, storing config under service key prefixes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
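&lt;p&gt;The key pattern and queue ideas above can be sketched in a few lines of Python. This is a minimal stand-in only: the &lt;code&gt;service_key&lt;/code&gt; helper is hypothetical (not from the video), an in-memory dict fakes the Redis server, and a plain RPOP stands in for RPOPLPUSH:&lt;/p&gt;

```python
from collections import deque

def service_key(service_name, instance_id=None, suffix=None, prefix="hydra"):
    # Hypothetical helper mirroring the hydra-style pattern from the post:
    # prefix:service:service_name[:instance_id][:suffix]
    parts = [prefix, "service", service_name]
    if instance_id is not None:
        parts.append(instance_id)
    if suffix is not None:
        parts.append(suffix)
    return ":".join(parts)

# In-memory stand-in for the Redis list commands the post mentions,
# so the sketch runs without a server.
queues = {}

def lpush(key, message):
    # LPUSH appends on the left of the list.
    queues.setdefault(key, deque()).appendleft(message)

def rpop(key):
    # RPOP takes from the right, giving first-in/first-out per queue.
    q = queues.get(key)
    return q.pop() if q else None
```

&lt;p&gt;With this, &lt;code&gt;service_key("order-svc", suffix="queue")&lt;/code&gt; yields &lt;code&gt;hydra:service:order-svc:queue&lt;/code&gt;, and a publisher can LPUSH into it while a consumer RPOPs in arrival order.&lt;/p&gt;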

&lt;p&gt;The original video can be found here: &lt;a href="https://www.youtube.com/watch?v=z25CPqJMFUk"&gt;Building Lightweight Microservices Using Redis&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>redis</category>
      <category>architecture</category>
      <category>design</category>
    </item>
    <item>
      <title>5 Redis Use Cases - Redis Lab</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Sat, 27 Feb 2021 12:02:56 +0000</pubDate>
      <link>https://dev.to/nixon1333/5-redis-use-cases-redis-lab-3ck9</link>
      <guid>https://dev.to/nixon1333/5-redis-use-cases-redis-lab-3ck9</guid>
      <description>&lt;p&gt;So I was watching a YouTube video about Redis use cases in a specific scenario / system design. I am writing down my understanding of those use-cases.&lt;/p&gt;

&lt;h3&gt;Learning:&lt;/h3&gt;

&lt;p&gt;The key Redis use-cases covered in the video:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching&lt;/li&gt;
&lt;li&gt;Queuing&lt;/li&gt;
&lt;li&gt;Locking&lt;/li&gt;
&lt;li&gt;Throttling &lt;/li&gt;
&lt;li&gt;pubsub’ing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Summary:&lt;/h3&gt;

&lt;p&gt;They explained how they integrated Redis into their system at multiple layers while walking through their system's workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They &lt;code&gt;cached&lt;/code&gt; the db query results in Redis.&lt;/li&gt;
&lt;li&gt;When a user requests a query, it is &lt;code&gt;queued&lt;/code&gt; in Redis; a consumer consumes the request and starts processing the query.&lt;/li&gt;
&lt;li&gt;Before starting the query they &lt;code&gt;locked&lt;/code&gt; the user request so that further requests from that user are not processed by other consumers. If another consumer fails to &lt;code&gt;lock&lt;/code&gt; a query (meaning it is already locked), they &lt;code&gt;re-queue&lt;/code&gt; the query with a delay that follows the Fibonacci series in seconds.&lt;/li&gt;
&lt;li&gt;After they get a response from the db, they generate the &lt;code&gt;cache&lt;/code&gt; for the query and send the result back through Redis by publishing (&lt;code&gt;pubsub&lt;/code&gt;) it; on the other side, the web server was listening (&lt;code&gt;pubsub&lt;/code&gt;) for the response in Redis (it started listening right after it made the request to that service). When it gets the result back, it responds to the client.&lt;/li&gt;
&lt;/ul&gt;
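&lt;p&gt;The lock-then-re-queue flow above can be sketched like this. It is a rough illustration only: an in-memory set stands in for a Redis SETNX-style lock, and the function names are made up for the example:&lt;/p&gt;

```python
# In-memory stand-in for Redis, so the sketch runs without a server.
locks = set()

def acquire_lock(user_id):
    # SETNX-style: succeed only if nobody holds the lock yet.
    if user_id in locks:
        return False
    locks.add(user_id)
    return True

def release_lock(user_id):
    locks.discard(user_id)

def fib_backoff():
    # Yields 1, 1, 2, 3, 5, ... seconds: the growing re-queue delay
    # the talk describes for queries that fail to acquire the lock.
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b
```

&lt;p&gt;A consumer would try &lt;code&gt;acquire_lock&lt;/code&gt; before processing; on failure it re-queues the query with the next delay from &lt;code&gt;fib_backoff&lt;/code&gt; instead of processing it twice.&lt;/p&gt;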

&lt;p&gt;The original video can be found here: &lt;a href="https://www.youtube.com/watch?v=znjGckK8abw"&gt;5 Redis Use Cases with Gur Dotan&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>architecture</category>
      <category>design</category>
    </item>
    <item>
      <title>Clean Sentry On-Premise Database</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Mon, 22 Feb 2021 07:50:41 +0000</pubDate>
      <link>https://dev.to/nixon1333/clean-sentry-database-on-premise-28b</link>
      <guid>https://dev.to/nixon1333/clean-sentry-database-on-premise-28b</guid>
      <description>&lt;p&gt;So we track our error logs via a Sentry on-premise instance which we host ourselves. Recently it started to give an error: the disk space is running out!!!&lt;/p&gt;

&lt;h3&gt;WHY??&lt;/h3&gt;

&lt;p&gt;Apparently we needed to clear the database after a certain time. Otherwise it stacks up all of the error logs.&lt;/p&gt;

&lt;h3&gt;Solution??&lt;/h3&gt;

&lt;p&gt;We logged into our server. Sentry was running in Docker. We went to the Docker folder and ran this:&lt;br&gt;
&lt;code&gt;docker-compose exec worker bash&lt;/code&gt;&lt;br&gt;
After that, from the worker bash we ran:&lt;br&gt;
&lt;code&gt;sentry cleanup --days 30&lt;/code&gt;&lt;br&gt;
Basically this will clean up all the event data older than 30 days.&lt;/p&gt;

&lt;p&gt;After this we went inside the database by running these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker-compose exec postgres bash&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;psql -U postgres&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VACUUM FULL;&lt;/code&gt;
Point to be noted: &lt;code&gt;VACUUM FULL;&lt;/code&gt; will lock your db tables until the full vacuum is done.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Voila! Database and hard drive storage cleaned up! :)&lt;/p&gt;
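&lt;p&gt;To avoid running out of disk again, the cleanup can be scheduled. A hypothetical crontab entry (the checkout path and the retention period are assumptions about your deployment, not from our setup):&lt;/p&gt;

```shell
# Hypothetical: run the Sentry cleanup every Sunday at 03:00.
# Adjust /opt/sentry-onpremise and --days to match your deployment.
# -T disables TTY allocation so the command works non-interactively.
0 3 * * 0 cd /opt/sentry-onpremise; docker-compose exec -T worker sentry cleanup --days 30
```

&lt;p&gt;Note that &lt;code&gt;VACUUM FULL&lt;/code&gt; is deliberately left out of the schedule, since it locks the tables while it runs.&lt;/p&gt;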

</description>
      <category>devops</category>
      <category>devjournal</category>
      <category>docker</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Best Practices Working with Billion-row Tables in Databases</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Sun, 21 Feb 2021 13:13:48 +0000</pubDate>
      <link>https://dev.to/nixon1333/best-practices-working-with-billion-row-tables-in-databases-1pp8</link>
      <guid>https://dev.to/nixon1333/best-practices-working-with-billion-row-tables-in-databases-1pp8</guid>
      <description>&lt;p&gt;This whole post is about a &lt;a href="https://www.youtube.com/watch?v=wj7KEMEkMUE"&gt;video&lt;/a&gt; from Hussein Nasser that I saw on YouTube. This is just a summarised version of the video along with my key takeaways.&lt;/p&gt;

&lt;h2&gt;Learning:&lt;/h2&gt;

&lt;p&gt;The discussion covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to handle data in a billion-row table&lt;/li&gt;
&lt;li&gt;What kinds of approaches can be taken&lt;/li&gt;
&lt;li&gt;How to redesign the table to handle 2 billion rows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Summary:&lt;/h2&gt;

&lt;p&gt;Here the discussion starts with how a Twitter followers table could be designed. A simple approach: make a table recording which person follows whom, with rows of 2-3 columns. But if we used this for Twitter, it would become a huge table in the long term. So what can be done?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Do a normal query without the concept of &lt;code&gt;indexing&lt;/code&gt;, just &lt;code&gt;brute forcing&lt;/code&gt; through the data. Use multi-threading and multi-processing, and find the data in the table using lots of machines (&lt;code&gt;map reduce&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;indexing&lt;/code&gt; on the table and find the data using the index.&lt;/li&gt;
&lt;li&gt;Now you have billions of rows, so the index is huge; to search it, use database &lt;code&gt;partitioning&lt;/code&gt; on the same disk. Use pair partitioning.&lt;/li&gt;
&lt;li&gt;To optimize further, use &lt;code&gt;sharding&lt;/code&gt; across the system (multiple hosts). But it adds more &lt;code&gt;complexity&lt;/code&gt;: the client needs to know the shard info before querying, then find the proper &lt;code&gt;partition&lt;/code&gt;, then make the actual query. That creates another layer of logic on top of the business logic.&lt;/li&gt;
&lt;li&gt;Another way is to &lt;code&gt;redesign&lt;/code&gt; the system: in the profile table, add 2 more columns, such as follower count and followers (in JSON). That way a profile row holds all the follower information for a profile. The problem then becomes how to write/edit this data, but that is another kind of system design question (querying, CQRS, event-based solutions). It solves the current issue.&lt;/li&gt;
&lt;/ol&gt;
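&lt;p&gt;The routing overhead that sharding adds (step 4) can be illustrated with a tiny sketch. The shard count, the modulo scheme, and the table layout here are all assumptions for illustration; real systems often prefer consistent hashing so that resharding moves fewer keys:&lt;/p&gt;

```python
NUM_SHARDS = 4  # hypothetical cluster size

def shard_for(user_id):
    # Plain modulo mapping from a user id to a shard host.
    return user_id % NUM_SHARDS

def followers_query(user_id):
    # The client must resolve the shard before it can even issue the
    # query: this is the extra routing layer the video warns about.
    shard = shard_for(user_id)
    sql = f"SELECT followee_id FROM followers WHERE follower_id = {user_id}"
    return (shard, sql)
```

&lt;p&gt;Every caller now carries this routing logic (or a proxy does), which is exactly the added complexity weighed against options 1-3 above. In real code the id would be bound as a query parameter rather than interpolated into the SQL string.&lt;/p&gt;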

</description>
      <category>sql</category>
      <category>database</category>
      <category>architecture</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Circuit Breaker Pattern in Nutshell</title>
      <dc:creator>Nixon Islam</dc:creator>
      <pubDate>Sun, 21 Feb 2021 13:00:12 +0000</pubDate>
      <link>https://dev.to/nixon1333/circuit-breaker-pattern-in-nutshell-12ni</link>
      <guid>https://dev.to/nixon1333/circuit-breaker-pattern-in-nutshell-12ni</guid>
      <description>&lt;p&gt;Recently I was reading a blog on &lt;a href="https://martinfowler.com/bliki/CircuitBreaker.html"&gt;Circuit Breaker&lt;/a&gt; by Martin Fowler. I decided to summarise the content based on the blog and my experiences on this topic.&lt;/p&gt;

&lt;h2&gt;Learning and Summary:&lt;/h2&gt;

&lt;p&gt;In distributed systems, a service often needs to make remote service calls to get the data it needs. But this can lead to critical resource exhaustion and cascading failures across multiple systems if a remote service goes down and lots of service calls queue up until timeouts happen. These situations can be avoided easily by implementing the &lt;strong&gt;circuit breaker pattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;circuit breaker pattern&lt;/strong&gt; is simply this: if there are too many errors (based on a desired error threshold), terminate calls to the remote service immediately as they are received. Calls resume once communication with the remote service is restored; after the circuit breaker trips, the service's health should be checked periodically to detect when it becomes available again.&lt;/p&gt;

&lt;h2&gt;Basic Example:&lt;/h2&gt;

&lt;p&gt;Let's say we implement a circuit breaker on a remote service call. Whenever a request needs to be executed, it is sent to a queue. A consumer later reads the queue, executes the service calls one by one or across multiple threads, and responds back.&lt;/p&gt;

&lt;p&gt;So in this situation we set a rule like this: if 20 service calls in the last 60 seconds timed out or got network/gateway errors (5xx errors, 422, etc.), we enable the circuit breaker for 60 seconds.&lt;/p&gt;

&lt;p&gt;That means for the next 60 seconds no service calls will be executed, and the consumer will not consume any service requests from the queue.&lt;/p&gt;

&lt;p&gt;In the background a worker starts checking the health status of the remote service every 5 seconds, right after the circuit breaker is enabled. Once it gets a healthy response it keeps monitoring for 20 more seconds; after that it resets the circuit breaker, or extends the circuit breaker timeout by another 60 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefit:&lt;/strong&gt; assume we had a million service requests in the queue and the consumer has 50 threads to process it. Without the circuit breaker it would try to execute all the requests even while the remote service was down or erroring. With it, right after we hit the error threshold for the service calls, we start holding the requests until the service comes back online and responds smoothly. This makes for smooth interactions between distributed services and less error handling.&lt;/p&gt;
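&lt;p&gt;A stripped-down version of the pattern can look like this in Python. It is a sketch, not the implementation from Fowler's post: for simplicity the cooldown is counted in rejected calls instead of seconds, and all the names are made up:&lt;/p&gt;

```python
class CircuitBreaker:
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half-open"

    def __init__(self, error_threshold=3, cooldown_calls=5):
        # error_threshold: consecutive failures before the breaker trips.
        # cooldown_calls: calls rejected while open before we probe again
        # (a real implementation would use a time window instead).
        self.error_threshold = error_threshold
        self.cooldown_calls = cooldown_calls
        self.failures = 0
        self.rejected = 0
        self.state = self.CLOSED

    def call(self, func):
        if self.state == self.OPEN:
            # Reject immediately instead of hitting the dead service.
            self.rejected += 1
            if self.rejected == self.cooldown_calls:
                self.state = self.HALF_OPEN  # let one probe through next time
            raise RuntimeError("circuit open: call rejected")
        try:
            result = func()
        except Exception:
            self.failures += 1
            # A failed probe, or hitting the threshold, (re)opens the circuit.
            if self.state == self.HALF_OPEN or self.failures == self.error_threshold:
                self.state = self.OPEN
                self.rejected = 0
            raise
        # Any success closes the circuit and clears the failure count.
        self.failures = 0
        self.state = self.CLOSED
        return result
```

&lt;p&gt;The consumer wraps each remote call in &lt;code&gt;call(...)&lt;/code&gt;; once the threshold is reached, requests fail fast until a half-open probe succeeds.&lt;/p&gt;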

&lt;p&gt;&lt;strong&gt;Sample Package to Check:&lt;/strong&gt; I checked one of the Python packages for &lt;a href="https://github.com/fabfuel/circuitbreaker"&gt;circuit breakers&lt;/a&gt;. Though I did not use it personally, it seems to have some functionality out of the box. Will definitely try it later.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>design</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
