<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmed Gulab Khan</title>
    <description>The latest articles on DEV Community by Ahmed Gulab Khan (@ahmedgulabkhan).</description>
    <link>https://dev.to/ahmedgulabkhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F453458%2F1e2b99e4-dab3-48f0-a51b-92d5df8fcabf.jpeg</url>
      <title>DEV Community: Ahmed Gulab Khan</title>
      <link>https://dev.to/ahmedgulabkhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahmedgulabkhan"/>
    <language>en</language>
    <item>
      <title>A Fundamental Guide To Docker For Beginners</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Fri, 18 Mar 2022 16:16:23 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/a-fundamental-guide-to-docker-for-beginners-55m6</link>
      <guid>https://dev.to/ahmedgulabkhan/a-fundamental-guide-to-docker-for-beginners-55m6</guid>
      <description>&lt;p&gt;In my &lt;a href="https://medium.com/codex/what-is-docker-and-why-do-we-need-it-7dedc616366e" rel="noopener noreferrer"&gt;previous article&lt;/a&gt;, I had gone over what is Docker and why do we need it. I recommend you go over that article to understand what Docker is and the need to containerize your applications using Docker&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1shinhhgg3ifokjrsi43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1shinhhgg3ifokjrsi43.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;

&lt;p&gt;To begin with, &lt;strong&gt;Docker&lt;/strong&gt; is a tool used for developing, shipping, and running applications inside loosely coupled, isolated environments called &lt;strong&gt;containers&lt;/strong&gt;. Containers are lightweight processes that include all the dependencies and libraries your application needs to run, without interfering with the other containers running on the same host machine. The advantage of using containers is that your application runs the same way on any machine, irrespective of the host OS or any conflicting dependencies on the host machine.&lt;/p&gt;

&lt;p&gt;Now, let's discuss the main components that Docker comprises.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Docker Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Docker Engine&lt;/strong&gt;&lt;br&gt;
The &lt;strong&gt;Docker Engine&lt;/strong&gt; is the core containerization technology responsible for creating and managing all the containers and other Docker objects. It acts as a client-server application consisting of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Docker Daemon&lt;/strong&gt;, which is a daemon process that runs in the background, keeps listening for any API requests and manages the Docker objects accordingly&lt;/li&gt;
&lt;li&gt;A set of &lt;strong&gt;APIs&lt;/strong&gt; to communicate with the Docker daemon&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Docker CLI client&lt;/strong&gt;, which lets users communicate with the Docker daemon and carries out the user's requests using these Docker APIs&lt;/li&gt;
&lt;/ul&gt;
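To see this client-server split in practice, the CLI can query the daemon through the API; a quick sketch, assuming Docker is installed and the daemon is running:

```shell
# 'docker version' reports the client (CLI) and the server (daemon)
# versions separately, making the client-server split visible
docker version

# Ask the daemon for system-wide information: containers, images,
# storage driver, and so on
docker info
```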
&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqs5tpdsa9u6w3pdzd0gg.png" alt="Image description"&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Docker Objects
&lt;/h3&gt;

&lt;p&gt;All the files and individual components managed by the Docker Engine, like images, containers, Dockerfiles, volumes, and networks, are called objects in the Docker context. Let's discuss each of these objects and understand what they represent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;: A Dockerfile is a text file containing the instructions and commands used to build a Docker image. The Dockerfile is first used to build an image, which is then used to create the container itself. A simple Dockerfile looks something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo","Image created"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Docker Image&lt;/strong&gt;: A Docker image is a template that captures the set of instructions and commands from the Dockerfile, and this in turn is used to create the actual container. We can use the same Docker image to create multiple containers on the same host, provided any ports they publish on the host differ&lt;br&gt;
&lt;strong&gt;Docker Container&lt;/strong&gt;: A Docker container is the actual running instance created from a Docker image. It packages all the dependencies and libraries that an application needs and runs it in loosely coupled isolation.&lt;/p&gt;

&lt;p&gt;Now that we understand what Dockerfiles, images, and containers are, let's walk through what the Dockerfile above does&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM ubuntu&lt;/code&gt;: This statement tells Docker which base image to use while building your current image. In this example, we are using the ubuntu image, which is pulled from the configured Docker registry while building the image. You must specify a FROM instruction to tell Docker which base image your image is built on&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN apt-get update&lt;/code&gt;: Runs the update command on the ubuntu system&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN apt-get install -y nginx&lt;/code&gt;: Installs the nginx server on the system&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CMD ["echo","Image created"]&lt;/code&gt;: Prints "Image created" message on the terminal&lt;/li&gt;
&lt;/ul&gt;
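Assuming the Dockerfile above is saved in its own directory, building the image and running a container from it would look roughly like this (the image name my-nginx-image is just an illustrative choice):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it 'my-nginx-image' (an illustrative name)
docker build -t my-nginx-image .

# Create and start a container from the image;
# the CMD instruction prints "Image created"
docker run my-nginx-image
```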

&lt;p&gt;&lt;strong&gt;Docker Volume&lt;/strong&gt;: By default, the data related to a container is not persisted when the container is terminated or restarted. Volumes store container data on the host machine's file system, where it is managed by Docker&lt;br&gt;
&lt;strong&gt;Docker Network&lt;/strong&gt;: For individual containers to communicate with each other or with the host machine, Docker provides networking. The available network drivers are bridge (the default), host, overlay, ipvlan, macvlan, and none (which disables all networking)&lt;/p&gt;
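As a sketch of how these two objects are used from the CLI (the volume, network, and container names here are illustrative):

```shell
# Named volume: data written to /data survives container removal
docker volume create app-data
docker run --rm -v app-data:/data ubuntu bash -c "echo hello > /data/greeting.txt"

# User-defined bridge network: containers attached to it can
# reach each other by container name
docker network create my-bridge
docker run -d --name web --network my-bridge nginx
```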




&lt;h3&gt;
  
  
  Docker Registries
&lt;/h3&gt;

&lt;p&gt;Docker registries are where you store your Docker images. Docker's official registry, Docker Hub, is a public registry that allows any user to pull or push images. It is also possible to create your own private registry. Docker Hub is set as the default registry when you first install Docker, so the pull and push commands you run to download or upload images use Docker Hub&lt;/p&gt;
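For example, pulling an image from Docker Hub and pushing your own copy might look like this (your-username is a placeholder for a real Docker Hub account, and the v1 tag is illustrative):

```shell
# Pull the official nginx image from Docker Hub (the default registry)
docker pull nginx

# Re-tag it under your own namespace, then push it to Docker Hub
docker tag nginx your-username/nginx:v1
docker push your-username/nginx:v1
```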




&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;So you have created a few Dockerfiles, built images from them, and run containers using those images. With this approach, however, it becomes difficult to manage all the containers, their configuration, and the network links between individual containers. This is where Docker Compose comes into the picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; is a container orchestration tool used to run multi-container applications on a single host machine. Docker Compose lets users start multiple dependent containers with a single file and a single command. This file, usually written in YAML, holds all the configuration, network links, volumes, and port mappings for the containers your application requires&lt;/p&gt;
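A minimal sketch of such a file, assuming a hypothetical app made up of an nginx frontend and a Postgres database (service names, the port mapping, and the password are illustrative), started with a single `docker-compose up`:

```yaml
# docker-compose.yml: two dependent containers started together
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"    # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # persist the database
volumes:
  db-data:
```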




&lt;h3&gt;
  
  
  Docker Swarm
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Docker Swarm&lt;/strong&gt;, like Docker Compose, is a container orchestration tool, but it simplifies managing containers that run across multiple host machines. This group of machines is called a cluster, and each machine in it needs to have Docker running. The individual machines that make up the cluster are referred to as nodes, and all the activities of the cluster are controlled by a node referred to as the swarm manager&lt;/p&gt;
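A rough sketch of the basic Swarm workflow (the service name and replica count are illustrative; the join token and manager address are placeholders printed by swarm init):

```shell
# Turn the current machine into a swarm manager
docker swarm init

# On each additional machine (node), join the cluster using the
# token and address printed by 'docker swarm init':
#   docker swarm join --token <worker-token> <manager-ip>:2377

# From the manager, run a service with 3 replicas spread across the cluster
docker service create --name web --replicas 3 nginx
```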

&lt;p&gt;More Docker articles you can check out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/codex/what-is-docker-and-why-do-we-need-it-7dedc616366e" rel="noopener noreferrer"&gt;What Is Docker And Why Do We Need It?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;https://docs.docker.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tutorialspoint.com/docker/index.htm" rel="noopener noreferrer"&gt;https://www.tutorialspoint.com/docker/index.htm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Follow for more articles related to Docker and Software Engineering in general :)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>What Is Docker And Why Do We Need It?</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Thu, 17 Mar 2022 16:42:21 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/what-is-docker-and-why-do-we-need-it-5dje</link>
      <guid>https://dev.to/ahmedgulabkhan/what-is-docker-and-why-do-we-need-it-5dje</guid>
      <description>&lt;p&gt;In this article, we shall be going through a brief introduction to what Docker is and why we need it. But first, let’s define the problem statement and see how Docker solves it&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem Statement
&lt;/h3&gt;

&lt;p&gt;Let’s say you’re working on an application that uses multiple technologies for its different components: the frontend, backend, database, and so on. Everything seems fine, but while working with all these technologies on your machine, you often hit a problem where one technology needs one version of a dependency and another technology needs a different version to work. On top of that, your application may not work the same way on someone else’s machine, or it may fail to run when you deploy it to environments with a different OS or hardware.&lt;/p&gt;

&lt;p&gt;To solve these frequent problems we use something called virtualization. Let’s discuss this in more detail&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution: Virtualization
&lt;/h3&gt;

&lt;p&gt;Virtualization is a process whereby software is used to create an abstraction layer over computer hardware, allowing the hardware of a single computer to be divided into multiple virtual computers. The main idea of virtualization is to isolate the components of our application and their dependencies into individual self-contained units that can run anywhere without dependency or OS conflicts.&lt;/p&gt;

&lt;p&gt;Now that we know what virtualization means, we can use it to solve the problem stated above. There are two ways to achieve this by leveraging the concept of virtualization:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Virtual Machines&lt;/strong&gt;&lt;br&gt;
A Virtual Machine is essentially an emulation of a real computer that executes programs like a real computer. Virtual machines run on top of a physical machine using a Hypervisor. A &lt;strong&gt;hypervisor&lt;/strong&gt;, in turn, runs on a host machine.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Hypervisor&lt;/strong&gt; is a piece of software, firmware, or hardware that a virtual machine runs on top of. The host machine provides the virtual machines with resources, including RAM and CPU. These resources are divided between virtual machines and can be distributed based on the applications that run on individual virtual machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ui80tlk3lvzk8lj3pgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ui80tlk3lvzk8lj3pgi.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Even though virtual machines solve the problem by running our applications in isolation, each with its own set of dependencies, libraries, and OS requirements, the main issue is that they’re very heavy: each virtual machine has its own guest operating system, so it consumes a larger share of the host machine’s resources and takes a long time to create and start. This is where containers come to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Containers&lt;/strong&gt;&lt;br&gt;
Containers are a lightweight, more agile way of handling virtualization. Since they don’t rely on a hypervisor, you get faster resource provisioning and speedier availability of new applications. Unlike virtual machines, which provide hardware-level virtualization, containers provide operating-system-level virtualization, which makes them much simpler to work with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2aw1qzg7s1ldc31k9g3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2aw1qzg7s1ldc31k9g3.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Since containers are lightweight, they consume far fewer host machine resources than virtual machines. You can easily share containers as you work, and be sure that everyone you share with gets the same container, working the same way, regardless of dependency version conflicts or the OS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Now, What is Docker?
&lt;/h3&gt;

&lt;p&gt;Docker is a tool that helps in developing, shipping, and running your applications in containers, and it separates your applications from your infrastructure so you can deliver software quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqj1gj8pukomegsy5lnn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqj1gj8pukomegsy5lnn.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
It provides the ability to package and run an application in an isolated environment, i.e., a container. This isolation and security allows you to run many containers simultaneously on a given host.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some basic Docker terminology
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Docker Engine&lt;/strong&gt;&lt;br&gt;
The Docker Engine is the core containerization technology responsible for creating and managing all the containers and other Docker objects. It acts as a client-server application consisting of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Docker Daemon&lt;/strong&gt;, which is a daemon process that runs in the background, keeps listening for any API requests and manages the Docker objects accordingly&lt;/li&gt;
&lt;li&gt;A set of &lt;strong&gt;APIs&lt;/strong&gt; in order to communicate with the Docker daemon&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Docker CLI client&lt;/strong&gt;, which lets users communicate with the Docker daemon and carries out the user’s requests using these Docker APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Image&lt;/strong&gt;&lt;br&gt;
A Docker image is just a template containing a set of instructions or commands that are used to create the actual container. We can use the same Docker image to create multiple Docker containers&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Container&lt;/strong&gt;&lt;br&gt;
A Docker container is the actual instance based on an image; it packages all the dependencies and libraries that an application needs and runs it in loosely coupled isolation&lt;/p&gt;
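To make the image/container distinction concrete, a quick sketch using the CLI (the container name my-container is illustrative):

```shell
# Images are the templates available locally
docker images

# A container is a running instance created from an image
docker run -d --name my-container ubuntu sleep 60

# List the containers currently running
docker ps
```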

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-started/overview/" rel="noopener noreferrer"&gt;https://docs.docker.com/get-started/overview/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ibm.com/cloud/blog/containers-vs-vms" rel="noopener noreferrer"&gt;https://www.ibm.com/cloud/blog/containers-vs-vms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opensource.com/resources/virtualization" rel="noopener noreferrer"&gt;https://opensource.com/resources/virtualization&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Follow for more articles related to Docker and Software Engineering in general :)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>virtualization</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>9 Insanely Helpful Kafka Commands Every Backend Developer Must Know</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Fri, 11 Mar 2022 14:15:01 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/9-insanely-helpful-kafka-commands-every-developer-must-know-1246</link>
      <guid>https://dev.to/ahmedgulabkhan/9-insanely-helpful-kafka-commands-every-developer-must-know-1246</guid>
      <description>&lt;p&gt;In this article, I'm going to list out the most popular Kafka CLI commands you should know as a Developer. Before we begin, I'd recommend you to go over &lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6"&gt;this article&lt;/a&gt; in order to get a brief understanding of what Kafka is and how it works.&lt;/p&gt;

&lt;p&gt;If you want to set up Kafka locally, you can check out &lt;a href="https://medium.com/towardsdev/3-simple-steps-to-set-up-kafka-locally-using-docker-b07f71f0e2c9"&gt;this article&lt;/a&gt;, which helps you set up both Kafka and Zookeeper locally in just 3 simple steps. For the commands below, I'm going to use the same Kafka setup mentioned in that article, which means the Kafka version used here is &lt;strong&gt;0.10.1.0&lt;/strong&gt;, built for Scala version &lt;strong&gt;2.11&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now without any further delay, let's go through the list of commands&lt;/p&gt;

&lt;h3&gt;
  
  
  1. List all the Kafka topics
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --list --zookeeper localhost:2181
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Create a topic
&lt;/h3&gt;

&lt;p&gt;Creates a Kafka topic named my-first-kafka-topic with both partitions and replication factor set to 1&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --create --topic my-first-kafka-topic --zookeeper localhost:2181 --partitions 1 --replication-factor 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For simplicity, I have set the partitions and the replication factor as 1 for the topic, but you can always play around with this configuration. If you don't know what &lt;strong&gt;partitions&lt;/strong&gt; and &lt;strong&gt;replication factor&lt;/strong&gt; mean in the Kafka context, I'd recommend you to go through &lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6"&gt;this article&lt;/a&gt; in order to get a good understanding&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Describe a topic
&lt;/h3&gt;

&lt;p&gt;Describes the topic mentioned in the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-first-kafka-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Update topic Configuration
&lt;/h3&gt;

&lt;p&gt;Updates the configuration of the mentioned topic (&lt;strong&gt;my-first-kafka-topic&lt;/strong&gt; in this case). Here, we set cleanup.policy to compact, compression.type to gzip, and retention.ms to 3600000&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-first-kafka-topic --add-config cleanup.policy=compact,compression.type=gzip,retention.ms=3600000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Delete a topic
&lt;/h3&gt;

&lt;p&gt;Deletes the topic mentioned in the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-first-kafka-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Produce messages to a topic
&lt;/h3&gt;

&lt;p&gt;Opens a prompt where you can type any message and hit enter to publish it to the mentioned topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-first-kafka-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can keep typing messages, hitting enter after each one to publish them sequentially. When you're done, exit using &lt;code&gt;Ctrl+C&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Consume messages from a topic
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Consume messages from the mentioned topic
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-kafka-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command only consumes messages published after the instant the command is executed; none of the previous messages are consumed. When you do not specify a Consumer Group, Kafka automatically creates a random one for the consumer, named &lt;strong&gt;console-consumer-&lt;/strong&gt; followed by a random number. Every time you run the above command, a new random consumer group is created for the consumer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consume messages from the mentioned topic where the consumer belongs to the mentioned Consumer Group (my-first-consumer-group in this case)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-kafka-topic --consumer-property group.id=my-first-consumer-group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. Consume messages from a topic from the beginning
&lt;/h3&gt;

&lt;p&gt;From the above point, we see that whether our consumer belongs to a random consumer group or one we created ourselves, messages are only consumed from the instant the consumer starts; no previous messages are consumed. The &lt;code&gt;--from-beginning&lt;/code&gt; argument ensures that when a consumer group is created, it starts consuming all the messages from the beginning.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consume messages from the mentioned topic from the beginning
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-kafka-topic --from-beginning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: When we run the above command, a random consumer group is created, our consumer joins it, and the messages are consumed from the beginning. If we run the command again, yet another random consumer group is created, and all the messages are consumed from the beginning once more. If we only want the messages replayed the first time the consumer starts (i.e., when its consumer group is first created), we have to keep using the same consumer group in the above command each time we start the consumer. That way, the consumer reads from the beginning only on its first start; when it is restarted, it resumes from the last committed offset rather than the very beginning. So it is important to specify a consumer group: without one, a fresh random group is created on every run, and all the messages are consumed from the very beginning every single time&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consume messages from the mentioned topic where the consumer belongs to the mentioned Consumer Group
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-kafka-topic --consumer-property group.id=my-first-consumer-group --from-beginning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  9. List all the Consumer Groups
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
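Once you know a group's name, you can also describe it to see its per-partition offsets; a sketch using the consumer group from the earlier examples (depending on your Kafka version, you may additionally need the --new-consumer flag with this tool):

```shell
# Show current offset, log-end offset, and lag for each partition
# consumed by the group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-first-consumer-group
```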



&lt;p&gt;More Kafka articles that you can go through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6"&gt;Apache Kafka: A Basic Intro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/javarevisited/kafka-partitions-and-consumer-groups-in-6-mins-9e0e336c6c00"&gt;Kafka Partitions and Consumer Groups in 6 mins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/towardsdev/3-simple-steps-to-set-up-kafka-locally-using-docker-b07f71f0e2c9"&gt;3 Simple steps to set up Kafka locally using Docker&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Follow for the next Kafka blog in the series. I shall also be posting more articles talking about Software engineering concepts.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>apachekafka</category>
      <category>kafkacli</category>
      <category>kafkacheatsheet</category>
    </item>
    <item>
      <title>3 Simple steps to set up Kafka locally using Docker</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Fri, 04 Mar 2022 12:09:34 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/3-simple-steps-to-set-up-kafka-locally-using-docker-459l</link>
      <guid>https://dev.to/ahmedgulabkhan/3-simple-steps-to-set-up-kafka-locally-using-docker-459l</guid>
      <description>&lt;p&gt;In this article, let’s go over how you can set up Kafka and have it running on your local environment. For this, make sure that you have Docker installed on your machine.&lt;/p&gt;

&lt;p&gt;If you want to get a brief understanding of Kafka and how it works, I’d recommend you go through &lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6" rel="noopener noreferrer"&gt;this article&lt;/a&gt;, as it’d help you get a good understanding of Kafka and some basic Kafka terminology.&lt;/p&gt;

&lt;p&gt;Now, let’s get started with setting up Kafka locally using Docker&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femv7evqm1x6ct9q66nu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femv7evqm1x6ct9q66nu0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Download and Install Kafka:
&lt;/h2&gt;

&lt;p&gt;With Docker installed, you can follow the below steps in order to download the &lt;strong&gt;spotify/kafka&lt;/strong&gt; image on your machine and run the image as a docker container&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download spotify/kafka image using docker
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull spotify/kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the docker container using the downloaded image
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 2181:2181 -p 9092:9092 --name kafka-docker-container --env ADVERTISED_HOST=127.0.0.1 --env ADVERTISED_PORT=9092 spotify/kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above steps (a) download the spotify/kafka image, and (b) use this image to create a Docker container named &lt;strong&gt;kafka-docker-container&lt;/strong&gt; (you can name the container anything you prefer) with ports 2181 and 9092 of your machine mapped to ports 2181 and 9092 of the container.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;One nice thing about the spotify/kafka docker image is that it comes with both Kafka and Zookeeper configured in the same image, so you don’t have to worry about having to configure and start Kafka and Zookeeper separately.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With that, you have Kafka running on your machine and open to listening and storing messages from producers which can then be consumed by the consumers.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Create your very first Kafka topic:
&lt;/h2&gt;

&lt;p&gt;Since there are currently no topics on the Kafka broker, you can go ahead and create one to see if everything works as expected&lt;/p&gt;

&lt;p&gt;To use the Kafka CLI, you have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open your terminal and exec inside the kafka container
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it kafka-docker-container bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once in the container, go to the path below
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /opt/kafka_2.11-0.10.1.0/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, 2.11 is the Scala version and 0.10.1.0 is the Kafka version used by the spotify/kafka Docker image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a topic
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --create --topic my-first-kafka-topic --zookeeper localhost:2181 --partitions 1 --replication-factor 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command creates your first topic, named &lt;strong&gt;my-first-kafka-topic&lt;/strong&gt;, in your Kafka container.&lt;/p&gt;

&lt;p&gt;For simplicity, I have set both the partition count and the replication factor to 1 for the topic, but you can always play around with this configuration. If you don’t know what &lt;strong&gt;partitions&lt;/strong&gt; and &lt;strong&gt;replication factor&lt;/strong&gt; mean in the Kafka context, I’d recommend going through &lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6" rel="noopener noreferrer"&gt;this article&lt;/a&gt; to get a good understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. List all the Kafka topics:
&lt;/h2&gt;

&lt;p&gt;After you’ve created the topic as mentioned above, you can run the command below to list all the topics present on your locally running Kafka container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --list --zookeeper localhost:2181
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if everything goes well, you should be able to see the topic you just created being listed after you run the above command.&lt;/p&gt;

&lt;p&gt;More Kafka articles that you can go through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6" rel="noopener noreferrer"&gt;Apache Kafka: A Basic Intro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@ahmedgulabkhan/kafka-partitions-and-consumer-groups-in-6-mins-9e0e336c6c00" rel="noopener noreferrer"&gt;Kafka Partitions and Consumer Groups in 6 mins&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Follow for the next Kafka blog in the series. I shall also be posting more articles talking about Software engineering concepts.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>apachekafka</category>
      <category>kafkadocker</category>
      <category>kafkaconsumer</category>
    </item>
    <item>
      <title>Kafka Partitions and Consumer Groups</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Mon, 28 Feb 2022 10:34:58 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/kafka-partitions-and-consumer-groups-2aff</link>
      <guid>https://dev.to/ahmedgulabkhan/kafka-partitions-and-consumer-groups-2aff</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/ahmedgulabkhan/basic-kafka-terminology-43e6"&gt;previous article&lt;/a&gt;, we had discussed how Kafka works and went through some basic Kafka terminology. In this article we would go over how Partitions and Consumer Groups work in Kafka.&lt;/p&gt;

&lt;p&gt;If you haven’t gone through my &lt;a href="https://dev.to/ahmedgulabkhan/basic-kafka-terminology-43e6"&gt;previous article&lt;/a&gt; or if you’re new to Kafka, I recommend going through it, as it’ll help you get a basic understanding of how Kafka works.&lt;/p&gt;

&lt;p&gt;You can find the complete article with some common Q&amp;amp;A's &lt;a href="https://medium.com/@ahmedgulabkhan/kafka-partitions-and-consumer-groups-in-6-mins-9e0e336c6c00" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is a Partition?
&lt;/h2&gt;

&lt;p&gt;Before talking about partitions we need to understand what a &lt;strong&gt;topic&lt;/strong&gt; is. In Kafka, a topic is basically a storage unit where all the messages sent by the producer are stored. Generally, similar data is stored in individual topics. For example, you can have a topic named “user” where you only store the details of your users, or you can have a topic named “payments” where you only store all the payment related details. A topic can be further subdivided into multiple storage units and these subdivisions of a topic are known as &lt;strong&gt;partitions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By default a topic is created with only 1 partition and whatever messages are published to this topic are stored in that partition. If you configure a topic to have multiple partitions then the messages sent by the producers would be stored in these partitions such that no two partitions would have the same message/event.&lt;/p&gt;
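&lt;p&gt;&lt;em&gt;To build some intuition for this, here is a minimal Python sketch (a simulation only, not actual Kafka code) of key-based partitioning, similar in spirit to Kafka’s default partitioner: messages with the same key always land in the same partition, and every message is stored in exactly one partition.&lt;/em&gt;&lt;/p&gt;

```python
# Simulation only: distribute messages across partitions by hashing
# the message key, as Kafka's default partitioner does in spirit
# (Kafka uses murmur2 on the key bytes; hash() stands in for it here).
NUM_PARTITIONS = 3

def pick_partition(key, num_partitions=NUM_PARTITIONS):
    return hash(key) % num_partitions

partitions = {p: [] for p in range(NUM_PARTITIONS)}

events = [("user-1", "signed_up"), ("user-2", "signed_up"),
          ("user-1", "made_payment"), ("user-3", "signed_up")]

for key, value in events:
    partitions[pick_partition(key)].append((key, value))

# Both "user-1" events end up in the same partition, and no event
# is duplicated across partitions.
print(partitions[pick_partition("user-1")])
```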

&lt;p&gt;Each partition in a topic also has its own offsets (if you don’t know what an offset is, I recommend checking out this article, where I have discussed it)&lt;/p&gt;

&lt;p&gt;As an example, a producer producing messages to a Kafka topic with 3 partitions would look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bqdcyxmneodi7zhdpfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bqdcyxmneodi7zhdpfg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, what is a Consumer Group?
&lt;/h2&gt;

&lt;p&gt;A bunch of consumers can form a group in order to cooperate and consume messages from a set of topics. This grouping of consumers is called a &lt;strong&gt;Consumer Group&lt;/strong&gt;. If two consumers have subscribed to the same topic and are present in the same consumer group, then these two consumers would be assigned different sets of partitions, and neither of them would receive the same messages.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Consumer Groups can help attain a higher consumption rate, if multiple consumers are consuming from the same topic.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, let’s go through a few scenarios to better understand the above concepts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1:&lt;/strong&gt; Let’s say we have a topic with 4 partitions and 1 consumer group consisting of only 1 consumer. The consumer has subscribed to the TopicT1 and is assigned to consume from all the partitions. This scenario can be depicted by the picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofc66c71w8dd8pephml3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofc66c71w8dd8pephml3.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Scenario 2:&lt;/strong&gt; Now let’s consider we have 2 consumers in our consumer group. These 2 consumers would be assigned different partitions: Consumer1 reads from partitions 0 and 2, and Consumer2 reads from partitions 1 and 3.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Kafka assigns the partitions of a topic to the consumers in a consumer group, so that each partition is consumed by exactly one consumer in the consumer group. Kafka guarantees that a message is only ever read by a single consumer within the consumer group.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since the messages stored in individual partitions of the same topic are different, the two consumers would never read the same message, thereby avoiding the same messages being consumed multiple times at the consumer side. This scenario can be depicted by the picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnw3qoidc59lpjw1oh56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnw3qoidc59lpjw1oh56.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;But, what if the number of consumers in a consumer group is more than the number of partitions? Check out Scenario 3&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3:&lt;/strong&gt; Let’s say we have 5 consumers in the consumer group, which is more than the number of partitions of TopicT1. In this case, four consumers would each be assigned a single partition, and the remaining consumer (Consumer5) would be left idle. This scenario can be depicted by the picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumhvu7j7qd4wt8vwlxv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumhvu7j7qd4wt8vwlxv4.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Okay, and what if you want multiple consumers to read from the same partition? Check out Scenario 4&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 4:&lt;/strong&gt; If you want to assign multiple consumers to read from the same partition, then you can add these consumers to different consumer groups, and have both of these consumer groups subscribed to the TopicT1. Here, the messages from Partition0 of TopicT1 are read by Consumer1 of ConsumerGroup1 and Consumer1 of ConsumerGroup2. This scenario can be depicted by the picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghuffw7aul0fwo9fuphj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghuffw7aul0fwo9fuphj.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
You can check out my previous article &lt;a href="https://dev.to/ahmedgulabkhan/basic-kafka-terminology-43e6"&gt;Apache Kafka: Basic Terminology&lt;/a&gt;&lt;/p&gt;
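&lt;p&gt;&lt;em&gt;The four scenarios above can be captured in a few lines of Python. This is a simulation for intuition only, not Kafka’s actual assignment algorithm (Kafka ships several strategies, such as range and round-robin); the round-robin spread below happens to reproduce the assignments in the pictures.&lt;/em&gt;&lt;/p&gt;

```python
# Simulation for intuition only (not Kafka's actual assignor):
# spread the partitions of TopicT1 over the consumers of a group,
# round-robin. Each partition goes to exactly one consumer per group.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

topic_t1 = [0, 1, 2, 3]  # 4 partitions

# Scenario 1: a single consumer gets every partition.
print(assign(topic_t1, ["Consumer1"]))

# Scenario 2: two consumers split the partitions; none is shared.
# Consumer1 gets [0, 2], Consumer2 gets [1, 3].
print(assign(topic_t1, ["Consumer1", "Consumer2"]))

# Scenario 3: five consumers, four partitions; Consumer5 sits idle.
print(assign(topic_t1, ["Consumer1", "Consumer2", "Consumer3",
                        "Consumer4", "Consumer5"]))

# Scenario 4: two different groups each receive the full partition
# set, so one partition is read by a consumer from each group.
group1 = assign(topic_t1, ["Consumer1", "Consumer2"])
group2 = assign(topic_t1, ["Consumer1"])
```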

&lt;p&gt;&lt;em&gt;Follow me for the next Kafka blog in the series. I shall also be posting more articles talking about Software engineering concepts.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>apachekafka</category>
      <category>kafkapartitions</category>
      <category>kafkaconsumergroups</category>
    </item>
    <item>
      <title>Apache Kafka: Basic terminology</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Sat, 26 Feb 2022 14:46:56 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/basic-kafka-terminology-43e6</link>
      <guid>https://dev.to/ahmedgulabkhan/basic-kafka-terminology-43e6</guid>
      <description>&lt;p&gt;In this blog post I’ll be giving a brief and basic introduction regarding &lt;em&gt;Apache Kafka&lt;/em&gt; and the terminology that would be necessary to know in order to get started with Kafka.&lt;/p&gt;

&lt;p&gt;You can check out the full article &lt;a href="https://medium.com/@ahmedgulabkhan/a-basic-introduction-to-kafka-a7d10a7776e6"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka — What is it?
&lt;/h2&gt;

&lt;p&gt;In a nutshell, Kafka is a distributed system that allows multiple services to communicate with each other via its queue-based architecture. With that out of the way, let’s get to know some basic Kafka terminology. Let’s get started ;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f6pS_AFj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8d7ux4r9fqqbb5ub4bqq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f6pS_AFj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8d7ux4r9fqqbb5ub4bqq.jpg" alt="Image description" width="880" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Broker&lt;/strong&gt;: A &lt;em&gt;Broker&lt;/em&gt; is a server which has Kafka running on it and is responsible for the communication between multiple services. Multiple brokers would form a Kafka cluster.&lt;br&gt;
&lt;strong&gt;Event&lt;/strong&gt;: The messages that are produced to or consumed from the Kafka broker are called &lt;em&gt;events&lt;/em&gt;. These messages are stored in the form of bytes in the broker’s disk storage.&lt;br&gt;
&lt;strong&gt;Producer and Consumer&lt;/strong&gt;: The services that produce these events to Kafka broker are referred to as &lt;em&gt;Producers&lt;/em&gt; and those which consume these events are referred to as &lt;em&gt;Consumers&lt;/em&gt;. It could also be possible that the same service can both produce and consume messages from Kafka.&lt;br&gt;
&lt;strong&gt;Topic&lt;/strong&gt;: In order to differentiate the type of events stored in Kafka, &lt;em&gt;topics&lt;/em&gt; are used. In short, a topic is like a folder in a file system where only events or messages related to a specific type are stored. For example: “payment-details”, “user-details”, etc.&lt;br&gt;
&lt;strong&gt;Partition&lt;/strong&gt;: A topic can be further divided into &lt;em&gt;partitions&lt;/em&gt; in order to attain higher throughput. It is the smallest storage unit which holds a subset of data of a topic.&lt;br&gt;
&lt;strong&gt;Replication Factor&lt;/strong&gt;: A replica of a partition is a backup of that partition. The &lt;em&gt;replication factor&lt;/em&gt; of a topic decides how many replicas of a partition in that topic should be maintained by the Kafka cluster. A topic with partition as 1 and replication factor as 2 would mean that two copies of the same partition with same data would be stored in the Kafka cluster.&lt;br&gt;
&lt;strong&gt;Offset&lt;/strong&gt;: To keep track of which events have already been consumed, an index pointing to the latest consumed message is stored inside Kafka; this index is called the &lt;em&gt;offset&lt;/em&gt;. If a consumer were to go down, this offset value tells us exactly from where the consumer has to resume consuming events. A producer producing messages to a Kafka topic with 3 partitions would look like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N4uwGFpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enzn2m3wpjo5ah61yu03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N4uwGFpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enzn2m3wpjo5ah61yu03.png" alt="Image description" width="416" height="267"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Zookeeper&lt;/strong&gt;: &lt;em&gt;Zookeeper&lt;/em&gt; is an additional service in the Kafka cluster that helps maintain the cluster ACLs, stores the offsets for all the partitions of all the topics, tracks the status of the Kafka broker nodes, and maintains the client quotas (how much data a producer/consumer is allowed to write/read)&lt;br&gt;
&lt;strong&gt;Consumer Group&lt;/strong&gt;: A bunch of consumers can join a group in order to cooperate and consume messages from a set of topics. This grouping of consumers is called a &lt;em&gt;Consumer Group&lt;/em&gt;. If two consumers have subscribed to the same topic and are present in the same consumer group, then these two consumers would be assigned different sets of partitions, and neither of them would receive the same messages. Consumer Groups can help attain a higher consumption rate if multiple consumers are subscribed to the same topic.&lt;/p&gt;
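&lt;p&gt;&lt;em&gt;The offset mechanics described above can be sketched with an in-memory stand-in (not the Kafka client API): a consumer reads a batch, commits its offset, and a restarted consumer resumes from that committed offset instead of re-reading the whole partition.&lt;/em&gt;&lt;/p&gt;

```python
# In-memory stand-in for one partition's log and a committed offset
# (not the Kafka client API): shows how a consumer resumes from the
# last committed offset after going down.
log = ["event-0", "event-1", "event-2", "event-3"]
committed_offset = 0  # next position to read from

def poll(log, offset, max_records=2):
    # Return up to max_records events starting at the given offset.
    return log[offset:offset + max_records]

# The consumer reads a batch of two events and commits its offset...
batch = poll(log, committed_offset)
committed_offset += len(batch)

# ...then goes down. A restarted consumer picks up from the committed
# offset instead of re-reading the whole partition.
batch_after_restart = poll(log, committed_offset)
print(batch_after_restart)  # prints ['event-2', 'event-3']
```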

&lt;p&gt;You can check out the next article of the Kafka series &lt;a href="https://dev.to/ahmedgulabkhan/kafka-partitions-and-consumer-groups-2aff"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow for the next Kafka blog in the series. I shall also be posting more articles talking about Software engineering concepts.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>kafkaintro</category>
      <category>kafkabasics</category>
    </item>
    <item>
      <title>What even is GitHub Codespaces?</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Sat, 05 Sep 2020 11:01:39 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/what-is-github-codespaces-2lhp</link>
      <guid>https://dev.to/ahmedgulabkhan/what-is-github-codespaces-2lhp</guid>
      <description>&lt;p&gt;So GitHub has recently announced a new way for users to edit their code online, through something called the Codespaces, which is currently in beta. Here is an overview of what it's all about.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;b&gt;What is Codespaces?&lt;/b&gt;
&lt;/h3&gt;

&lt;p&gt;According to GitHub, "Codespaces is an online dev environment which gives a complete Visual Studio Code like experience without leaving GitHub". This means that all the users who have signed up for Codespaces can edit their code and get access to all the features of VS Code, along with its marketplace, on GitHub itself.&lt;/p&gt;

&lt;p&gt;Currently, Codespaces is in beta and you can sign up for early access &lt;a href="https://github.com/features/codespaces/signup"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;b&gt;Is Codespaces available to everyone, and for all repositories?&lt;/b&gt;
&lt;/h3&gt;

&lt;p&gt;Currently Codespaces is only available for a limited number of users and over time more users will start gaining access to it based on availability and sign up date.&lt;/p&gt;

&lt;p&gt;While in beta, Codespaces will be available for the repositories that you own and public repositories. Additional support will be available as the beta progresses, but for now, Codespaces will not be available for private repositories that belong to organizations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;b&gt;Pricing&lt;/b&gt;
&lt;/h3&gt;

&lt;p&gt;Codespaces will be free for the limited beta, and GitHub is planning to make it a pay-as-you-go service in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;b&gt;Codespaces vs Visual Studio Code&lt;/b&gt;
&lt;/h3&gt;

&lt;p&gt;Codespaces sets up a cloud-hosted, containerized, and customizable VS Code environment. After set up, you can connect to a codespace through the browser or through VS Code.&lt;/p&gt;

&lt;p&gt;For more information about Codespaces, and for the FAQs related to it, you can go to &lt;a href="https://github.com/features/codespaces"&gt;this&lt;/a&gt; link.&lt;/p&gt;

&lt;p&gt;Source - &lt;a href="https://github.com/features/codespaces"&gt;GitHub Codespaces&lt;/a&gt;&lt;/p&gt;

</description>
      <category>codespaces</category>
      <category>github</category>
      <category>opensource</category>
      <category>vscode</category>
    </item>
    <item>
      <title>Restaurant reviews app made using Flutter and Firebase</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Thu, 03 Sep 2020 19:26:10 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/food-app-made-using-flutter-and-firebase-3fab</link>
      <guid>https://dev.to/ahmedgulabkhan/food-app-made-using-flutter-and-firebase-3fab</guid>
      <description>&lt;p&gt;So, a couple of days ago I developed a food restaurant reviews app using Flutter and Firebase, and made it open-source by uploading the source code on my GitHub profile.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the app
&lt;/h2&gt;

&lt;p&gt;Foodspace is an app made using Flutter, where people can register and start exploring wide categories of restaurants present in their cities and also check the reviews and feedback for a specific restaurant. There is also a ‘likes section’ where all the restaurants liked by the user are displayed.&lt;/p&gt;

&lt;p&gt;I used the Zomato API to fetch details of these restaurants by place, and using the coordinates received from the API, created a map (via the flutter_map dependency) that shows a pointer marking the location of the restaurant you clicked.&lt;/p&gt;

&lt;p&gt;Firebase was used to store the login credentials and all the restaurants a user has liked. User login data is stored using shared_preferences so that users don’t have to log in every time they close and reopen the app, and firebase_auth is used for user authentication.&lt;/p&gt;

&lt;p&gt;Here is the link to the source code: &lt;a href="https://www.github.com/ahmedgulabkhan/Foodspace" rel="noopener noreferrer"&gt;https://www.github.com/ahmedgulabkhan/Foodspace&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like the project, you can star the repository on GitHub, or contribute to it by making Pull requests.&lt;/p&gt;

&lt;p&gt;The final app looks something like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fahmedgulabkhan%2FFoodspace%2Fmaster%2Fsnapshots%2Ffoodspace_snapshots.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fahmedgulabkhan%2FFoodspace%2Fmaster%2Fsnapshots%2Ffoodspace_snapshots.png" alt="Markdown Monster icon"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependencies used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;introduction_screen&lt;/li&gt;
&lt;li&gt;firebase_auth&lt;/li&gt;
&lt;li&gt;google_sign_in&lt;/li&gt;
&lt;li&gt;cloud_firestore&lt;/li&gt;
&lt;li&gt;flutter_spinkit&lt;/li&gt;
&lt;li&gt;shared_preferences&lt;/li&gt;
&lt;li&gt;font_awesome_flutter&lt;/li&gt;
&lt;li&gt;shimmer&lt;/li&gt;
&lt;li&gt;flutter_map&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;b&gt;I also made a medium article explaining how I implemented most of the things, you can check it out &lt;a href="https://medium.com/@ahmedgulabkhan/food-app-made-using-flutter-and-firebase-fd4cb77d29a2" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Hope you like this project, would love to hear your feedback on this :).&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>flutterdev</category>
      <category>opensource</category>
      <category>github</category>
    </item>
    <item>
      <title>GroupChatApp - A group chatting app using Flutter</title>
      <dc:creator>Ahmed Gulab Khan</dc:creator>
      <pubDate>Sun, 16 Aug 2020 09:12:41 +0000</pubDate>
      <link>https://dev.to/ahmedgulabkhan/groupchatapp-a-group-chatting-app-using-flutter-2gif</link>
      <guid>https://dev.to/ahmedgulabkhan/groupchatapp-a-group-chatting-app-using-flutter-2gif</guid>
      <description>&lt;p&gt;Hello guys, a few days ago I developed a group chatting app using Flutter and Firebase, and made it open-source by uploading the source code on my GitHub profile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is the link to the Source code:&lt;/strong&gt;  &lt;a href="https://github.com/ahmedgulabkhan/GroupChatApp"&gt;https://github.com/ahmedgulabkhan/GroupChatApp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like the project, you can star the repository on GitHub, or contribute to it by making Pull requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Sign in/Register&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users can register on the app by entering their Email, Full name and Password.&lt;/li&gt;
&lt;li&gt;If already registered, they can simply sign in by entering Email and Password.&lt;/li&gt;
&lt;li&gt;Used Firebase to implement sign in/register for users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Creating Groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After being successfully logged in, users can create a new group with any name.&lt;/li&gt;
&lt;li&gt;When a user creates a group, they automatically become a member of the group.&lt;/li&gt;
&lt;li&gt;All the groups that a user is a member of, are displayed on the user's home screen page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Search for Groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users can also search for content-related groups created by other users.&lt;/li&gt;
&lt;li&gt;If you come across a group and like the content, you can join the group and start chatting with other members of the group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Chatting in groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only the members of a group can chat within the group.&lt;/li&gt;
&lt;li&gt;All the chats related to a group will be private to that group.&lt;/li&gt;
&lt;li&gt;All the details of a group are stored in a Firebase collection and all the messages related to that group are stored in the sub-collection of that group's collection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Logout&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logout functionality implemented using Firebase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the journey of making this app, I learned a lot of new things. So, I decided to make this project open-source so that other people can also learn and benefit from it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source code/GitHub repo:&lt;/strong&gt;  &lt;a href="https://github.com/ahmedgulabkhan/GroupChatApp"&gt;https://github.com/ahmedgulabkhan/GroupChatApp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like the project, you can star the repo on GitHub by going to the above link.&lt;/p&gt;

&lt;p&gt;Would love to hear your valuable feedback on this.&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>flutterdev</category>
      <category>opensource</category>
      <category>github</category>
    </item>
  </channel>
</rss>
