<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MD RAHIM IQBAL</title>
    <description>The latest articles on DEV Community by MD RAHIM IQBAL (@superiqbal7).</description>
    <link>https://dev.to/superiqbal7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F358868%2F5d204941-b8b0-4114-9b06-3d227073482e.jpg</url>
      <title>DEV Community: MD RAHIM IQBAL</title>
      <link>https://dev.to/superiqbal7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/superiqbal7"/>
    <language>en</language>
    <item>
      <title>Setting Up Kafka on MacOS M1 using Homebrew</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Sat, 29 Jul 2023 06:25:30 +0000</pubDate>
      <link>https://dev.to/superiqbal7/setting-up-kafka-on-macos-m1-using-homebrew-3bkj</link>
      <guid>https://dev.to/superiqbal7/setting-up-kafka-on-macos-m1-using-homebrew-3bkj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This guide provides a step-by-step walkthrough of installing Apache Kafka on a MacOS M1 system. Before we can install Kafka, however, we need to make sure Homebrew is installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Homebrew Installation
&lt;/h2&gt;

&lt;p&gt;Homebrew is a package manager for MacOS that simplifies the installation of software. To install Homebrew, open Terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install Homebrew on your machine. For more information about Homebrew and how to use it, check out the official &lt;a href="https://brew.sh/"&gt;Homebrew documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka Installation
&lt;/h2&gt;

&lt;p&gt;With Homebrew installed, we can now install Java and Kafka. Run the following commands in your Terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install java
brew install kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands install both Java and Kafka on your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Kafka and Zookeeper
&lt;/h2&gt;

&lt;p&gt;To start Kafka, we need to run Kafka and Zookeeper services separately. Open two Terminal windows for this purpose.&lt;/p&gt;

&lt;p&gt;Start Zookeeper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zookeeper-server-start /opt/homebrew/etc/kafka/zookeeper.properties
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start Kafka:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafka-server-start /opt/homebrew/etc/kafka/server.properties
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure that both services are running successfully before proceeding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Kafka Topic
&lt;/h2&gt;

&lt;p&gt;Before interacting with Kafka using any producer or consumer API, you need to create a Kafka topic. A topic is essentially a category or feed name to which records get published. Here is how to create a topic named '&lt;strong&gt;foobar&lt;/strong&gt;':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic foobar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: We no longer use the --zookeeper flag. Since Kafka 2.2, the kafka-topics tool connects to the broker directly via --bootstrap-server, and the old flag was removed entirely in Kafka 3.0; this change is not specific to MacOS.&lt;/p&gt;
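&lt;p&gt;As a quick sanity check, you can list the topics on the broker and describe the one we just created. This assumes the broker from the previous steps is still running on localhost:9092:&lt;/p&gt;

```
kafka-topics --list --bootstrap-server localhost:9092
kafka-topics --describe --bootstrap-server localhost:9092 --topic foobar
```

&lt;p&gt;The describe output shows the partition count, replication factor, and which broker leads each partition.&lt;/p&gt;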

&lt;h2&gt;
  
  
  Testing Kafka Producer and Consumer APIs
&lt;/h2&gt;

&lt;p&gt;To validate your Kafka setup, let's test the Kafka producer and consumer APIs:&lt;/p&gt;

&lt;p&gt;Open two new Terminal windows.&lt;/p&gt;

&lt;p&gt;In the first terminal, initialize a producer console for the '&lt;strong&gt;foobar&lt;/strong&gt;' topic and send some test messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafka-console-producer --broker-list localhost:9092 --topic foobar
&amp;gt; foo
&amp;gt; bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the second terminal, initialize a consumer console for the 'foobar' topic. This connects to the bootstrap server on &lt;strong&gt;port 9092&lt;/strong&gt; and reads the 'foobar' topic from the beginning:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafka-console-consumer --bootstrap-server localhost:9092 --topic foobar --from-beginning
foo
bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see these outputs, congratulations! Kafka is now set up and running smoothly on your MacOS M1 system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up Kafka on a MacOS M1 system is a straightforward process, especially with the help of Homebrew. With Kafka properly installed, you can now use it to manage real-time data pipelines and streams. Happy streaming!&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>m1</category>
      <category>brew</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Managing Multiple GitHub Accounts with SSH Keys on Your Device</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Fri, 28 Jul 2023 12:30:21 +0000</pubDate>
      <link>https://dev.to/superiqbal7/managing-multiple-github-accounts-with-ssh-keys-on-your-device-59p</link>
      <guid>https://dev.to/superiqbal7/managing-multiple-github-accounts-with-ssh-keys-on-your-device-59p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Managing different GitHub accounts is a common scenario for many developers. Perhaps you maintain a personal GitHub account to showcase your school projects and a work account for your professional projects. To switch between these accounts easily and securely on the same computer, you can use SSH keys. These keys serve as a secure method of authentication, much like a physical key can be used to lock or unlock a specific door.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Secure Shell (SSH)&lt;/strong&gt; is a cryptographic protocol enabling secure remote access to servers. Widely used by developers and system administrators, SSH authenticates two parties (client and server) and encrypts data that passes between them using a pair of cryptographic keys: a &lt;strong&gt;public key for distribution&lt;/strong&gt; and a &lt;strong&gt;private one kept secret&lt;/strong&gt;.&lt;br&gt;
You can generate your SSH keys using the &lt;code&gt;ssh-keygen&lt;/code&gt; command:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t ed25519
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command creates a new set of keys using the &lt;strong&gt;ED25519&lt;/strong&gt; algorithm. You can specify where to save the keys and whether to use a passphrase for added security. &lt;br&gt;
After generating your keys, distribute your public key to any system you need to access. When initiating an SSH session, the server uses the public key to encrypt messages in a way that can only be decrypted with the private key. The client uses the private key to authenticate its identity to the server.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 1: Generating SSH Keys
&lt;/h2&gt;

&lt;p&gt;Generating SSH keys for each GitHub account involves creating a pair of keys – one &lt;strong&gt;private&lt;/strong&gt; and one &lt;strong&gt;public&lt;/strong&gt; – for each account. The private key is kept secret on your computer, and the public key is added to your GitHub account.&lt;/p&gt;

&lt;p&gt;Open Terminal on your device and use the following commands to generate SSH keys for your personal and work GitHub accounts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For your personal account:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t ed25519 -C "your_personal_email@example.com" -f ~/.ssh/github-personal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;And for your work account:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t ed25519 -C "your_work_email@example.com" -f ~/.ssh/github-work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;-t&lt;/strong&gt; flag specifies the type of key to create, &lt;strong&gt;ed25519&lt;/strong&gt; in this case. The &lt;strong&gt;-C&lt;/strong&gt; flag lets you label the key with a comment, typically the associated email address. Finally, the &lt;strong&gt;-f&lt;/strong&gt; flag lets you specify the filename of the key.&lt;/p&gt;
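&lt;p&gt;If you want to verify a key after generating it, &lt;code&gt;ssh-keygen -lf&lt;/code&gt; prints its fingerprint. A minimal sketch, using a throwaway key in a temporary directory (the file name and comment here are illustrative, not your real keys):&lt;/p&gt;

```shell
# Generate a throwaway ed25519 key pair with no passphrase (-N "")
# in a temporary directory, then print the public key's fingerprint.
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -C "demo@example.com" -f "$tmpdir/demo-key" -N "" -q
ssh-keygen -lf "$tmpdir/demo-key.pub"
```

&lt;p&gt;The output lists the key size, the SHA256 fingerprint, the comment, and the algorithm (ED25519).&lt;/p&gt;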

&lt;h2&gt;
  
  
  Step 2: Add SSH keys to SSH Agent
&lt;/h2&gt;

&lt;p&gt;We now have the keys, but they cannot be used until we add them to the SSH agent. On macOS, the &lt;code&gt;-K&lt;/code&gt; flag stores the passphrase in your keychain (on newer macOS versions it has been renamed to &lt;code&gt;--apple-use-keychain&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-add -K ~/.ssh/github-personal
ssh-add -K ~/.ssh/github-work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Adding SSH Keys to GitHub Accounts
&lt;/h2&gt;

&lt;p&gt;The public keys (not the private ones!) generated in Step 1 need to be added to the respective GitHub accounts. This tells GitHub which key belongs to which account, so that authentication attempts using each key are routed to the right account.&lt;/p&gt;

&lt;p&gt;To display the contents of your public key, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano ~/.ssh/id_rsa_personal.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Copy the content and exit using &lt;strong&gt;ctrl+X&lt;/strong&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to personal GitHub account, click on your avatar &amp;gt; Settings &amp;gt; SSH and GPG keys &amp;gt; New SSH key. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste the key, give it a name and save it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Repeat the process for your work account using the github-work.pub key.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 4: Configuring SSH for Different Accounts
&lt;/h2&gt;

&lt;p&gt;With the SSH keys added to the respective GitHub accounts, we now need to create an SSH configuration file to map each key to the correct account.&lt;/p&gt;

&lt;p&gt;Create and open a new config file in the &lt;code&gt;.ssh&lt;/code&gt; directory by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch ~/.ssh/config
nano ~/.ssh/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then add the following configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Work account
Host github.com-work
  HostName github.com
  User git
  IdentityFile ~/.ssh/github-work

# Personal account
Host github.com-personal
  HostName github.com
  User git
  IdentityFile ~/.ssh/github-personal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit the file &lt;code&gt;(ctrl + X, then Y, then enter)&lt;/code&gt;. SSH now knows which key to use for which GitHub account.&lt;/p&gt;
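&lt;p&gt;You can verify the mapping without making a network connection: OpenSSH (6.8 and later) can print the configuration it would apply for a given host alias:&lt;/p&gt;

```shell
# Print the options ssh would use for the work alias; no connection is made.
# With the config above, the identityfile line should point at your work key.
ssh -G github.com-work | grep -iE 'hostname|identityfile'
```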

&lt;h2&gt;
  
  
  Step 5: Using the Configurations in Practice
&lt;/h2&gt;

&lt;p&gt;With everything set up, you can now use these configurations to clone repositories, make commits, and push changes using each GitHub account.&lt;/p&gt;

&lt;p&gt;For instance, if you wanted to clone a repository from your personal account, you would use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com-personal:username/repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to clone a repository from your work account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com-work:username/repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace username and repo with your GitHub username and repository name, respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Git Local &amp;amp; Global Configurations
&lt;/h2&gt;

&lt;p&gt;The next step involves telling Git who we are when we commit changes using these accounts. We do this by setting our name and email for each repository. To set this config for a specific repository only, navigate to that repository in your terminal and run the commands below.&lt;/p&gt;

&lt;p&gt;For repositories you access with your personal account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config user.name "Your Name"
git config user.email "your_personal_email@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And for repositories you access with your work account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config user.name "Your Work Name"
git config user.email "your_work_email@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands set your Git username and email for a specific repository only. If you want the same config globally (for all repositories), add the &lt;code&gt;--global&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config --global user.name "Your Name"
git config --global user.email "your_personal_email@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To push or pull with the correct account from an existing cloned repository, point its origin remote at the matching host alias. Use &lt;code&gt;git remote set-url&lt;/code&gt; if the repository already has an origin remote, or &lt;code&gt;git remote add&lt;/code&gt; if it does not:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//for repositories associated with personal account
git remote add origin git@github.com-personal:username/repo.git

//for repositories associated with work account    
git remote add origin git@github.com-work:username/repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
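&lt;p&gt;To confirm which account a repository is wired to, inspect its remotes with &lt;code&gt;git remote -v&lt;/code&gt;. A small sketch using a throwaway repository (with the same username/repo placeholders as above):&lt;/p&gt;

```shell
# Create a throwaway repository, point its origin at the personal alias,
# then list the remotes to verify the mapping.
demo=$(mktemp -d)
git -C "$demo" init -q
git -C "$demo" remote add origin git@github.com-personal:username/repo.git
git -C "$demo" remote -v
```

&lt;p&gt;The fetch and push URLs printed here tell you which host alias, and therefore which SSH key, will be used.&lt;/p&gt;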



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Managing multiple GitHub accounts on a single machine might seem complicated, but with the help of SSH keys and some simple Git configurations, you can easily switch between your accounts. This is a powerful and secure way to maintain separate identities for personal and professional projects. So go forth and code, knowing that you have a robust and flexible system for managing your GitHub accounts!&lt;/p&gt;

&lt;p&gt;If you'd like, let's connect on &lt;a href="https://www.linkedin.com/in/superiqbal7/"&gt;LinkedIn&lt;/a&gt; and grow our community bigger.&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>ssh</category>
      <category>development</category>
    </item>
    <item>
      <title>Docker: The Ultimate Guide to Streamline Application Development</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Thu, 13 Apr 2023 20:34:48 +0000</pubDate>
      <link>https://dev.to/superiqbal7/docker-the-ultimate-guide-to-streamline-application-development-351e</link>
      <guid>https://dev.to/superiqbal7/docker-the-ultimate-guide-to-streamline-application-development-351e</guid>
      <description>&lt;p&gt;In recent years, Docker has become a popular tool for developers looking to streamline their application development process. In this blog post, we will delve into the fundamentals of Docker, including its core components and how it compares to virtual machines. We will also touch on key terms and concepts that will help you understand how Docker can revolutionize your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81vhu30o9p51fbdoi3gq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81vhu30o9p51fbdoi3gq.png" alt="Docker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, what is Docker?
&lt;/h2&gt;

&lt;p&gt;Docker is a platform designed to simplify the process of building, running, and shipping applications. It achieves this by utilizing containerization technology, which allows developers to create isolated environments called containers for running applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Imagine you're moving to a new house, and you have a room with specific furniture, decorations, and settings that you want to recreate in your new place. Transporting everything individually, making sure nothing gets damaged, and setting it all up again in the new house can be a tedious and time-consuming process.&lt;/p&gt;

&lt;p&gt;In this scenario, think of Docker as a portable room that you can use to pack all your belongings together. Docker allows you to put the furniture, decorations, and settings inside a container, which can then be sealed and transported to your new house without worrying about compatibility or damage. Upon arrival, you simply "unpack" the container, and your room is set up exactly as it was before.&lt;/p&gt;

&lt;p&gt;In the world of software development, Docker works in a similar manner. Applications often depend on specific libraries, configurations, and runtime environments. Setting up these dependencies manually on different systems (e.g., development, testing, and production environments) can be complex and time-consuming, and may lead to inconsistencies or errors.&lt;/p&gt;

&lt;p&gt;Docker containers encapsulate everything an application needs to run, including the operating system, libraries, dependencies, and application code. This ensures that the application runs consistently across different environments, regardless of the underlying system. With Docker, developers can build a container once and then deploy it to various stages of the development process (e.g., testing, staging, and production) without worrying about compatibility issues or discrepancies.&lt;/p&gt;

&lt;p&gt;In summary, just as a portable room allows you to move your belongings easily and consistently between houses, Docker enables developers to build, run, and ship applications consistently across various environments, simplifying the deployment process and reducing potential errors.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1j287fymeq2m2oa4zbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1j287fymeq2m2oa4zbq.png" alt="Container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers vs. Virtual Machines
&lt;/h2&gt;

&lt;p&gt;While both containers and virtual machines (VMs) enable running applications in isolated environments, they differ significantly in their underlying architecture and resource consumption.&lt;/p&gt;

&lt;p&gt;A virtual machine is an abstraction of hardware resources, created and managed by hypervisors such as VirtualBox, VMware, or Hyper-V (Windows-only). VMs emulate a complete system, including the operating system, and can be resource-intensive and slow to start.&lt;/p&gt;

&lt;p&gt;Containers, on the other hand, are lightweight and launch quickly: rather than bundling a full-fledged operating system, they share the host's kernel and need only an operating-system process with its own file system. This results in more efficient resource utilization and faster startup times compared to VMs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The kernel is the core component of an operating system, responsible for managing both applications and hardware resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvb22gm3ylymldh2ttzl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvb22gm3ylymldh2ttzl.jpeg" alt="container vs Vm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Architecture
&lt;/h2&gt;

&lt;p&gt;Docker operates using a client-server architecture, consisting of a client component and a server component that communicate through a REST API. The server, also known as the Docker engine, runs in the background, handling the tasks of building and running containers. Docker has the following components:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Docker Client:&lt;/strong&gt; The Docker client is the primary interface through which users interact with Docker. It provides a command-line interface (CLI) and allows users to issue commands for building, running, and managing containers. The client communicates with the Docker daemon (or Docker Engine) via a RESTful API to perform various tasks, such as creating containers, pulling images, and managing container lifecycles. In addition to the CLI, Docker also provides a graphical user interface (GUI) called Docker Desktop for Windows and macOS users, which makes it easier to manage containers and images visually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Daemon (Engine):&lt;/strong&gt; The Docker daemon, also known as the Docker Engine, is a background process that runs on the host machine and manages the entire lifecycle of containers. The daemon listens for API requests from the Docker client and performs the required tasks, such as building images, creating containers, and managing container lifecycles. It is responsible for performing the actual work in the Docker system, including managing container isolation, networking, and storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Images:&lt;/strong&gt; Docker images are the building blocks of containers. They are read-only templates that contain the operating system, runtime environment, libraries, and application code necessary to run an application. Images can be created from a Dockerfile, which is a script that contains instructions for building the image. Docker images can be stored and shared through Docker registries, such as Docker Hub or private registries, allowing users to easily distribute and deploy applications across different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Containers:&lt;/strong&gt; Docker containers are lightweight, portable, and isolated runtime environments for running applications. They are created from Docker images and have a writable layer on top of the image, which allows them to store runtime data and maintain their state. Containers run in isolation from each other, sharing only the host's kernel, which makes them highly efficient compared to virtual machines. Containers can be easily managed using Docker commands, such as docker start, docker stop, and docker rm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Networking:&lt;/strong&gt;&lt;br&gt;
Docker provides a robust networking model that enables communication between containers and the outside world. It supports multiple network drivers, such as bridge, host, overlay, and Macvlan, which offer different levels of isolation and performance. By default, Docker creates a virtual network called a bridge network, which allows containers to communicate with each other and the host. Users can also create custom networks to isolate containers or connect them to external networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Volumes and Storage:&lt;/strong&gt;&lt;br&gt;
Docker provides a flexible storage system for managing data within containers. It supports various storage drivers, such as overlay2, aufs, and btrfs, which determine how data is stored and managed on the host system. Docker also supports volumes, which are a way to persist data generated by containers and share it between containers. Volumes can be managed using Docker commands, such as docker volume create, docker volume ls, and docker volume rm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Registries:&lt;/strong&gt; Repositories for storing and sharing Docker images. Docker Hub is the most popular registry, providing a platform for storing and distributing images.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Docker Containers vs. Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A fundamental distinction in Docker terminology is the difference between containers and images. An image is a read-only template that includes the operating system, runtime environment, libraries, and application code necessary to run an application. A container, on the other hand, is a running instance of an image. When a container is created from an image, a writable layer is added on top of the image, allowing the container to store runtime data and maintain its state. Technically, a container is a special kind of operating-system process: it has its own file system, which is provided by the image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Docker
&lt;/h2&gt;

&lt;p&gt;To install Docker on your machine, follow the official installation guides for your operating system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/docker-for-windows/install/" rel="noopener noreferrer"&gt;Docker for Windows&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/docker-for-mac/install/" rel="noopener noreferrer"&gt;Docker for Mac&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.toDocker%20for%20Linux"&gt;Docker for Linux&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker Container Lifecycle
&lt;/h2&gt;

&lt;p&gt;The lifecycle of a Docker container typically consists of several stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create:&lt;/strong&gt; A container is created from a Docker image using the &lt;code&gt;docker create&lt;/code&gt; or &lt;code&gt;docker run&lt;/code&gt; command. When created, the container has a unique ID assigned by the Docker daemon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start:&lt;/strong&gt; A created container can be started using the &lt;code&gt;docker start&lt;/code&gt; command, or it can be created and started in a single command using &lt;code&gt;docker run&lt;/code&gt;. Once started, the container runs the entrypoint command specified in the Dockerfile (or a custom command provided in the &lt;code&gt;docker run&lt;/code&gt; command).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pause/Resume:&lt;/strong&gt; Containers can be paused using the &lt;code&gt;docker pause&lt;/code&gt; command, which temporarily suspends all processes inside the container. To resume a paused container, use the &lt;code&gt;docker unpause&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop:&lt;/strong&gt; A running container can be stopped using the &lt;code&gt;docker stop&lt;/code&gt; command. This command sends a &lt;strong&gt;SIGTERM&lt;/strong&gt; signal to the main process inside the container, allowing it to perform a graceful shutdown. After a grace period, if the container has not exited, a &lt;strong&gt;SIGKILL&lt;/strong&gt; signal is sent to forcefully terminate the container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restart:&lt;/strong&gt; A stopped container can be restarted using the &lt;code&gt;docker restart&lt;/code&gt; command, which stops and starts the container again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove:&lt;/strong&gt; A container that is no longer needed can be removed using the &lt;code&gt;docker rm&lt;/code&gt; command. This permanently deletes the container and its writable layer, freeing up system resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6xe8vo6xo0l5aiv2lqw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6xe8vo6xo0l5aiv2lqw.jpeg" alt="Docker lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Write application code and create a Dockerfile:&lt;/strong&gt; Start by writing your application code and creating a &lt;code&gt;Dockerfile&lt;/code&gt; that defines the instructions for building the Docker image.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A Dockerfile is a text file containing a series of instructions used by Docker to build a new image. It automates the process of setting up an environment, installing dependencies, and configuring the application within a container. Docker reads the instructions from the Dockerfile and executes them in order, creating a new image as a result.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build the Docker image:&lt;/strong&gt; Use the &lt;code&gt;docker build&lt;/code&gt; command to create a Docker image from the Dockerfile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run the container:&lt;/strong&gt; Use the &lt;code&gt;docker run&lt;/code&gt; command to start a container from the Docker image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test the application:&lt;/strong&gt; Test the application running inside the container to ensure it functions as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Publish the Docker image:&lt;/strong&gt; If the application works as expected, publish the Docker image to a registry like Docker Hub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy the container:&lt;/strong&gt; Deploy the container to your desired environment using tools like Docker Compose, Kubernetes, or other orchestration platforms.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
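&lt;p&gt;To make the workflow concrete, here is a minimal sketch of a &lt;code&gt;Dockerfile&lt;/code&gt; for a Node.js app. The base image tag and file layout are illustrative assumptions, not part of any specific project:&lt;/p&gt;

```dockerfile
# Start from an official Node.js base image (tag chosen for illustration).
FROM node:18-alpine

# Set the working directory inside the container.
WORKDIR /app

# Copy the manifest first so the dependency layer is cached between builds.
COPY package*.json ./
RUN npm install

# Copy the rest of the application code.
COPY . .

# Document the port the app listens on.
EXPOSE 3000

# Default command when the container starts.
CMD ["npm", "start"]
```

&lt;p&gt;You would then build and run it with &lt;code&gt;docker build -t my-app .&lt;/code&gt; and &lt;code&gt;docker run -p 3000:3000 my-app&lt;/code&gt;.&lt;/p&gt;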

&lt;h2&gt;
  
  
  Dockerizing a Node.js Application
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create a Node.js application:&lt;/strong&gt; Start by creating a simple Node.js application with an app.js file and a package.json file.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here's a simple Node.js application that serves a "Hello, World!" message on port 3000.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;First, make sure you have Node.js installed on your system. If not, you can download and install it from the official Node.js website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new directory for your Node.js application, and navigate to it in your terminal or command prompt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize a new Node.js project by running the following command:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a &lt;code&gt;package.json&lt;/code&gt; file with default values.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Create a new file named app.js in your project directory and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const http = require('http');

const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) =&amp;gt; {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

server.listen(port, hostname, () =&amp;gt; {
  console.log(`Server running at http://${hostname}:${port}/`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code creates a basic HTTP server that listens on port &lt;strong&gt;3000&lt;/strong&gt; and responds with "Hello, World!" to any incoming requests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Update the package.json file to include a start script. Add the following line to the scripts section:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"start": "node app.js"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your package.json file should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "your-project-name",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "node app.js",
    "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the Node.js application locally, execute the following command:&lt;br&gt;
&lt;code&gt;npm start&lt;/code&gt;&lt;br&gt;
You should see the message &lt;code&gt;"Server running at http://0.0.0.0:3000/"&lt;/code&gt; in your terminal. Open your web browser and navigate to &lt;code&gt;http://localhost:3000/&lt;/code&gt; to see the "Hello, World!" message.&lt;/p&gt;

&lt;p&gt;Now that you have a simple Node.js application, you can proceed to create a Dockerfile and Dockerize the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Dockerfile
&lt;/h2&gt;

&lt;p&gt;In the same directory as your Node.js application, create a Dockerfile with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set the base image
FROM node:latest

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application code to the working directory
COPY . .

# Expose the application port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Dockerfile uses the latest official Node.js image as a base, sets the working directory to /app, copies the package.json and package-lock.json files, installs the required dependencies, copies the application code, exposes port 3000, and runs the npm start command. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here is a brief overview of some common instructions used in a Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FROM&lt;/strong&gt;: Specifies the base image to be used as a starting point for the new image. Examples include official images like ubuntu, alpine, or node. &lt;code&gt;FROM node:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WORKDIR&lt;/strong&gt;: Sets the working directory within the container. Any subsequent instructions that use relative paths will be executed relative to this directory. &lt;code&gt;WORKDIR /app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;COPY&lt;/strong&gt;: Copies files or directories from the local machine (the build context) to the container's filesystem. &lt;code&gt;COPY package.json .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ADD&lt;/strong&gt;: Similar to COPY, but it can also download files from a URL and extract compressed files. &lt;code&gt;ADD https://example.com/file.tar.gz /app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RUN&lt;/strong&gt;: Executes a command within the container, typically used for installing packages or running build scripts. &lt;code&gt;RUN npm install&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CMD&lt;/strong&gt;: Provides the default command to run when the container is started. If the user specifies a command when running the container, it will override this default command. There can be only one CMD instruction in a Dockerfile. &lt;code&gt;CMD ["npm", "start"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ENTRYPOINT&lt;/strong&gt;: Similar to CMD, but arguments supplied on the docker run command line are appended to the ENTRYPOINT rather than replacing it. This makes it useful for defining the container's main executable, with CMD supplying default arguments. &lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENTRYPOINT ["npm"]
CMD ["start"]
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;ENV&lt;/strong&gt;: Sets an environment variable within the container, which can be used by applications running inside the container. &lt;code&gt;ENV NODE_ENV=production&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EXPOSE&lt;/strong&gt;: Informs Docker that the container listens on the specified network ports at runtime. This does not actually publish the port; it serves as documentation and a reminder to publish the port using the -p flag when running the container. &lt;code&gt;EXPOSE 80&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VOLUME&lt;/strong&gt;: Creates a mount point for a volume to persist data outside of the container. This is useful for sharing data between containers or retaining data when a container is removed. &lt;code&gt;VOLUME /app/data&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build the Docker image:&lt;/strong&gt; Run the following command in the terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t your-image-name .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace your-image-name with a descriptive name for your image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Run the Docker container:&lt;/strong&gt; Start the container using the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 3000:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pushing this Docker Image to Docker Hub
&lt;/h2&gt;

&lt;p&gt;Docker Hub is a public registry service that allows you to store and share Docker images. To push a Docker image to Docker Hub, you need to follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Docker Hub account:&lt;/strong&gt; If you don't already have a Docker Hub account, sign up for one at &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;hub.docker.com&lt;/a&gt;. You'll need your Docker ID and password for subsequent steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log in to Docker Hub:&lt;/strong&gt; Open a terminal or command prompt and log in to Docker Hub using your Docker ID and password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter your Docker ID and password when prompted. You should see a message indicating that you have successfully logged in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tag your Docker image:&lt;/strong&gt; Before pushing the image to Docker Hub, you need to tag it with your Docker Hub username and a repository name. Use the following command to tag your image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag your-image-name your-docker-id/your-repository-name:your-tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace your-image-name with the name of the image you built earlier, your-docker-id with your Docker Hub username, your-repository-name with a repository name of your choice, and your-tag with a version or any descriptive tag (e.g., 'latest').&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag myapp-image johnsmith/myapp:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Push the Docker image to Docker Hub:&lt;/strong&gt; Finally, push the tagged image to Docker Hub using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push your-docker-id/your-repository-name:your-tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace your-docker-id, your-repository-name, and your-tag with the values you used when tagging the image. Docker will upload your image to your Docker Hub account.&lt;br&gt;
For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push johndoe/myapp:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify the image on Docker Hub:&lt;/strong&gt; Log in to your Docker Hub account and navigate to the "Repositories" section. You should see the newly pushed image listed under your repositories.&lt;br&gt;
Now, your Docker image is successfully pushed to Docker Hub and can be easily pulled and run by others using the &lt;code&gt;docker pull&lt;/code&gt; command, followed by &lt;code&gt;docker run&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Docker Containers
&lt;/h2&gt;

&lt;p&gt;Docker containers offer several advantages over traditional deployment methods and virtual machines:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt; Containers package all application dependencies and configurations, ensuring consistent behavior across various environments, from development to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolation:&lt;/strong&gt; Containers run in isolated environments, preventing conflicts and ensuring that each application has access to its required resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Portability:&lt;/strong&gt; Docker images can be easily shared and run on any system with Docker installed, making it easy to deploy applications across different platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Efficiency:&lt;/strong&gt; Containers share the host operating system's kernel and resources, resulting in less overhead compared to virtual machines. This enables higher density and more efficient resource utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Containers can be easily scaled up or down to meet the changing demands of an application, making it easier to build and deploy microservices and distributed applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version Control and Rollback:&lt;/strong&gt; Docker images can be versioned, allowing for easy rollback to previous versions if needed. This can be particularly useful in case of application updates that introduce bugs or performance issues.&lt;/p&gt;

&lt;p&gt;In summary, Docker containers provide an efficient, portable, and consistent environment for application deployment. By leveraging containerization, developers and operations teams can streamline the development, testing, and deployment process, ultimately leading to more reliable and maintainable software.&lt;/p&gt;

&lt;p&gt;If you'd like, let's connect on &lt;a href="https://www.linkedin.com/in/superiqbal7/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and grow our community bigger.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>node</category>
      <category>dockerhub</category>
    </item>
    <item>
      <title>Catching Unhandled Promise Rejections and uncaughtException in Node.js</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Sun, 02 Apr 2023 11:24:38 +0000</pubDate>
      <link>https://dev.to/superiqbal7/catching-unhandled-promise-rejections-and-uncaughtexception-in-nodejs-2403</link>
      <guid>https://dev.to/superiqbal7/catching-unhandled-promise-rejections-and-uncaughtexception-in-nodejs-2403</guid>
      <description>&lt;p&gt;Node.js is an event-driven platform that executes code asynchronously. This means that errors thrown in a callback or a promise chain that are not caught will not be handled by the uncaughtException event-handler and will disappear without warning. Although recent versions of Node.js added a warning message when an unhandled rejection occurs, this does not constitute proper error handling. &lt;/p&gt;

&lt;p&gt;This can be a significant issue for developers, who may overlook adding catch clauses to promise chains. Fortunately, Node.js provides a mechanism for catching these unhandled rejections, the unhandledRejection event. In this blog post, we will discuss how to catch unhandled promise rejections in Node.js and why it is essential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Catching Unhandled Promise Rejections and uncaughtException is Important
&lt;/h2&gt;

&lt;p&gt;Uncaught errors in Node.js can cause serious issues, such as memory leaks, unexpected termination, and server downtime. Therefore, it is essential to catch these errors in a controlled manner to minimize their impact on the system. By subscribing to the &lt;code&gt;process.on('unhandledRejection', callback)&lt;/code&gt; and &lt;code&gt;process.on('uncaughtException', callback)&lt;/code&gt; events, you can catch these errors and handle them appropriately.&lt;/p&gt;

&lt;p&gt;Here's an example code snippet that could result in an unhandled promise rejection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { User } from './models/user.model';

async function getUserById(id: string) {
  return await User.findById(id);
}

async function updateUserName(id: string, name: string) {
  const user = await getUserById(id);
  user.name = name;
  return user.save();
}

// Call updateUserName function with invalid ID
updateUserName('invalidId', 'John Doe');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, the &lt;code&gt;getUserById&lt;/code&gt; function returns a promise that resolves with the user object with the given ID. The updateUserName function uses this function to find the user with the given ID, updates the user's name, and saves the changes to the database.&lt;/p&gt;

&lt;p&gt;However, if the updateUserName function is called with an invalid ID, the getUserById function will throw an error, resulting in an unhandled promise rejection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Unhandled Promise Rejections
&lt;/h2&gt;

&lt;p&gt;The simplest way to handle unhandled promise rejections is to add a &lt;code&gt;.catch&lt;/code&gt; clause to each promise chain and redirect errors to a centralized error handler. However, relying solely on developer discipline is a fragile error-handling strategy. Hence, subscribing to &lt;code&gt;process.on('unhandledRejection', callback)&lt;/code&gt; as a graceful fallback is the recommended approach.&lt;/p&gt;

&lt;p&gt;Here is an example of how to catch unhandled promise rejections:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;process.on('unhandledRejection', (reason: string, p: Promise&amp;lt;any&amp;gt;) =&amp;gt; {
  console.error('Unhandled Rejection at:', p, 'reason:', reason);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code subscribes to the unhandledRejection event and prints the unhandled rejection's reason and promise to the console. By doing this, you can identify where the rejection occurred and handle it appropriately.&lt;/p&gt;
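Putting the two snippets together, here is a self-contained, runnable sketch in plain JavaScript (no database involved; this `getUserById` is a hypothetical stand-in that simply rejects for unknown IDs):

```javascript
// Fallback handler: logs any rejection that escapes every .catch clause.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection at:', promise, 'reason:', reason);
});

// Stand-in for a database lookup that rejects for unknown IDs.
async function getUserById(id) {
  if (id !== '42') {
    throw new Error(`No user found with id "${id}"`);
  }
  return { id, name: 'John Doe' };
}

// No .catch() and no try/catch: the rejection escapes the chain and is
// delivered to the 'unhandledRejection' listener on a later tick.
getUserById('invalidId').then((user) => console.log(user.name));
```

Because a listener is registered, the process logs the rejection instead of crashing, and you can see exactly which promise and reason were involved.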

&lt;h2&gt;
  
  
  Handling Uncaught Exceptions
&lt;/h2&gt;

&lt;p&gt;Uncaught exceptions occur when an error is thrown but not caught by any try-catch block or error handler. These exceptions can lead to the application's unexpected termination or cause memory leaks that can lead to server downtime. To handle these uncaught exceptions, you can subscribe to the process.on('uncaughtException', callback) event.&lt;/p&gt;

&lt;p&gt;Here is an example of how to catch uncaught exceptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;process.on('uncaughtException', (error: Error) =&amp;gt; {
  console.error(`Caught exception: ${error}\n` + `Exception origin: ${error.stack}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code subscribes to the uncaughtException event and logs details of the exception to the console. By doing this, you can identify where the exception occurred and handle it appropriately.&lt;/p&gt;
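In larger applications it is common to route both event types through one centralized handler. Here is a hedged sketch of that pattern (the `handleFatalError` name and the console-based "logger" are illustrative; re-throwing from the rejection listener turns a late rejection into an uncaught exception, so a single code path deals with both):

```javascript
// Centralized fatal-error handler (sketch; swap console for your logger).
function handleFatalError(error) {
  console.error('Fatal error:', error);
  // Flush logs and close connections here, then let the process exit
  // with a failure code so a supervisor (pm2, Docker, etc.) restarts it.
  process.exitCode = 1;
}

process.on('uncaughtException', handleFatalError);

// Re-throwing from the rejection handler turns an unhandled rejection
// into an uncaughtException, funnelling both cases into one code path.
process.on('unhandledRejection', (reason) => {
  throw reason instanceof Error ? reason : new Error(String(reason));
});
```

Exiting after a fatal error, rather than continuing in an unknown state, is the behavior the Node.js documentation recommends for `uncaughtException`.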

&lt;p&gt;Catching unhandled promise rejections and uncaught exceptions is crucial to avoid unexpected behavior in Node.js applications. By subscribing to the &lt;code&gt;process.on('unhandledRejection', callback)&lt;/code&gt; and &lt;code&gt;process.on('uncaughtException', callback)&lt;/code&gt; events and using graceful fallbacks, you can catch and handle these errors in a controlled manner. The examples provided in this article demonstrate how to catch these errors and handle them appropriately, thereby ensuring the stability and reliability of your Node.js applications.&lt;/p&gt;

</description>
      <category>node</category>
      <category>express</category>
      <category>unhandledrejection</category>
      <category>uncaughtexception</category>
    </item>
    <item>
      <title>Graceful Shutdown in Node.js: Handling Stranger Danger</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Sat, 01 Apr 2023 18:10:03 +0000</pubDate>
      <link>https://dev.to/superiqbal7/graceful-shutdown-in-nodejs-handling-stranger-danger-29jo</link>
      <guid>https://dev.to/superiqbal7/graceful-shutdown-in-nodejs-handling-stranger-danger-29jo</guid>
      <description>&lt;p&gt;When a stranger comes to town, it's important to know how to handle them gracefully. The same goes for processes in Node.js - sometimes we need to shut them down gracefully when we detect a problem or when we receive a signal to terminate the process. In this article, we'll explore how to implement a graceful shutdown in a Node.js application using TypeScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a graceful shutdown means
&lt;/h2&gt;

&lt;p&gt;When a Node.js process is terminated, there might be some ongoing tasks or connections that need to be closed before the process can exit completely. A graceful shutdown ensures that these tasks are completed before the process is terminated. It also avoids any abrupt closing of connections, which can lead to data loss or corruption.&lt;/p&gt;

&lt;p&gt;To implement a graceful shutdown, we need to handle the &lt;strong&gt;SIGINT&lt;/strong&gt; and &lt;strong&gt;SIGTERM&lt;/strong&gt; signals that are sent to the process when it's time to terminate. &lt;strong&gt;SIGINT&lt;/strong&gt; and &lt;strong&gt;SIGTERM&lt;/strong&gt; are signals used in Unix-based systems to interrupt or terminate a process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SIGINT:&lt;/strong&gt; This is a signal that is typically sent to a process when a user types Ctrl+C in the terminal. It is often used to request that a process terminate gracefully. When a process receives a SIGINT signal, it can catch it and perform any necessary cleanup operations before terminating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SIGTERM:&lt;/strong&gt; This is a signal that is typically sent to a process by the operating system to request that the process terminate. It is often used as a graceful way to ask a process to terminate, allowing it to perform any necessary cleanup operations before exiting. Processes can catch this signal and perform cleanup operations before terminating.&lt;/p&gt;

&lt;p&gt;In Node.js, you can listen for these signals using the &lt;code&gt;process.on&lt;/code&gt; method. We can define a function that handles these signals and performs the necessary cleanup operations before exiting the process.&lt;/p&gt;

&lt;p&gt;Here is an example of how to handle SIGINT and SIGTERM signals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Server } from 'http';

const server: Server = /* create your server here */;

function gracefulShutdown() {
  console.log('Shutting down gracefully...');

  server.close(() =&amp;gt; {
    console.log('Server closed.');

    // Close any other connections or resources here

    process.exit(0);
  });

  // Force close the server after 5 seconds
  setTimeout(() =&amp;gt; {
    console.error('Could not close connections in time, forcefully shutting down');
    process.exit(1);
  }, 5000);
}

process.on('SIGTERM', gracefulShutdown);
process.on('SIGINT', gracefulShutdown);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, we're defining a gracefulShutdown function that logs a message to the console and then closes the HTTP server. After the server is closed, any other connections or resources can be closed as well. Finally, we call &lt;code&gt;process.exit(0)&lt;/code&gt; to exit the process with a success code.&lt;/p&gt;

&lt;p&gt;We also set a timeout of 5 seconds to force close the server if it's not closed gracefully within that time. This ensures that the process exits even if there are ongoing connections or resources that cannot be closed gracefully.&lt;/p&gt;

&lt;p&gt;A graceful shutdown is important to ensure that a Node.js application terminates cleanly and without any data loss or corruption. By handling SIGINT and SIGTERM signals, we can implement a graceful shutdown that closes ongoing tasks and connections before exiting the process. With this knowledge, we can handle stranger danger in our Node.js applications with grace and elegance.&lt;/p&gt;

</description>
      <category>node</category>
      <category>gracefulshutdown</category>
      <category>javascript</category>
      <category>express</category>
    </item>
    <item>
      <title>Separating 'app' and 'server' in Express: Why it matters and how it benefits your application</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Fri, 31 Mar 2023 20:57:38 +0000</pubDate>
      <link>https://dev.to/superiqbal7/separating-app-and-server-in-express-why-it-matters-and-how-it-benefits-your-application-149d</link>
      <guid>https://dev.to/superiqbal7/separating-app-and-server-in-express-why-it-matters-and-how-it-benefits-your-application-149d</guid>
      <description>&lt;p&gt;When building a Node.js application using Express, one of the key decisions you'll need to make is whether to keep your application logic in a single file, such as app.js, or to separate it out into multiple files, including a dedicated server.js file. While it may be tempting to keep everything in one place for the sake of simplicity, separating your code into distinct files can actually improve the quality and maintainability of your application in the long run. &lt;/p&gt;

&lt;p&gt;While separating your code into different files is a good idea in general, there's also a specific reason why you should consider separating app.js and server.js. In this blog post, we'll explore why separating app.js and server.js is a good practice and provide examples and code snippets to illustrate the advantages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Separation of concerns
&lt;/h2&gt;

&lt;p&gt;One of the main reasons to separate app.js and server.js is to divide the application's concerns. app.js is responsible for defining the routes, middleware, and other application-level functionality. server.js, on the other hand, is responsible for creating the server, listening for incoming requests, and handling errors.&lt;/p&gt;

&lt;p&gt;Thus we can keep our code organized and maintainable. If we need to make changes to the application logic or routes, we can do so in the app.js file without worrying about the server setup. Similarly, if we need to make changes to the server configuration, we can do so in the server.js file without affecting the application logic.&lt;/p&gt;

&lt;p&gt;Here is an example of what a &lt;code&gt;app.js&lt;/code&gt; and &lt;code&gt;server.js&lt;/code&gt; file might look like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const app = express();

app.get('/', (req, res) =&amp;gt; {
  res.send('Hello World!');
});

module.exports = app;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;server.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const app = require('./app');

const PORT = process.env.PORT || 3000;

app.listen(PORT, () =&amp;gt; {
  console.log(`Server listening on port ${PORT}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, the &lt;code&gt;app.js&lt;/code&gt; file is responsible for defining the Express application and its routes, while &lt;code&gt;server.js&lt;/code&gt; imports the app module and starts the HTTP server on the configured port. This approach separates the server configuration from the application's functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Easier testing
&lt;/h2&gt;

&lt;p&gt;Separating app.js and server.js not only draws a clean separation of concerns but also makes the system much easier to mock and test: you can exercise the API in-process, without real network calls, which yields fast test execution and straightforward code-coverage metrics. &lt;/p&gt;

&lt;p&gt;For example, you can write unit tests for app.js to ensure that each route and middleware function is working as intended. You can also write integration tests for server.js to ensure that the server is properly configured and handling requests correctly.&lt;/p&gt;

&lt;p&gt;Here is an example of a test file for &lt;code&gt;app.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const request = require('supertest');
const app = require('../app');

describe('GET /', () =&amp;gt; {
  it('should respond with "Hello World!"', (done) =&amp;gt; {
    request(app)
      .get('/')
      .expect(200)
      .end((err, res) =&amp;gt; {
        if (err) return done(err);
        expect(res.text).toBe('Hello World!');
        done();
      });
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, we're using the supertest library to make HTTP requests to our app module and test its responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improved scalability
&lt;/h2&gt;

&lt;p&gt;Finally, separating app.js and server.js can improve the application's scalability. By breaking up the code into smaller modules, it becomes easier to add new features or modify existing ones without having to touch the entire application codebase.&lt;/p&gt;

&lt;p&gt;In summary, separating the app.js and server.js files in an Express.js application can improve code organization, maintainability, and testability.&lt;/p&gt;

</description>
      <category>node</category>
      <category>express</category>
      <category>javascript</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to Learn Anything Faster and More Effectively as a Software Developer</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Fri, 31 Mar 2023 19:06:46 +0000</pubDate>
      <link>https://dev.to/superiqbal7/how-to-learn-anything-faster-and-more-effectively-3188</link>
      <guid>https://dev.to/superiqbal7/how-to-learn-anything-faster-and-more-effectively-3188</guid>
      <description>&lt;p&gt;When it comes to learning a new technology or skill, many of us often turn to courses or books to get started. However, this approach may not always yield the desired results. While there are certainly benefits to taking courses or reading tech books, there are also some potential drawbacks that are worth considering.&lt;/p&gt;

&lt;p&gt;One of the main downsides of relying solely on courses or books is that they often only provide a theoretical understanding of the material. While this can be a great way to get started and gain a foundational understanding of a topic, it can also lead to a lack of practical experience. Without practical experience, it can be difficult to apply what you've learned to real-world situations and to troubleshoot problems that arise.&lt;/p&gt;

&lt;p&gt;Another downside is that courses and books can be time-consuming and expensive. Depending on the course or book, it may take weeks or months to complete, and the cost can add up quickly. This can make it difficult for people with busy schedules or limited budgets to pursue their interests or advance their careers.&lt;/p&gt;

&lt;p&gt;In addition, courses and books may not always provide the most up-to-date or relevant information. Technology is constantly evolving, and what was cutting-edge a few years ago may now be outdated or even irrelevant. This can be especially problematic if you're trying to keep up with the latest developments in your field.&lt;/p&gt;

&lt;p&gt;Overall, while courses and books can be a valuable tool for learning, they should be supplemented with practical experience and other resources to ensure a well-rounded understanding of the topic at hand.&lt;/p&gt;

&lt;p&gt;In this blog, we will explore a more practical and effective approach to learning, one that involves hands-on practice, problem-solving, and continuous exploration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: Understanding the Purpose and Benefits of the Technology
&lt;/h2&gt;

&lt;p&gt;Before diving into any new technology, it's important to understand its purpose and potential benefits. This can help you set clear goals and expectations for your learning journey. For example, if you want to learn Terraform, you could start by googling &lt;strong&gt;What is Terraform?&lt;/strong&gt; and &lt;strong&gt;What problems does it solve?&lt;/strong&gt;. You could also read case studies or success stories of companies that have used Terraform to improve their infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2: Getting Hands-On with Practice
&lt;/h2&gt;

&lt;p&gt;Once you have a basic understanding of the technology, it's time to get hands-on with practice. This involves &lt;strong&gt;installing the software or tool on your device and trying out different commands and configurations&lt;/strong&gt;. Don't be afraid to make mistakes or encounter errors – this is all part of the learning process. In fact, by troubleshooting errors and finding solutions through documentation and online resources, you can gain a deeper understanding of the technology and its inner workings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 3: Building Your Knowledge Tree
&lt;/h2&gt;

&lt;p&gt;As you practice and experiment with the technology, you will naturally start to build your own knowledge tree, with branches and leaves that connect different concepts and ideas. For example, when learning Terraform, you may also become more familiar with Linux commands, AWS services, and infrastructure design patterns. &lt;/p&gt;

&lt;p&gt;After getting familiar with the basics of Terraform, the next step is to move on to more advanced topics. For example, you can try provisioning an EC2 instance with Terraform, which will give you hands-on experience with using variables, providers, and other advanced features. As you work through these tasks over the course of a week or so, you'll gain a deeper understanding of how Terraform works and how to apply it to real-world scenarios. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep a record of your progress&lt;/strong&gt; and insights, whether it's through notes, diagrams, or code snippets. This can help you track your progress and identify areas that need more attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 4: Growing Your Knowledge with Real-World Examples
&lt;/h2&gt;

&lt;p&gt;To truly master a technology or skill, you need to be able to apply it in real-world scenarios. This could involve building a project from scratch, contributing to an open-source project, or solving a complex problem using the technology. &lt;/p&gt;

&lt;p&gt;To further cement your knowledge, it can be helpful to &lt;strong&gt;seek out Terraform interview questions&lt;/strong&gt; and practice answering them. By doing so, you'll identify gaps in your knowledge and gain a deeper understanding of the concepts and techniques you need to master. Jotting down the questions and answers you encounter can also help you solidify your knowledge and make it easier to recall later on.&lt;/p&gt;

&lt;p&gt;By testing your knowledge and skills in these scenarios, you can gain valuable experience and confidence in your abilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 5: Continuing Your Learning Journey
&lt;/h2&gt;

&lt;p&gt;Learning is a continuous journey, and it doesn't stop once you've completed a course or mastered a technology. To stay current and relevant, you need to keep exploring and experimenting with new tools and techniques. This could involve attending meetups or conferences, reading blogs or articles, or taking on new projects that challenge you to learn new skills.&lt;/p&gt;

&lt;p&gt;Learning a new technology or skill can be a daunting task, but by following a practical and hands-on approach, you can accelerate your learning and achieve your goals more effectively. Remember to focus on understanding the purpose and benefits of the technology, getting hands-on with practice, building your knowledge tree, testing your knowledge with real-world examples, and continuing your learning journey. With these tips and techniques, you'll be well on your way to becoming a lifelong learner and a master of your craft.&lt;/p&gt;

</description>
      <category>learning</category>
      <category>terraform</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Software Development Engineer - Full Stack Interview Review</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Mon, 20 Mar 2023 11:05:03 +0000</pubDate>
      <link>https://dev.to/superiqbal7/software-development-engineer-full-stack-interview-review-3a91</link>
      <guid>https://dev.to/superiqbal7/software-development-engineer-full-stack-interview-review-3a91</guid>
      <description>&lt;p&gt;As a software developer, there are few things more exciting than being invited to participate in a multi-stage technical interview. Recently, I had the opportunity to go through a three-stage interview process at &lt;strong&gt;Craftsmen Ltd.&lt;/strong&gt; that not only challenged me technically, but also allowed me to get to know the company and their culture better.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Stage
&lt;/h2&gt;

&lt;p&gt;This was a technical assessment sent via email, with a time limit of three hours. &lt;/p&gt;

&lt;p&gt;I was given a set of requirements for creating REST APIs that allow users to create, read, update, and delete blog posts, and was asked to implement the solution using Node.js and the Express framework. &lt;/p&gt;

&lt;p&gt;I then had to create a React app that lets the user view and interact with the list of blog posts from those REST APIs, and was asked to integrate Redux to manage the state of the blog posts. &lt;/p&gt;

&lt;p&gt;I was also asked to write tests to ensure that the REST API, Redux store, and React app work as expected. &lt;/p&gt;

&lt;p&gt;This was an intense experience, but it allowed me to showcase my technical skills and demonstrate my ability to work under pressure.&lt;/p&gt;
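&lt;p&gt;As a rough illustration (not the actual assessment code), the CRUD core of such a task can be sketched framework-free, which also makes it easy to unit test; the Express wiring below assumes the express package is installed, and all route and field names are illustrative:&lt;/p&gt;

```javascript
// Illustrative in-memory CRUD core for blog posts.
// Keeping the logic framework-free makes it straightforward to unit test.
const posts = new Map();
let nextId = 1;

function createPost(title, body) {
  const post = { id: nextId++, title, body };
  posts.set(post.id, post);
  return post;
}

function getPost(id) {
  return posts.get(id) || null;
}

function updatePost(id, fields) {
  const post = posts.get(id);
  if (!post) return null;
  return Object.assign(post, fields);
}

function deletePost(id) {
  return posts.delete(id);
}

// Wiring the core into Express routes (assumes the express package is installed):
function buildApp() {
  const express = require('express'); // loaded lazily so the CRUD core above stays dependency-free
  const app = express();
  app.use(express.json());
  app.post('/posts', (req, res) => res.status(201).json(createPost(req.body.title, req.body.body)));
  app.get('/posts/:id', (req, res) => {
    const post = getPost(Number(req.params.id));
    return post ? res.json(post) : res.sendStatus(404);
  });
  // PUT /posts/:id and DELETE /posts/:id follow the same pattern
  return app;
}
```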

&lt;h2&gt;
  
  
  Second Stage
&lt;/h2&gt;

&lt;p&gt;It was an onsite interview, which allowed me to meet the team in person and get a better understanding of their work culture. &lt;/p&gt;

&lt;p&gt;During this stage, I was asked a range of technical questions, including ones related to AWS services, Lambda, S3, REST, Node.js, React, Redux, and lifecycle methods. The team was friendly, engaging, and clearly invested in getting to know me both as a developer and as a person. &lt;/p&gt;

&lt;p&gt;One of the questions that particularly stood out to me was the one about why I used a custom error handler in my demo project. This question allowed me to demonstrate my understanding of the importance of handling errors properly in a production environment, and how a custom error handler can help to provide a better user experience. &lt;br&gt;
The following is a list of the questions and topics we discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Can you introduce yourself?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are the products you have worked on?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are your strengths in frontend/backend development? (One interviewer mentioned that CSS is like black magic to him.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What AWS services have you used?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If millions of files are uploaded to S3 and a Lambda function needs to be triggered, how would Lambda handle the workload? How would it work with SQS? [This was a dream question for me based on my work expectations]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you explain how dead letter queues work?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How does Lambda work, and what are its limitations?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you explain how to upload a file from your application to a private S3 bucket?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are the advantages of REST, and how does it work?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why did you use a custom error handler in your assignment project?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are you satisfied with your demo project? If not, how would you improve it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How often have you used unit tests in practice?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How is your code deployed, and from your answer, is it possible to run end-to-end tests before merging a pull request? If so, how?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why are unit tests necessary?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you explain how a pure component works?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have you encountered any scenarios where you needed to customize the Shadow DOM library in React?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you explain the lifecycle methods of a React class component?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can you achieve the same behavior of lifecycle methods in a functional component?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are the alternatives to Redux?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can you share data between components?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you explain how the Context API works?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is Kubernetes, and what problems does it solve?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the event that a pod crashes in Kubernetes, who is responsible for fixing its state?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In one of your projects, you used AWS ECS to deploy your application. Can you explain the reason behind this choice? Is it more expensive compared to other options?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why did you leave your previous job, and what issues did you encounter? How did you try to communicate these issues to your superiors? If you encountered the same issues at this company, what would you do?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are you interested in pursuing an MSC degree? If so, why? What are your goals after completing the degree?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How did you utilize your BSC credits in your practical experience, and how would pursuing an MSC degree for two more years help you?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
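&lt;p&gt;For the question above about uploading a file to a private S3 bucket, a common answer is presigned URLs: the backend signs a short-lived URL, and the client uploads directly to S3. A minimal sketch with the v2 aws-sdk (bucket and key names are illustrative):&lt;/p&gt;

```javascript
// Parameters for a time-limited presigned PUT URL (pure, testable helper).
function presignParams(bucket, key, expiresSeconds) {
  return { Bucket: bucket, Key: key, Expires: expiresSeconds };
}

// The backend signs the URL; the client then PUTs the file directly to S3,
// so the bucket stays private and credentials never reach the browser.
function getUploadUrl(bucket, key) {
  const AWS = require('aws-sdk'); // loaded lazily so the helper above has no dependencies
  const s3 = new AWS.S3();
  return s3.getSignedUrlPromise('putObject', presignParams(bucket, key, 300));
}
```

&lt;p&gt;The client then issues an HTTP PUT to the returned URL within the expiry window; no AWS credentials ever reach the browser.&lt;/p&gt;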

&lt;p&gt;Another highlight of the interview was when the team discussed their products and the kind of challenges they face and solve. This showed me that they are passionate about what they do, and that they have a deep understanding of the industry and the technologies they work with.&lt;/p&gt;

&lt;p&gt;I appreciated the way they took the time to understand my experience, interests, and goals, and asked thoughtful questions that allowed me to showcase my skills and knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Stage
&lt;/h2&gt;

&lt;p&gt;This was a non-technical interview where we discussed the company's facilities and work culture, followed by the negotiation round. This allowed me to get a better sense of the company's values, and to see whether I would be a good fit for their team.&lt;/p&gt;

&lt;p&gt;Overall, the interview process was challenging but rewarding, and allowed me to showcase my skills and get to know the company and the team better. I appreciated the friendly and engaging nature of the interviews, and the way that the team took the time to understand my interests and goals.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Read/Download S3 files using Lambda Functions</title>
      <dc:creator>MD RAHIM IQBAL</dc:creator>
      <pubDate>Wed, 15 Feb 2023 17:35:49 +0000</pubDate>
      <link>https://dev.to/superiqbal7/readdownload-s3-files-using-lambda-functions-53k8</link>
      <guid>https://dev.to/superiqbal7/readdownload-s3-files-using-lambda-functions-53k8</guid>
      <description>&lt;p&gt;To get started with this tutorial, you'll first need to create an Amazon S3 bucket and upload a sample object to it. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create a bucket and upload a sample object
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the Amazon S3 console and choose "Create bucket".&lt;/li&gt;
&lt;li&gt;Give your bucket a name (e.g. my-test-bucket-07; bucket names must be lowercase and globally unique) and choose a region.&lt;/li&gt;
&lt;li&gt;Click "Create bucket".&lt;/li&gt;
&lt;li&gt;Select the bucket you just created and go to the "Objects" tab.&lt;/li&gt;
&lt;li&gt;Choose "Upload" and select a test file (e.g. MyTestFile.txt) from your local machine. This file can be a text file containing anything you want.&lt;/li&gt;
&lt;li&gt;Click "Upload".&lt;/li&gt;
&lt;/ol&gt;
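&lt;p&gt;The console steps above can also be scripted. A minimal sketch using the v2 aws-sdk (the bucket and file names are the ones assumed in this walkthrough; note that S3 bucket names must be lowercase and globally unique):&lt;/p&gt;

```javascript
// S3 bucket names must be 3-63 characters: lowercase letters, digits, dots, and hyphens.
function isValidBucketName(name) {
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);
}

// Create the bucket and upload the sample object programmatically.
async function setupBucket(bucket, key) {
  if (!isValidBucketName(bucket)) throw new Error(`invalid bucket name: ${bucket}`);
  const AWS = require('aws-sdk'); // v2 SDK, loaded lazily so the validator above has no dependencies
  const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
  await s3.createBucket({ Bucket: bucket }).promise();
  await s3.putObject({ Bucket: bucket, Key: key, Body: 'Hello from S3!' }).promise();
}
```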

&lt;p&gt;Once your bucket is set up, you can create a Lambda function using a function blueprint. A blueprint is a sample function that demonstrates how to use Lambda with other AWS services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create an IAM role
&lt;/h2&gt;

&lt;p&gt;Create an IAM role with a policy that grants read access to your S3 bucket. (If you use the blueprint in the next step, Lambda can create this role for you from a policy template.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create the Lambda Function
&lt;/h2&gt;

&lt;p&gt;Here are the steps to create a Lambda function in the console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Functions&lt;/strong&gt; page on the Lambda console.&lt;/li&gt;
&lt;li&gt;Choose "Create function".&lt;/li&gt;
&lt;li&gt;Select "Use a blueprint".&lt;/li&gt;
&lt;li&gt;Search for "s3" under Blueprints and choose the s3-get-object blueprint for Node.js.&lt;/li&gt;
&lt;li&gt;Click "Configure".&lt;/li&gt;
&lt;li&gt;Under "Basic information", &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Enter a name for your function (e.g. ReadS3File).&lt;/li&gt;
&lt;li&gt;Choose "Create a new role from AWS policy templates" for the execution role.&lt;/li&gt;
&lt;li&gt;Enter a name for your role (e.g. S3ReadAccess).&lt;/li&gt;
&lt;li&gt;From "Policy templates", select &lt;strong&gt;Amazon S3 object read-only permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under "S3 trigger", choose the bucket you created earlier.&lt;/li&gt;
&lt;li&gt;Click "Create function".
Next, you can review the function code, which retrieves the source S3 bucket name and the key name of the uploaded object from the event parameter it receives. The function uses the Amazon S3 getObject API to retrieve the content type of the object.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4: View the function code in the Lambda console
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the "Code" tab while viewing your function.&lt;/li&gt;
&lt;li&gt;Look under "Code source".&lt;/li&gt;
&lt;li&gt;The AWS-provided blueprint code needs a few changes; the modified version below reads the object named in the event and returns its contents:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');
const s3 = new AWS.S3({apiVersion: '2006-03-01'});

exports.handler = async (event, context) =&amp;gt; {
    // Bucket from Step 1 (hardcoded) and object key taken from the incoming event
    const bucketName = "my-test-bucket-07";
    const objectKey = event.filePath;

    const params = {
        Bucket: bucketName,
        Key: objectKey
    };

    try {
        // Read the object
        const s3Object = await s3.getObject(params).promise();
        console.log(s3Object.Body.toString());

        return s3Object.Body.toString();
    } catch (err) {
        console.log(err);
        throw err;
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
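&lt;p&gt;The code above uses the v2 aws-sdk, which older Node.js Lambda runtimes bundle. Newer runtimes (Node.js 18 and later) bundle only the v3 SDK (@aws-sdk/client-s3) instead; a sketch of an equivalent handler, assuming that package is available:&lt;/p&gt;

```javascript
// Sketch of an equivalent handler for AWS SDK v3 (@aws-sdk/client-s3).
// In Lambda you would export this as exports.handler = handler;
async function handler(event) {
  // Required inside the handler so this sketch stays self-contained
  const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
  const s3 = new S3Client({});
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: 'my-test-bucket-07', // the bucket created in Step 1
    Key: event.filePath,         // e.g. "MyTestFile.txt"
  }));
  // In SDK v3, Body is a stream; transformToString() collects it into a string
  return Body.transformToString();
}
```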



&lt;h2&gt;
  
  
  Step 5: Test the Lambda Function
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;On the Code tab, under Code source, choose the arrow next to Test, and then choose Configure test events from the dropdown list.&lt;/li&gt;
&lt;li&gt;In the Configure test event window, choose Create new test event and name it 'TestCase01'.&lt;/li&gt;
&lt;li&gt;Add the following to the Event JSON:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "filePath": "MyTestFile.txt"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Choose Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;To invoke the function with your test event, under Code source, choose Test. The Execution results tab displays the response, function logs, and the request ID.&lt;/li&gt;
&lt;/ul&gt;
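&lt;p&gt;Instead of the console's Test button, the function can also be invoked programmatically. A sketch using the same v2 aws-sdk (the function name is the one created in Step 3):&lt;/p&gt;

```javascript
// Build the JSON payload for the test event (pure, testable helper).
function invocationPayload(filePath) {
  return JSON.stringify({ filePath });
}

// Invoke the ReadS3File function without going through the console.
async function invokeReadS3File(filePath) {
  const AWS = require('aws-sdk'); // loaded lazily so the helper above stays dependency-free
  const lambda = new AWS.Lambda();
  const res = await lambda.invoke({
    FunctionName: 'ReadS3File',
    Payload: invocationPayload(filePath),
  }).promise();
  return JSON.parse(res.Payload); // the file contents returned by the handler
}
```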

</description>
      <category>aws</category>
      <category>s3</category>
      <category>lambda</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
