<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sha Md Nayeem</title>
    <description>The latest articles on DEV Community by Sha Md Nayeem (@shamdnayeem).</description>
    <link>https://dev.to/shamdnayeem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1011839%2Fad3721d6-6f35-4d47-81b0-23cde8a75531.jpg</url>
      <title>DEV Community: Sha Md Nayeem</title>
      <link>https://dev.to/shamdnayeem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shamdnayeem"/>
    <language>en</language>
    <item>
      <title>Mastering Git: 18 Essential Commands for Becoming a Version Control Pro</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Sat, 08 Apr 2023 10:26:08 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/mastering-git-18-essential-commands-for-becoming-a-version-control-pro-mdh</link>
      <guid>https://dev.to/shamdnayeem/mastering-git-18-essential-commands-for-becoming-a-version-control-pro-mdh</guid>
      <description>&lt;p&gt;Git is a popular version control tool that makes it easier for developers to collaborate effectively on a project. It enables multiple developers to collaborate on the same codebase without eliminating one another's changes. To properly take advantage of Git's potential, one needs to grasp a few key commands that can turn them into version control experts.&lt;/p&gt;

&lt;p&gt;We'll go through some important Git commands in this article to help you become an expert in version control.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. git config
&lt;/h3&gt;

&lt;p&gt;To configure Git on your local machine, the &lt;code&gt;git config&lt;/code&gt; command is used. It enables you to set up your name and email, define aliases for frequently used commands, and set various other preferences.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#To display the current configuration
git config --list
#To set up your name and email address
git config --global user.name "Your Name"
git config --global user.email "youremail@example.com"
#To define an alias for frequently used commands
git config --global alias.co checkout

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;git config --global alias.co checkout&lt;/code&gt; creates the alias "co" for the "checkout" command, so you can type "git co" instead of "git checkout".&lt;/p&gt;

&lt;h3&gt;
  
  
  2. git init
&lt;/h3&gt;

&lt;p&gt;A new Git repository is set up using the &lt;code&gt;git init&lt;/code&gt; command. It generates a new .git subdirectory in the current directory that contains all the files required to manage the repository. This command only needs to be run once, at the beginning of a project.&lt;/p&gt;

&lt;p&gt;Imagine you are starting a new software project and wish to use Git for version control. The first step is to initialize a new Git repository with the &lt;code&gt;git init&lt;/code&gt; command in the directory (say, ~/Desktop/my-project) where you want to keep the project. Go to that directory and run the following command, which creates a local Git repository only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
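&lt;p&gt;To confirm the repository was created, you can list hidden files; a .git subdirectory indicates success. A minimal sketch, assuming the ~/Desktop/my-project directory from above:&lt;/p&gt;

```shell
#Create the project directory and initialize a repository
mkdir -p ~/Desktop/my-project
cd ~/Desktop/my-project
git init
#The hidden .git subdirectory now exists
ls -a
```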



&lt;h3&gt;
  
  
  3. git remote
&lt;/h3&gt;

&lt;p&gt;If you want to push your commits to a remote repository like GitHub, GitLab, or others, first create a remote repository there and then configure the local repository you have just created to push to it. To do this, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote add origin https://github.com/&amp;lt;your-username&amp;gt;/my-project.git

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will add a remote called "origin" to the local repository, which points to the URL of your remote repository on GitHub. After executing this, you can push your local changes to the remote repository by running &lt;code&gt;git push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you want to disconnect your local Git repository from the remote origin repository, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote remove origin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check whether any remote repositories are associated with your local Git repository, use the following command, which lists them all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote -v

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. git add
&lt;/h3&gt;

&lt;p&gt;To add files to the staging area, use the &lt;code&gt;git add&lt;/code&gt; command. The staging area temporarily holds changes before they are committed. This command needs to be run for every modified file you want to include in the next commit.&lt;/p&gt;

&lt;p&gt;Let's say your project contains a file named main.txt. You made some changes to it and wish to commit them to your Git repository. Before you can commit the modifications, you must first add the file to the staging area using the &lt;code&gt;git add&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add main.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have modified multiple files in your local repository and want to stage all of them at once, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. git commit
&lt;/h3&gt;

&lt;p&gt;The local repository is updated using the &lt;code&gt;git commit&lt;/code&gt; command. It is used to create a snapshot of the staged modifications along a timeline of a Git project's history. Each commit includes a message that details the modifications made.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "Added new content to main.txt"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-m&lt;/code&gt; option allows you to add a commit message that details the changes made in the commit. The message should be descriptive and summarize the changes you made.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. git push
&lt;/h3&gt;

&lt;p&gt;To upload local repository changes to a remote repository, the &lt;code&gt;git push&lt;/code&gt; command is used. Run it after committing to make your changes available to other developers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin master

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. git pull
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git pull&lt;/code&gt; command fetches changes from the remote repository and merges them into the local repository. Run it before making changes locally to make sure your copy is up to date.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull origin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. git status
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git status&lt;/code&gt; command displays the local Git repository's current status. It displays information about any deleted or untracked files as well as changes made to the local working directory, staging area, and repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  9. git clone
&lt;/h3&gt;

&lt;p&gt;A copy of a remote repository can be made on a local machine using the &lt;code&gt;git clone&lt;/code&gt; command. When creating a new development environment or working on a project with multiple developers, this command is helpful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone &amp;lt;URL of the repository&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  10. git branch
&lt;/h3&gt;

&lt;p&gt;You can create, list, or delete branches with the &lt;code&gt;git branch&lt;/code&gt; command. Branches let developers work on different features or versions of the project without affecting the main branch. (Switching between branches is done with &lt;code&gt;git checkout&lt;/code&gt;, covered below.)&lt;/p&gt;

&lt;p&gt;To list all the branches we can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git branch -a -v

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To delete a local branch, you need to run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git branch -d new-branch
#To delete a local branch forcefully
git branch -D new-branch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Git will not let you delete a local branch that contains unmerged changes. In that case, use the &lt;code&gt;-D&lt;/code&gt; flag to delete the branch forcefully.&lt;/p&gt;

&lt;p&gt;To delete a remote branch, run the following command; if the deletion needs to be forced, add the &lt;code&gt;--force&lt;/code&gt;/&lt;code&gt;-f&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin --delete new-branch
#To delete the remote branch forcefully
git push origin --delete --f new-branch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  11. git diff
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git diff&lt;/code&gt; command displays the modifications made to a file or set of files between two versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git diff

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
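&lt;p&gt;By default, &lt;code&gt;git diff&lt;/code&gt; compares the working directory with the staging area, but it can compare other pairs of versions as well; a few common variants (the commit ids are placeholders):&lt;/p&gt;

```shell
#Compare the working directory with the staging area
git diff
#Compare the staging area with the last commit
git diff --staged
#Compare two commits
git diff [commit id 1] [commit id 2]
#Limit the diff to a single file
git diff main.txt
```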



&lt;h3&gt;
  
  
  12. git log
&lt;/h3&gt;

&lt;p&gt;A list of commits made to the repository is displayed by the &lt;code&gt;git log&lt;/code&gt; command. It shows the information about the commit, including the author, date, and commit message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git log

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
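&lt;p&gt;&lt;code&gt;git log&lt;/code&gt; accepts many options to shape its output; a few commonly useful ones:&lt;/p&gt;

```shell
#Show each commit on a single line
git log --oneline
#Draw an ASCII graph of the branch history
git log --oneline --graph
#Show only the last 5 commits
git log -5
```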



&lt;h3&gt;
  
  
  13. git merge
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git merge&lt;/code&gt; command combines changes from one branch into another. Developers working on the same code can use it to integrate their changes back into a shared branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git merge

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  14. git show
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git show&lt;/code&gt; command provides information about a particular commit. It displays the changes made in the commit and other metadata, such as the commit message, author, and date.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git show

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  15. git reset
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;git reset&lt;/code&gt; command resets the repository's state to a particular commit. It can be applied to undo repository modifications or to undo erroneous commits. With the &lt;code&gt;git log&lt;/code&gt; command, you can get the commit id.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#To reset the reposiotry to a specific commit id
git reset [commit id]
#To reset the repository to the previous commit
git reset HEAD^

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;git reset HEAD^&lt;/code&gt; resets the repository to the previous commit, moving the HEAD pointer back by one commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  16. git stash
&lt;/h3&gt;

&lt;p&gt;Changes that are not yet ready to be committed are stored temporarily using the &lt;code&gt;git stash&lt;/code&gt; command. It can be used to save modifications before merging or to transition between branches without committing changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git stash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
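&lt;p&gt;A typical stash workflow saves your uncommitted changes, lets you switch context, and then restores them:&lt;/p&gt;

```shell
#Save uncommitted changes and restore a clean working directory
git stash
#List all stashed change sets
git stash list
#Reapply the most recent stash and remove it from the list
git stash pop
```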



&lt;h3&gt;
  
  
  17. git checkout
&lt;/h3&gt;

&lt;p&gt;To switch between branches or to create a new branch, the &lt;code&gt;git checkout&lt;/code&gt; command is used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Switch between branches:
git checkout feature-branch
#Create and switch to a new branch
git checkout -b another-new-branch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  18. git rm
&lt;/h3&gt;

&lt;p&gt;To remove a file (e.g., main.txt) from both the working directory and the Git repository, the &lt;code&gt;git rm&lt;/code&gt; command is used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git rm main.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to keep the file in the working directory but stop tracking it in the Git repository, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git rm --cached main.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
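&lt;p&gt;After running &lt;code&gt;git rm --cached&lt;/code&gt;, the file remains on disk but Git stops tracking it, so &lt;code&gt;git status&lt;/code&gt; will report it as untracked (this is often paired with a .gitignore entry):&lt;/p&gt;

```shell
#Stop tracking the file but keep it on disk
git rm --cached main.txt
#The file now shows up as untracked
git status
```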



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, Git is a crucial tool for version control in software development. Learning these key commands, along with the rest of Git's toolset, puts you on the path to becoming a version control expert. They make it possible to manage a codebase and collaborate effectively with other developers. Git commands not only save you time but also let you track changes, correct issues, and maintain a transparent history of the project. Used correctly, they help you work smarter and more productively.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. Your support is much appreciated! If you found this article valuable, please consider sharing it with your friends and colleagues and clicking the 👉 Follow button and giving it a love 🖤 to help me create more informative content like this. Thank you for your time! 🖤 and also follow me on &lt;a href="https://cloudifydevops.com/"&gt;&lt;strong&gt;My Blog&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>git</category>
      <category>versioncontrol</category>
      <category>github</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Kubernetes 101: Essential Concepts to Master Before You Begin</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Sun, 26 Mar 2023 21:10:33 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/kubernetes-101-essential-concepts-to-master-before-you-begin-3iec</link>
      <guid>https://dev.to/shamdnayeem/kubernetes-101-essential-concepts-to-master-before-you-begin-3iec</guid>
      <description>&lt;p&gt;Kubernetes has emerged as a crucial component of contemporary software development, particularly for businesses that operate on a large scale. It is an open-source container orchestration technology that was initially created by Google and automates the deployment, scaling, and management of containerized applications. With Kubernetes, developers have the freedom to concentrate on creating code rather than worrying about the supporting infrastructure because of the framework it provides for managing distributed systems. For businesses that want to develop and deploy applications rapidly and effectively, with high availability, scalability, and resilience, Kubernetes is essential. This blog will explain what Kubernetes is, how it functions, its architecture and core components, and why modern software development needs it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and How It Works
&lt;/h2&gt;

&lt;p&gt;Kubernetes is an open-source container orchestration technology that automates the deployment, scaling, and maintenance of containerized applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containerization&lt;/strong&gt; is a process of packaging software in a way that it can run reliably across different computing environments. It has grown in popularity in recent years because it enables developers to build, package, and deploy applications more rapidly and consistently. Several applications can operate on a single host operating system using this containerization technique without interfering with one another. Each container has its own set of dependencies, libraries, and configuration files and is isolated from the others.&lt;/p&gt;

&lt;p&gt;Let's make this clearer with an example. Suppose you are running a web application for development purposes in your local environment. To function, the application needs particular versions of Node.js, a database, and a few third-party libraries. Since you use your computer for other purposes as well, you don't want to install these dependencies globally and risk conflicts with other programs. Instead, you can package the application and its dependencies in a container, isolated from the rest of the system, and run it on any computer that supports a containerization technology such as Docker. That container can also be deployed to a cloud platform like AWS or Google Cloud and quickly scaled up or down based on demand, making it simple to handle traffic peaks without over-provisioning resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Container Orchestrator&lt;/strong&gt; is a tool that simplifies the management of containerized applications across a distributed system, automating tasks such as network configuration, scaling, and load balancing to help ensure they keep running properly. Kubernetes is built on this principle and provides a powerful set of features for managing containerized applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Container orchestration: Kubernetes helps to automate the deployment, scaling, and management of containerized applications across a cluster of nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service discovery and load balancing: Kubernetes has an internal DNS system that helps to discover containers and communicate with each other. It also offers load balancing for distributing traffic between containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Self-healing: Kubernetes continuously monitors the health of containers and automatically restarts or replaces them if they fail or become unresponsive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auto-scaling: Kubernetes can automatically scale the number of containers based on resource utilization and spikes in traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rolling updates and rollbacks: Kubernetes offers a way to update containers without downtime and, if there are any critical issues, allows for easy rollbacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Config management: Kubernetes offers a way to manage configuration files and environment variables for containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage orchestration: Kubernetes can manage storage for containerized applications, including persistent storage volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: Kubernetes offers a range of security features, including role-based access control (RBAC), network policies, and container image verification.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Kubernetes Architecture and Components
&lt;/h2&gt;

&lt;p&gt;As a distributed system for managing containerized applications, Kubernetes is composed of a cluster of nodes and those nodes in a Kubernetes cluster are divided into two types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Master node&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Worker Node&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Od9EJ3ck--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679862288830/262c132c-925d-46da-b6c3-2ba04aa02931.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Od9EJ3ck--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679862288830/262c132c-925d-46da-b6c3-2ba04aa02931.png" alt="" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The components of a Kubernetes cluster (Source: Kubernetes)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Master Node:&lt;/strong&gt; The master node runs the Kubernetes &lt;strong&gt;control plane&lt;/strong&gt;, the collection of components responsible for managing the state of the cluster, scheduling applications, and maintaining communication between nodes. The control plane contains the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes API server&lt;/strong&gt; : This is the central management point for the Kubernetes cluster. It provides an HTTP REST API that allows end users to interact with the Kubernetes cluster, including creating and managing pods, services, and other objects. The other cluster components also communicate with this API server. The API server is responsible for exposing the cluster API endpoints and processing all API requests as well as authentication and authorization. This API server is the only component that communicates with etcd and also coordinates all the processes between the control plane and worker node components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt; : This is a distributed key-value store used by Kubernetes to store cluster configuration and state information; you can think of it as the brain of the cluster. It provides a reliable and consistent way to store data across the cluster, holding all configurations, states, and metadata of Kubernetes objects such as pods, secrets, deployments, daemonsets, configmaps, etc. As mentioned earlier, it communicates only with the API server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes Scheduler&lt;/strong&gt; : This component is responsible for scheduling pods onto worker nodes based on available resources and other constraints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kube Controller Manager&lt;/strong&gt; : In Kubernetes, controllers are programs that run endless control loops, continuously watching the state of objects for any difference between their actual and desired state. The controller manager runs the core controllers, which monitor the cluster and take the actions necessary to maintain the desired state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Controller Manager&lt;/strong&gt; : The Cloud Controller Manager (CCM) is a component of the Kubernetes control plane that runs when Kubernetes is deployed in cloud environments. It provides an interface between the Kubernetes control plane and the cloud platform API and enables interaction between Kubernetes and the cloud provider's underlying infrastructure. Load balancers, block storage, network routes, etc are a few of the resources that CCM is responsible for managing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Worker Node:&lt;/strong&gt; The worker node(s) are responsible for running containers and serving application traffic. They contain the following core components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt; : This is an agent that runs as a daemon on each worker node and communicates with the Kubernetes API server to manage containers and pods, creating containers based on pod specifications. By starting, stopping, and restarting the containers as needed, the kubelet makes sure they are running and in good condition. It also monitors how much CPU and memory the containers are using and reports this data to the Kubernetes API server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kube-proxy:&lt;/strong&gt; The kube-proxy is a network proxy and load balancer for Kubernetes services. It runs on each worker node as a daemonset and routes traffic to the proper container or pod according to the service's configuration. By default, kube-proxy uses iptables rules to control network traffic and guarantee the service's scalability and high availability. In this mode, kube-proxy picks a backend pod at random for load balancing; once a connection is made, requests go to the same pod until the connection is closed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container runtime:&lt;/strong&gt; The container runtime is the program that runs containers on the worker nodes. It runs on every node in the Kubernetes cluster and is responsible for starting and stopping containers, pulling images from container registries, and allocating container resources such as CPU and memory. Kubernetes supports a variety of container runtimes, so organizations can select the one that best suits their needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
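&lt;p&gt;If you have access to a cluster with kubectl configured, you can see these pieces for yourself; on most distributions the control-plane components run as pods in the kube-system namespace. A quick sketch (assuming a running cluster):&lt;/p&gt;

```shell
#List the nodes in the cluster and their roles
kubectl get nodes
#List control-plane and node components such as the API server, etcd, and kube-proxy
kubectl get pods -n kube-system
```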

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this article, we have discussed the fundamentals of Kubernetes, such as containerization, container orchestrators, and the main elements of the Kubernetes architecture. We've seen how Kubernetes offers attributes like fault tolerance, scalability, and high availability, all of which are essential for operating mission-critical applications.&lt;/p&gt;

&lt;p&gt;The fundamental building blocks of Kubernetes, known as Kubernetes objects, will be covered in more detail in the forthcoming article. We will explore the different types of objects, their properties, and how a Kubernetes cluster can use them to manage its resources and applications.&lt;/p&gt;

&lt;p&gt;Stay tuned for more information about the interesting Kubernetes world!&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/"&gt;https://kubernetes.io/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;I appreciate you taking the time to read this. Your support is much appreciated! If you found this article valuable, please consider sharing it with your friends and colleagues and clicking the 👉 Follow button and giving it a love 🖤 to help me create more informative content like this. Thank you for your time! 🖤 and also follow me on &lt;a href="https://cloudifydevops.com/"&gt;&lt;strong&gt;My Blog&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containerization</category>
      <category>docker</category>
      <category>container</category>
    </item>
    <item>
      <title>How I Fixed the direnv allow Error: Troubleshooting .envrc and direnv Issues for Effective Environment Variable Management</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Thu, 09 Feb 2023 15:43:21 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/how-i-fix-the-direnv-allow-error-troubleshooting-envrc-and-direnv-issues-for-effective-environment-variable-management-2lb4</link>
      <guid>https://dev.to/shamdnayeem/how-i-fix-the-direnv-allow-error-troubleshooting-envrc-and-direnv-issues-for-effective-environment-variable-management-2lb4</guid>
      <description>&lt;p&gt;In software development, &lt;code&gt;.envrc&lt;/code&gt; and &lt;code&gt;direnv&lt;/code&gt; are tools used to manage environment variables. While working on a project, I faced a problem while getting environment variables from the .envrc file, though I executed &lt;code&gt;direnv allow&lt;/code&gt; command successfully.&lt;/p&gt;

&lt;p&gt;In this article, I am going to discuss how I solved this issue, but before that, I will introduce &lt;code&gt;.envrc&lt;/code&gt; and &lt;code&gt;direnv&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  .envrc
&lt;/h2&gt;

&lt;p&gt;Environment variables and their values can be defined in configuration files called &lt;code&gt;.envrc&lt;/code&gt; files which stand for "Environment RC" files. Sensitive data that shouldn't be committed to version control, such as API keys, credentials, and other configuration data, is typically stored in this file. The file, which contains a list of key-value pairs that can be exported as environment variables, is usually stored in the project's root directory.&lt;/p&gt;

&lt;p&gt;Similar to &lt;code&gt;.envrc&lt;/code&gt;, &lt;code&gt;.env&lt;/code&gt; is also a configuration file that sets environment variables but the main difference is that &lt;code&gt;.envrc&lt;/code&gt; is specific to &lt;code&gt;direnv&lt;/code&gt;, whereas &lt;code&gt;.env&lt;/code&gt; is a general-purpose file that may be used by any tool or framework.&lt;/p&gt;

&lt;p&gt;Consider that you are working on a project that calls for an API key from a third-party service. In the root directory of your project, run this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim .envrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then insert this line in the .envrc file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export API_KEY=YOUR_API_KEY

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  direnv
&lt;/h2&gt;

&lt;p&gt;Environment variables can be managed using the tool known as &lt;code&gt;direnv&lt;/code&gt;, which stands for "directory environment," in a more flexible and secure way than conventional methods. It allows you to define environment variables for specific directories and automatically loads and unloads them as you enter and leave those directories. This can be helpful for projects that need different environment variables for different settings, such as production, staging, and development, or for different stages of development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to Fix the direnv allow Error
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Basic Installation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To install &lt;code&gt;direnv&lt;/code&gt; on macOS with Homebrew, run this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install direnv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Hook direnv into the Shell
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;direnv&lt;/code&gt; must be hooked into the shell in order to function properly, and each shell has its own extension mechanism. For Zsh, add the following line at the end of the &lt;code&gt;~/.zshrc&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eval "$(direnv hook zsh)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using Bash then add the following line at the end of the &lt;code&gt;~/.bashrc&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eval "$(direnv hook bash)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In these examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;eval&lt;/code&gt; builtin evaluates the string that is passed to it as a shell command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;direnv hook zsh&lt;/code&gt; command generates the Zsh commands needed to hook &lt;code&gt;direnv&lt;/code&gt; into the shell.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;direnv hook bash&lt;/code&gt; does the same for Bash.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After configuring the hook of &lt;code&gt;direnv&lt;/code&gt;, restart your shell for the change to take effect.&lt;/p&gt;
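&lt;p&gt;Instead of opening a new terminal window, you can also reload the configuration in place, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Replace the current shell with a fresh one
exec zsh

# Or re-read the config file without replacing the shell
source ~/.zshrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;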

&lt;p&gt;After that, the following command needs to be executed in the terminal to make &lt;code&gt;direnv&lt;/code&gt; work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;direnv allow

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;N.B: If you update the .envrc file, you need to execute the&lt;/strong&gt; &lt;code&gt;direnv allow&lt;/code&gt; &lt;strong&gt;command again.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrieve Environment Variable
&lt;/h3&gt;

&lt;p&gt;Once the &lt;code&gt;direnv allow&lt;/code&gt; command has been executed, the environment variables are loaded automatically. You can then use the &lt;code&gt;API_KEY&lt;/code&gt; variable in your code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

api_key = os.getenv("API_KEY")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We import &lt;code&gt;os&lt;/code&gt;, a built-in Python module that offers a means to communicate with the underlying operating system; it ships with every standard Python installation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Python os module, there is a function called &lt;code&gt;os.getenv&lt;/code&gt; that returns the value of an environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
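&lt;p&gt;If the variable might be missing, &lt;code&gt;os.getenv&lt;/code&gt; also accepts a default value as a second argument, which avoids ending up with &lt;code&gt;None&lt;/code&gt; unexpectedly (the fallback string here is just an illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

# Returns "missing-key" if API_KEY is not set in the environment
api_key = os.getenv("API_KEY", "missing-key")

if api_key == "missing-key":
    print("API_KEY is not configured; did you run 'direnv allow'?")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;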

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, &lt;code&gt;.envrc&lt;/code&gt; and &lt;code&gt;direnv&lt;/code&gt; are essential tools for managing your environment variables and making sure that your development environment is configured correctly. The methods described in this article can help you identify and fix any problems, including the &lt;code&gt;direnv allow&lt;/code&gt; error. You may optimize your development process and improve your efficiency by following these best practices.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. Your support is much appreciated! If you found this article valuable, please consider clicking the 👉 Follow button and giving it a few claps by clicking the ❤️ like button to help me create more informative content like this. Thank you for your time! 🖤&lt;br&gt;
Also, follow me on &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>troubleshooting</category>
      <category>devops</category>
      <category>direnv</category>
    </item>
    <item>
      <title>12 Kubectl Commands to Master Kubernetes Deployments</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Tue, 07 Feb 2023 12:37:15 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/12-kubectl-commands-to-master-kubernetes-deployments-4k65</link>
      <guid>https://dev.to/shamdnayeem/12-kubectl-commands-to-master-kubernetes-deployments-4k65</guid>
      <description>&lt;p&gt;The kubectl command-line tool is the main interface for interacting with Kubernetes, a robust platform for managing containerized applications. You may create, update, and manage resources, such as pods, services, and deployments, in a Kubernetes cluster using kubectl.&lt;/p&gt;

&lt;p&gt;In this article, we'll look at the 12 kubectl commands that any Kubernetes administrator needs to be familiar with. These commands will help you master Kubernetes, from managing and debugging apps to creating and updating resources. This article will provide you a thorough overview of the most important kubectl commands, whether you're an experienced Kubernetes user or you're just getting started.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. kubectl cluster-info&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This command provides information about the current state of your Kubernetes cluster, including the API server address, the cluster state, and the versions of the components that make up your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cluster-info

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. kubectl version
&lt;/h2&gt;

&lt;p&gt;This command displays the version of &lt;code&gt;kubectl&lt;/code&gt; that is currently installed on your system, as well as the version of the Kubernetes cluster that it is connected to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. kubectl get
&lt;/h2&gt;

&lt;p&gt;This command will provide a list of resources available in your Kubernetes cluster. There are several types of resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Namespace&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pod&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ReplicaSets.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will give the list of all of those available resources in your cluster. To pull a list of a specific resource, you need to use the following command in your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a specific namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment -n namespace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;-n&lt;/code&gt; is the short form of the &lt;code&gt;--namespace&lt;/code&gt; flag. If you don't specify a namespace, the &lt;code&gt;kubectl get deployment&lt;/code&gt; command will return the deployment list from the "default"/current namespace. A namespace is a Kubernetes object that separates a single physical Kubernetes cluster into numerous virtual clusters.&lt;/p&gt;
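&lt;p&gt;To see which namespaces exist in the cluster, or to create a new one (the name &lt;code&gt;my-namespace&lt;/code&gt; is just an example), you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespaces

kubectl create namespace my-namespace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;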

&lt;p&gt;Similarly, if you need to fetch the list of all the pods in a namespace, you can use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the &lt;code&gt;-o wide&lt;/code&gt; flag provides more details about each pod, such as the node it runs on and its IP address.&lt;/p&gt;

&lt;p&gt;With this &lt;code&gt;kubectl get&lt;/code&gt; command, we can also fetch the list of Node, Namespace, ReplicaSet, and Service resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. kubectl create
&lt;/h2&gt;

&lt;p&gt;The Kubernetes command &lt;code&gt;kubectl create&lt;/code&gt; is used to add new resources to a cluster. Users can create resources like pods, services, and deployments using this command. Here's an illustration of how to create a new deployment using &lt;code&gt;kubectl create&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment my-nginx --image=nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this command, a new deployment named &lt;code&gt;my-nginx&lt;/code&gt; will be created using the &lt;code&gt;nginx&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;Here is another example of how to create a new CronJob with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create cronjob my-cronjob --image=busybox --schedule="*/5 * * * *" -- echo "This is a cron job!"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A new CronJob named &lt;code&gt;my-cronjob&lt;/code&gt; will be created, running in a &lt;code&gt;busybox&lt;/code&gt; container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An &lt;code&gt;echo&lt;/code&gt; command will be executed every 5 minutes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;--schedule&lt;/code&gt; flag specifies the schedule for the job in cron syntax; &lt;code&gt;*/5 * * * *&lt;/code&gt; means run the job every 5 minutes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;--image&lt;/code&gt; flag specifies which container image it should run&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Everything after &lt;code&gt;--&lt;/code&gt; is the command to run inside the container.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Kubernetes' CronJobs functionality, you can automate recurring processes like database backups, log rotations, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. kubectl edit
&lt;/h2&gt;

&lt;p&gt;With this &lt;code&gt;kubectl edit&lt;/code&gt; command, you can edit an existing resource in a cluster. You can modify a resource's configuration in a text editor using &lt;code&gt;kubectl edit&lt;/code&gt;, which eliminates the need for you to manually generate a new YAML file.&lt;/p&gt;

&lt;p&gt;Here's an illustration of how to change a deployment using kubectl edit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit deployment my-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts a text editor and opens the my-nginx deployment. When you make changes to the deployment's configuration and save the file, the cluster's deployment will be updated.&lt;/p&gt;
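&lt;p&gt;By default, &lt;code&gt;kubectl edit&lt;/code&gt; opens the editor named by the &lt;code&gt;KUBE_EDITOR&lt;/code&gt; or &lt;code&gt;EDITOR&lt;/code&gt; environment variables, falling back to &lt;code&gt;vi&lt;/code&gt; on most systems. You can override it for a single invocation, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KUBE_EDITOR="nano" kubectl edit deployment my-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;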

&lt;h2&gt;
  
  
  6. kubectl delete
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;kubectl delete&lt;/code&gt; command will help you delete resources such as a pod, deployment, service, or CronJob in your Kubernetes cluster. Think carefully before executing this command: once a resource is deleted, it cannot be recovered; you must recreate it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete deployment my-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. kubectl apply
&lt;/h2&gt;

&lt;p&gt;The Kubernetes command &lt;code&gt;kubectl apply&lt;/code&gt; enables you to create or modify resources in a cluster. It reads the resource's configuration from a file or from standard input and creates or updates the resource in the cluster to match that configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the &lt;code&gt;kubectl apply&lt;/code&gt; command reads a deployment configuration from a file named &lt;code&gt;deployment.yaml&lt;/code&gt; and creates a new deployment in the cluster based on that configuration file. If a deployment with that name already exists, the command updates it to match the file.&lt;/p&gt;
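&lt;p&gt;As a minimal sketch, a &lt;code&gt;deployment.yaml&lt;/code&gt; for the &lt;code&gt;my-nginx&lt;/code&gt; deployment used earlier might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;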

&lt;h2&gt;
  
  
  8. kubectl config
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, the command kubectl config allows you to manage the configuration for a kubectl client. The config command can be used to view, edit, or switch between multiple cluster configurations, as well as to manage user credentials and context settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set a Current Namespace
&lt;/h3&gt;

&lt;p&gt;If you are working on a specific namespace, then every time typing a namespace in each command is a hassle. To overcome this, you can set that namespace as a current namespace.&lt;/p&gt;

&lt;p&gt;Here is an example of how to set up a current namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config set-context --current --namespace=NAMESPACE

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl config set-context&lt;/code&gt; is a command in Kubernetes that allows you to modify a context in the kubectl configuration. A context defines the cluster, user, and namespace that kubectl commands operate against. In this example, the command sets the current namespace to "NAMESPACE".&lt;/p&gt;
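&lt;p&gt;To verify which namespace the current context now points at, you can inspect the active configuration, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config view --minify | grep namespace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;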

&lt;h3&gt;
  
  
  Set Current Context
&lt;/h3&gt;

&lt;p&gt;A context is a named combination of a cluster, user credentials, and (optionally) a namespace in the kubectl configuration. To switch to a context, you need to execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config use-context docker-desktop

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, this command is used to switch the current context in the kubectl configuration to the docker-desktop context. So, kubectl commands can interact with the cluster running on docker-desktop. You can also use minikube instead of docker-desktop if you want to interact with the cluster running on minikube.&lt;/p&gt;

&lt;p&gt;You can check the current context by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config current-context

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  9. kubectl describe
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;kubectl describe&lt;/code&gt; is a useful tool for monitoring and debugging your Kubernetes cluster. It offers a quick approach to obtaining comprehensive information about a resource, making it simpler to understand the resource's current state and spot any problems. It shows details about a resource's status, events, and metadata.&lt;/p&gt;

&lt;p&gt;Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe deployment my-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this command, you can get detailed information on the my-nginx deployment, including the status of the deployment, the events that have occurred, and the metadata associated with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. kubectl logs
&lt;/h2&gt;

&lt;p&gt;You can retrieve the logs of a container in a pod using the Kubernetes command &lt;code&gt;kubectl logs&lt;/code&gt;. The logs are useful for tracking down and troubleshooting problems with a container.&lt;/p&gt;

&lt;p&gt;Here's an example of how you can use &lt;code&gt;kubectl logs&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs my-pod

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are multiple containers in a pod, we can get the log of a specific container this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs my-pod -c my-container

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;-c&lt;/code&gt; is an option that specifies the name of the container from where you want to retrieve the logs.&lt;/p&gt;
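&lt;p&gt;A few commonly used flags make &lt;code&gt;kubectl logs&lt;/code&gt; more powerful when debugging: &lt;code&gt;-f&lt;/code&gt; streams the log output, &lt;code&gt;--tail&lt;/code&gt; limits it to the last N lines, and &lt;code&gt;--previous&lt;/code&gt; shows logs from a container's previous (crashed) instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs -f my-pod

kubectl logs --tail=100 my-pod

kubectl logs --previous my-pod

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;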

&lt;h2&gt;
  
  
  11. kubectl exec
&lt;/h2&gt;

&lt;p&gt;You can execute a command in a running container of a pod using the Kubernetes command &lt;code&gt;kubectl exec&lt;/code&gt;. It is helpful for debugging, troubleshooting, and monitoring the status of an application.&lt;/p&gt;

&lt;p&gt;Here's an illustration of how kubectl exec might be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it pod-name -- bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;kubectl exec&lt;/code&gt; is used to launch a &lt;code&gt;bash&lt;/code&gt; shell in the container of the specified pod. The &lt;code&gt;-it&lt;/code&gt; flag is used to attach to the shell and interact with it.&lt;/p&gt;
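&lt;p&gt;You don't always need an interactive shell; &lt;code&gt;kubectl exec&lt;/code&gt; can also run a single command and print its output (the path here is just an illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec pod-name -- ls /app

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;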

&lt;p&gt;If you have multiple containers in a pod, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it pod_name -c container_name -- bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;-c&lt;/code&gt; is used to specify the container name where the shell command will be executed.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. kubectl cp
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, the command &lt;code&gt;kubectl cp&lt;/code&gt; allows you to copy files and directories between a local file system and a container in a pod, or between two containers in the same pod. This can be useful for transferring files between the host and containers, or for copying files between containers within a pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp &amp;lt;local-file-path&amp;gt; &amp;lt;pod-name&amp;gt;:&amp;lt;container-destination-path&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;kubectl cp&lt;/code&gt; is used to copy a local file to a container in a pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;local-file-path&lt;/code&gt; specifies the path to the file on the local file system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;pod-name&lt;/code&gt; and &lt;code&gt;container-destination-path&lt;/code&gt; specify the destination of the file within the container.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
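&lt;p&gt;The copy also works in the other direction, from a container back to the local file system; the paths below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp &amp;lt;pod-name&amp;gt;:&amp;lt;container-source-path&amp;gt; &amp;lt;local-destination-path&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;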

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With that, we have come to the end of this discussion of 12 kubectl commands for mastering Kubernetes deployments. Please keep in mind that these are not the only kubectl commands; there are many more Kubernetes concepts and kubectl commands to explore.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. If you do have still some confusion regarding this article, please let me know in the comments! Also, If you liked this article, consider following me for my latest publications and also follow me on &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>deployment</category>
      <category>cluster</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Disaster Recovery: Protecting Your Business from Unexpected Outages</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Fri, 03 Feb 2023 14:43:29 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/disaster-recovery-protecting-your-business-from-unexpected-outages-2cii</link>
      <guid>https://dev.to/shamdnayeem/disaster-recovery-protecting-your-business-from-unexpected-outages-2cii</guid>
      <description>&lt;p&gt;Events that disrupt services could occur at any time. The network will go down anytime, when the most recent application deployment may bring a serious issue, or when there will be a natural disaster. Disasters can happen at any time, and if a business is not ready, the effects could be disastrous.&lt;/p&gt;

&lt;p&gt;In this blog, we will examine the essential elements of an effective disaster recovery plan, including RTO, RPO, and RTA which will help you to keep your business up and running, no matter what life throws your way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Disaster Recovery
&lt;/h1&gt;

&lt;p&gt;Disaster recovery (DR) refers to the process of anticipating and recovering from unexpected events. The possibility of a disaster, whether natural or man-made, is quite real in a society where technology is the backbone of the economy. With the correct strategy and execution, however, disaster recovery can be a lifesaver, ensuring that your company remains functional even in the aftermath of a disaster. The essential elements of an effective disaster recovery plan, from RTO to RPO to RTA, all work together to safeguard your crucial systems, data, and activities. Having a strong, focused, and tested disaster recovery plan is a fundamental need for a business when things go wrong.&lt;/p&gt;

&lt;h1&gt;
  
  
  Recovery Time Objective (RTO)
&lt;/h1&gt;

&lt;p&gt;RTO refers to the maximum amount of time the application can be offline. It is determined by the business, and the DR solution should be able to restore functionality within that window, based on the business and all other requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m0kW8tMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/GlqDcyAqqhESsWND-epaCUllq7yXiXh3cLEQVHMPItfZx5Ylp1dDB-EW6xglJ81eNrKRrJ17a3nQpkHWaQ6NrBbVTyJ1MJOsetW4vChtRbQam7deqhZxkvdy_Fpt_eO1zfzGtTyH-PIpL_1tepo3-lvIvQ%3Ds2048" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m0kW8tMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/GlqDcyAqqhESsWND-epaCUllq7yXiXh3cLEQVHMPItfZx5Ylp1dDB-EW6xglJ81eNrKRrJ17a3nQpkHWaQ6NrBbVTyJ1MJOsetW4vChtRbQam7deqhZxkvdy_Fpt_eO1zfzGtTyH-PIpL_1tepo3-lvIvQ%3Ds2048" alt="" width="817" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Recovery Point Objective (RPO)
&lt;/h1&gt;

&lt;p&gt;RPO is the measure of acceptable data loss in case of a disaster and it determines how much of the last known good data will be recovered based on the backup and replication policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DbsX6Rz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Js9oaDCWSaXlZs9eyjACCXSYYSqFwqiCcbXkSFAGo8tQLcqnaRJnrKwC8O1ztt5dcLGyaMq5ZGEIJW8kU32RJgvmMkRNtdmSV7bbl7367XaY_gAit5f4BV1zOKGnMDkDrpSDvtWLijL4TDupmJqNqss2aA%3Ds2048" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DbsX6Rz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Js9oaDCWSaXlZs9eyjACCXSYYSqFwqiCcbXkSFAGo8tQLcqnaRJnrKwC8O1ztt5dcLGyaMq5ZGEIJW8kU32RJgvmMkRNtdmSV7bbl7367XaY_gAit5f4BV1zOKGnMDkDrpSDvtWLijL4TDupmJqNqss2aA%3Ds2048" alt="" width="802" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Recovery Time Actual (RTA)
&lt;/h1&gt;

&lt;p&gt;RTA stands for Recovery time actual or achievable. This is a measurement of the actual recovery time that was shown during a test run for disaster recovery. The gap between RTO and RTA is an important metric to track. Testing the DR solution regularly will reveal RTA trends over time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If RTA &amp;lt; RTO, the RTO was met or beaten, which indicates success&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If RTA &amp;gt; RTO, the business goals were not met and the RTO is currently unachievable&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tjwMkD1M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/VR2BXdkUrPXfOkCrVlfXWDBHwt_yg4ehyjZloQCeiH3hReI2P_evImLMA-872HE_r5g08_DoKbTPBlIO8v5h7YNdmxJAPdb7YyiaUMVrcEj5-WdckECrZ_IdfHm4x8VNj4eJ_0eheSbhxZ3IV7CVSJjZFg%3Ds2048" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tjwMkD1M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/VR2BXdkUrPXfOkCrVlfXWDBHwt_yg4ehyjZloQCeiH3hReI2P_evImLMA-872HE_r5g08_DoKbTPBlIO8v5h7YNdmxJAPdb7YyiaUMVrcEj5-WdckECrZ_IdfHm4x8VNj4eJ_0eheSbhxZ3IV7CVSJjZFg%3Ds2048" alt="" width="811" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Three Disaster Recovery Strategies
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Hot Site
&lt;/h3&gt;

&lt;p&gt;A hot site is a fully functional backup location that replicates an organization's whole infrastructure, including hardware, software, and data, to enable business continuity in the case of a disaster. A business might set up a hot site at a different location with the same servers, networking hardware, and data storage devices as its main site, along with all necessary data and programs pre-installed and kept up to date. The business may immediately move to the hot site in the event of a disaster and continue operating with little interruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cold Site
&lt;/h3&gt;

&lt;p&gt;A cold site is a pre-configured facility with minimal power and infrastructure but no hardware or data placed at the time of setup. To store its backup hardware, for instance, a business might rent a sizable data center with enough room, electricity, and cooling, but without any hardware or data already in place. Before using the cold site as a backup location, the organization must first install the needed equipment and software and restore data in the event of a disaster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Warm Site
&lt;/h3&gt;

&lt;p&gt;A warm site is the middle ground between the two alternatives above. It is a pre-configured, partially operational backup location that includes the servers and networking equipment necessary for a business to continue operating in the event of a disaster, but it might not have all the necessary data and programs pre-installed.&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing DR Solution
&lt;/h1&gt;

&lt;p&gt;Testing and updating disaster recovery plans on a regular basis is essential to staying prepared for actual disasters. For instance, a business might have a disaster recovery plan that calls for moving operations to a warm site in the event of a catastrophe. If the plan is not routinely evaluated and updated, the warm site may not be fully functional or its data may be outdated, which could cause delays and disruptions in the company's recovery operations. Companies should test their disaster recovery plans regularly and update them as needed, whether that means installing new hardware or software, upgrading data backups, or changing the plan itself to reflect changes in the organization's operations.&lt;/p&gt;

&lt;p&gt;As a result, there will be less downtime and business continuity will be maintained in the event of a disaster. This helps to ensure that the disaster recovery plan is prepared to be activated at a moment's notice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Best Practices
&lt;/h1&gt;

&lt;p&gt;To make sure they have all the information they need in the case of a disaster, organizations should regularly back up their data. To respond to a disaster in an efficient and organized manner, it is also essential to have a clear communication plan in place that specifies who to contact and what information to share. Finally, it's critical to train employees on the disaster recovery plan so that they all know what to do in an emergency. By putting these best practices into effect, businesses can ensure they are ready to respond to disasters and reduce downtime and data loss.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In conclusion, disaster recovery is an essential component of corporate operations that needs careful planning and preparation. As DR affects business cost and reputation, a proper RTO and RPO should be determined by business decisions. Lower RTO and RPO solutions may be more expensive, but they can give businesses more security and reduce downtime in the event of a disaster. To monitor RTA and ensure readiness, the disaster recovery plan must undergo regular testing. The ability to promptly respond to a disaster and maintain business continuity depends on having a solid disaster recovery solution in place and available.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. If you do have still some confusion regarding this article, please let me know in the comments! Also, If you liked this article, consider following me for my latest publications and also follow me on &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>disasterplanning</category>
      <category>disasterrecovery</category>
      <category>businessresilience</category>
      <category>databackup</category>
    </item>
    <item>
      <title>Minikube for Beginners: A Guide to a Local Kubernetes Cluster on MacOS</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Thu, 02 Feb 2023 07:59:02 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/minikube-for-beginners-a-guide-to-a-local-kubernetes-cluster-on-macos-18oj</link>
      <guid>https://dev.to/shamdnayeem/minikube-for-beginners-a-guide-to-a-local-kubernetes-cluster-on-macos-18oj</guid>
      <description>&lt;p&gt;Minikube is a tool that lets you test and develops Kubernetes applications on a single node on your local workstation. You need Minikube if you want an easy and convenient approach to test and build applications on a local Kubernetes cluster without disrupting other systems or applications. It supports a number of hypervisors and offers a low-cost, isolated environment for testing and development.&lt;/p&gt;

&lt;p&gt;In this article, let's look at how we can set up Minikube on macOS.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 1: Install Homebrew
&lt;/h1&gt;

&lt;p&gt;Homebrew, a popular package manager for macOS, can be used to install Minikube. Run the following command in your terminal if Homebrew isn't already installed on your device:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 2: Install Minikube
&lt;/h1&gt;

&lt;p&gt;To install Minikube, run the following command in your terminal using Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install minikube

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 3: Start Minikube
&lt;/h1&gt;

&lt;p&gt;You can start Minikube by running the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have Docker installed on your macOS machine, you might get the following error while executing the above command. &lt;strong&gt;To solve this issue&lt;/strong&gt;, start the Docker Desktop application and rerun the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Exiting due to DRV_DOCKER_NOT_RUNNING: Found docker, but the docker service isn't running. Try restarting the docker service.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 4: Set Context
&lt;/h1&gt;

&lt;p&gt;Now you need to set the kubectl context to minikube so that kubectl commands can interact with the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config use-context minikube

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 5: Verify Minikube Setup
&lt;/h1&gt;

&lt;p&gt;Now you can verify the status of Minikube using the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 6: Interacting with Cluster
&lt;/h1&gt;

&lt;p&gt;To interact with the newly created cluster on your macOS machine, run the following command in your terminal if you have &lt;strong&gt;kubectl already installed&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get po -A

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But, &lt;strong&gt;if you don't have kubectl installed&lt;/strong&gt; on your macOS machine, run the following command, and Minikube will download the appropriate kubectl version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube kubectl -- get po -A

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 7: Testing Minikube
&lt;/h1&gt;

&lt;p&gt;Now we can test Minikube by deploying a simple application with the following commands in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This might take some time, but if you run the following command, the service will show up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services hello-minikube

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The simplest way to access this service is to let Minikube open a web browser for you. To do that, run the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service hello-minikube

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 8: Access Kubernetes Dashboard
&lt;/h1&gt;

&lt;p&gt;To access the Kubernetes dashboard, run the following command, which will open the dashboard in your web browser with a user-friendly interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube dashboard

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this article, we have learned about Minikube and how to set it up on macOS with a step-by-step process.&lt;/p&gt;

&lt;p&gt;I hope this article has helped you to explore Minikube's capabilities for testing and deploying applications on a single node of a Kubernetes cluster.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. If you still have any questions about this article, please let me know in the comments! Also, if you liked this article, consider following me for my latest publications, and follow me on &lt;a href="https://medium.com/@nayeem.ridoy" rel="noopener noreferrer"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://twitter.com/ShaMdNayeem" rel="noopener noreferrer"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>minikube</category>
      <category>kubernetes</category>
      <category>macos</category>
      <category>deployment</category>
    </item>
    <item>
      <title>I Did it! How I Maximized My Blog's Reach by Cross-Posting. A Step-by-Step Guide</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Sat, 21 Jan 2023 08:04:45 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/i-did-it-how-i-maximized-my-blogs-reach-by-cross-posting-a-step-by-step-guide-dod</link>
      <guid>https://dev.to/shamdnayeem/i-did-it-how-i-maximized-my-blogs-reach-by-cross-posting-a-step-by-step-guide-dod</guid>
      <description>&lt;p&gt;I recently began my adventure as a new blogger on Hashnode, with a passion for sharing my technical views and thoughts with the world. But I quickly saw that I needed to look at other platforms as well in order to really increase my blog's visibility and attract more readers. That is when I learned the value of cross-posting. By publishing my content on multiple websites, I was able to expand my blog's audience and spread it across the internet. This article will outline the steps I took to successfully cross-post from Hashnode to Dev.to and explain how it helped me reach my objective of increasing blog readership.&lt;/p&gt;

&lt;p&gt;I will try other platforms as well for the cross-posting and I will review those in a future blog post.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Hashnode
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://hashnode.com/"&gt;&lt;strong&gt;Hashnode&lt;/strong&gt;&lt;/a&gt; is a modern blogging platform that makes it simple for users to produce, publish, and share their content with a large audience. It is intended to simplify, streamline, and customize the blogging process, freeing users to concentrate on their blog's content rather than its technical setup and upkeep. Hashnode is a one-stop shop for bloggers trying to expand their online presence, as it also provides a variety of features such as analytics, comments, and SEO optimization tools. Using Hashnode, users can quickly build a professional-looking blog, write and publish posts, and share their content on several channels, all from one place. Overall, Hashnode is an easy-to-use, adaptable, and feature-rich platform.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Cross-Posting
&lt;/h1&gt;

&lt;p&gt;The act of uploading the same content on various platforms or websites is known as cross-posting. This can be done to boost visibility and interaction, reach a larger audience, or just save time by avoiding having to write fresh material for every platform.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting up Cross-Posting
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We need to create an account first in &lt;a href="https://hashnode.com/"&gt;&lt;strong&gt;Hashnode&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then create an account in &lt;a href="https://dev.to/"&gt;dev.to&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the dev.to Settings page and select the Extensions tab&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You'll find "Publishing to DEV Community from RSS" under the Extensions tab. Simply paste your Hashnode RSS feed URL into the RSS Feed URL field.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O1ltMs7E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284396646/8e3d5d9c-afa4-45bc-853c-fecb55029d29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O1ltMs7E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284396646/8e3d5d9c-afa4-45bc-853c-fecb55029d29.png" alt="" width="734" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To get your RSS feed URL, go to your Hashnode dashboard and click on RSS.xml&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MRlgEDgN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284530871/10457492-b477-40c1-869b-3898a5b8c34d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MRlgEDgN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284530871/10457492-b477-40c1-869b-3898a5b8c34d.png" alt="" width="880" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A new tab will be opened and you will find your RSS feed URL from the address bar.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bgqaaIGm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284654834/37a39a21-0d7f-4da5-b0a9-1487885afd58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bgqaaIGm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674284654834/37a39a21-0d7f-4da5-b0a9-1487885afd58.png" alt="" width="880" height="53"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After pasting it into the RSS Feed URL field, click "Save Feed Settings".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then click on "Fetch feed now" and your Hashnode articles will eventually show up as drafts in the dev.to dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will find the article as a Draft in the dashboard and edit it before publishing it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You might need to add a cover image manually, as it is not always carried over automatically from Hashnode to the DEV website. At least, I have faced this issue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So, upload the image and copy only the part of the URL starting from https&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add "cover_image" below the tags and paste the link.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The default value of "published" will be "false", as it is a draft copy. So change it to "true"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then click "Save Changes" at the bottom and it will be published instantly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8MTpyJY7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674286461187/c08f37b2-1931-4b44-b362-30b0e047681c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8MTpyJY7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674286461187/c08f37b2-1931-4b44-b362-30b0e047681c.jpeg" alt="" width="600" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In conclusion, cross-posting is an effective strategy for bloggers who want to broaden their audience and boost interaction. Blogging professionals can reach new audiences, increase their exposure, and increase traffic to their blogs by distributing their material across a variety of platforms. Popular platforms like Hashnode and &lt;a href="http://Dev.to"&gt;Dev.to&lt;/a&gt; provide simple cross-posting tools that make it easy for bloggers to distribute their work on several websites. Bloggers can cross-post with confidence by following the instructions in this tutorial, knowing that they are doing everything possible to increase their audience and develop their online presence.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. If you still have any questions about this article, please let me know in the comments! Also, if you liked this article, consider following me for my latest publications, and follow me on &lt;a href="https://medium.com/@nayeem.ridoy"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://twitter.com/ShaMdNayeem"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>hashnode</category>
      <category>devcommunity</category>
      <category>crossposting</category>
      <category>contentsharing</category>
    </item>
    <item>
      <title>How I Secured Sensitive Information in a GitLab Environment Variable and Made My Pipeline Secure</title>
      <dc:creator>Sha Md Nayeem</dc:creator>
      <pubDate>Fri, 20 Jan 2023 06:15:57 +0000</pubDate>
      <link>https://dev.to/shamdnayeem/securely-store-sensitive-information-in-gitlab-environment-variable-lc0</link>
      <guid>https://dev.to/shamdnayeem/securely-store-sensitive-information-in-gitlab-environment-variable-lc0</guid>
      <description>&lt;p&gt;While working on my current project, I faced an issue with GitLab CI/CD variable values getting exposed in the pipeline logs. I was looking for a solution and found that this can be prevented with the variable's "masked" option. I wanted to mask a server key, but that was not possible because the key has a multiline value and, according to the rules, the "masked" option can only be enabled on single-line variable values. So, finally, I figured out that I can achieve this using Base64. In this article, I will explain how to store sensitive data as an environment variable and secure a GitLab CI/CD pipeline using Base64 encoding and decoding.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Base64?
&lt;/h1&gt;

&lt;p&gt;Base64 converts a file (such as an image or video) into a string of text so that it can be transferred over the internet. This is accomplished by taking the file's binary contents and turning them into a sequence drawn from 64 text-safe characters. In this way, the file can be transmitted as text and still be converted back to its original format once it is retrieved. It's similar to wrapping a gift in paper and having the recipient unwrap the package to retrieve the real gift. Base64 is frequently used in email attachments and when embedding files in HTML, CSS, or JavaScript.&lt;/p&gt;
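As a quick illustration in the terminal (using a made-up example string, not a real secret), a value round-trips through Base64 like this:

```shell
# Encode a string to Base64...
echo -n "hello" | base64
# aGVsbG8=

# ...and decode it back to the original
echo -n "aGVsbG8=" | base64 -d
# hello
```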

&lt;h1&gt;
  
  
  GitLab environment variable
&lt;/h1&gt;

&lt;p&gt;GitLab CI/CD environment variables are used to safely store and manage sensitive data that is required by the pipeline, such as API keys, passwords, and other configuration parameters. You may reduce the security risk of hardcoding sensitive information into your pipeline definition by using environment variables.&lt;/p&gt;

&lt;p&gt;While setting up an environment variable, GitLab CI/CD provides a "masked" option that hides the value of an environment variable in the pipeline logs. This is helpful when we don't want sensitive data, such as passwords and API credentials, to be shown in the pipeline logs. When the "masked" option is on, the variable's value is replaced with ***** in the pipeline logs.&lt;/p&gt;

&lt;h1&gt;
  
  
  Secured pipeline with Base64
&lt;/h1&gt;

&lt;p&gt;Though the "masked" is really an amazing feature of GitLab CI/CD variables, it can't be enabled on multiline variable values. Here comes Base64 which converts the multiline variable values into encoded single-line variable values and after that, I enabled the "masked" option. In the case of single-line values, we can also use Base64 to make it more secure in the pipeline. To mask a variable, there are some certain rules. The variable's value must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Be a single line.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consist of characters from the Base64 alphabet (RFC 4648).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;@&lt;/code&gt;, &lt;code&gt;:&lt;/code&gt;, &lt;code&gt;.&lt;/code&gt;, and &lt;code&gt;~&lt;/code&gt; characters are also allowed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
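For instance, here is a small sketch of why this helps with multiline values (the two-line key below is made up): encoding collapses it into a single Base64 line that satisfies the rules above.

```shell
# A hypothetical two-line secret (bash $'...' syntax expands \n to a newline)
MULTILINE_KEY=$'line1\nline2'

# Encoding produces one single line that GitLab can mask
echo -n "$MULTILINE_KEY" | base64
# bGluZTEKbGluZTI=
```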

&lt;h3&gt;
  
  
  How to use Base64
&lt;/h3&gt;

&lt;p&gt;The base64 command-line utility is usually pre-installed if you are using a popular Linux distribution such as Ubuntu, Debian, CentOS, or Red Hat, so no further action should be needed.&lt;/p&gt;

&lt;p&gt;If you are using macOS, base64 is usually included by default; if it's missing, you can install it with Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install base64

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Encoding with base64
&lt;/h3&gt;

&lt;p&gt;To encode a variable, you can use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n "insert_your_variable" | base64

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the command is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aW5zZXJ0X3lvdXJfdmFyaWFibGU=

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The "echo" command is showing the variable's value in the terminal&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;"-n" is a subcommand for echo which is telling not to append a newline character to the end of the output otherwise it would additionally append a newline character which will also be encoded by base64 and creates additional problems during decoding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;| is known as a 'pipe' operator which takes the output of one command and makes the input for another command. In this command, this operator is passing the output of echo -n "insert_your_variable" as the input of the base64 command.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After encoding, add that value as a CI/CD environment variable with the "masked" option enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674192856517%2Fce43f3e9-4de8-43a3-b8d4-a65a421d759e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674192856517%2Fce43f3e9-4de8-43a3-b8d4-a65a421d759e.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Decoding with Base64
&lt;/h3&gt;

&lt;p&gt;To decode a value in the terminal, just use the same command with "-d" (or "--decode") after base64:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "insert_your_variable" | base64 -d

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I am going to show how we can decode the value in a GitLab CI/CD pipeline. We need to use the base64 command to decode our Base64-encoded value, kept in a variable named &lt;strong&gt;SECRET_KEY&lt;/strong&gt;. I am giving a project-level example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - deploy

deploy:
  stage: deploy
  script:
    - export DECODED_SECRET_KEY="$(echo "$SECRET_KEY" | base64 -d)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline in this illustration has just one stage, named deploy. At the project level, the variable is stored as &lt;strong&gt;SECRET_KEY&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the script section of the deploy stage, the pipeline uses the base64 command to decode the value contained in &lt;strong&gt;SECRET_KEY&lt;/strong&gt;. The decoded value is stored in the variable &lt;strong&gt;DECODED_SECRET_KEY&lt;/strong&gt; and exported so that it can be used later in the pipeline.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this article, you have learned how to Base64-encode a variable and store it as a GitLab CI/CD environment variable with the "masked" option enabled, and how to decode it in the GitLab pipeline without showing the variable's value in the pipeline logs.&lt;/p&gt;

&lt;p&gt;It's crucial to note that keeping secret keys in plain text in your pipeline definition is not recommended; it's more secure to store them with GitLab's CI/CD variable feature and reference them in the pipeline. Also, keep in mind that Base64 is an encoding, not encryption: it keeps values from being readable at a glance in the logs, but it does not cryptographically protect them.&lt;/p&gt;




&lt;p&gt;I appreciate you taking the time to read this. If you still have any questions about this article, please let me know in the comments! Also, if you liked this article, consider following me for my latest publications, and follow me on &lt;a href="https://medium.com/@nayeem.ridoy" rel="noopener noreferrer"&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://twitter.com/ShaMdNayeem" rel="noopener noreferrer"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt; &amp;amp; &lt;a href="https://www.linkedin.com/in/shanayeem/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>cicd</category>
      <category>base64</category>
      <category>security</category>
    </item>
  </channel>
</rss>
