<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Neelanjan Manna</title>
    <description>The latest articles on DEV Community by Neelanjan Manna (@neelanjan00).</description>
    <link>https://dev.to/neelanjan00</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F336161%2F7ca05d4c-7215-4c34-b284-a26baf8be761.jpeg</url>
      <title>DEV Community: Neelanjan Manna</title>
      <link>https://dev.to/neelanjan00</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/neelanjan00"/>
    <language>en</language>
    <item>
      <title>Extending kubectl Utility With Plugins</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Thu, 15 Sep 2022 15:54:18 +0000</pubDate>
      <link>https://dev.to/neelanjan00/extending-kubectl-utility-with-plugins-l9a</link>
      <guid>https://dev.to/neelanjan00/extending-kubectl-utility-with-plugins-l9a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We can all agree on the fact that Kubernetes is quite expansive. It has become the de facto tool for cloud-native application deployment because of its flexibility. One often uses the kubectl CLI to interact with their Kubernetes cluster, and while kubectl is quite extensive in itself with regard to all the operations you can perform with it, its usability can be further extended using plugins.&lt;/p&gt;

&lt;p&gt;kubectl plugins can extend the usability of the CLI tool by adding functional capabilities specific to a Kubernetes application or in general providing more features that can be accessed as kubectl sub-commands. The best part is it does not require editing kubectl’s source code or recompiling it, thereby making the plugins truly modular in the sense that they can be simply plugged in and used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Plugins
&lt;/h2&gt;

&lt;p&gt;Although there’s more than one way to distribute and install kubectl plugins, the simplest way is to use &lt;a href="https://krew.sigs.k8s.io/"&gt;&lt;strong&gt;krew&lt;/strong&gt;&lt;/a&gt;. It is a package manager for kubectl and makes the installation and management of kubectl plugins a cakewalk. To get started, you would have to first install krew on your machine. You can refer to &lt;a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/"&gt;this&lt;/a&gt; installation document to do so.&lt;/p&gt;

&lt;p&gt;Once installed, first update the local index of available krew plugins, a list of which can be accessed &lt;a href="https://krew.sigs.k8s.io/plugins/"&gt;here&lt;/a&gt;. To do so, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Did you notice how krew itself is a kubectl plugin as well? A plugin to manage all other plugins! If you’re curious, you can browse its source code &lt;a href="https://github.com/kubernetes-sigs/krew"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, you can view the list of available plugins with the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew search
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To install any plugin, say the &lt;a href="https://github.com/rajatjindal/kubectl-whoami"&gt;&lt;strong&gt;whoami&lt;/strong&gt;&lt;/a&gt; plugin, run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install whoami&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once installed, you can simply follow the usage guide to run the plugin commands as such:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;whoami&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can also update, uninstall, and add a custom index for the plugins using krew among other things, for which you can refer to the &lt;a href="https://krew.sigs.k8s.io/docs/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Developing Plugins
&lt;/h2&gt;

&lt;p&gt;Developing plugins for kubectl is a fairly simple process if we abstract away the plugin’s business logic. The plugin itself is nothing but a standalone CLI application that uses appropriate commands to achieve its intended functionality. So the question remains: how do we add this standalone CLI as a kubectl plugin? Let’s find out.&lt;/p&gt;

&lt;p&gt;I have made a demo plugin called kubectl-count, which we will be using for this small demo. The complete source code can be found here: &lt;a href="https://github.com/neelanjan00/kubectl-count"&gt;https://github.com/neelanjan00/kubectl-count&lt;/a&gt;. In a nutshell, this plugin allows you to count the instances of Kubernetes resources present in your cluster. So for example, you can count the number of pods in a namespace, the number of nodes in your cluster, or the total number of deployments in all the namespaces of your cluster.&lt;/p&gt;

&lt;p&gt;The source code of the plugin is pretty straightforward: we use the &lt;a href="https://github.com/spf13/cobra"&gt;Cobra&lt;/a&gt; library to create a boilerplate CLI project.&lt;/p&gt;


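&lt;p&gt;As a minimal, hypothetical sketch, the Cobra boilerplate for the root &lt;code&gt;count&lt;/code&gt; command might look like this (the actual source lives in the linked repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

// rootCmd is the root sub-command of the plugin: kubectl count ...
var rootCmd = &amp;amp;cobra.Command{
    Use:   "count",
    Short: "Count instances of Kubernetes resources in the cluster",
}

func main() {
    if err := rootCmd.Execute(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;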



&lt;p&gt;Then, we add a sub-command for each of the supported Kubernetes resources, using the Kubernetes client-go library to count them. For example, this is the logic for the command that counts pods.&lt;/p&gt;


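&lt;p&gt;A minimal, hypothetical sketch of such a pod-counting command (here &lt;code&gt;clientset&lt;/code&gt;, &lt;code&gt;namespace&lt;/code&gt;, and &lt;code&gt;labelSelector&lt;/code&gt; are assumed to be initialised elsewhere in the plugin; the actual logic lives in the linked repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;// podsCmd counts pods, honouring the namespace and label-selector flags.
var podsCmd = &amp;amp;cobra.Command{
    Use:     "pods",
    Aliases: []string{"po", "pod"},
    Short:   "Count the pods in a namespace",
    Run: func(cmd *cobra.Command, args []string) {
        // List the pods via client-go, then print only the count.
        pods, err := clientset.CoreV1().Pods(namespace).List(
            context.TODO(), metav1.ListOptions{LabelSelector: labelSelector})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d pods found\n", len(pods.Items))
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;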


&lt;p&gt;We are using flags here to specify the namespace, the label selector, and all-namespaces, just as you would in kubectl. The aliases for the &lt;code&gt;pods&lt;/code&gt; command, such as &lt;code&gt;po&lt;/code&gt; and &lt;code&gt;pod&lt;/code&gt;, are also available. Similarly, we have created definitions for the other commands, corresponding to the resources they represent.&lt;/p&gt;

&lt;p&gt;Lastly, if you’re wondering how we’re obtaining the Kubernetes client here, we use the existing kubeconfig file (typically &lt;code&gt;~/.kube/config&lt;/code&gt;) on the machine to generate a client.&lt;/p&gt;


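&lt;p&gt;A minimal sketch of how such a client can be built with client-go, assuming the default kubeconfig location:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;// getClientset builds a Kubernetes client from the local kubeconfig file.
// Uses k8s.io/client-go/kubernetes, k8s.io/client-go/tools/clientcmd
// and k8s.io/client-go/util/homedir.
func getClientset() (*kubernetes.Clientset, error) {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(config)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;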


&lt;p&gt;We can run this CLI program directly, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go run kubectl-client.go po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command should give the number of pods in your default namespace, given that you have a valid Kubernetes cluster config file on your machine.&lt;/p&gt;

&lt;p&gt;Now that we have our plugin ready, how shall we use it along with kubectl? To do so, we first need to build a binary for this Go program and then rename the binary (or an executable shell script, in case you want to write your plugins in bash) such that the root sub-command of the plugin is preceded by &lt;code&gt;kubectl-&lt;/code&gt;. For example, since the root sub-command for this plugin is &lt;code&gt;count&lt;/code&gt;, the name of the binary will be &lt;code&gt;kubectl-count&lt;/code&gt;. After this, we just need to move this file to the &lt;code&gt;/usr/local/bin&lt;/code&gt; directory.&lt;/p&gt;
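&lt;p&gt;For example, the steps above might look like this, assuming you are in the plugin’s source directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go build -o kubectl-count .
sudo mv kubectl-count /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;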

&lt;p&gt;The naming convention we used here allows kubectl to discover this plugin, and any other plugin you install, given that the &lt;code&gt;/usr/local/bin&lt;/code&gt; directory is in your system path. Now you can simply use the plugin to run the pod-counting command from earlier like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl count po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
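&lt;p&gt;To verify that kubectl has discovered the plugin, you can list all the plugins visible on your path:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl plugin list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;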



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;kubectl plugins are an excellent way to extend the usability of the CLI tool. While there’s an ever-growing list of plugins that you can simply plug and play using krew, it’s also quite feasible to develop your own plugins to bridge gaps in kubectl’s functionality without complicating the developer experience or the end-user experience.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Install Drone CI Server in Kubernetes</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Fri, 09 Sep 2022 10:59:07 +0000</pubDate>
      <link>https://dev.to/neelanjan00/how-to-install-drone-ci-in-kubernetes-39e5</link>
      <guid>https://dev.to/neelanjan00/how-to-install-drone-ci-in-kubernetes-39e5</guid>
      <description>&lt;p&gt;In this blog, we’ll setup a Drone CI server in Kubernetes using Helm. If you’re a beginner and are lost in the labyrinths of the documentations and GitHub READMEs, this simple blog will bootstrap you in just a few minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Drone by &lt;a href="https://harness.io/" rel="noopener noreferrer"&gt;Harness&lt;/a&gt; is a continuous integration service that enables you to conveniently set up projects to automatically build, test, and deploy as you make changes to your code. Drone integrates seamlessly with GitHub, Bitbucket, and Google Code, as well as third-party services such as Heroku, Dotcloud, Google App Engine, and more.&lt;/p&gt;

&lt;p&gt;While Drone also has a cloud SaaS platform, we’ll be focusing on setting up the hosted version on Kubernetes. By default, Drone CI is configured to run as a Docker container and is meant to be set up on a server or a VM. In fact, all the CI build steps are performed using individual Docker containers.&lt;/p&gt;

&lt;p&gt;It is also possible to set up the Drone CI server on a Kubernetes cluster, and Drone provides a Helm chart to do so, which we will be using here. However, this mode of installation is not covered in the primary &lt;a href="https://docs.drone.io/" rel="noopener noreferrer"&gt;user documentation&lt;/a&gt;, hence this blog. The Helm chart we will be using is present in &lt;a href="https://github.com/drone/charts" rel="noopener noreferrer"&gt;this&lt;/a&gt; GitHub repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A Kubernetes Cluster&lt;/strong&gt;: Any Kubernetes cluster of sufficient capacity can be used. I will be using a GKE cluster of three e2-micro VMs for the setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: Ensure that you have Helm installed on your local machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Account&lt;/strong&gt;: For this blog, we’ll be using GitHub for creating an OAuth application for the Drone server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt;: We’ll be using kubectl to access some of the Kubernetes cluster info.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-1: Setting up a GitHub OAuth Application for Drone
&lt;/h2&gt;

&lt;p&gt;Before deploying Drone in Kubernetes, we need to create an OAuth application so that users can authorise Drone to access their GitHub repositories. We are using GitHub for this purpose here; however, this can also be achieved with other major Git hosting providers such as Bitbucket and GitLab. Check out the docs for the full list.&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://github.com/settings/developers" rel="noopener noreferrer"&gt;https://github.com/settings/developers&lt;/a&gt; and choose New OAuth App to create a new OAuth application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AFaz2Eqb-jINoYdGOUaCozQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AFaz2Eqb-jINoYdGOUaCozQ.png" alt="Register a new GitHub OAuth application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Put application name as &lt;strong&gt;Drone&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For the Homepage URL, we need to provide the Drone server application endpoint. The URL depends on how we configure access to the application in Kubernetes, i.e. using either a NodePort or a LoadBalancer type of service.&lt;br&gt;
We’ll be using a NodePort type of service for this demo. We don’t have the Drone server deployed in Kubernetes yet, so the NodePort service isn’t available to us; however, we can choose an unused NodePort value right away, which will later be used for the server deployment. I will be using port &lt;strong&gt;32000&lt;/strong&gt; as an example.&lt;/p&gt;
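&lt;p&gt;One way to pick a free port is to list the NodePort values already in use across the cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc --all-namespaces -o jsonpath='{.items[*].spec.ports[*].nodePort}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;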

&lt;p&gt;With the NodePort value decided upon, we can simply use it with the external IP of any of the Kubernetes nodes to obtain the Homepage URL. In other words, this will be the endpoint of the Drone server once we install it. To get the external IP of the nodes, we can use the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print $7 }'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You’ll obtain a similar result:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

EXTERNAL-IP
35.238.118.197
34.68.253.216
35.222.5.86


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You may choose any one of the external IPs available to you. For this demo I’ll be using the &lt;strong&gt;35.238.118.197&lt;/strong&gt; IP. Hence, our Homepage URL becomes &lt;strong&gt;&lt;a href="http://35.238.118.197:32000" rel="noopener noreferrer"&gt;http://35.238.118.197:32000&lt;/a&gt;&lt;/strong&gt; and for the authorisation callback URL, we will use &lt;strong&gt;&lt;a href="http://35.238.118.197:32000/login" rel="noopener noreferrer"&gt;http://35.238.118.197:32000/login&lt;/a&gt;&lt;/strong&gt;. Optionally, feel free to add any description that seems fit to you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AqQnbcAQ8VfmIaevS6IgPtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AqQnbcAQ8VfmIaevS6IgPtw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, choose &lt;strong&gt;Register application&lt;/strong&gt;. This should register your Drone application with GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AByQtw4YyNIwHqgWIIKH4AQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AByQtw4YyNIwHqgWIIKH4AQ.png" alt="Registration successful"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now all we need to do is obtain a client secret key so that it can be used by Drone to authorize the login requests via GitHub OAuth. To do so, choose &lt;strong&gt;Generate a new client secret&lt;/strong&gt;. GitHub will prompt you to authenticate so that the key creation can be validated. Once done, you’ll obtain the secret key. Copy and save it somewhere safe, as you won’t be able to view it on GitHub again; we’ll use it shortly to configure the Drone server. Also take note of the client ID.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-2: Configuring the Drone server Helm chart
&lt;/h2&gt;

&lt;p&gt;Before we configure the deployment options, we will add the Helm repo. To do so, execute the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm repo add drone https://charts.drone.io

helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once done, we can configure the server deployment options. To do so, we’ll use a YAML file, which can be obtained &lt;a href="https://github.com/drone/charts/blob/master/charts/drone/values.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Download and save the file by the name &lt;code&gt;values.yaml&lt;/code&gt; and open it using any text editor. We’ll be doing a minimal configuration setup in this demo, though you can explore all the other configuration options as well.&lt;/p&gt;

&lt;p&gt;First, we’ll modify the Kubernetes service to be used for the server deployment. As decided earlier, we’ll be using a NodePort type of service, therefore &lt;code&gt;service.type&lt;/code&gt; field value will be &lt;code&gt;NodePort&lt;/code&gt;. Also, for the &lt;code&gt;service.nodePort&lt;/code&gt; field, we will use the value &lt;code&gt;32000&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;32000&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we’ll configure the Drone specific options, which will be provided as environment variables. The first env is &lt;code&gt;DRONE_SERVER_HOST&lt;/code&gt;, for which we will provide the external IP that we had selected earlier. Ensure that the IP address value is wrapped in quotes, to specify it as a string.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;DRONE_SERVER_PROTO&lt;/code&gt; env, provide the value http as we will be using the HTTP protocol for the Drone server requests and runner polling.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;DRONE_RPC_SECRET&lt;/code&gt; env, we need to provide a secret value that the Drone server and runners will share and use for authenticating requests. You may use the following command to generate a 32-character secret:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

openssl rand &lt;span class="nt"&gt;-hex&lt;/span&gt; 16


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then provide this value as a string for the env.&lt;/p&gt;

&lt;p&gt;Lastly, we will provide the GitHub OAuth application client ID and client secret key which we had created in the first step for the &lt;code&gt;DRONE_GITHUB_CLIENT_ID&lt;/code&gt; and &lt;code&gt;DRONE_GITHUB_CLIENT_SECRET&lt;/code&gt; envs respectively.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_SERVER_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;35.238.118.197"&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_SERVER_PROTO&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_RPC_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;727b7fe17e8de56689f46943c76d25f8"&lt;/span&gt;

  &lt;span class="na"&gt;DRONE_GITHUB_CLIENT_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;620a9d86236b7470558a&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_GITHUB_CLIENT_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;35e31b8fddd16c2cb3b681e2a4b84a1ee9c8ce57&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-3: Create the Drone server deployment in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Open a terminal in the directory where your &lt;code&gt;values.yaml&lt;/code&gt; file is located.&lt;/p&gt;

&lt;p&gt;First, we’ll create the namespace where the deployment will take place with the following command. I will be using the &lt;code&gt;drone&lt;/code&gt; namespace here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl create namespace drone


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, run the following command which will install the Drone server in Kubernetes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; drone drone drone/drone &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It will take a while for the command to execute and set up all the Kubernetes resources. Upon completion, you’ll notice a similar output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

NAME: drone
LAST DEPLOYED: Sun Sep  4 17:39:07 2022
NAMESPACE: drone
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get &lt;span class="nt"&gt;--namespace&lt;/span&gt; drone &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.spec.ports[0].nodePort}"&lt;/span&gt; services drone&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;--namespace&lt;/span&gt; drone &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].status.addresses[0].address}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo &lt;/span&gt;http://&lt;span class="nv"&gt;$NODE_IP&lt;/span&gt;:&lt;span class="nv"&gt;$NODE_PORT&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This indicates that the server is successfully deployed in Kubernetes. We can validate that all the Kubernetes resources have been successfully setup using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get all &lt;span class="nt"&gt;--namespace&lt;/span&gt; drone


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should give you a similar output:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

NAME                         READY   STATUS    RESTARTS   AGE
pod/drone-77cd496d5d-jf7gc   1/1     Running   0          3m26s

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/drone   NodePort   10.28.177.200   &amp;lt;none&amp;gt;        8080:32000/TCP   3m31s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/drone   1/1     1            1           3m32s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/drone-77cd496d5d   1         1         1       3m34s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step-4: Access Drone server UI
&lt;/h2&gt;

&lt;p&gt;Before accessing the Drone server UI, make sure that if there’s a firewall in front of the Kubernetes nodes, it has an ingress rule allowing HTTP traffic on port &lt;strong&gt;32000&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This would allow us to reach the server endpoint using the NodePort and node external IP. For example, I will be accessing &lt;strong&gt;&lt;a href="http://35.238.118.197:32000" rel="noopener noreferrer"&gt;http://35.238.118.197:32000&lt;/a&gt;&lt;/strong&gt; URL in my browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2Ac8B0d55buJXylyRyXWopUA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2Ac8B0d55buJXylyRyXWopUA.png" alt="Drone CI server UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now access the Drone server UI. Choose &lt;strong&gt;CONTINUE&lt;/strong&gt; to proceed with user authentication. Upon doing so, you’ll be prompted for GitHub OAuth authorization, by virtue of the OAuth application that we created earlier in GitHub. After authorizing, you’ll be prompted to fill in your user details; choose &lt;strong&gt;SUBMIT&lt;/strong&gt; when done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A9tzWLQTKB80FCPr2aFDhXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A9tzWLQTKB80FCPr2aFDhXg.png" alt="Update user details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll then be able to access the Drone server dashboard, where all your repositories can be seen, since the GitHub OAuth grant gives the Drone application permission to access your repos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AmbF0NFFp0xeihou47z-vlQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AmbF0NFFp0xeihou47z-vlQ.png" alt="User GitHub repositories"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In summary, we saw how Drone CI can be set up in Kubernetes using Helm. We created a GitHub OAuth application to authorize users and provide access to their repositories. Then we configured the server deployment by providing the relevant details in a config file, and finally we used that config file to create the server deployment in Kubernetes using Helm, which allowed us to access the Drone server UI.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>googlecloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>GCP IAM Integration for LitmusChaos with Workload Identity</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Fri, 09 Sep 2022 10:55:11 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/gcp-iam-integration-for-litmuschaos-with-workload-identity-2eai</link>
      <guid>https://dev.to/litmus-chaos/gcp-iam-integration-for-litmuschaos-with-workload-identity-2eai</guid>
      <description>&lt;p&gt;In this blog, we’ll take a look at how to do a GCP IAM integration for LitmusChaos using Workload Identity for executing the GCP experiments in a keyless manner when using the Google Kubernetes Engine (GKE) as the execution plane.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;To execute LitmusChaos GCP experiments, one needs to authenticate with GCP using a service account before accessing the target resources. Usually the only way of providing the service account credentials to the experiment is a service account key, but if you’re using a GKE cluster you have a keyless medium of authentication as well.&lt;br&gt;
Therefore you have two ways of providing the service account credentials to your GKE cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Using Secrets:&lt;/strong&gt; As you would normally do, you can create a secret containing the GCP service account in your GKE cluster, which gets utilized by the experiment for authentication to access your GCP resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Integration:&lt;/strong&gt; When you’re using a GKE cluster, you can bind a GCP service account to a Kubernetes service account as an IAM policy, which can be then used by the experiment for keyless authentication using &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="noopener noreferrer"&gt;GCP Workload Identity&lt;/a&gt;. We’ll discuss more on this method in the following sections.&lt;/li&gt;
&lt;/ul&gt;
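&lt;p&gt;For reference, such a binding is typically created with commands along these lines, where the upper-case names are placeholders for your own project, service accounts, and namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;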
&lt;h2&gt;
  
  
  Why use Workload Identity for GCP authentication?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecesshl2vtumw1vp4hpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecesshl2vtumw1vp4hpq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Google API request can be made using a GCP IAM service account, which is an identity that an application uses to make calls to Google APIs. You might create individual IAM service accounts for the users who execute GCP experiments to enforce role-based access control, then download and save the keys as a Kubernetes secret that you manually rotate. Not only is this time-consuming, but service account keys remain valid for ten years (or until you manually rotate them). An unaccounted-for key could give an attacker extended access in the event of a breach or compromise. Due to this potential blind spot and the management cost of key inventory and rotation, using service account keys as secrets is not an optimal way of authenticating GKE workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrepmzchc3vyvo1yg629.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrepmzchc3vyvo1yg629.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Workload Identity allows you to restrict the possible “blast radius” of a breach or compromise while enforcing the principle of least privilege across your environment. It accomplishes this by automating workload authentication best practices, eliminating the need for workarounds, and making it simple to implement recommended security best practices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Following the principle of least privilege, your workloads have only the permissions they require to fulfill their role. By not granting broad permissions, you minimize the scope of a potential compromise.&lt;/li&gt;
&lt;li&gt;Unlike service account keys with their 10-year lifetime, the credentials supplied through Workload Identity are valid only for a short time, decreasing the blast radius in the case of a compromise.&lt;/li&gt;
&lt;li&gt;Because Google manages the namespace service account credentials for you, the risk of accidental credential disclosure through human error is greatly reduced, and you no longer need to rotate these credentials manually.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Enabling service accounts to access GCP resources
&lt;/h2&gt;
&lt;h3&gt;
  
  
  STEP 1: Enable Workload Identity
&lt;/h3&gt;

&lt;p&gt;You can enable Workload Identity on clusters and node pools using the Google Cloud CLI or the Google Cloud Console. Workload Identity &lt;strong&gt;must&lt;/strong&gt; be enabled at the cluster level before you can enable Workload Identity on node pools. Workload Identity can be enabled for an existing cluster as well as a new cluster.&lt;/p&gt;

&lt;p&gt;To enable Workload Identity on a new cluster using the console, choose to create a GKE cluster and, aside from the usual configuration, simply enable Workload Identity under the Security section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpfmf6uw6o8bf71fa4t1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpfmf6uw6o8bf71fa4t1.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also use the &lt;code&gt;gcloud&lt;/code&gt; tool to create the Kubernetes cluster with Workload Identity enabled using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters create CLUSTER_NAME &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;COMPUTE_REGION &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--workload-pool&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PROJECT_ID.svc.id.goog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CLUSTER_NAME&lt;/code&gt;: the name of your new cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COMPUTE_REGION&lt;/code&gt;: the Compute Engine region of your cluster. For zonal clusters, use &lt;code&gt;--zone=COMPUTE_ZONE&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_ID&lt;/code&gt;: your Google Cloud project ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can enable Workload Identity on an existing Standard cluster using the &lt;code&gt;gcloud&lt;/code&gt; CLI or the Cloud Console. Existing node pools are unaffected, but any new node pools in the cluster will use Workload Identity. To enable it from the Cloud Console, modify the same setting shown above; alternatively, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters update CLUSTER_NAME &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;COMPUTE_REGION &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--workload-pool&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PROJECT_ID.svc.id.goog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CLUSTER_NAME&lt;/code&gt;: the name of your existing cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COMPUTE_REGION&lt;/code&gt;: the Compute Engine region of your cluster. For zonal clusters, use &lt;code&gt;--zone=COMPUTE_ZONE&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_ID&lt;/code&gt;: your Google Cloud project ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  STEP 2: Configure LitmusChaos to use Workload Identity
&lt;/h3&gt;

&lt;p&gt;Assuming that you already have LitmusChaos installed in your GKE cluster as well as the Kubernetes service account you want to use for your GCP experiments, execute the following steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get credentials for your cluster.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters get-credentials CLUSTER_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;CLUSTER_NAME&lt;/code&gt; with the name of your cluster that has Workload Identity enabled.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Create an IAM service account for your application or use an existing IAM service account instead. You can use any IAM service account in any project in your organization. For Config Connector, apply the &lt;code&gt;IAMServiceAccount&lt;/code&gt; object for your selected service account. To create a new IAM service account using the &lt;code&gt;gcloud&lt;/code&gt; CLI, run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts create GSA_NAME &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;GSA_PROJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GSA_NAME&lt;/code&gt;: the name of the new IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_PROJECT&lt;/code&gt;: the project ID of the Google Cloud project for your IAM service account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alternatively, you can also use the Cloud Console UI to create a new GCP IAM Service Account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftarlxv09b18vfylha6te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftarlxv09b18vfylha6te.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Ensure that this service account has all the roles required for interacting with the Compute Engine resources, including VM instances and persistent disks, that your chosen GCP experiments target. You can grant additional roles either using the Cloud Console or using the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud projects add-iam-policy-binding PROJECT_ID &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--member&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--role&lt;/span&gt; &lt;span class="s2"&gt;"ROLE_NAME"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_ID&lt;/code&gt;: your Google Cloud project ID.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_NAME&lt;/code&gt;: the name of your IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_PROJECT&lt;/code&gt;: the project ID of the Google Cloud project of your IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ROLE_NAME&lt;/code&gt;: the IAM role to assign to your service account, like &lt;code&gt;roles/spanner.viewer&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Allow the Kubernetes service account used for the GCP experiments to impersonate the GCP IAM service account by adding an &lt;a href="https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/add-iam-policy-binding" rel="noopener noreferrer"&gt;IAM policy binding&lt;/a&gt; between the two service accounts. This binding allows the Kubernetes service account to act as the IAM service account.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--role&lt;/span&gt; roles/iam.workloadIdentityUser &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--member&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GSA_NAME&lt;/code&gt;: the name of your IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_PROJECT&lt;/code&gt;: the project ID of the Google Cloud project of your IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KSA_NAME&lt;/code&gt;: the name of the Kubernetes service account to be used for LitmusChaos GCP experiments. If you’re using ChaosCenter, then by default the &lt;code&gt;litmus-admin&lt;/code&gt; service account is used for all the experiments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NAMESPACE&lt;/code&gt;: the namespace in which the Kubernetes service account to be used for LitmusChaos GCP experiments is present. If you’re using ChaosCenter, then by default &lt;code&gt;litmus&lt;/code&gt; is the namespace.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Annotate the Kubernetes service account to be used for LitmusChaos GCP experiments with the email address of the GCP IAM service account.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl annotate serviceaccount KSA_NAME &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--namespace&lt;/span&gt; NAMESPACE &lt;span class="se"&gt;\&lt;/span&gt;
iam.gke.io/gcp-service-account&lt;span class="o"&gt;=&lt;/span&gt;GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;KSA_NAME&lt;/code&gt;: the name of the Kubernetes service account to be used for LitmusChaos GCP experiments. If you’re using ChaosCenter, then by default the &lt;code&gt;litmus-admin&lt;/code&gt; service account is used for all the experiments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NAMESPACE&lt;/code&gt;: the namespace in which the Kubernetes service account to be used for LitmusChaos GCP experiments is present. If you’re using ChaosCenter, then by default &lt;code&gt;litmus&lt;/code&gt; is the namespace.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_NAME&lt;/code&gt;: the name of your IAM service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GSA_PROJECT&lt;/code&gt;: the project ID of the Google Cloud project of your IAM service account.&lt;/li&gt;
&lt;/ul&gt;
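
&lt;p&gt;To sanity-check the binding, you can run a short-lived pod that uses the annotated Kubernetes service account and ask &lt;code&gt;gcloud&lt;/code&gt; which identity it sees. The manifest below is a sketch along the lines of Google’s Workload Identity verification steps; it assumes the ChaosCenter defaults (the &lt;code&gt;litmus-admin&lt;/code&gt; service account in the &lt;code&gt;litmus&lt;/code&gt; namespace), so substitute your own values if they differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: litmus              # NAMESPACE of the annotated service account
spec:
  serviceAccountName: litmus-admin   # KSA_NAME annotated in the previous step
  nodeSelector:
    # Only nodes running the GKE metadata server can serve Workload Identity tokens
    iam.gke.io/gke-metadata-server-enabled: "true"
  containers:
    - name: test
      image: google/cloud-sdk:slim
      command: ["gcloud", "auth", "list"]
  restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If everything is wired up correctly, the pod logs should list the IAM service account email rather than the default node identity.&lt;/p&gt;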

&lt;h3&gt;
  
  
  STEP 3: Update ChaosEngine Manifest
&lt;/h3&gt;

&lt;p&gt;When creating a new workflow with GCP experiments, edit the manifest YAML and add the following value to the ChaosEngine manifest field &lt;code&gt;.spec.experiments[].spec.components.nodeSelector&lt;/code&gt; to schedule the experiment pod on nodes that use Workload Identity:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;iam.gke.io/gke-metadata-server-enabled: "true"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As an example, say you’re adding the GCP VM Instance Stop experiment, you’d make the following change:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ie971ysp9vemt4yhmrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ie971ysp9vemt4yhmrd.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
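
&lt;p&gt;In manifest form, the addition is roughly the following (a sketch for the &lt;code&gt;gcp-vm-instance-stop&lt;/code&gt; experiment; only the &lt;code&gt;nodeSelector&lt;/code&gt; entry is the change being described):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
spec:
  experiments:
    - name: gcp-vm-instance-stop
      spec:
        components:
          # Schedule the experiment pod on nodes that use Workload Identity
          nodeSelector:
            iam.gke.io/gke-metadata-server-enabled: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;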

&lt;h3&gt;
  
  
  STEP 4: Update ChaosExperiment Manifest
&lt;/h3&gt;

&lt;p&gt;Remove &lt;code&gt;cloud-secret&lt;/code&gt; at &lt;code&gt;.spec.definition.secrets&lt;/code&gt; in the ChaosExperiment manifest as we are not using a secret to provide our GCP Service Account credentials.&lt;/p&gt;

&lt;p&gt;As an example, say you’re adding the GCP VM Instance Stop experiment, you’d remove the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbdbjybwaq7ozbpnmiqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbdbjybwaq7ozbpnmiqh.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
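
&lt;p&gt;For reference, the entry being deleted looks roughly like this in the ChaosExperiment manifest (a sketch; the mount path follows the experiment docs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Remove this block from .spec.definition.secrets -- with Workload Identity
# there is no service account key to mount
secrets:
  - name: cloud-secret
    mountPath: /tmp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;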

&lt;p&gt;Now you can run your GCP experiments with keyless authentication provided by GCP Workload Identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;To conclude, we set up keyless authentication for GCP experiments executing in a GKE cluster that belongs to the target GCP environment. We saw how to leverage Workload Identity for GKE node pools to assign an IAM identity to our Kubernetes workloads, which was then used to authenticate the LitmusChaos GCP experiments.&lt;/p&gt;

</description>
      <category>litmuschaos</category>
      <category>kubernetes</category>
      <category>googlecloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why Did I Contribute to the LitmusChaos Project for Hacktoberfest 2021</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Mon, 30 May 2022 15:26:51 +0000</pubDate>
      <link>https://dev.to/neelanjan00/why-did-i-contribute-to-the-litmuschaos-project-for-hacktoberfest-2021-m87</link>
      <guid>https://dev.to/neelanjan00/why-did-i-contribute-to-the-litmuschaos-project-for-hacktoberfest-2021-m87</guid>
      <description>&lt;p&gt;For the eighth edition of Hacktoberfest, I chose to contribute to &lt;a href="https://litmuschaos.io/"&gt;LitmusChaos&lt;/a&gt;, a CNCF sandboxed project for Cloud-Native Chaos Engineering. It was a month-long celebration of making Chaos Engineering simpler for all the SREs and Developers who aspire to make their services more resilient. So if you are a software developer like me, then why should you consider contributing to the &lt;a href="https://github.com/litmuschaos/litmus"&gt;LitmusChaos project&lt;/a&gt;?&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Buzzing Community
&lt;/h3&gt;

&lt;p&gt;The Litmus community boasts 1,000+ members who promote the cloud-native resiliency paradigm shift by redefining the Chaos Engineering experience for everyone. It is the community members who bring out the best of Chaos Engineering for their distinct use cases by leveraging the diverse set of chaos experiments offered by LitmusChaos. Regular community meetups and sync-ups highlight these interesting use cases, helping every community member learn from the others.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Excellent Code Quality
&lt;/h3&gt;

&lt;p&gt;LitmusChaos is an awesome project for beginners who want to kick-start their journey into the world of open source, as it has been developed by contributors from a wide variety of organizations. This has resulted in a robust codebase that adheres to the best coding practices and is easy to understand and contribute to. Further, the LitmusChaos project makes use of a wide range of open-source tools such as Kubernetes, GraphQL, Argo Workflows, and React JS, which allows every developer to contribute to something of their own choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Awesome Documentation
&lt;/h3&gt;

&lt;p&gt;The Litmus project features extensive &lt;a href="https://docs.litmuschaos.io/"&gt;documentation&lt;/a&gt; that encompasses every single aspect of the project. It aids the developers who want to gain a deeper insight into the project and want to contribute to it with its detailed user documentation, experiment documentation, and API reference. Contributing to the documentation and tutorials is also an excellent option, which helps LitmusChaos to be used by more community members and end-users.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Flexible Litmus SDK
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://medium.com/@ispeakc0de/getting-started-with-litmus-sdk-f2d1fc8c8dea"&gt;Litmus SDK&lt;/a&gt; allows developers to define their own chaos experiments for the LitmusChaos framework. It helps developers easily bootstrap the experiment files: the SDK generates all the requisite artifacts, and the developer is only responsible for defining the experiment business logic. The best part about Litmus SDK is that it’s available for Go, Python, and Ansible, allowing developers to build chaos experiments with the tooling of their choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Mentorship Programs
&lt;/h3&gt;

&lt;p&gt;Apart from Hacktoberfest, LitmusChaos also takes part in all the major open-source mentorship programs, such as Google Summer of Code (GSoC), GitHub India Externship, The Linux Foundation Mentorship Program (LFX Mentorship), and Google Season of Docs (GSoD), among others. These mentorship programs offer mentees a great opportunity not only to contribute to open source but also to learn and gain recognition as quality developers, since you’ll be making significant contributions to the Litmus project under the guidance of a LitmusChaos maintainer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join the Litmus Community
&lt;/h2&gt;

&lt;p&gt;Want to get help with queries, learnings, &amp;amp; contributions? Join the Litmus community on Slack by following these steps:&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Join the Kubernetes Slack using the following link: &lt;a href="https://slack.k8s.io/"&gt;https://slack.k8s.io/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Join the #litmus channel on the Kubernetes Slack, or use this link after joining: &lt;a href="https://slack.litmuschaos.io/"&gt;https://slack.litmuschaos.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking forward to seeing you in the world of Open Source!&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>chaosengineering</category>
      <category>cloudnative</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>GCP VM Disk Loss Experiment for LitmusChaos</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Sun, 25 Jul 2021 15:06:31 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/gcp-vm-disk-loss-experiment-for-litmuschaos-2gim</link>
      <guid>https://dev.to/litmus-chaos/gcp-vm-disk-loss-experiment-for-litmuschaos-2gim</guid>
      <description>&lt;p&gt;In this beginner-friendly blog, we’ll be going through the GCP VM Disk Loss experiment for LitmusChaos. GCP VM Disk Loss experiment causes detachment of a non-boot persistent storage disk from a GCP VM instance for a specified duration of time and later re-attaches the disk to its respective VM instance. The broad objective of this experiment is to extend the principles of cloud-native chaos engineering to non-Kubernetes targets while ensuring resiliency for all kinds of targets, be it Kubernetes or non-Kubernetes ones, as a part of a single chaos workflow for the entirety of a business.&lt;/p&gt;

&lt;p&gt;At the time of writing, the experiment is available only as a technical preview in the Chaos Hub, but in upcoming releases it will become an integral part of the hub. That said, we can still access and execute the experiment without any problem, as I’m about to show you in this blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;p&gt;Before we begin with the steps of the experiment, let’s check the pre-requisites for performing this experiment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A GCP project containing the target Persistent Storage Disks attached to their respective VM instances&lt;/li&gt;
&lt;li&gt;A GCP Service Account having project level Editor or Owner permissions&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster with Litmus 2.0 installed&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  STEP 1: Updating The Chaos Hub
&lt;/h2&gt;

&lt;p&gt;Browse and log in to your Litmus portal. You should be on the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yyaz70k6t9xs0dphq00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yyaz70k6t9xs0dphq00.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select ChaosHubs. Here you’d be able to see the default Chaos Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jba5j5jhqya3nri5jlt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jba5j5jhqya3nri5jlt.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose to Edit the default Chaos Hub and instead of the &lt;code&gt;v1.13.x&lt;/code&gt; branch, choose the &lt;code&gt;master&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1rtqi5ulhid464i6ekn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1rtqi5ulhid464i6ekn.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Submit Now. You’d now be able to access all the experiments, even those under technical preview. To confirm that the experiments have been added successfully, click on Chaos Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi106y71pltemlr0l27vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi106y71pltemlr0l27vg.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the GCP Experiments listed here. Now we are all set to begin the steps of the experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP 2: Setting Up the Chaos Experiment
&lt;/h2&gt;

&lt;p&gt;We’d be using the &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-disk-loss/" rel="noopener noreferrer"&gt;experiment docs&lt;/a&gt; to help us with a few steps.&lt;/p&gt;

&lt;p&gt;In this demo, we will inject chaos into two non-boot persistent storage disks named &lt;code&gt;test-disk&lt;/code&gt; and &lt;code&gt;test-disk-1&lt;/code&gt;, attached to the VM instances named &lt;code&gt;test-instance&lt;/code&gt; and &lt;code&gt;test-instance-1&lt;/code&gt; respectively. The disks are located in the zones &lt;code&gt;us-central1-a&lt;/code&gt; and &lt;code&gt;us-central1-b&lt;/code&gt; respectively, and belong to the GCP project “Litmus GCP Instance Delete” with the ID &lt;code&gt;litmus-gcp-instance-delete&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz7ddakn89uxgoapu20d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz7ddakn89uxgoapu20d.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please notice that both disks are initially attached to their respective VM instances before the injection of chaos. Now that we have our disks ready, we can set up our experiment. Before scheduling the chaos experiment, we need to make the GCP Service Account credentials available to Litmus, so that the disks can be detached and later re-attached as part of the experiment. To do that, we’ll create a Kubernetes secret in a file named &lt;code&gt;secret.yaml&lt;/code&gt; as follows:&lt;/p&gt;


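
&lt;p&gt;For reference, the secret follows this general shape (a sketch mirroring the format in the experiment docs; replace the placeholders with the corresponding fields of your own service account key JSON):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# secret.yaml -- GCP service account credentials for the experiment.
# The secret must be named "cloud-secret"; the fields mirror the
# service account key JSON downloaded from GCP.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  type: service_account
  project_id: your-project-id
  private_key_id: your-private-key-id
  private_key: |-
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  client_email: your-service-account@your-project-id.iam.gserviceaccount.com
  client_id: your-client-id
  auth_uri: https://accounts.google.com/o/oauth2/auth
  token_uri: https://oauth2.googleapis.com/token
  auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
  client_x509_cert_url: your-client-x509-cert-url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;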


&lt;p&gt;The format of this secret is also available in the experiment docs. Make sure the name of the secret is &lt;code&gt;cloud-secret&lt;/code&gt; and replace the respective fields of the secret with your own service account credentials. Once done, apply the secret in the &lt;code&gt;litmus&lt;/code&gt; namespace using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f secret.yaml -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the secret is applied, we’re all set to schedule our experiment from the Litmus portal. In Dashboard, click on the Schedule a Workflow button. In the workflow creation page, choose the self-agent and click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc6hgz7p0nkhglduwtjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc6hgz7p0nkhglduwtjm.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Choose a Workflow page, select “Create a new workflow using the experiments from MyHub” and select Chaos Hub in the dropdown. Then click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gmhntl8nbtrh0b9m1wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gmhntl8nbtrh0b9m1wl.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Workflow Settings page, fill in the workflow name and description of your choice. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx90wto2xdwegpi8ydhl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx90wto2xdwegpi8ydhl9.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Tune Workflow page, click on “Add a new experiment” and choose &lt;code&gt;gcp/gcp-vm-disk-loss&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feytue9jdfv2b33ee35gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feytue9jdfv2b33ee35gg.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Done. Notice that the experiment has been added to the experiment graph diagram. Now click on “Edit YAML”. Here we will edit the workflow manifest to specify the experiment resource details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb03ndzvyi49rih870j6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb03ndzvyi49rih870j6o.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to the manifest of the &lt;code&gt;ChaosExperiment&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j453o8c4ouqsdk95u5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j453o8c4ouqsdk95u5e.png" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the name of the secret that we had previously created is being passed to the &lt;code&gt;ChaosExperiment&lt;/code&gt; to be mounted at the path &lt;code&gt;/tmp/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Scroll further down and similarly fill in the relevant experiment details in the manifest of the &lt;code&gt;ChaosEngine&lt;/code&gt; as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibo74g0hcwf4b5cfa9pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibo74g0hcwf4b5cfa9pq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please note that the zone for each target disk is to be mentioned in &lt;code&gt;DISK_ZONES&lt;/code&gt; in the same order as the &lt;code&gt;DISK_VOLUME_NAMES&lt;/code&gt;. Similarly, the device name for each target disk is to be mentioned in &lt;code&gt;DEVICE_NAMES&lt;/code&gt; in the same order as the &lt;code&gt;DISK_VOLUME_NAMES&lt;/code&gt;. Feel free to modify the parameters of the experiment, such as &lt;code&gt;RAMP_TIME&lt;/code&gt; and &lt;code&gt;TOTAL_CHAOS_DURATION&lt;/code&gt;. As you may have noticed, some of the experiment tunables are common to both the &lt;code&gt;ChaosEngine&lt;/code&gt; and the &lt;code&gt;ChaosExperiment&lt;/code&gt;; if the values differ between the two manifests, the &lt;code&gt;ChaosEngine&lt;/code&gt; values override those of the &lt;code&gt;ChaosExperiment&lt;/code&gt;. Once you have made the changes, click Save Changes. We’ve now specified all the experiment details and are ready to go to the next step. Click Next.&lt;/p&gt;
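
&lt;p&gt;Put together, the tuned portion of the &lt;code&gt;ChaosEngine&lt;/code&gt; for our demo disks would look roughly as follows (a sketch; the device names are placeholders, so use the device names shown for your disks in the GCP console):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;experiments:
  - name: gcp-vm-disk-loss
    spec:
      components:
        env:
          - name: GCP_PROJECT_ID
            value: "litmus-gcp-instance-delete"
          - name: DISK_VOLUME_NAMES
            value: "test-disk,test-disk-1"
          # Zones listed in the same order as DISK_VOLUME_NAMES
          - name: DISK_ZONES
            value: "us-central1-a,us-central1-b"
          # Device names in the same order; placeholders shown here
          - name: DEVICE_NAMES
            value: "DEVICE_NAME_1,DEVICE_NAME_2"
          - name: TOTAL_CHAOS_DURATION
            value: "30"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;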

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhn4oq2r0l31guwx9rp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhn4oq2r0l31guwx9rp7.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Reliability Score, we will use the default score of 10. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldm57pwljfc12p45sktg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldm57pwljfc12p45sktg.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Schedule, click Schedule Now, then click Next. On the Verify and Commit page, verify all the details and, once satisfied, click Finish. We’ve successfully scheduled our chaos experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse5dmtjpmz4ntjb5gl6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse5dmtjpmz4ntjb5gl6j.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP 3: Observing the Chaos
&lt;/h2&gt;

&lt;p&gt;Click on Go to Workflow and choose the workflow that we just created. Here we can observe the different steps of the workflow execution including chaos experiment installation, chaos injection, and chaos revert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcnm96cvh15po1oo0j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcnm96cvh15po1oo0j8.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also verify from the GCP Console whether the chaos injection has taken place and, as a result, the disks have detached from their respective VM instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqgiosudt70vcnf2ctvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqgiosudt70vcnf2ctvp.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
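&lt;p&gt;If you prefer the CLI over the console, a &lt;code&gt;gcloud&lt;/code&gt; command such as the following lists the instances a disk is currently attached to; an empty result indicates the disk is detached. This assumes the gcloud SDK is configured for your project, and the disk name and zone shown are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# print the "users" field, i.e. the instances the disk is attached to
gcloud compute disks describe disk-01 \
  --zone=us-central1-a --format="value(users)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;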

&lt;p&gt;We can also view the Table View for the experiment logs as the experiment proceeds through the various steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljrj13fvb01k4oewnq75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljrj13fvb01k4oewnq75.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once completed, the workflow graph should show that all the steps executed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgftwy0hvs69ujraqpos2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgftwy0hvs69ujraqpos2.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We got a 100% Resiliency Score. We can also check the &lt;code&gt;ChaosResult&lt;/code&gt; verdict, which should say the experiment has passed. The Probe Success Percentage should be 100%, as all our disks re-attached successfully after their detachment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg27i6flwhu31ewch385.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg27i6flwhu31ewch385.png" width="800" height="400"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe84tf6j5jh0qlx1ir3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe84tf6j5jh0qlx1ir3k.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
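&lt;p&gt;The same verdict can also be fetched directly from the cluster. The &lt;code&gt;ChaosResult&lt;/code&gt; name is formed by joining the &lt;code&gt;ChaosEngine&lt;/code&gt; name and the experiment name; the engine name used below is a hypothetical example, so adjust it to match your workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# e.g. an engine named "gcp-disk-chaos" running gcp-vm-disk-loss
kubectl describe chaosresult gcp-disk-chaos-gcp-vm-disk-loss -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;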

&lt;p&gt;You can again check in the GCP Console whether the disks have been re-attached.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fj6e5qplpniazt7npbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fj6e5qplpniazt7npbw.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also perform post-chaos analysis of the experiment results in the Analytics section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddyry3e8uqjnk6782en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddyry3e8uqjnk6782en.png" width="800" height="400"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff96zfy5xqo1vpaqdpjj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff96zfy5xqo1vpaqdpjj3.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, we saw how to perform the GCP VM Disk Loss chaos experiment using LitmusChaos 2.0. This is only one of the many non-Kubernetes experiments in LitmusChaos, which also include experiments for AWS, Azure, VMware, and more, all aimed at making Litmus a complete Chaos Engineering toolset for every enterprise, regardless of the technology stack it uses.&lt;/p&gt;

&lt;p&gt;Come join the Litmus community and contribute your bit to developing chaos engineering for everyone. To join the Litmus community:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Join the Kubernetes Slack using the following link: &lt;code&gt;https://slack.k8s.io/&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Join the &lt;code&gt;#litmus&lt;/code&gt; channel on the Kubernetes Slack, or use this link after joining: &lt;code&gt;https://slack.litmuschaos.io/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>litmuschaos</category>
      <category>cloudnative</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>GCP VM Instance Stop Experiment for LitmusChaos</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Sat, 24 Jul 2021 11:06:12 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/gcp-vm-instance-stop-experiment-for-litmuschaos-jn8</link>
      <guid>https://dev.to/litmus-chaos/gcp-vm-instance-stop-experiment-for-litmuschaos-jn8</guid>
      <description>&lt;p&gt;This blog is a beginner-friendly guide for the GCP VM Instance Stop chaos experiment for LitmusChaos. The experiment causes the shutdown of one or more GCP VM instances for a specified duration of time and later restarts them. The broad objective of this experiment is to extend the principles of cloud-native chaos engineering to non-Kubernetes targets while ensuring resiliency for all kinds of targets, be it Kubernetes or non-Kubernetes ones, as a part of a single chaos workflow for the entirety of a business.&lt;/p&gt;

&lt;p&gt;At the time of writing this blog, the experiment is available only as a technical preview in the ChaosHub, but it is expected to become an integral part of the hub in upcoming releases. That said, we can still access and execute the experiment without any problem, as I am about to show you in this blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;p&gt;Before we begin with the steps of the experiment, let’s check the pre-requisites for performing this experiment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; A GCP project containing the target VM instances&lt;/li&gt;
&lt;li&gt; A GCP Service Account having sufficient permissions to stop or start the VM Instances&lt;/li&gt;
&lt;li&gt; A Kubernetes cluster with Litmus 2.0 installed&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  STEP 1: Updating The Chaos Hub
&lt;/h2&gt;

&lt;p&gt;Browse and log in to your Litmus portal. You should be on the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps7m6aq0n7zrdys9jg0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps7m6aq0n7zrdys9jg0q.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select ChaosHubs. Here you’d be able to see the default ChaosHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5w60nbcb51jid3jlosl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5w60nbcb51jid3jlosl.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose to Edit the default Chaos Hub and select the &lt;code&gt;master&lt;/code&gt; branch instead of the &lt;code&gt;v1.13.x&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqku4v0otkmickcsjerx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqku4v0otkmickcsjerx.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Submit Now. Now you’d be able to access all the experiments, even those under the technical preview. To confirm that the experiments have been added successfully, click on Chaos Hub and view the Chaos Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9sov0loqd6s99hwo0ff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9sov0loqd6s99hwo0ff.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the GCP Experiments listed here. Now we are all set to begin the steps of the experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP 2: Setting Up the Chaos Experiment
&lt;/h2&gt;

&lt;p&gt;We’d be using the &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-instance-stop/" rel="noopener noreferrer"&gt;experiment docs&lt;/a&gt; to help us with a few steps.&lt;/p&gt;

&lt;p&gt;In this demo, we will inject chaos into two VM instances named &lt;code&gt;test-instance&lt;/code&gt; and &lt;code&gt;test-instance-1&lt;/code&gt;, located in the zones &lt;code&gt;us-central1-a&lt;/code&gt; and &lt;code&gt;us-central1-b&lt;/code&gt; respectively, within the GCP project “Litmus GCP Instance Delete”, whose ID is &lt;code&gt;litmus-gcp-instance-delete&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg7gykjg2dqn1curzj3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg7gykjg2dqn1curzj3g.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please notice that the instances are initially in a running state, before the injection of chaos. Now that we have our instances ready, we can set up our experiment. Before scheduling the chaos experiment, we need to make the GCP Service Account credentials available to Litmus, so that the instances can be shut down and later started as part of the experiment. To do that, we’ll create a Kubernetes secret from a manifest file named &lt;code&gt;secret.yaml&lt;/code&gt; as follows:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
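&lt;p&gt;For reference, the secret manifest takes roughly the following shape, per the format in the experiment docs. All credential values below are placeholders to be replaced with the fields from your GCP service account key file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  # fields copied from your GCP service account key file (placeholders)
  type: service_account
  project_id: your-project-id
  private_key_id: your-private-key-id
  private_key: |-
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  client_email: your-service-account-email
  client_id: your-client-id
  auth_uri: "https://accounts.google.com/o/oauth2/auth"
  token_uri: "https://oauth2.googleapis.com/token"
  auth_provider_x509_cert_url: "https://www.googleapis.com/oauth2/v1/certs"
  client_x509_cert_url: your-cert-url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;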


&lt;p&gt;The format of this secret is also available in the experiment docs. Make sure the name of the secret is &lt;code&gt;cloud-secret&lt;/code&gt; and replace the respective fields of the secret with your own service account credentials. Once done, apply the secret in the &lt;code&gt;litmus&lt;/code&gt; namespace using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f secret.yaml -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the secret is applied, we’re all set to schedule our experiment from the Litmus portal. In Dashboard, click on the Schedule a Workflow button. In the workflow creation page, choose the self-agent and click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ih4fsflktycalzpw2cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ih4fsflktycalzpw2cj.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Choose a Workflow page, select “Create a new workflow using the experiments from MyHub” and select Chaos Hub in the dropdown. Then click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1f7f36wa3ak3s8251ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1f7f36wa3ak3s8251ui.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Workflow Settings page, fill in the workflow name and description of your choice. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv7e50zbwgewrmobmdtn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv7e50zbwgewrmobmdtn.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Tune Workflow page, click on “Add a new experiment” and choose &lt;code&gt;gcp/gcp-vm-instance-stop&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhl6xke04hdietvqtt1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhl6xke04hdietvqtt1a.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Done. Notice that the experiment has been added to the experiment graph diagram. Now click on “Edit YAML”. Here we will edit the workflow manifest to specify the experiment resource details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrz5fovf06d3zn0doedd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrz5fovf06d3zn0doedd.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to the manifest of the &lt;code&gt;ChaosExperiment&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kccd6dt855p4f1bn6o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kccd6dt855p4f1bn6o9.png" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the name of the secret that we had previously created is being passed to the &lt;code&gt;ChaosExperiment&lt;/code&gt; to be mounted at the path &lt;code&gt;/tmp/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Scroll further down and similarly fill in the relevant experiment details in the manifest of the &lt;code&gt;ChaosEngine&lt;/code&gt; as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv5l2bag3svjwf6qio29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv5l2bag3svjwf6qio29.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
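&lt;p&gt;For reference, the relevant &lt;code&gt;env&lt;/code&gt; entries in the &lt;code&gt;ChaosEngine&lt;/code&gt; spec take roughly the following shape. This is an illustrative sketch using the instance names, zones, and project ID from this demo; substitute your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
  # comma-separated names of the target instances
  - name: VM_INSTANCE_NAMES
    value: 'test-instance,test-instance-1'
  # zone of each instance, in the same order as VM_INSTANCE_NAMES
  - name: INSTANCE_ZONES
    value: 'us-central1-a,us-central1-b'
  # ID of the GCP project containing the instances
  - name: GCP_PROJECT_ID
    value: 'litmus-gcp-instance-delete'
  # duration (in seconds) for which the instances stay stopped
  - name: TOTAL_CHAOS_DURATION
    value: '30'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;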

&lt;p&gt;Please take note that the zone for each target instance is to be specified in &lt;code&gt;INSTANCE_ZONES&lt;/code&gt; in the same order as the &lt;code&gt;VM_INSTANCE_NAMES&lt;/code&gt;. Feel free to modify the other parameters of the experiment, such as &lt;code&gt;RAMP_TIME&lt;/code&gt;, &lt;code&gt;TOTAL_CHAOS_DURATION&lt;/code&gt;, etc. As you may have noticed, some of the experiment tunables are common to both the &lt;code&gt;ChaosEngine&lt;/code&gt; and the &lt;code&gt;ChaosExperiment&lt;/code&gt;; if the values differ between the two manifests, the &lt;code&gt;ChaosEngine&lt;/code&gt; values override those of the &lt;code&gt;ChaosExperiment&lt;/code&gt;. Once done, click Save Changes. We’ve now specified all the experiment details and are ready to go to the next step. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtdcv9bk9g1vk6seqobt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtdcv9bk9g1vk6seqobt.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Reliability Score, we will use the default score of 10. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnb0gb58rgih7f60mrhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnb0gb58rgih7f60mrhx.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Schedule, click Schedule Now, then click Next. On the Verify and Commit page, verify all the details and, once satisfied, click Finish. We’ve successfully scheduled our chaos experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46vuffu14gp0i5754nge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46vuffu14gp0i5754nge.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP 3: Observing the Chaos
&lt;/h2&gt;

&lt;p&gt;Click on Go to Workflow and choose the workflow that we just created. Here we can observe the different steps of the workflow execution including chaos experiment installation, chaos injection, and chaos revert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp03q99s0i2ywnolb09ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp03q99s0i2ywnolb09ev.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also verify from the GCP Console whether the chaos injection has taken place and, as a result, the instances have shut down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5j447ryy8uvrramngxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5j447ryy8uvrramngxq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
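&lt;p&gt;Equivalently, from a terminal with the gcloud SDK configured for the project, you can poll the instance status; both instances should report &lt;code&gt;TERMINATED&lt;/code&gt; while the chaos is in effect and &lt;code&gt;RUNNING&lt;/code&gt; again after the revert:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list only the two target instances with their current status
gcloud compute instances list \
  --filter="name:(test-instance test-instance-1)" \
  --format="table(name,zone,status)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;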

&lt;p&gt;We can also view the Table View for the experiment logs as the experiment proceeds through the various steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfzrnq13nqxj1m8gaaej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfzrnq13nqxj1m8gaaej.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once completed, the workflow graph should show that all the steps executed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yx4x6wvpxkpkany67kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yx4x6wvpxkpkany67kk.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the &lt;code&gt;ChaosResult&lt;/code&gt; verdict, which should say the experiment has passed. The Probe Success Percentage should be 100%, as all our instances restarted successfully after their shutdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz02cpakz6wxc4552l5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz02cpakz6wxc4552l5n.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can again check in the GCP Console whether the instances have restarted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feef067i68icz944u8uui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feef067i68icz944u8uui.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also perform post-chaos analysis of the experiment results in the Analytics section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih7wynscg7qtempqqyyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih7wynscg7qtempqqyyo.png" width="800" height="468"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld8c0t2djvakp94dwbja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld8c0t2djvakp94dwbja.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, we saw how to perform the GCP VM Instance Stop chaos experiment using LitmusChaos 2.0. This is only one of the many non-Kubernetes experiments in LitmusChaos, which also include experiments for AWS, Azure, VMware, and more, all aimed at making Litmus a complete Chaos Engineering toolset for every enterprise, regardless of the technology stack it uses.&lt;/p&gt;

&lt;p&gt;Come join the Litmus community and contribute your bit to developing chaos engineering for everyone. To join the Litmus community:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Join the Kubernetes Slack using the following link: &lt;code&gt;https://slack.k8s.io/&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Join the &lt;code&gt;#litmus&lt;/code&gt; channel on the Kubernetes Slack, or use this link after joining: &lt;code&gt;https://slack.litmuschaos.io/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

</description>
      <category>litmuschaos</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>Getting Started with Litmus 2.0 in Google Kubernetes Engine</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Sat, 03 Jul 2021 04:53:08 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/getting-started-with-litmus-2-0-in-google-kubernetes-engine-4obf</link>
      <guid>https://dev.to/litmus-chaos/getting-started-with-litmus-2-0-in-google-kubernetes-engine-4obf</guid>
      <description>&lt;p&gt;This is a quick tutorial on how to get started with Litmus 2.0 in Google Kubernetes Engine. We’ll first set up our basic GKE cluster throughout this blog, then install Litmus 2.0 in the cluster, and finally, execute a simple chaos workflow using Litmus.&lt;/p&gt;

&lt;p&gt;But before we kick off the demonstration, let’s have a brief introduction to Litmus. &lt;a href="https://litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus&lt;/a&gt; is a toolset for performing cloud-native Chaos Engineering. It provides tools to orchestrate chaos on Kubernetes to help developers and SREs find weaknesses in their application deployments. Litmus can be used to run chaos experiments, initially in the staging environment and eventually in production, to find bugs and vulnerabilities; fixing these weaknesses leads to increased resilience of the system. Litmus adopts a “Kubernetes-native” approach, defining chaos intent declaratively via custom resources. That said, Litmus is not limited to Kubernetes targets: it can inject chaos into a plethora of non-Kubernetes targets as well, such as bare-metal infrastructure, public cloud infrastructure, hybrid cloud infrastructure, and containerized services, fulfilling the chaos engineering needs of an entire business rather than just a specific application microservice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A GCP project with sufficient permissions for accessing GKE&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Set Up the GKE Cluster
&lt;/h2&gt;

&lt;p&gt;To set up the GKE cluster, we first need to enable the Kubernetes Engine API in our GCP project. To do that, we can access the APIs &amp;amp; Services Dashboard from the GCP console and search for the Kubernetes Engine API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbt0at2ra82gvvkeug98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbt0at2ra82gvvkeug98.png" alt="1_2r4jdrIuLGsTW_xo1Ze5kg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then enable the Kubernetes Engine API. Now we are all set to launch our GKE cluster. To do that, we will first go to the Kubernetes Engine dashboard in the GCP console and then choose Create cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnljer0uut1esr96i9iv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnljer0uut1esr96i9iv6.png" alt="1_YKxtj85kPd6Xf7ew2j3ZFw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select a Standard cluster and choose Configure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qg3wkw78zk6fj6l7o9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qg3wkw78zk6fj6l7o9k.png" alt="1_ivIqlysLZ3rP_2zy_reBtA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you get all the options to configure your Kubernetes cluster. You can either configure your own cluster as per your preference, or simply go for the My First Cluster option under the Cluster set-up guide, which will set up a basic three-node cluster for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s6o3zw04repxh6fagp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s6o3zw04repxh6fagp6.png" alt="1_ER03wDAfQ9z4efTQ5s6g6g" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will take a while for your cluster resources to be provisioned and pass a health check, after which the cluster will finally be ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j7l15ufb2s3wtu22zzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j7l15ufb2s3wtu22zzs.png" alt="1_xCiLOzED8t9nEA3Tu-8nXA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now connect to our cluster and proceed to install Litmus. To do that, we need to select our cluster and then choose the Connect option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtgov3z6izre42fb7rkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtgov3z6izre42fb7rkc.png" alt="1_bVFVSZVfTMWMy2gaOnfi5A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, choose the Run In Cloud Shell option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlzf459f4kdzeslrvvrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlzf459f4kdzeslrvvrk.png" alt="1_jq0Is2xIvn2eooe8oOZqrQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will open up a Cloud Shell terminal in your console, pre-populated with a Cloud SDK command that you need to execute. This command configures kubectl for the cluster we just created. If prompted, accept.&lt;/p&gt;
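
&lt;p&gt;For reference, the pre-populated command follows this general shape (a sketch; the cluster name, zone, and project ID below are placeholders for your own values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;ZONE&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;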

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xoc9cco8wubqxcyh0so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xoc9cco8wubqxcyh0so.png" alt="1_d-zbtP9wSV7VcyvtXhoajQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are all set to install Litmus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install Litmus
&lt;/h2&gt;

&lt;p&gt;We’ll follow the instructions given in the &lt;a href="https://docs.litmuschaos.io/docs/getting-started/installation" rel="noopener noreferrer"&gt;Litmus 2.0 documentation&lt;/a&gt; to install Litmus. Here we will be installing Litmus in namespace mode using Helm. The very first step is to add the Litmus Helm repository using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will add the &lt;code&gt;litmuschaos&lt;/code&gt; repository to the list of Helm chart repositories. We can verify this using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the &lt;code&gt;litmuschaos&lt;/code&gt; repository listed here. Next, we will create the &lt;code&gt;litmus&lt;/code&gt; namespace, as all the Litmus infra components will be placed in it. We will do this using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create ns litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we can proceed to install the Litmus control plane using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install chaos litmuschaos/litmus-2-0-0-beta --namespace=litmus --devel --set portalScope=namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get a similar message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpvco8zonjuw1qtfbra8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpvco8zonjuw1qtfbra8.png" alt="1_EMnOfsKCUcdTFwXEWjx_6Q" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we can proceed to install the Litmus CRDs. A cluster-admin, or an equivalent user with the right permissions, is required to install the CRDs. We’d use the following command to apply them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-portal/litmus-portal-crds.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we’re done with installing Litmus. Just one final step remains before we can access the Litmus portal. The command &lt;code&gt;kubectl get svc -n litmus&lt;/code&gt; lists all the services running in the litmus namespace. Here you should be able to see the &lt;code&gt;litmusportal-frontend-service&lt;/code&gt; and the &lt;code&gt;litmusportal-server-service&lt;/code&gt;, both of which should be NodePort services, with TCP ports listed as something like &lt;code&gt;9091:xxxxx/TCP&lt;/code&gt;, where &lt;code&gt;xxxxx&lt;/code&gt; is the node port of the respective service. To access the portal, we need to expose the node ports of both the frontend service and the server service by applying firewall rules. The &lt;code&gt;litmusportal-server-service&lt;/code&gt; has two node ports, of which we only need to expose the first. We can do that using the following two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute firewall-rules create frontend-service-rule --allow tcp:&amp;lt;NODE_PORT&amp;gt;

gcloud compute firewall-rules create server-service-rule --allow tcp:&amp;lt;NODE_PORT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the &lt;code&gt;&amp;lt;NODE_PORT&amp;gt;&lt;/code&gt; in the first command with your frontend service’s node port, and similarly replace the &lt;code&gt;&amp;lt;NODE_PORT&amp;gt;&lt;/code&gt; in the second command with your server service’s node port. Once done, we’re all set to access the Litmus portal. To do that, use the command &lt;code&gt;kubectl get nodes -o wide&lt;/code&gt; to list all the nodes in your cluster. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlwbe6t2pv9fwhsmk6nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlwbe6t2pv9fwhsmk6nf.png" alt="1_DOT0E1kS-kJ_aWX8Aj5j-A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For any of the nodes, copy its external IP and paste it into your browser URL section, followed by a &lt;code&gt;:xxxxx&lt;/code&gt; where &lt;code&gt;xxxxx&lt;/code&gt; corresponds to your node port. You should then be directed to the Litmus home screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcyc7a8g7bqswcxj0hcv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcyc7a8g7bqswcxj0hcv.png" alt="1_DKLkV01jnnjVilby6mtewA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in using the username &lt;code&gt;admin&lt;/code&gt; and the password &lt;code&gt;litmus&lt;/code&gt;. You’ll then be asked to set a new password, which you’ll use for any subsequent login. Once done, you’ll be able to access the Litmus portal dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd28njeczn1dlos852p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd28njeczn1dlos852p0.png" alt="1_WCvu2RffIvxO9tWc6ukELQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Run a Chaos Workflow
&lt;/h2&gt;

&lt;p&gt;For this demo, we’ll see how chaos workflows execute in Litmus with the help of a predefined chaos workflow template called podtato-head. It is a simple application, deployed as part of the workflow, into which we will inject pod-delete chaos.&lt;/p&gt;

&lt;p&gt;Choose Schedule a Workflow. In the workflows dashboard, choose the self agent and then choose Next. Choose Create a Workflow from Pre-defined Templates and choose podtato-head.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuokbnu6lwcp5u7y6z3iy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuokbnu6lwcp5u7y6z3iy.png" alt="1_bBmN0Fsmw5FamvJX7V7aTQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Next. Here we can define the experiment name, description, and namespace; leave the default values and choose Next. On the following screen, we can tune the workflow by editing the experiment manifest and adding, removing, or rearranging the experiments in the workflow. The podtato-head template comes with its own predefined workflow, so simply choose Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye8nd2hy8x0p0oh6iiy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye8nd2hy8x0p0oh6iiy9.png" alt="1_PZ8AkosqNIHESenfR5QAgw" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can set a reliability score for each chaos experiment in our workflow. Since this workflow has only the pod-delete chaos experiment, we can set a score for it between 1 and 10. Then choose Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq4ssucsoxtyy15bexcu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq4ssucsoxtyy15bexcu.png" alt="1_c0NyrzqdjAbZsUuFPClNPw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can schedule our workflow to execute at a set frequency, or simply schedule it to execute right away. Choose Schedule now, then choose Next. Here we can review the workflow details and make changes if required before finally executing the workflow. Choose Finish. You have successfully started the chaos workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyesaws4qj1uhqfzpyab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyesaws4qj1uhqfzpyab.png" alt="1_1eiFUcOY1hnW84w5M_nnvw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Go to Workflow and you’d see the running workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmjw5ib4r0awyrr69u5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmjw5ib4r0awyrr69u5z.png" alt="1_-fSRwcYfywGwkiCu0Kxy8Q" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the workflow and you’ll be able to see it execute through its various stages in the graphical chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffehrzaodwri2kt6sfh8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffehrzaodwri2kt6sfh8m.png" alt="1_Geyjf477RlNbAtDwxTs3lw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the workflow execution to complete; once it does, the graph will appear as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fge2vilt529th2wok9zgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fge2vilt529th2wok9zgi.png" alt="1_-Imn4mQYGRsHjHW__MLlLQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can analyze every step of the workflow in the Table View tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjagddw6pzjd5irk0xd0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjagddw6pzjd5irk0xd0t.png" alt="1_jQeLj-efOQ2oKn0Dy3Qfmw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose View Logs &amp;amp; Results to access the chaos result of the pod-delete experiment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rnl7rp6tb7ycnxfpvlf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rnl7rp6tb7ycnxfpvlf.png" alt="1_m3qvepJ0rmcSyHwjVLScFg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also observe the analytics of the workflow from the workflow dashboard page which shows a graphical analysis of the workflow run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5yp425a5zzpr6syehu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5yp425a5zzpr6syehu8.png" alt="1_GHKxARyWigg7vDnAefIH3Q" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, in this blog we saw how to set up a Google Kubernetes Engine cluster, how to install Litmus 2.0, and finally how to execute a chaos workflow using the Litmus portal. Though this is only the tip of the iceberg, I really hope you learned something new today that will be helpful on your further journey in chaos engineering.&lt;/p&gt;

&lt;p&gt;Come join the Litmus community and contribute your bit to developing chaos engineering for everyone. To join the Litmus community:&lt;br&gt;
Step 1: Join the Kubernetes slack using the following link: &lt;code&gt;https://slack.k8s.io/&lt;/code&gt;&lt;br&gt;
Step 2: Join the &lt;code&gt;#litmus&lt;/code&gt; channel on the Kubernetes slack or use this link after joining the Kubernetes slack: &lt;code&gt;https://slack.litmuschaos.io/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

</description>
      <category>litmuschaos</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>Part-2: A Beginner’s Practical Guide to Containerisation and Chaos Engineering with LitmusChaos 2.0</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Fri, 02 Jul 2021 15:16:41 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/part-2-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-253i</link>
      <guid>https://dev.to/litmus-chaos/part-2-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-253i</guid>
      <description>&lt;h2&gt;
  
  
  Part 2: Chaos Engineering 101, Using LitmusChaos 2.0 for a custom workflow chaos experiment
&lt;/h2&gt;

&lt;p&gt;This blog is part two of a two blog series that details how to get started with containerization using Docker and Kubernetes for deployment, and later how to perform chaos engineering using LitmusChaos 2.0. Find Part-1 of the blog &lt;a href="https://dev.to/neelanjan00/part-1-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-3h5c"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Say you have deployed your E-Commerce application using Kubernetes and you’re very satisfied with how flexible and stable your application deployment has come to be. During the testing of the application, it had checked all the boxes and you’re very confident that your application deployment is all set to face the peak hours of the sale next week, which will have customers all over the country trying to buy products using your application. But alas, right on the peak hours of the sale, your customers face a service outage. What went wrong? You don’t have any idea, since on a superficial level nothing seems to be out of order. Could this infelicitous situation have been avoided? Yes, using chaos engineering.&lt;/p&gt;

&lt;p&gt;In this blog, we will explore the fundamentals of chaos engineering: what chaos engineering is, why it is a necessity, how it differs from testing, the processes and principles of chaos engineering, and an introduction to cloud-native chaos engineering with a bird’s-eye view of LitmusChaos. Finally, we will perform a pod-delete chaos experiment with a custom chaos workflow on the application that we deployed in the previous blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Chaos Engineering?
&lt;/h2&gt;

&lt;p&gt;As Wikipedia defines it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system’s capability to withstand turbulent and unexpected conditions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Consider our previous example of the E-Commerce application that faced service downtime on account of the sharp rise in the number of users simultaneously accessing it: that situation could have been avoided by identifying, in advance, the factors contributing to the downtime.&lt;/p&gt;

&lt;p&gt;Chaos engineering emphasizes experiments, framed as hypotheses, whose results are compared to a defined steady state. It is also regarded as the science of “breaking things on purpose” to identify the unforeseen weaknesses of a system before they wreak havoc in production. More than just fault injection, chaos engineering is an attempt at understanding the factors that contribute to the instability of a system, and at gaining insights from the behavior of the system as a whole when it is subjected to an adverse situation, with the ultimate goal of making systems more resilient in the production environment.&lt;/p&gt;

&lt;p&gt;As an example, a distributed system could be checked for resiliency by randomly disabling the services responsible for the functioning of the system and analyzing its impact on the system as a whole.&lt;/p&gt;
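
&lt;p&gt;In the Kubernetes world, such an experiment can be expressed declaratively. As a sketch of what this looks like with LitmusChaos (all names, labels, and values below are illustrative placeholders, not taken from this tutorial), a ChaosEngine resource injecting pod-delete chaos into a deployment could be defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: demo-chaos
  namespace: default
spec:
  # the application under test, identified by namespace, labels, and kind
  appinfo:
    appns: default
    applabel: app=demo
    appkind: deployment
  engineState: active
  chaosServiceAccount: pod-delete-sa
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # total duration of the chaos injection, in seconds
            - name: TOTAL_CHAOS_DURATION
              value: "30"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;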

&lt;h2&gt;
  
  
  Why Chaos Engineering?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Recognize the dangers and consequences: By letting you create an experiment and quantify how it affects your business, chaos engineering helps you understand the influence of turbulent conditions on important applications. When companies understand what’s at risk, they can make informed judgments and react proactively to avoid or prevent losses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incident response: Because distributed systems are so complicated, there are several ways for things to go wrong. The notion of disaster recovery and business continuity is critical for firms in highly regulated contexts, such as the financial industry, because even a brief outage can be costly. By conducting chaos experiments, these industries can rehearse, prepare, and put mechanisms in place for real-life situations. When an incident occurs, chaos engineering allows teams to have the correct level of awareness, plans, and visibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application Security &amp;amp; Observability: Chaos experiments help you figure out where your systems’ monitoring and observability capabilities are lacking, as well as your team’s ability to respond to crises. Chaos engineering will help you identify areas for improvement and motivate you to make your systems more visible, resulting in better telemetry data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System Reliability: Chaos engineering enables you to create dependable and fault-tolerant software systems while also increasing your team’s trust in them. The more reliable your systems are, the more confident you can be in their ability to perform as expected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Is it the same as testing?
&lt;/h2&gt;

&lt;p&gt;In one word, no.&lt;/p&gt;

&lt;p&gt;A failure test looks at a single situation and determines whether or not a property holds. A test like this breaks the system in a predetermined way. The outcomes are often binary and do not reveal the additional information about the program that is essential for understanding the root cause of a problem.&lt;/p&gt;

&lt;p&gt;Chaos engineering’s purpose is to generate fresh information about the system when it is subjected to adversity. Because the scope is broader and the outcomes are unpredictable, one can learn more about the system’s behaviors, attributes, and performance. It therefore allows us to better understand the limitations of our system and act on them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Process of conducting Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;By and large, chaos engineering can be abstracted into the following set of processes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define the steady-state hypothesis: You should begin by imagining what could go wrong. Start with a fault injection and forecast what will happen when it goes live.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm the steady-state and perform several realistic simulations: Test your system using real-world scenarios to observe how it reacts to different stressors and events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collect data and monitor dashboards: You must assess the system’s dependability and availability. It’s ideal to employ key performance indicators that are linked to customer success or usage. We want to see how the failure compares to our hypothesis, so we’ll look at things like latency and requests per second.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Address changes and issues: After conducting an experiment, you should have a good notion of what is working and what needs to be changed. We can now predict what will cause an outage and precisely what will cause the system to fail.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Principles of Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;These concerns are described elaborately in the &lt;a href="https://principlesofchaos.org/" rel="noopener noreferrer"&gt;Principles of Chaos&lt;/a&gt; manifesto, which lays out the core principles that chaos engineering should address:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Understand your system’s normal state: Define your system’s steady state. Any chaotic experiment uses a system’s regular behavior as a reference point. You will have a better understanding of the effects of faults and failures if you understand the system when it is healthy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use of realistic bugs and failures: All experiments should be based on plausible and realistic settings. When a real-life failure is injected, it becomes clear which processes and technologies need to be upgraded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Production-level testing: Only by running tests in a production setting can you see how disruptions influence the system. If your team has little or no experience with chaos testing, allow them to experiment in a development environment first; once they are ready, test in production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Control the blast radius: A chaos test’s blast radius should always be kept as small as possible. Because these tests are conducted in a live setting, there is a chance that they will have an impact on end-users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automating chaos: Chaos experiments may be automated to the same degree as your CI/CD pipeline. Continuous chaos allows your team to continuously improve current and future systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvqfixjc9igak8vza0bt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvqfixjc9igak8vza0bt.png" alt="1_UHbTcsEAqA6RpxT_bhjv-w" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to Cloud-Native Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;Businesses love the cloud: about a third of companies’ IT budgets goes to cloud services, and the global public cloud computing market is set to exceed $330 billion in 2021. The groundbreaking shift towards cloud-native software products needs to be supplemented with the right set of tools to ensure that they are resilient against all the adverse situations that may arise in production.&lt;/p&gt;

&lt;p&gt;Enter cloud-native chaos engineering, the best of both worlds. At its core, it is about performing chaos engineering in a cloud-native, Kubernetes-first way. Four major principles define how cloud-native a chaos engineering tool or framework is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;CRDs for Chaos Management: The framework should have explicitly defined CRDs for orchestrating chaos on Kubernetes. These CRDs provide standard APIs for provisioning and managing chaos in large-scale production systems, and they are the building blocks of a chaos workflow orchestration system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Source: The framework must be completely open source, under the Apache License 2.0, to enable broad community engagement and scrutiny. With the enormous number of applications migrating to Kubernetes, only an open chaos model can thrive and gain the requisite adoption at such a wide scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extensible and Pluggable: The framework should integrate with the vast number of existing cloud-native applications, essentially built as a component that can be easily plugged into an application for chaos engineering and just as easily plugged out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Broad Community Adoption: Chaos is carried out against well-known platforms such as Kubernetes, applications such as databases, and infrastructure components such as storage and networking. These chaos experiments can be reused, and a large community can help identify and contribute more high-value scenarios. A chaos engineering system should therefore have a central hub or forge where open-source chaos experiments can be shared and code-based collaboration is possible.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;LitmusChaos is a cloud-native chaos engineering framework for Kubernetes that fulfills all four criteria listed above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenowfqhju80euqy5ghni.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenowfqhju80euqy5ghni.jpeg" alt="1_rnHIj_oz14J_PwM9qlwJYA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A Bird’s Eye View of LitmusChaos
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Litmus
&lt;/h3&gt;

&lt;p&gt;Litmus is a toolset for doing cloud-native chaos engineering. It helps both developers and SREs automate chaos experiments at different stages of the DevOps pipeline: during development, in CI/CD, and in production. Fixing the weaknesses it uncovers leads to increased resilience of the system.&lt;/p&gt;

&lt;p&gt;Litmus adopts a “Kubernetes-native” approach, defining chaos intent declaratively via custom resources. It broadly divides Kubernetes chaos experiments into two categories: application or pod-level chaos experiments and platform or infra-level chaos experiments. The former includes pod-delete, container-kill, pod-cpu-hog, pod-network-loss, etc., while the latter includes node-drain, disk-loss, node-cpu-hog, etc. The project is developed under the Apache License 2.0.&lt;/p&gt;
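&lt;p&gt;To make the declarative intent concrete, here is a minimal sketch of a ChaosEngine custom resource for the pod-delete experiment. The field values (app label, namespace, service account) are illustrative assumptions, not taken from a real installation:&lt;/p&gt;

```shell
# Hedged sketch of a minimal ChaosEngine custom resource. Field values are
# illustrative; the overall shape follows the Litmus 2.x pod-delete example.
cat > chaosengine-sketch.yaml <<'EOF'
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: hello-world-chaos
  namespace: litmus
spec:
  appinfo:
    appns: default
    applabel: app=hello-world
    appkind: deployment
  engineState: active
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"
EOF
# On a live cluster you would apply it with:
#   kubectl apply -f chaosengine-sketch.yaml
grep -c 'pod-delete' chaosengine-sketch.yaml   # the experiment is referenced once
```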

&lt;p&gt;Before we understand the architecture of Litmus, let us understand a few terminologies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Chaos Experiment: Chaos Experiments are the building blocks of the Litmus architecture. Users can develop the desired chaos workflow by choosing from freely available chaos experiments or by creating new ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Workflow: A chaos workflow is much more than a single chaos experiment. It helps the user define the intended result, observe the outcome, analyze the overall system behavior, and decide whether the system needs to be changed to improve resilience. LitmusChaos provides the infrastructure a typical development or operations team needs to design, use, and manage chaos workflows. Litmus’ teaming and GitOps features considerably aid the collaborative control of chaos workflows within teams and software organizations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio0odhjvmbj12bizzr1t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio0odhjvmbj12bizzr1t.jpeg" alt="1_dwb1lUg99sVRANP3rd5dtQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Litmus Architecture
&lt;/h2&gt;

&lt;p&gt;Litmus components can be classified into two parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Portal&lt;/li&gt;
&lt;li&gt;Agents&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Portal is the set of Litmus components that acts as a cross-cloud chaos control plane (web UI), used to orchestrate and observe chaos workflows on Agents.&lt;/p&gt;

&lt;p&gt;An Agent is the set of Litmus components that induces chaos on a Kubernetes cluster by executing the chaos workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Portal Components
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Litmus WebUI: The Litmus UI provides a web user interface where users can construct and observe chaos workflows with ease. It also acts as the cross-cloud chaos control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Litmus Server: The Litmus Server acts as middleware that handles API requests from the user interface and stores configurations and results in the DB. It also serves as the interface for relaying requests and scheduling workflows to the Agent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Litmus DB: The Litmus DB acts as a config store for chaos workflows and their results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Agent Components
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Chaos Operator: Chaos-Operator watches for the ChaosEngine CR and executes the Chaos-Experiments mentioned in the CR. Chaos-Operator is namespace scoped. By default, it runs in &lt;code&gt;litmus&lt;/code&gt; namespace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CRDs: During installation, the following three CRDs are installed on the Kubernetes cluster: &lt;code&gt;chaosexperiments.litmuschaos.io&lt;/code&gt;, &lt;code&gt;chaosengines.litmuschaos.io&lt;/code&gt;, and &lt;code&gt;chaosresults.litmuschaos.io&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Experiment: Chaos Experiment is a CR and is available as YAML files on Chaos Hub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Engine: The ChaosEngine CR connects experiments to applications. The user constructs the ChaosEngine YAML by specifying the app label and the experiments, and then applies the CR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Results: ChaosResult is a namespace-scoped resource that stores the results of a ChaosExperiment. The experiment itself creates or updates it at runtime. It holds critical information such as the ChaosEngine reference, the experiment state, the experiment verdict (on completion), and key application/result properties. It can also be used to collect metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Probes: Litmus probes are pluggable checks that can be defined for any chaos experiment within the ChaosEngine. The experiment pods carry out these checks based on the mode they are defined in, and their success is factored into the experiment’s verdict (along with the standard “in-built” checks).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chaos Exporter: Metrics can be exported to a Prometheus database if desired. The Prometheus metrics endpoint is implemented by Chaos-Exporter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subscriber: The Subscriber is the component on the Agent side that communicates with the Litmus Server to obtain chaos workflow data and return the results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
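&lt;p&gt;As a quick sanity check, the three CRDs listed above can be compared against what is actually installed. A hedged sketch, with the cluster-side call shown as a comment since it needs a live cluster:&lt;/p&gt;

```shell
# The three CRDs that a Litmus installation is expected to register.
expected_crds='chaosexperiments.litmuschaos.io
chaosengines.litmuschaos.io
chaosresults.litmuschaos.io'
# On a live cluster, compare against the actual list with:
#   kubectl get crds -o name | grep litmuschaos.io
count=$(echo "$expected_crds" | grep -c 'litmuschaos.io')
echo "$count CRDs expected"   # prints "3 CRDs expected"
```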

&lt;h2&gt;
  
  
  Demo: Performing the pod-delete experiment with a custom chaos workflow on a Kubernetes deployment
&lt;/h2&gt;

&lt;p&gt;Let’s get the hang of Litmus with a very simple chaos experiment: the &lt;code&gt;pod-delete&lt;/code&gt; experiment. Essentially, we’d like to see whether our Kubernetes deployment from the last blog is resilient to accidental pod deletion.&lt;/p&gt;

&lt;p&gt;Here’s what we’ll do: first we’ll install Litmus, then we’ll define our custom workflow, and finally we’ll analyze the results of our experiment. Simple, isn’t it?&lt;/p&gt;

&lt;p&gt;Our application deployment can be viewed using the &lt;code&gt;kubectl get deployments&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0813rggh4sqa19f9qrn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0813rggh4sqa19f9qrn.png" alt="1_RpyGgHKGv3pCyA36CTTabQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and the associated pods can be viewed using &lt;code&gt;kubectl get pods&lt;/code&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02d5mgjiivdajdnwo0bt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02d5mgjiivdajdnwo0bt.png" alt="1_O3CbUjMfVW6hqzomgpl-aQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before installing Litmus, let us check the pre-requisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kubernetes 1.15 or later&lt;/li&gt;
&lt;li&gt;A persistent volume of 20GB (Recommended)&lt;/li&gt;
&lt;li&gt;Helm 3 or kubectl&lt;/li&gt;
&lt;/ol&gt;
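&lt;p&gt;Before proceeding, a small pre-flight sketch can confirm the client tooling is available. Only local checks are shown, since cluster-side version checks need a live connection:&lt;/p&gt;

```shell
# Check that the required CLI tools are on PATH before installing Litmus.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found - install it before proceeding"
  fi
done | tee preflight.txt
# With a live cluster you would additionally confirm the server version, e.g.:
#   kubectl version -o json
```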

&lt;p&gt;Once we’re good with these, let’s follow the installation steps. Litmus can be installed either using Helm or using Kubectl. Let’s try to install it using Helm:&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP 1: Create a Litmus namespace in Kubernetes
&lt;/h4&gt;

&lt;p&gt;Litmus installs within the litmus namespace. So let us create the namespace with the command: &lt;code&gt;kubectl create namespace litmus&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP 2: Add the Litmus Helm Chart
&lt;/h4&gt;

&lt;p&gt;Clone the litmus repo and move into the cloned directory using the command: &lt;code&gt;git clone https://github.com/litmuschaos/litmus-helm &amp;amp;&amp;amp; cd litmus-helm&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP 3: Install Litmus
&lt;/h4&gt;

&lt;p&gt;The Helm chart installs all the CRDs, the required service account configuration, and the chaos-operator needed for both the core services and the portal to run. Use the command &lt;code&gt;helm install litmuschaos --namespace litmus ./charts/litmus-2-0-0-beta/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And that’s it, you’re all set up to use Litmus! You can verify the installation by viewing all the resources installed under the &lt;code&gt;litmus&lt;/code&gt; namespace using &lt;code&gt;kubectl get all -n litmus&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyt7ro8t0z2pacsglnqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyt7ro8t0z2pacsglnqg.png" alt="1_Ed2ylmNDs1MaRbPoao7_6A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It might take a while for all the resources to get ready and eventually you’d see something as depicted above. Here we can observe that we have three pods running: &lt;code&gt;litmus-frontend&lt;/code&gt;, &lt;code&gt;litmus-backend&lt;/code&gt;, and &lt;code&gt;mongo&lt;/code&gt;. These comprise the Litmus WebUI, Litmus Server, and Litmus DB respectively, as discussed earlier.&lt;/p&gt;

&lt;p&gt;We also have three services for our application, namely &lt;code&gt;litmusportal-frontend-service&lt;/code&gt;, &lt;code&gt;litmusportal-backend-service&lt;/code&gt;, and &lt;code&gt;mongo-service&lt;/code&gt;. These services maintain the endpoints for the pods which we saw earlier.&lt;/p&gt;

&lt;p&gt;These pods are created by two deployments, namely &lt;code&gt;litmusportal-frontend&lt;/code&gt; and &lt;code&gt;litmusportal-backend&lt;/code&gt;, which also specify the ReplicaSets for them.&lt;/p&gt;

&lt;p&gt;Lastly, the MongoDB database, being stateful, uses a StatefulSet named &lt;code&gt;mongo&lt;/code&gt; so that its data persists even if the &lt;code&gt;mongo&lt;/code&gt; pod dies and restarts.&lt;/p&gt;

&lt;p&gt;Once all these resources are ready, we can proceed to the Litmus portal. For this, we will access the &lt;em&gt;NodePort&lt;/em&gt; service &lt;code&gt;litmusportal-frontend-service&lt;/code&gt;. The &lt;em&gt;nodePort&lt;/em&gt; assigned to the &lt;code&gt;litmusportal-frontend-service&lt;/code&gt; on my machine has a mapping of &lt;code&gt;9091:30628&lt;/code&gt;, where &lt;code&gt;9091&lt;/code&gt; is the specified &lt;code&gt;targetPort&lt;/code&gt; and &lt;code&gt;30628&lt;/code&gt; is the assigned &lt;code&gt;nodePort&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you’re using Minikube, you can port-forward any unused port of your choice to the &lt;code&gt;litmusportal-frontend-service&lt;/code&gt; in order to access it at that port. For example, if I wish to access the &lt;code&gt;litmusportal-frontend-service&lt;/code&gt; at the &lt;code&gt;3000&lt;/code&gt; port, I’d use the command &lt;code&gt;kubectl port-forward svc/litmusportal-frontend-service 3000:9091 -n litmus&lt;/code&gt;. Once done, simply access the Litmus portal at &lt;code&gt;http://127.0.0.1:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you’re using any other Kubernetes platform, you can directly access the Litmus portal at the &lt;code&gt;nodePort&lt;/code&gt;, given you have a firewall rule allowing ingress at that port. For example, I have a &lt;code&gt;nodePort&lt;/code&gt; of &lt;code&gt;30628&lt;/code&gt;, hence I can directly access the Litmus portal at &lt;code&gt;http://127.0.0.1:30628&lt;/code&gt;.&lt;/p&gt;
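&lt;p&gt;The assigned &lt;code&gt;nodePort&lt;/code&gt; can also be read straight off the service. A hedged sketch, with the cluster-side query shown as a comment and &lt;code&gt;30628&lt;/code&gt; used as the example value from above:&lt;/p&gt;

```shell
# On a live cluster, look up the nodePort assigned to the frontend service:
#   kubectl get svc litmusportal-frontend-service -n litmus \
#     -o jsonpath='{.spec.ports[0].nodePort}'
NODE_PORT=30628   # example value from this walkthrough; yours will differ
echo "http://127.0.0.1:${NODE_PORT}"   # prints "http://127.0.0.1:30628"
```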

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g2vl1qokjowgyn1vunu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g2vl1qokjowgyn1vunu.png" alt="1_P-EBZxJGoQ8YQFkEQ5X7Xw" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The default username is &lt;code&gt;admin&lt;/code&gt; and the default password is &lt;code&gt;litmus&lt;/code&gt;. Once you log in, you’d be prompted to enter a new password. Once that’s done, you’d find yourself in the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikzyxc3bfuygckop5y7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikzyxc3bfuygckop5y7n.png" alt="1_ESqD9pbxN8w-2rfzVpKKKQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a quick tour of the portal. The Dashboard is where you see the workflows that are currently executing or have executed previously, the number of agents connected to the portal, and the number of projects and invitations for collaboration. Inside Workflows, you get an elaborate view of running and past workflows, along with their respective analytics and logs. ChaosHubs is where you access chaos experiments; by default you can access all the experiments listed under Litmus’ ChaosHub, but you can also set it up with your own hub. Under Analytics you get a detailed analysis of the different aspects of your chaos workflows, using graphs. Finally, Settings allows you to modify your personal information, automate workflows using GitOps, and more.&lt;/p&gt;

&lt;p&gt;Let’s proceed with the workflow creation for our pod-delete experiment. In Dashboard, click on the Schedule a Workflow button and choose the self-agent, since the application we’re targeting is deployed within the same Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v3q9udr3y3u5wbyo0sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v3q9udr3y3u5wbyo0sz.png" alt="1_YC0HEGqAjgzt1b3_nYtAPQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. Select “Create a new workflow using the experiments from MyHub” and, from the dropdown, choose “Chaos Hub”. This is because the experiment we’re trying to perform here, i.e. the pod-delete experiment, is part of the Chaos Hub, so we can use it directly without the need to define it ourselves.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvixohrti6j000wpx68n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvixohrti6j000wpx68n.png" alt="1_jiT3AhFrMSBINekc9BvrGQ" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. Under the Workflow Settings, rename your workflow to any name of your choice. Don’t alter the namespace since our Litmus installation is supposed to use the &lt;code&gt;litmus&lt;/code&gt; namespace only. Add a description of your own choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27sjlyuojr5xcohznsh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27sjlyuojr5xcohznsh0.png" alt="1_vj4Wz7GnoE3l5l15Fgmv5A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. Inside Tune Workflow, you’ll currently see only one step listed in the flow chart: “install-chaos-experiments”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F092axjpr07l9ekd5wvik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F092axjpr07l9ekd5wvik.png" alt="1_YYEQ7efbd7qswwU_qTV4-Q" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Add a new experiment and choose generic/pod-delete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w53693b6i9nlg6bhe5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w53693b6i9nlg6bhe5d.png" alt="1_EinYdPuGhg7eyNLpxv4kcg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqnpxhnjtxp10dvqiz2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqnpxhnjtxp10dvqiz2i.png" alt="1_YXb30oN3jFSbO0XFnLYy0Q" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can observe, a second step, “pod-delete”, is now listed in the flow chart. Although for the purpose of this demo we’ll keep the workflow simple with only one experiment, you can design an entire workflow by adding more experiments in the desired sequence. It’s as simple as that!&lt;/p&gt;

&lt;p&gt;Now, we’ll specify the target deployment on which the chaos workflow will be executed. Click on Edit YAML and scroll down to line 134 in the editor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcizl54vao1tbxyiwqxve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcizl54vao1tbxyiwqxve.png" alt="1_oK4GE6K1PBBU4pyfTeEnKg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we need to specify the &lt;code&gt;appLabel&lt;/code&gt; of our hello-world application by overriding the default &lt;code&gt;app=nginx&lt;/code&gt; value. To check the label of our deployment, we can use the &lt;code&gt;kubectl get deployments --show-labels&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ei2mu7iz16trjf4fijl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ei2mu7iz16trjf4fijl.png" alt="1_GfHOgbIXkS2g7LWotcnkeQ" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The label of our deployment is &lt;code&gt;app=hello-world&lt;/code&gt; so we’d simply replace &lt;code&gt;app=nginx&lt;/code&gt; with &lt;code&gt;app=hello-world&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9f1yf9esxs1hzdzz5st.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9f1yf9esxs1hzdzz5st.png" alt="1_Q6FCkE7WouTfjC_quKZ88A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
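&lt;p&gt;After the edit, the relevant &lt;code&gt;appinfo&lt;/code&gt; section of the manifest would look roughly like this (a sketch; the surrounding fields and the &lt;code&gt;appns&lt;/code&gt; value are assumptions):&lt;/p&gt;

```shell
# Sketch of the appinfo section after replacing the default label; the
# appns value is an assumption and other fields are omitted for brevity.
cat > appinfo-snippet.yaml <<'EOF'
appinfo:
  appns: default
  applabel: app=hello-world
  appkind: deployment
EOF
grep -q 'app=hello-world' appinfo-snippet.yaml && echo "label updated"
```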

&lt;p&gt;Click Save Changes. Back in the portal, keep Revert Schedule set to TRUE. This ensures that all the changes made to the system during the execution of the workflow are reverted once the workflow completes. Click Next.&lt;/p&gt;

&lt;p&gt;In Reliability Score, we can assign a numeric score between 1 and 10 to each of our experiments. Litmus uses this weightage while calculating the resiliency score at the end of the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i5foud53topw8g4opto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i5foud53topw8g4opto.png" alt="1_dskugwWlsj2B0dJUSkXzHw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. Inside Schedule, we can schedule our workflow to execute at a set frequency, or simply schedule it to execute right away. Choose “Schedule now”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyzfx9jt0lho9an7re3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyzfx9jt0lho9an7re3x.png" alt="1_bTKajnGdzAa4Yzwr-8_ovA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. Inside Verify and Commit, you can review your workflow details and make changes if required before finally executing the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah886brxa39ewmsxk2zs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah886brxa39ewmsxk2zs.png" alt="1_CIHMkM-LULK9TZoIlSqC8w" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’re satisfied with the configuration, click Finish. You have successfully started the Chaos Workflow now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk547apz0ixh65n3wp1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk547apz0ixh65n3wp1a.png" alt="1_a-im6XW6sgaYnE4oyUMJPA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Go to Workflow and choose the currently running workflow from the Workflows dashboard. You can see a graphical view of the workflow as it actively performs the pod-delete experiment on our deployed application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzqgem795o3s2wsnr8gk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzqgem795o3s2wsnr8gk.png" alt="1_FSUxFmyaHyjCO2B7h_NurQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In effect, this workflow first installs the chaos experiment resources in the cluster, then randomly chooses and deletes one of the three pods in our application deployment, and finally cleans up the chaos experiment resources.&lt;/p&gt;

&lt;p&gt;For the experiment to be successful, the application deployment is therefore expected to spin up a new pod in place of the forcefully deleted one. If a new pod fails to spin up, our system is definitely not resilient.&lt;/p&gt;
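&lt;p&gt;The same recovery can be verified outside the portal from the command line. A hedged sketch, with the cluster-side commands shown as comments:&lt;/p&gt;

```shell
# On a live cluster, watch the deployment replace the deleted pod:
#   kubectl get pods -l app=hello-world -w
# and inspect the verdict the experiment records once it completes:
#   kubectl describe chaosresult -n litmus
# Locally, the success criterion reduces to the ready replica count recovering:
desired=3; ready=3   # post-recovery values for our three-replica deployment
[ "$ready" -eq "$desired" ] && echo "resilient"   # prints "resilient"
```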

&lt;p&gt;After a few minutes, you’d be able to see that the experiment has successfully completed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t9mo83ikdo05xvjrm24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t9mo83ikdo05xvjrm24.png" alt="1_iX5eYeYwaKKhkZYUEc3ePw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the steps of the experiment were successful, and our system was able to cope even with a forceful pod delete. We can further analyze the details of the experiment inside the Table View:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqnx579bhyp3shlsfssd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqnx579bhyp3shlsfssd.png" alt="1_tyE7JX0iJv7fIhtD4ZCCtw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, we have obtained a 100% resilience score, and other details of our experiment are also visible here. Furthermore, we can analyze the workflow analytics to better understand the impact of the workflow on our system. Click Back, and in the Workflow dashboard, click on the options ellipsis icon:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjawl50snh3y2yr3x46ot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjawl50snh3y2yr3x46ot.png" alt="1_1JXI5e0AIonB1YvpCr2DjA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Show the analytics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl1ybqpannidufkcb5x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl1ybqpannidufkcb5x9.png" alt="1_zloC_QTsfUqwWbVHTB08kg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The analytics graph shows the sequence of chaos experiments, the resiliency score, and the state of other parameters over the timeline of the execution of the workflow.&lt;/p&gt;

&lt;p&gt;To conclude this blog series, let us appreciate how we went all the way from containers to chaos engineering: starting by learning how to Dockerize a Node.js application, then deploying our Docker container using Kubernetes, and finally using LitmusChaos to perform a pod-delete chaos experiment on our application.&lt;/p&gt;

&lt;p&gt;Once again, welcome to the world of containers and chaos engineering. Come join me at the &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Litmus community&lt;/a&gt; to contribute your bit to developing chaos engineering for everyone. Stay updated on the latest Litmus trends through the Kubernetes &lt;a href="https://kubernetes.slack.com/messages/CNXNB0ZTN" rel="noopener noreferrer"&gt;Slack&lt;/a&gt; channel (look for the #litmus channel).&lt;/p&gt;

&lt;p&gt;Don’t forget to share these resources with someone who you think might benefit from them. Thank you. 🙏    &lt;/p&gt;

</description>
      <category>litmuschaos</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>Part-1: A Beginner's Practical Guide to Containerisation and Chaos Engineering with LitmusChaos 2.0</title>
      <dc:creator>Neelanjan Manna</dc:creator>
      <pubDate>Fri, 02 Jul 2021 14:46:31 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/part-1-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-3h5c</link>
      <guid>https://dev.to/litmus-chaos/part-1-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-3h5c</guid>
      <description>&lt;h2&gt;
  
  
  Part 1: Containers 101, Deploy a Node.js App Using Docker and Kubernetes
&lt;/h2&gt;

&lt;p&gt;This blog is part one of a two blog series that details how to get started with containerization using Docker and Kubernetes for deployment, and later how to perform chaos engineering using LitmusChaos 2.0. Find Part-2 of the blog &lt;a href="https://dev.to/neelanjan00/part-2-a-beginner-s-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-253i"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So you’ve just come across this term called Containerisation, and now you’re wondering, as an aspiring software product engineer or a DevOps engineer, what role it will play in your day-to-day work. After all, applications can be deployed without containers; in fact, that was the norm for a long time until containerization technologies like Docker and Kubernetes came into the picture. So what’s all the fuss about?&lt;/p&gt;

&lt;p&gt;In this blog, I’ll try to answer all your questions: what containers are, how they got into the limelight, why one should use them, what container orchestration is, and the advantages of container orchestration. Finally, we will deploy a Node.js application using Docker and Minikube Kubernetes. Please note that this blog puts more emphasis on the practical aspects of using containers; it won’t cover the basic theoretical concepts of Docker and Kubernetes at large, but only those concepts necessary to understand the demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Containers?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqz0089a63dyze0nj4fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqz0089a63dyze0nj4fm.png" alt="1_gFhfSHI_yJehrdIXL9O_pg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As Docker defines it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Simply put, containers allow applications to run in an isolated environment of their own, along with all their dependencies. This decoupling makes it simple and consistent to bundle and deploy container-based applications, regardless of whether the target environment is a private data center, the public cloud, or even a developer’s personal laptop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Difference between Containers and Virtual Machines
&lt;/h2&gt;

&lt;p&gt;In conventional virtualization, a hypervisor virtualizes physical hardware. As a result, each virtual machine contains a guest OS, a virtual copy of the hardware that the OS needs to run, and an application with all of its related libraries and dependencies. Multiple virtual machines running different operating systems can coexist on the same physical server: a Linux VM, for example, can run alongside a Windows VM, and so on.&lt;/p&gt;

&lt;p&gt;Containers virtualize the operating system (typically Linux or Windows) rather than the underlying hardware, so each container contains only the application and its libraries and dependencies. Containers are small, fast, and portable since, unlike virtual machines, they do not need a guest OS in every instance and can instead rely on the host OS’s features and resources.&lt;/p&gt;

&lt;p&gt;Containers, like virtual machines, allow developers to make better use of a physical machine’s CPU and memory. Containers go further, however, because they support microservice architectures, which allow for more granular deployment and scaling of application components. This is a more appealing option than scaling up an entire monolithic application just because a single component is experiencing load issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we should Use Containers
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Faster time to market: Maintaining a competitive advantage requires delivering new software and services quickly. Containerization gives organizations the agility to accelerate the rollout of new services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment velocity: Containerization enables a quicker move from development to production. It allows DevOps teams to reduce deployment times and increase deployment frequency by breaking down barriers between teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduction of IT infrastructure: Containerization increases workload density, improves the utilization of your servers’ compute capacity, and cuts software licensing costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance in IT operations: Containerization allows developers to streamline and automate the management of multiple applications and resources under a single operating model, improving operational performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Obtain greater freedom of choice: Any public or private cloud can be used to package, ship, and run applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is Container Orchestration
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzav2u9r5m5yjhi4lius.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzav2u9r5m5yjhi4lius.png" alt="Alt Text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As VMware defines it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Container orchestration is the automation of much of the operational effort required to run containerized workloads and services. This includes a wide range of things software teams need to manage a container’s lifecycle, including provisioning, deployment, scaling (up and down), networking, load balancing and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Running containers in production can easily become a huge effort due to their lightweight and ephemeral nature. When used in conjunction with microservices, which usually run in their own containers, a containerized application can result in hundreds or thousands of containers being used to construct and run any large-scale system.&lt;br&gt;
If handled manually, this adds a lot of difficulty. Container orchestration, which offers a declarative way of automating much of the job, is what makes this complexity manageable for development and operations (DevOps) teams. This makes it a natural match for DevOps teams and cultures, which aim for much greater speed and agility than conventional software development teams.&lt;/p&gt;
&lt;h2&gt;
  
  
  Advantages of Container Orchestration
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Improved Resilience: Container orchestration software can improve stability by automatically restarting or scaling a container or cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simplification of Operations: The most significant advantage of container orchestration, and the primary explanation for its popularity, is simplified operations. Containers add a lot of complexity, which can easily spiral out of control if you don’t use container orchestration to keep track of it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhanced Security: Container orchestration’s automated approach contributes to the protection of containerized applications by reducing or removing the risk of human error.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Demo: Deploy a Node.js App Using Docker and Kubernetes
&lt;/h2&gt;

&lt;p&gt;Let’s get our hands dirty by deploying a simple Node.js application using a Docker container, followed by deploying the container image to a Minikube Kubernetes cluster on our own development machine.&lt;br&gt;
Before we move on to the actual demo, let's check a few prerequisites off the list so that we are all on the same page:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Node.js version 10.19.0&lt;/li&gt;
&lt;li&gt;Docker version 20.10.6&lt;/li&gt;
&lt;li&gt;Minikube version 1.20.0&lt;/li&gt;
&lt;li&gt;Virtualbox 6.1.6&lt;/li&gt;
&lt;li&gt;Kubectl&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's worth mentioning that I will be using a machine running Ubuntu 20.04 for this demo, though you should be fine with a Windows machine too. We won’t cover the installation of Docker and Minikube, since it is pretty straightforward and requires no special instructions; we will focus on the deployment part only.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Node.js Application
&lt;/h3&gt;

&lt;p&gt;Let’s start with a very basic Node.js “Hello World” application for this demo. The application is developed like any other Node.js application, after initializing an empty project using &lt;code&gt;npm init&lt;/code&gt;. The generated &lt;code&gt;package.json&lt;/code&gt; is as follows:&lt;br&gt;
&lt;/p&gt;
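&lt;p&gt;The original gist embed isn’t reproduced in this feed; a minimal &lt;code&gt;package.json&lt;/code&gt; of the kind &lt;code&gt;npm init&lt;/code&gt; generates (field values are illustrative, not the author’s exact file) would look something like:&lt;/p&gt;

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "A simple Node.js Hello World server",
  "main": "index.js",
  "author": "",
  "license": "ISC"
}
```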


&lt;p&gt;Once that’s done, we can set up our &lt;code&gt;index.js&lt;/code&gt; as follows:&lt;/p&gt;

&lt;p&gt;Here we have a pretty basic Node.js server; all it does is serve the string ‘Hello World’ when a GET request is sent to the loopback address at port 3000. Upon executing the above code using &lt;code&gt;node index.js&lt;/code&gt;, we obtain the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1xzixdq5pe9riv4ele.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1xzixdq5pe9riv4ele.png" alt="1_pkSh-I2OvDxst-GL5ruK0g" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we do get our Hello World at &lt;code&gt;http://127.0.0.1:3000/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqengv0xlnem9oce8y1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqengv0xlnem9oce8y1p.png" alt="1_D2X66IR766_m_9cUIFBZcw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simple, right? Once that’s done, let’s move on to the sweet part, creating a container image of our application using Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Docker
&lt;/h3&gt;

&lt;p&gt;Once we have our application up and running, we can proceed to Dockerize it. But before we do that, let us add a startup script to our &lt;code&gt;package.json&lt;/code&gt; so that our application can be readily executed by Docker. We therefore modify our &lt;code&gt;package.json&lt;/code&gt; as follows:&lt;/p&gt;


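&lt;p&gt;The gist embed is not reproduced here; a sketch of the modified &lt;code&gt;package.json&lt;/code&gt;, with a &lt;code&gt;start&lt;/code&gt; script added so Docker can launch the app via &lt;code&gt;npm start&lt;/code&gt; (illustrative field values), might look like:&lt;/p&gt;

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "A simple Node.js Hello World server",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "ISC"
}
```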


&lt;p&gt;Now we are all set to Dockerize our application! To do that, we simply need to create a Dockerfile in the same directory as our &lt;code&gt;index.js&lt;/code&gt;. A &lt;code&gt;Dockerfile&lt;/code&gt; is simply a set of instructions for building the container image of the application. Our &lt;code&gt;Dockerfile&lt;/code&gt; looks something like this:&lt;/p&gt;


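&lt;p&gt;The embedded gist isn’t shown in this feed; reconstructing it from the walkthrough that follows (base image, working directory, dependency-layer caching, and start command as described), the &lt;code&gt;Dockerfile&lt;/code&gt; would look roughly like:&lt;/p&gt;

```dockerfile
# Base image: a slim Node.js v12 image
FROM node:12-slim

# All subsequent instructions run relative to /app
WORKDIR /app

# Copy package.json first so the dependency layer can be cached
COPY package.json /app

# Install the app dependencies listed in package.json
RUN npm install

# Copy the remaining application files
COPY . /app

# Default command: start the Node.js server
CMD ["npm", "start"]
```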


&lt;p&gt;Let us walk through each of these commands to better understand their purpose.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;FROM&lt;/code&gt; instruction initializes a new build stage and sets the Base Image for subsequent instructions. A Base Image is simply a Docker image that has no parent image, created using the &lt;code&gt;FROM scratch&lt;/code&gt; directive. As such, a valid &lt;code&gt;Dockerfile&lt;/code&gt; must start with a &lt;code&gt;FROM&lt;/code&gt; instruction. Here we are using the &lt;code&gt;node:12-slim&lt;/code&gt; Base Image.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;WORKDIR&lt;/code&gt; command is used to set the working directory for all the subsequent &lt;code&gt;Dockerfile&lt;/code&gt; instructions. If the &lt;code&gt;WORKDIR&lt;/code&gt; is not manually created, it gets created automatically during the processing of the instructions. It does not create new intermediate Image layers. Here we set our working directory as &lt;code&gt;/app&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;COPY&lt;/code&gt; instruction copies new files or directories from &lt;code&gt;&amp;lt;src&amp;gt;&lt;/code&gt; and adds them to the filesystem of the container at the path &lt;code&gt;&amp;lt;dest&amp;gt;&lt;/code&gt;. Here we are copying the &lt;code&gt;package.json&lt;/code&gt; file to the &lt;code&gt;/app&lt;/code&gt; directory. Interestingly, we don’t copy the rest of the files into &lt;code&gt;/app&lt;/code&gt; just yet. Can you guess why? This is because we’d like Docker to cache the layers produced by the first three instructions, so that subsequent runs of &lt;code&gt;docker build&lt;/code&gt; can reuse them unless &lt;code&gt;package.json&lt;/code&gt; changes, thus improving our build speed.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;RUN&lt;/code&gt; command can be used in two ways: either through a &lt;code&gt;Dockerfile&lt;/code&gt;, as shown here, or through the Docker CLI. The &lt;code&gt;RUN&lt;/code&gt; instruction executes any command in a new layer on top of the current image and commits the result. The resulting committed image is used for the next step in the &lt;code&gt;Dockerfile&lt;/code&gt;. Here we use the &lt;code&gt;RUN&lt;/code&gt; instruction to install the Node.js app dependencies from the &lt;code&gt;package.json&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Next, we use &lt;code&gt;COPY&lt;/code&gt; to move all the remaining files to the &lt;code&gt;/app&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Finally, we use the &lt;code&gt;CMD&lt;/code&gt; command to execute the Node.js application. The main purpose of a &lt;code&gt;CMD&lt;/code&gt; is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case we must specify an &lt;code&gt;ENTRYPOINT&lt;/code&gt; instruction as well. There can only be one &lt;code&gt;CMD&lt;/code&gt; instruction in a &lt;code&gt;Dockerfile&lt;/code&gt;. If we list more than one &lt;code&gt;CMD&lt;/code&gt; then only the last &lt;code&gt;CMD&lt;/code&gt; will take effect.&lt;/p&gt;

&lt;p&gt;And we’re done with the &lt;code&gt;Dockerfile&lt;/code&gt;. Now we’re all set to build our container image, which can be done using the command &lt;code&gt;docker build -t hello-world:1.0 .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The only important takeaway here is that we MUST tag our image via the &lt;code&gt;-t&lt;/code&gt; flag, as it will make it much easier to deploy and manage the container image later on. Here we have named our container image &lt;code&gt;hello-world&lt;/code&gt; and tagged it with its version, &lt;code&gt;1.0&lt;/code&gt;. We saw a similar practice with our Base Image &lt;code&gt;node:12-slim&lt;/code&gt; as well. Lastly, we specified the directory where the &lt;code&gt;Dockerfile&lt;/code&gt; is located using the &lt;code&gt;.&lt;/code&gt; path.&lt;/p&gt;

&lt;p&gt;Upon building the image, Docker tries to fetch the Base Image if it's not present in the local registry. Next, it executes the set of instructions given in the &lt;code&gt;Dockerfile&lt;/code&gt; in sequential order to complete the build process. Here’s the build output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0yfgzumebcqlge4rli4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0yfgzumebcqlge4rli4.png" alt="1_DoAfrP-gVtUdTtuwjojPDA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon a successful build, we can view our image using the command &lt;code&gt;docker images&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xpuox4v4djw7zprrtib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xpuox4v4djw7zprrtib.png" alt="1_S70NkGEe-pJEFqLeI73fvw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the node image is also present here since we have used it as our Base Image. Now we’re all set to run our application. Let’s run our docker container using the command &lt;code&gt;docker run -it -p 3000:3000 hello-world:1.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here we’re specifying that we want to run the container image in interactive mode using the &lt;code&gt;-it&lt;/code&gt; flag. Further, we specify the port mapping of our container using the &lt;code&gt;-p&lt;/code&gt; flag, mapping port &lt;code&gt;3000&lt;/code&gt; of the host machine to port &lt;code&gt;3000&lt;/code&gt; of the container. Finally, we specify the name and tag of the image to be run. Hence we obtain:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F737xa5d25kkdqyj5dytg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F737xa5d25kkdqyj5dytg.png" alt="1_8NS5v2Z0dI3CbsTkkDDGQg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thus, we have successfully deployed a container image of our application using Docker, which we can verify by visiting &lt;code&gt;http://127.0.0.1:3000/&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r1bqxktjep7g60k0bhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r1bqxktjep7g60k0bhi.png" alt="1_OsjzHp4Zq-EOT4fvcWFLPw" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Further, we can run the container in detached mode by specifying &lt;code&gt;-d&lt;/code&gt; flag in place of &lt;code&gt;-it&lt;/code&gt; in the previous command: &lt;code&gt;docker run -d -p 3000:3000 hello-world:1.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex36upy2bjaenf751y0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex36upy2bjaenf751y0h.png" alt="1_3X2tH4zmhOHwa0XOUaiXhQ" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The long alphanumeric string in the output is nothing but the full identifier of the running container. We can further inspect the properties of this container using the command &lt;code&gt;docker container ls&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcho3ctn8fbzodfvkc85z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcho3ctn8fbzodfvkc85z.png" alt="1_lXCz8qRPYslhDhniyQ7ntg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazing! We have just deployed our very first Docker container and now we’re all set for our next destination: Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Kubernetes
&lt;/h3&gt;

&lt;p&gt;As the container image of our application is now ready, all that’s left is to deploy it via a Kubernetes Deployment. We’ll use a Minikube cluster to deploy our container image locally.&lt;/p&gt;

&lt;p&gt;It's important to note that a Minikube cluster consists of a single node, which will be created here using a virtual machine running on our own machine.&lt;/p&gt;

&lt;p&gt;To start Minikube, we can use the command &lt;code&gt;minikube start&lt;/code&gt;. It's worth pointing out that a minimum of 2 CPUs, 2GB of RAM, and 20GB of disk space is required for starting a Minikube cluster with this command. One may check the number of processing units on their machine using the &lt;code&gt;nproc&lt;/code&gt; command: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1txedgfymgnnjnece1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1txedgfymgnnjnece1l.png" alt="1_cXpcfP4upw1FKwMiuJnN8g" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the Minikube cluster starts up, you’d get the following output in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bahk2stjg9jofex3hbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bahk2stjg9jofex3hbd.png" alt="1_3CJThtZ7ENmu4xq_w_c10g" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that our Minikube cluster is up and running, let’s devise a deployment strategy for our container image.&lt;/p&gt;

&lt;p&gt;Currently, our container image is stored locally in our own Docker Local Registry, which is present on our machine. A Registry is nothing but a stateless, highly scalable server-side application that stores and distributes Docker images. So, we can either use the image directly from the Local Registry to deploy it to the Minikube cluster, or we can first push our image to a Hosted Registry such as Docker Hub and later pull it for deployment. The latter is the more suitable approach when working in a team.&lt;/p&gt;

&lt;p&gt;The former approach, however, has an often-overlooked catch. Minikube comes with its own Docker ecosystem when we install it on our machine. If we build Docker images on our computer and try to use them in a Kubernetes deployment, we will get the ErrImageNeverPull error, since Minikube always tries to get the image from its own Local Registry or from Docker Hub, resulting in an error when the pod is started.&lt;br&gt;
To verify this, let’s do a small experiment. We can still view our Local Registry images using the command &lt;code&gt;docker images&lt;/code&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F674oo9u5im9q3kn09yrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F674oo9u5im9q3kn09yrw.png" alt="1_FwbZxo81t-HFCHS8UzEykg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s try to access Minikube’s Docker images. To do that, we first need to SSH into Minikube’s VM using the command &lt;code&gt;minikube ssh&lt;/code&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmezxr7gzuvafdqwb1m43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmezxr7gzuvafdqwb1m43.png" alt="1_gctsJnt-DrARxY4YG6kUag" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we are inside Minikube’s VM, we can again use the &lt;code&gt;docker images&lt;/code&gt; command, and this time neither the hello-world image nor the node image is anywhere to be found:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp0hp0e2n070d3i2gyiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp0hp0e2n070d3i2gyiy.png" alt="1_LQQJ9zEPBj1-ZDx8Sgg55w" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The SSH session can be exited using the &lt;code&gt;exit&lt;/code&gt; command.&lt;br&gt;
To get around this issue, we have two options: either we can push our image to a Hosted Registry first and then pull it into our Minikube VM’s Docker, or we can build our Docker image directly using Minikube’s Docker daemon. Let’s deploy our container image using the second approach.&lt;/p&gt;

&lt;p&gt;First, we need to set the environment variables using the eval command: &lt;code&gt;eval $(minikube docker-env)&lt;/code&gt;. This makes Minikube’s Docker daemon the one used for subsequent commands. You can confirm that Minikube’s Docker is now being used by running the &lt;code&gt;docker images&lt;/code&gt; command again; this time you’ll find the images from your Minikube Docker: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldq5i6doqc9lti9sk9lz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldq5i6doqc9lti9sk9lz.png" alt="1_LQQJ9zEPBj1-ZDx8Sgg55w (1)" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, build the Docker image as you normally would, using the command &lt;code&gt;docker build -t hello-world:1.0 .&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82wxm731gb1jo34rfds3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82wxm731gb1jo34rfds3.png" alt="1_gzdTT8KWTvsY6RqurbEPPQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that our Docker image is in the right Registry, we can proceed to the actual deployment. We can deploy our image either using a deployment manifest file or by using the &lt;code&gt;kubectl create deployment&lt;/code&gt; command directly. The first approach is better, since it gives us much more flexibility in specifying our exact Pod configuration.&lt;/p&gt;

&lt;p&gt;Let’s define our &lt;code&gt;deployment.yml&lt;/code&gt; manifest:&lt;/p&gt;


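&lt;p&gt;The gist embed isn’t reproduced here; a manifest consistent with the fields discussed next (one replica, &lt;code&gt;imagePullPolicy: Never&lt;/code&gt;, container port 3000, and the deployment name &lt;code&gt;hello-world&lt;/code&gt; used later with &lt;code&gt;kubectl scale&lt;/code&gt;; the labels are illustrative) would be:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  # Exactly one Pod for now; scaled later with kubectl scale
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world:1.0
          # Use the image from Minikube Docker's Local Registry only
          imagePullPolicy: Never
          ports:
            # The Node.js app listens on port 3000
            - containerPort: 3000
```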


&lt;p&gt;A few interesting observations can be made here. Notice that &lt;code&gt;replicas&lt;/code&gt; has been set to 1 for the time being. This means there will always be exactly one pod for our deployment under the present configuration. The &lt;code&gt;imagePullPolicy&lt;/code&gt; is set to “Never”, as the image is expected to be fetched from Minikube Docker’s Local Registry. Finally, under &lt;code&gt;ports&lt;/code&gt;, the &lt;code&gt;containerPort&lt;/code&gt; has been set to 3000 because our Node.js application listens on that port.&lt;/p&gt;

&lt;p&gt;Now, let’s create the container deployment using the following command &lt;code&gt;kubectl create -f deployment.yml&lt;/code&gt;, assuming that the terminal is open in the directory where &lt;code&gt;deployment.yml&lt;/code&gt; file is present. We get the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0tq629twf6jn1qhnamt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0tq629twf6jn1qhnamt.png" alt="1_sanrBD_2Ng6Ilp4-pLLlcg" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Woohoo! We just deployed our container image in the Kubernetes cluster! To verify our deployment, we can see all the deployments in our cluster using the command &lt;code&gt;kubectl get deployments&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6rsysx2pmk27egel2cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6rsysx2pmk27egel2cy.png" alt="1_LJhZdOn96yCHEdW-Jtlv9g" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, our deployment is successfully created. We can also inspect the pods associated with this deployment using the command &lt;code&gt;kubectl get pods&lt;/code&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s4rlf2macj5lwgnmqlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s4rlf2macj5lwgnmqlv.png" alt="1_UhRgUzSA95dbY-ixiPlqJQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As per our manifest file, only one Pod is created. However, we can still increase or decrease the number of Pod replicas in our deployment using the command &lt;code&gt;kubectl scale --replicas=3 deployment hello-world&lt;/code&gt;. This command will create two more Pods which will be exact replicas of the Pod that we have created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1vzc03edqf2cieizd74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1vzc03edqf2cieizd74.png" alt="1_QC7qBM_araAtJbejpzOj9w" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can verify the newly created Pods by again using the command &lt;code&gt;kubectl get pods&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheg6cxib1r57uiwc0q1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheg6cxib1r57uiwc0q1m.png" alt="1_YARmteUSxo0-mOOz8Ki4jQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although we have deployed our container image, we can’t access the application just yet, because it lacks a Service. As we know, a Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. For our purpose, we’d create a NodePort Service to be able to access our deployment. A NodePort exposes the Service on the same port of each selected Node in the cluster using NAT.&lt;/p&gt;

&lt;p&gt;Let’s define a &lt;code&gt;service.yml&lt;/code&gt; manifest:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
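&lt;p&gt;This gist embed has not survived either, so here is a minimal sketch of what the Service manifest could look like — the name &lt;code&gt;hello-world-svc&lt;/code&gt; matches the port-forward command used below, and the selector label is an assumption matching the deployment:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc     # matches the name used by the port-forward command below
spec:
  type: NodePort            # expose the Service on a port of each Node
  selector:
    app: hello-world        # assumed label, matching the deployment's Pod template
  ports:
    - port: 3000            # port exposed by the Service
      targetPort: 3000      # port the Node.js application listens on
```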


&lt;p&gt;A few points to notice: under &lt;code&gt;ports&lt;/code&gt;, we have defined &lt;code&gt;port&lt;/code&gt; and &lt;code&gt;targetPort&lt;/code&gt; in relation to the Service itself, i.e. &lt;code&gt;port&lt;/code&gt; refers to the port exposed by the Service, and &lt;code&gt;targetPort&lt;/code&gt; refers to the port used by our deployment, i.e. the Node.js application.&lt;/p&gt;

&lt;p&gt;To create this Service, we’d use the command &lt;code&gt;kubectl create -f service.yml&lt;/code&gt;, assuming that the terminal is open in the directory where the &lt;code&gt;service.yml&lt;/code&gt; file is present. We get the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl228nw3krvdww454kb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl228nw3krvdww454kb0.png" alt="1_EWmEjmjwmeeGVxyGDTniNg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We just created our NodePort service, which we can verify using the command &lt;code&gt;kubectl get services&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmneszr9linfueifh8bs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmneszr9linfueifh8bs.png" alt="1_qxo3TEBul1b86RPR--8YEQ" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As is evident, we have successfully created a NodePort Service. Notice that it doesn’t have an external IP, so we can use port-forwarding to access our deployment at a specified port, using the command &lt;code&gt;kubectl port-forward svc/hello-world-svc 3000:3000&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv61ycsy2kju2rahnxro4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv61ycsy2kju2rahnxro4.png" alt="1_KykNaWl1RLb3mSpjxlk3Qg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now if we go to &lt;code&gt;http://127.0.0.1:3000&lt;/code&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9agk4hzi2ohvprpxztp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9agk4hzi2ohvprpxztp.png" alt="1_OsjzHp4Zq-EOT4fvcWFLPw (1)" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You have just deployed a containerized application using Docker and Kubernetes while adhering to the best practices. Though we have touched only the tip of the iceberg, I hope this demo has made you a little bit more familiar with containers, and with how Docker and Kubernetes can be used to containerize and deploy applications.&lt;/p&gt;

&lt;p&gt;In the next part of this series, we’d explore the world of Chaos Engineering using LitmusChaos! We’d understand the core principles of Chaos Engineering and witness how LitmusChaos performs Kubernetes-native Chaos Engineering for attaining unparalleled resiliency in our Kubernetes applications. &lt;/p&gt;

&lt;p&gt;With that, I’d like to welcome you to the world of containers and chaos engineering. Come join me at the &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Litmus community&lt;/a&gt; to contribute your bit to developing chaos engineering for everyone. Stay updated on the latest Litmus trends through the Kubernetes &lt;a href="https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN" rel="noopener noreferrer"&gt;Slack&lt;/a&gt; workspace (look for the #litmus channel).&lt;/p&gt;

&lt;p&gt;Don’t forget to share these resources with someone who you think might benefit from them. Thank you. 🙏&lt;/p&gt;

</description>
      <category>litmuschaos</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>chaosengineering</category>
    </item>
  </channel>
</rss>
