<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Akash Shrivastava</title>
    <description>The latest articles on DEV Community by Akash Shrivastava (@avaakash).</description>
    <link>https://dev.to/avaakash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F629511%2Faf868f0f-c6a2-4805-8905-459df1da0b73.jpeg</url>
      <title>DEV Community: Akash Shrivastava</title>
      <link>https://dev.to/avaakash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/avaakash"/>
    <language>en</language>
    <item>
      <title>Introduction to HTTP Chaos in LitmusChaos</title>
      <dc:creator>Akash Shrivastava</dc:creator>
      <pubDate>Tue, 13 Sep 2022 15:58:52 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/introduction-to-http-chaos-in-litmuschaos-3hn</link>
      <guid>https://dev.to/litmus-chaos/introduction-to-http-chaos-in-litmuschaos-3hn</guid>
      <description>&lt;p&gt;This article is a getting-started guide for HTTP Chaos in LitmusChaos. We will be talking about&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Introduction to LitmusChaos&lt;/li&gt;
&lt;li&gt; How HTTP Chaos works (Architecture)&lt;/li&gt;
&lt;li&gt; Types of HTTP Chaos Experiments&lt;/li&gt;
&lt;li&gt; HTTP Chaos Demo&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is LitmusChaos
&lt;/h2&gt;

&lt;p&gt;LitmusChaos is a toolset to do cloud-native chaos engineering. It provides tools to orchestrate chaos on Kubernetes to help SREs find weaknesses in their deployments. SREs use Litmus to run chaos experiments initially in the staging environment and eventually in production to find bugs and vulnerabilities. Fixing the weaknesses leads to increased resilience of the system.&lt;/p&gt;

&lt;p&gt;Litmus takes a cloud-native approach to creating, managing and monitoring chaos. Chaos is orchestrated using the following Kubernetes Custom Resource Definitions (CRDs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  ChaosEngine: A resource to link a Kubernetes application or Kubernetes node to a ChaosExperiment. ChaosEngine is watched by Litmus’ Chaos-Operator which then invokes Chaos-Experiments&lt;/li&gt;
&lt;li&gt;  ChaosExperiment: A resource to group the configuration parameters of a chaos experiment. ChaosExperiment CRs are created by the operator when experiments are invoked by ChaosEngine.&lt;/li&gt;
&lt;li&gt;  ChaosResult: A resource to hold the results of a chaos experiment. The Chaos-exporter reads the results and exports the metrics into a configured Prometheus server.&lt;/li&gt;
&lt;/ul&gt;
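&lt;p&gt;As a rough sketch of how these pieces fit together, a minimal ChaosEngine manifest linking an application to an HTTP experiment might look like this (the name, labels, and service account below are illustrative, not prescriptive):&lt;/p&gt;

```yaml
# Hedged sketch of a ChaosEngine CR; metadata and labels are illustrative.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos          # illustrative name
  namespace: default
spec:
  engineState: active
  appinfo:
    appns: default           # namespace of the target application
    applabel: app=nginx      # label selecting the target deployment
    appkind: deployment
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-http-status-code
```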

&lt;p&gt;For more information, you can visit &lt;a href="https://litmuschaos.io/" rel="noopener noreferrer"&gt;litmuschaos.io&lt;/a&gt; or &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;github.com/litmuschaos/litmus&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The experiments internally use two things to inject HTTP chaos and redirect traffic properly. First, it runs a proxy server that acts as a middleman and modifies the request/response per the experiment type. Second, it creates a routing rule in the network routing table using the IPtables library to redirect all incoming traffic on the targeted service port to the proxy port.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4f9py92zerbdmpm134l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4f9py92zerbdmpm134l.png" alt="Without proxy server" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram shows a request without HTTP chaos injected. The request to access Service A arrives at port 80 and is forwarded to Service A to be processed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnls0820dbb8v9fpoolab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnls0820dbb8v9fpoolab.png" alt="With proxy server" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, after we inject HTTP chaos, the request to access Service A comes to port 80 but is forwarded to port 8000, on which the proxy server listens for requests. This is done by adding a routing rule in the routing table using IPtables. After the proxy server has modified the request, if required, it will forward the request to Service A to be processed. Now the response will follow the same path, the proxy server will modify the response if required and then send it back to the client to complete the request loop.&lt;/p&gt;

&lt;p&gt;The proxy server runs inside the service pod, and the pod’s routing rules are updated by running commands inside the pod using the &lt;code&gt;nsenter&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F0%2AZoDUj7m56Bg8mu7f" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F0%2AZoDUj7m56Bg8mu7f" alt="How proxy server and IPtable are run inside Target Pod" width="1400" height="799"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create the proxy server and add rules to the routing table, a helper pod is run, which uses &lt;code&gt;nsenter&lt;/code&gt; to enter the target pod and run the required commands.&lt;/p&gt;
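&lt;p&gt;Conceptually, the redirection step amounts to something like the following shell sketch. The real experiment drives this through Go libraries; the PID and port values here are illustrative, and the command is only echoed rather than executed:&lt;/p&gt;

```shell
# Illustrative sketch of the redirect rule the helper pod installs.
# TARGET_PID, TARGET_SERVICE_PORT, and PROXY_PORT are example values.
TARGET_PID=12345          # PID of a process inside the target pod
TARGET_SERVICE_PORT=80    # port the application listens on
PROXY_PORT=20000          # port the chaos proxy listens on

# Enter the pod's network namespace and redirect inbound TCP traffic
# from the service port to the proxy port (echoed here, not executed):
echo nsenter -t "$TARGET_PID" -n iptables -t nat -A PREROUTING \
  -p tcp --dport "$TARGET_SERVICE_PORT" -j REDIRECT --to-port "$PROXY_PORT"
```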

&lt;h2&gt;
  
  
  Experiments
&lt;/h2&gt;

&lt;p&gt;Currently, there are five different types of HTTP experiments available:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-http-latency/" rel="noopener noreferrer"&gt;HTTP Latency&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-http-reset-peer/" rel="noopener noreferrer"&gt;HTTP Reset Peer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-http-status-code/" rel="noopener noreferrer"&gt;HTTP Status Code&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-http-modify-header/" rel="noopener noreferrer"&gt;HTTP Modify Header&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-http-modify-body/" rel="noopener noreferrer"&gt;HTTP Modify Body&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s look at each of them in more detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Latency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HTTP latency adds latency to the HTTP requests by adding a sleep timer before sending the request forward from the proxy server. It can be used to simulate delayed responses from the APIs. To tune the latency value, use the &lt;code&gt;LATENCY&lt;/code&gt; experiment variable and provide the value in milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Reset Peer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HTTP Reset Peer simulates a TCP connection reset error by closing the connection after a specified timeout. It can be used to simulate connection failures. To tune the timeout value, use the &lt;code&gt;RESET_TIMEOUT&lt;/code&gt; experiment variable and provide the value in milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Status Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HTTP Status Code can modify the status code of the response from the service, and can also replace the body of the response with a predefined template for the chosen status code. It can be used to simulate API failures. To specify the status code, use the &lt;code&gt;STATUS_CODE&lt;/code&gt; experiment variable. Supported values are listed in the docs. You can also provide a comma-separated list of values, and the experiment will pick a random value from the list. If no value is provided, a random value from the supported values will be chosen.&lt;/p&gt;

&lt;p&gt;You can use the &lt;code&gt;MODIFY_RESPONSE_BODY&lt;/code&gt; variable to control whether the response body is replaced with a predefined template matching the status code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Modify Header&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HTTP Modify Header can modify, add or remove headers from a request or response based on provided values. To specify whether you want to modify the request or response, use the &lt;code&gt;HEADER_MODE&lt;/code&gt; variable. You can set it to &lt;code&gt;request&lt;/code&gt; or &lt;code&gt;response&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;HEADERS_MAP&lt;/code&gt; variable takes a JSON-type input. Suppose you want to add a header &lt;code&gt;litmus&lt;/code&gt; with the value &lt;code&gt;2.12.0&lt;/code&gt;; you would provide it as &lt;code&gt;{"litmus": "2.12.0"}&lt;/code&gt;, and similarly for multiple values. To remove a header, you can overwrite its value with an empty string; removing the header key itself is currently not possible.&lt;/p&gt;
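&lt;p&gt;For instance, to set two headers at once, the value would look like this (the header names and values below are illustrative):&lt;/p&gt;

```json
{"litmus": "2.12.0", "x-chaos-injected": "true"}
```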

&lt;p&gt;&lt;strong&gt;HTTP Modify Body&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HTTP Modify Body can replace the request/response body completely. This can be used to modify API responses. You can use the &lt;code&gt;RESPONSE_BODY&lt;/code&gt; variable to provide the replacement value; this can be HTML, plain text, or a JSON object.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Tuneables
&lt;/h2&gt;

&lt;p&gt;These are the tuneables common to all the HTTP chaos experiments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Toxicity
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;TOXICITY&lt;/code&gt; specifies the percentage probability of a request being affected. Suppose you want only 50% of the requests to be affected: by setting &lt;code&gt;TOXICITY&lt;/code&gt; to 50, each request has a 50% chance of being affected. This doesn’t mean every alternate request will be affected; rather, each request independently has a 50–50 chance. Over a large number of requests, this comes out to around 50% of requests being affected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Target Service Port
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;TARGET_SERVICE_PORT&lt;/code&gt; is the port of the service you want to target. This should be the port the application listens on at the pod level, not at the service level. This means that if the application pod runs the service on port 8080 and we create a Kubernetes Service exposing it on port 80, then the target service port should be 8080 (the pod-level port), not 80.&lt;/p&gt;

&lt;h3&gt;
  
  
  Proxy Port
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;PROXY_PORT&lt;/code&gt; is the port the proxy server will run on. You do not need to change the default value (20000) unless that port is already in use by one of your services. If the experiment fails due to a port-bind issue for the proxy server, change this value to a free port.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network Interface
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;NETWORK_INTERFACE&lt;/code&gt; is the name of the network interface your service is using. The default value is &lt;code&gt;eth0&lt;/code&gt;. If the chaos injection fails with a network interface error, use this variable to set the correct value.&lt;/p&gt;
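&lt;p&gt;Put together, these tuneables go under the experiment’s env section in the ChaosEngine spec. A hedged sketch follows; the values shown are the defaults discussed above, with an illustrative status code:&lt;/p&gt;

```yaml
# Illustrative env section for an HTTP chaos experiment.
experiments:
  - name: pod-http-status-code
    spec:
      components:
        env:
          - name: TARGET_SERVICE_PORT
            value: "80"       # pod-level port of the application
          - name: PROXY_PORT
            value: "20000"    # default; pick a free port on conflicts
          - name: NETWORK_INTERFACE
            value: "eth0"     # default interface name
          - name: TOXICITY
            value: "100"      # percentage of requests affected
          - name: STATUS_CODE
            value: "500"      # illustrative status code
```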

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Let us run the HTTP Status Code experiment. For simplicity, we will be injecting chaos into an Nginx service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7u2690xiqmra048bdci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7u2690xiqmra048bdci.png" alt="Nginx Service" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service is running on port 80; this is the port we will target.&lt;/p&gt;

&lt;p&gt;If we access the service, we get a 200 OK response with the default Nginx webpage. I will be using Postman to verify the status code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r8zhqrzvsp7umt3lxe0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r8zhqrzvsp7umt3lxe0.png" alt="Status code before injecting chaos" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have the application set up, let’s create a chaos scenario with the HTTP Status Code experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwliqstiz03rl015276rd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwliqstiz03rl015276rd.gif" alt="Creating a scenario" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to ChaosCenter and go to the Chaos Scenarios section. Click on the Schedule a Chaos Scenario button. Select your agent and then select the chaos hub (HTTP experiments are available from ChaosHub version 2.11.0). Add a name for your scenario and move ahead to the experiment selection page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83j7rgo0mcchw9bjnykj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83j7rgo0mcchw9bjnykj.gif" alt="Adding experiment" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be selecting the &lt;code&gt;generic/pod-http-status-code&lt;/code&gt; experiment from the list of experiments. Moving ahead, we will tune the experiment variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b02ibpvykapjy45tdjh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b02ibpvykapjy45tdjh.gif" alt="Tuning the experiment tuneable" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the pencil icon next to the experiment name to edit the experiment. Now we have to select the app to inject chaos into. The NGINX application we are using is running in the &lt;code&gt;default&lt;/code&gt; namespace, it is of the &lt;code&gt;deployment&lt;/code&gt; kind, and it has the label &lt;code&gt;app=nginx&lt;/code&gt;. We will skip adding probes to keep it simple. The next section is for tuning the experiment variables. Change &lt;code&gt;STATUS_CODE&lt;/code&gt; to &lt;code&gt;500&lt;/code&gt; and &lt;code&gt;TARGET_SERVICE_PORT&lt;/code&gt; to the port of the service, in this case &lt;code&gt;80&lt;/code&gt;. &lt;code&gt;MODIFY_RESPONSE_BODY&lt;/code&gt; is a boolean that specifies whether the response body should be replaced with a predefined HTTP template matching the status code. Now that we are done tuning the required variables, let’s move ahead and run the experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yjjeq7shz64ezd9ndc7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yjjeq7shz64ezd9ndc7.gif" alt="running the experiment" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now LitmusChaos will set up the experiment and run it. Once it starts injecting chaos, we will see the status code change for the service. The output will look something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyfk5vbicaxtdcb969tc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyfk5vbicaxtdcb969tc.gif" alt="during chaos status code" width="1280" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it, we have injected HTTP chaos into our application. The experiment passed because we haven’t specified any verification criteria; we can do that using probes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpio7jfvhy0vmm1kibj9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpio7jfvhy0vmm1kibj9q.png" alt="Experiment completed graph" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, we looked at how the HTTP chaos experiments work internally and covered the types of HTTP chaos experiments currently available. We then ran the HTTP Status Code experiment against a sample NGINX service and saw it in action. In further tutorial blogs, I will cover running the other HTTP experiments as well.&lt;/p&gt;

&lt;p&gt;You can join the LitmusChaos community on &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;&lt;em&gt;GitHub&lt;/em&gt;&lt;/a&gt;  and &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;&lt;em&gt;Slack&lt;/em&gt;&lt;/a&gt;. The community is very active and tries to solve queries quickly.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this journey and found the blog interesting. You can leave your queries or suggestions (appreciation as well) in the comments below.&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

&lt;p&gt;Thank you for reading&lt;/p&gt;

&lt;p&gt;Akash Shrivastava&lt;/p&gt;

&lt;p&gt;Software Engineer at Harness&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/avaakash/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; | &lt;a href="https://github.com/avaakash" rel="noopener noreferrer"&gt;Github&lt;/a&gt; | &lt;a href="https://instagram.com/avaakash" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt; | &lt;a href="https://twitter.com/_avaakash_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chaosengineering</category>
      <category>devops</category>
      <category>http</category>
      <category>litmuschaos</category>
    </item>
    <item>
      <title>Setting up LitmusChaos on Raspberry Pi Cluster</title>
      <dc:creator>Akash Shrivastava</dc:creator>
      <pubDate>Tue, 13 Sep 2022 15:43:06 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/setting-up-litmuschaos-on-raspberry-pi-cluster-3cm4</link>
      <guid>https://dev.to/litmus-chaos/setting-up-litmuschaos-on-raspberry-pi-cluster-3cm4</guid>
      <description>&lt;p&gt;This blog is a guide on how to set up LitmusChaos on a Raspberry Pi cluster. This kind of setup can be used for development or testing purposes, as it is cheaper than cloud-based services, and it overcomes any limitations on your system.&lt;/p&gt;

&lt;p&gt;LitmusChaos is a toolset to do cloud-native chaos engineering. It provides tools to orchestrate chaos on Kubernetes to help SREs find weaknesses in their deployments. SREs use Litmus to run chaos experiments initially in the staging environment and eventually in production to find bugs and vulnerabilities. Fixing the weaknesses leads to increased resilience of the system.&lt;/p&gt;

&lt;p&gt;You can use this setup to see LitmusChaos in action, as well as for development-level testing of services using Litmus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up a Raspberry Pi Cluster
&lt;/h2&gt;

&lt;p&gt;This section is a guide on how to set up an RPi cluster to run Kubernetes with a Master and multiple Worker Nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hardware Required
&lt;/h3&gt;

&lt;p&gt;For setting up the RPi Cluster, we need the following hardware (minimum requirement)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Raspberry Pis (at least 2, the 4 GB variant will be good enough)&lt;/li&gt;
&lt;li&gt; Power Hub for powering the Raspberry Pis&lt;/li&gt;
&lt;li&gt; Ethernet Cable(s)&lt;/li&gt;
&lt;li&gt; Router (Optional Wi-Fi)&lt;/li&gt;
&lt;li&gt; 32 GB SD Card(s) (One for each RPi)&lt;/li&gt;
&lt;li&gt; MicroSD Card Reader (or MicroSD Slot on your Laptop)&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installing Operating System on SD Card
&lt;/h3&gt;

&lt;p&gt;There are many Linux-based distros available for RPis. You can go with RaspiOS Lite; the only drawback is that it is only available for 32-bit systems. Considering that, you can choose Ubuntu 20.04 Server, which is also lightweight (though not as much as RaspiOS) and has been working fine. For this article, I will be using Ubuntu 20.04 Server.&lt;/p&gt;

&lt;p&gt;Raspberry Pi provides an &lt;a href="https://www.raspberrypi.org/software/" rel="noopener noreferrer"&gt;&lt;em&gt;official imaging tool&lt;/em&gt;&lt;/a&gt; for installing the operating system on an SD card, but you can use any other tool as well. Download the Ubuntu image from &lt;a href="https://cdimage.ubuntu.com/releases/20.04.2/release/ubuntu-20.04.2-preinstalled-server-arm64+raspi.img.xz" rel="noopener noreferrer"&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt; Next, connect the SD card and open the imaging tool. Select the &lt;em&gt;Choose OS&lt;/em&gt; option, then select the &lt;em&gt;Custom Image&lt;/em&gt; option and pick the Ubuntu image you downloaded. Next, select the storage device and click on &lt;em&gt;Write&lt;/em&gt;. This will take some time (from 5–20 minutes); once done, repeat the same process for all other SD cards.&lt;/p&gt;

&lt;p&gt;After this, insert the SD Cards into the Raspberry Pis and power them on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting RPis to Wifi (Optional)
&lt;/h3&gt;

&lt;p&gt;If you want to use the RPis connected with Ethernet only, you can skip this step. Also, if you have a mini-HDMI to HDMI converter, you don’t need an Ethernet cable to set up Wi-Fi; you can connect your RPis to a screen and follow the same process.&lt;/p&gt;

&lt;p&gt;To connect your RPis to Wi-Fi, you will first have to connect them with an Ethernet cable. Go to your router settings and get the IP addresses of the RPis. Then SSH into the RPis one by one and repeat the following steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@&amp;lt;ip-addr-rpi&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The default password is &lt;strong&gt;&lt;em&gt;ubuntu&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need to find the network interface name first&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iw dev | grep Interface
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to connect to Wi-Fi, you have to edit the &lt;em&gt;netplan&lt;/em&gt; configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/netplan/50-cloud-init.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the following inside &lt;em&gt;network&lt;/em&gt; block&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wifis:
    &amp;lt;interface-name&amp;gt;:
        dhcp4: true
        optional: true
        access-points:
            "&amp;lt;your-wifi-ssid&amp;gt;”:
                password: "&amp;lt;your-wifi-password&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit the editor and then apply the new configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo netplan apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your device should be connected to Wi-Fi, which you can check with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, repeat the same process on all the Raspberry Pis and then you can disconnect the Ethernet cable.&lt;/p&gt;

&lt;p&gt;Note: the IP addresses may have changed after switching from Ethernet to Wi-Fi, so check your router settings again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Raspberry Pis for SSH
&lt;/h3&gt;

&lt;p&gt;First, change the hostname of the Pis so they are easy to distinguish.&lt;/p&gt;

&lt;p&gt;For master node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo hostname-ctl set-hostname kmaster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For worker nodes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo hostname-ctl set-hostname knode&amp;lt;node number&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, on your system create SSH keys and authorise them for the RPis by following these steps&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: Following steps are to be followed on your system&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a &lt;em&gt;.ssh&lt;/em&gt; directory if it doesn’t exist and cd into it
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir .ssh &amp;amp;&amp;amp; cd .ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Use ssh-keygen to create SSH keys for master and all worker nodes, name the keys according to the hostname of the nodes so it’s easy to find.&lt;/p&gt;
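&lt;p&gt;For example (a sketch; the key type, comment, and location are a matter of taste):&lt;/p&gt;

```shell
# Create one keypair per node, named after the node's hostname.
# KEYDIR defaults to ~/.ssh; -N "" creates keys without a passphrase.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR"
ssh-keygen -t ed25519 -N "" -f "$KEYDIR/kmaster" -C "kmaster"
ssh-keygen -t ed25519 -N "" -f "$KEYDIR/knode1" -C "knode1"
```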

&lt;p&gt;3. Add the SSH keys to the ssh-agent&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-add kmaster
ssh-add knode&amp;lt;node number&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Copy the ssh-keys to the RPis&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_\# Master node_
ssh-copy-id -i ~/.ssh/kmaster.pub ubunut@&amp;lt;RPI-IP-ADDRESS&amp;gt;

_\# Worker node_
ssh-copy-id -i ~/.ssh/knode&amp;lt;number&amp;gt;.pub ubuntu@&amp;lt;RPI-IP-ADDRESS&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have defined static IP addresses for the RPis, you can use a hostname rather than an IP address&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -e "&amp;lt;master node ip address&amp;gt;\\tkmaster" | sudo tee -a /etc/hosts
echo -e "&amp;lt;worker node 1 ip address&amp;gt;\\tknode1" | sudo tee -a /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, try to log in to the RPis to verify that everything is working fine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@kmaster
ssh ubuntu@knode1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Kubernetes on Raspberry Pi Cluster
&lt;/h2&gt;


&lt;p&gt;This section is a guide on how to install Kubernetes on a Raspberry Pi cluster with a Master and multiple Worker Nodes. We will be installing K3s because it is lightweight, but you can install any other distribution as well.&lt;/p&gt;

&lt;p&gt;Since we will be using Docker as the container runtime, install it by following the official docs, which you can find &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing K3s Master
&lt;/h3&gt;

&lt;p&gt;SSH into the master node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@kmaster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install K3s&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL [https://get.k3s.io](https://get.k3s.io) | sh -s - --docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the installation was successful&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You can check the k3s service to debug if the installation was not successful&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing K3s Nodes
&lt;/h3&gt;

&lt;p&gt;On your system, run the following command to get the node token from the k3s master&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MASTER\_TOKEN=$(ssh ubuntu@kmaster "sudo cat /var/lib/rancher/k3s/server/node-token")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now SSH into the node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@knode1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install K3s agent&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL [http://get.k3s.io](http://get.k3s.io) | K3S\_URL=https://kmaster:6443 K3S\_TOKEN=$MASTER\_TOKEN sh -s - --docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the K3s agent was installed successfully&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status k3s-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Kubectl
&lt;/h3&gt;

&lt;p&gt;Install &lt;em&gt;kubectl&lt;/em&gt;, a command-line interface tool that allows you to run commands against a remote Kubernetes cluster.&lt;/p&gt;
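
&lt;p&gt;For example, on a Linux amd64 machine you can install it like this (see the official Kubernetes docs for other platforms and versions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the latest stable kubectl binary and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;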

&lt;p&gt;Now, create a config file to access the RPis' K3s cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/.kube/k3s
touch $HOME/.kube/k3s/config
chmod 600 $HOME/.kube/k3s/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, copy the k3s cluster configuration from the master node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@kmaster "sudo cat /etc/rancher/k3s/k3s.yaml" &gt; $HOME/.kube/k3s/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the &lt;em&gt;k3s&lt;/em&gt; config file on the client machine and change the remote IP address of the &lt;em&gt;k3s&lt;/em&gt; master from &lt;code&gt;localhost/127.0.0.1&lt;/code&gt; to &lt;code&gt;kmaster&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Edit master config
nano $HOME/.kube/k3s/config

# Search for the 'server' attribute located in -
# clusters:
# - cluster:
#   server: https://127.0.0.1:6443 or https://localhost:6443
#
# Change the 'server' value to https://kmaster:6443
# Do not change the port value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, export the &lt;code&gt;k3s&lt;/code&gt; config file path as the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable to use the config&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=$HOME/.kube/k3s/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the setup&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing LitmusChaos on Raspberry Pi Cluster
&lt;/h2&gt;


&lt;p&gt;This section is a guide on how to install LitmusChaos 2.0 on a Raspberry Pi cluster running K3s.&lt;/p&gt;

&lt;p&gt;For installation, we will be following their &lt;a href="https://litmusdocs-beta.netlify.app/docs/litmus-install-namespace-mode" rel="noopener noreferrer"&gt;&lt;em&gt;docs&lt;/em&gt;&lt;/a&gt;. There are two ways to install: one using Helm, the other by applying the YAML spec file. We will install using the YAML spec file; if you prefer Helm, you can follow the steps in their docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://litmuschaos.github.io/litmus/2.0.0/litmus-2.0.0.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can find the latest version of litmus at &lt;a href="http://docs.litmuschaos.io" rel="noopener noreferrer"&gt;docs.litmuschaos.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s verify that all the services are running and that there have been no issues&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n litmus
kubectl get svc -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now access the LitmusChaos dashboard at this address&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;master-node-ip&amp;gt;:&amp;lt;port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;master-node-ip&amp;gt;&lt;/code&gt; with the master node's IP address and &lt;code&gt;&amp;lt;port&amp;gt;&lt;/code&gt; with the external port shown for the &lt;em&gt;litmusportal-frontend-service&lt;/em&gt; (the value after 9091:), then visit that address in your browser.&lt;/p&gt;
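
&lt;p&gt;If you are unsure of the port, one way to read the frontend service's NodePort directly (assuming the default service name and the &lt;em&gt;litmus&lt;/em&gt; namespace) is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc litmusportal-frontend-service -n litmus -o jsonpath='{.spec.ports[0].nodePort}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;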

&lt;h2&gt;
  
  
  Add-Ons
&lt;/h2&gt;


&lt;p&gt;The /etc/hosts file resets to its defaults after a restart, so you will have to re-add the RPis' IPs every time you restart, or you can run a startup script that automatically sets the entries on every boot.&lt;/p&gt;
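
&lt;p&gt;As a sketch, such a startup script could simply append the entries (the hostnames match this guide; the IPs below are examples, so use your RPis' actual addresses):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Example IPs - replace with the actual addresses of your RPis
echo "192.168.1.10 kmaster" | sudo tee -a /etc/hosts
echo "192.168.1.11 knode1" | sudo tee -a /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;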

&lt;p&gt;You can edit the &lt;em&gt;bash profile&lt;/em&gt; file on your system to use this &lt;em&gt;Kubeconfig&lt;/em&gt; and also add the SSH keys to the ssh-agent. On my system it was the &lt;em&gt;/home/username/.profile&lt;/em&gt; file; it might differ on yours. I added these lines to the profile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eval $(ssh-agent)
ssh-add ~/.ssh/kmaster
ssh-add ~/.ssh/knode1
export KUBECONFIG=$HOME/.kube/k3s/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;


&lt;p&gt;In this article, we first set up the Raspberry Pi cluster and then installed K3s on the cluster. After that, we installed LitmusChaos onto the K3s cluster. We can now proceed with injecting chaos using the portal. This kind of setup is beneficial for local development purposes, and it saves you money on AWS servers.&lt;/p&gt;

&lt;p&gt;You can join the LitmusChaos community on &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;&lt;em&gt;Github&lt;/em&gt;&lt;/a&gt;  and &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;&lt;em&gt;Slack&lt;/em&gt;&lt;/a&gt;. The community is very active and tries to solve queries quickly.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this journey and found the blog interesting. You can leave your queries or suggestions (appreciation as well) in the comments below.&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

&lt;p&gt;Thank you for reading&lt;/p&gt;

&lt;p&gt;Akash Shrivastava&lt;/p&gt;

&lt;p&gt;Software Engineer at Harness&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/avaakash/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; | &lt;a href="https://github.com/avaakash" rel="noopener noreferrer"&gt;Github&lt;/a&gt; | &lt;a href="https://instagram.com/avaakash" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt; | &lt;a href="https://twitter.com/_avaakash_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chaosengineering</category>
      <category>raspberrypi</category>
      <category>litmuschaos</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to run Azure Disk Loss Experiment in LitmusChaos</title>
      <dc:creator>Akash Shrivastava</dc:creator>
      <pubDate>Fri, 29 Oct 2021 04:24:11 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/how-to-run-azure-disk-loss-experiment-in-litmuschaos-419l</link>
      <guid>https://dev.to/litmus-chaos/how-to-run-azure-disk-loss-experiment-in-litmuschaos-419l</guid>
      <description>&lt;p&gt;This article is a guide for setting up and running the Azure Virtual Disk Loss experiment on LitmusChaos 2.0. The experiment causes detachment of one or more virtual disks from the instance for a certain chaos duration and then re-attached them. The broad objective of this experiment is to extend support of LitmusChaos to non-Kubernetes targets while ensuring resiliency for all kinds of targets, as a part of a single chaos workflow for the entirety of a business.&lt;/p&gt;

&lt;p&gt;Currently, the experiment is available only as a technical preview in the chaos hub, so we will have to use the master branch of the chaos hub to access it.&lt;/p&gt;

&lt;p&gt;If you are looking for the Azure Instance Stop experiment, you can find it &lt;a href="https://medium.com/litmus-chaos/how-to-run-azure-instance-stop-experiment-in-litmuschaos-63ae3bcdb9ad" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;p&gt;To run this experiment, we need a few things beforehand&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An Azure account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disk(s) attached to Virtual Machine Scale Set (or an Instance only)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Kubernetes cluster with LitmusChaos 2.0 installed (you can follow this blog to set up LitmusChaos 2.0 on AKS — &lt;a href="https://medium.com/litmus-chaos/litmus-in-aks-f8838cfc551f" rel="noopener noreferrer"&gt;Getting Started with LitmusChaos 2.0 in Azure Kubernetes Service&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting up Azure Credentials as Kubernetes Secret
&lt;/h2&gt;

&lt;p&gt;To let LitmusChaos access your Azure instances, you need to set up the Azure credentials as a Kubernetes secret. It is a very simple process: first, install the Azure CLI (if you haven't already) and log in. Then run this command to save the Azure credentials in an &lt;em&gt;azure.auth&lt;/em&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac --sdk-auth &amp;gt; azure.auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create a &lt;em&gt;secret.yaml&lt;/em&gt; file with the following content. Replace the content under &lt;em&gt;azure.auth&lt;/em&gt; with the contents of your own &lt;em&gt;azure.auth&lt;/em&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  azure.auth: |-
    {
      "clientId": "XXXXXXXXX",
      "clientSecret": "XXXXXXXXX",
      "subscriptionId": "XXXXXXXXX",
      "tenantId": "XXXXXXXXX",
      "activeDirectoryEndpointUrl": "XXXXXXXXX",
      "resourceManagerEndpointUrl": "XXXXXXXXX",
      "activeDirectoryGraphResourceId": "XXXXXXXXX",
      "sqlManagementEndpointUrl": "XXXXXXXXX",
      "galleryEndpointUrl": "XXXXXXXXX",
      "managementEndpointUrl": "XXXXXXXXX"
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now run the following command. Remember to change the namespace if you have installed LitmusChaos in any other namespace&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f secret.yaml -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Updating ChaosHub
&lt;/h2&gt;

&lt;p&gt;As the experiment is only available as a technical preview right now, we will have to update the ChaosHub to use the technical preview (master) branch.&lt;/p&gt;

&lt;p&gt;Log in to the Chaos Center and go to the ChaosHub section, select MyHub there, and click on Edit Hub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmilzg03ss619eoyuxgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmilzg03ss619eoyuxgz.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now change the branch to “master”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc5ig1wafhpka7q9ddw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc5ig1wafhpka7q9ddw7.png" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Submit Now and the ChaosHub will now show the Azure Disk Loss experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scheduling the Experiment Workflow
&lt;/h2&gt;

&lt;p&gt;Now move to the Workflows section and click on Schedule a Workflow. Select the Self-Agent (or any other one if you have multiple agents installed) and click on Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsci244ep0ov0qmrntacq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsci244ep0ov0qmrntacq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the third option to create a workflow from experiments using ChaosHub. Click on Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajgqnhy0gbs89jgw8r0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajgqnhy0gbs89jgw8r0j.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next again (or edit the workflow name if you want to). Then, on the Experiments page, click on Add a new Experiment and select the Azure Virtual Disk Loss experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbf3ywxl1eqami9v4hng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbf3ywxl1eqami9v4hng.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on Edit YAML. You now have to add the Disk Name(s) and the Resource Group name to the ChaosEngine environment variables: scroll down to the ChaosEngine artefacts, where you will see the environment variables, and set the values accordingly. If your disks are attached to an instance that is part of a Scale Set, set SCALE_SET to “enable”. Save the changes and schedule your workflow.&lt;/p&gt;
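
&lt;p&gt;For reference, the relevant part of the ChaosEngine env section looks roughly like this. The variable names follow the azure-disk-loss experiment docs, and the values are placeholders for illustration, so double-check them against the YAML shown in your portal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
  # comma-separated names of the target disks (placeholder value)
  - name: VIRTUAL_DISK_NAMES
    value: "my-disk-01"
  # resource group of the disks (placeholder value)
  - name: RESOURCE_GROUP
    value: "my-resource-group"
  # set to "enable" if the disks are attached to a Scale Set instance
  - name: SCALE_SET
    value: "disable"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;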

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wtqzxr1s3fg39age1q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wtqzxr1s3fg39age1q7.png" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For Scale Sets and node pools, the experiment works only for disk(s) attached to a specific instance in the Scale Set, not for the Scale Set as a whole&lt;/p&gt;

&lt;h2&gt;
  
  
  Observing the Experiment Run
&lt;/h2&gt;

&lt;p&gt;Great, your workflow is now running and you can check it out: click on Go to Workflow and then select your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpatpktxjq0jmeybcrp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpatpktxjq0jmeybcrp4.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the status of your disk(s) in the Azure Portal to verify that the experiment is working as expected.&lt;/p&gt;
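
&lt;p&gt;Alternatively, you can check the disk state from the Azure CLI (the disk and resource group names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prints "Attached" or "Unattached" for the target disk
az disk show --name my-disk-01 --resource-group my-resource-group --query diskState -o tsv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;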

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2xqig5pvoi7lafc6vnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2xqig5pvoi7lafc6vnu.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcyx7zdsn6v7wz2l4a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcyx7zdsn6v7wz2l4a9.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also click on azure-disk-loss to view the experiment logs. After the given chaos duration, the experiment will automatically re-attach the disk(s) and give a pass/fail verdict. In case the experiment fails, verify through the logs and the portal that the disks have been re-attached.&lt;/p&gt;

&lt;p&gt;That's it: you have successfully run the Azure Disk Loss experiment using the LitmusChaos 2.0 Chaos Center.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7owan4o8qltvfa8yj35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7owan4o8qltvfa8yj35.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we saw how we can perform the Azure Disk Loss experiment using LitmusChaos 2.0. You can learn more about this experiment from the &lt;a href="https://github.com/litmuschaos/litmus-docs/blob/master/docs/azure-disk-loss.md" rel="noopener noreferrer"&gt;docs&lt;/a&gt;. This experiment is one of the many non-Kubernetes experiments in LitmusChaos, alongside experiments for AWS, GCP, and VMware, which are targeted towards making Litmus a complete Chaos Engineering toolset for every enterprise regardless of the technology stack used.&lt;/p&gt;

&lt;p&gt;You can join the LitmusChaos community on &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt; and &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;Slack&lt;/a&gt;. The community is very active and tries to solve queries quickly.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this journey and found the blog interesting. You can leave your queries or suggestions (appreciation as well) in the comments below.&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

&lt;p&gt;Thank you for reading&lt;/p&gt;

&lt;p&gt;Akash Shrivastava&lt;/p&gt;

&lt;p&gt;Software Engineer at ChaosNative&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/avaakash/" rel="noopener noreferrer"&gt;Linkedin &lt;/a&gt;| &lt;a href="https://github.com/avaakash" rel="noopener noreferrer"&gt;Github &lt;/a&gt;| &lt;a href="https://instagram.com/avaakash" rel="noopener noreferrer"&gt;Instagram &lt;/a&gt;| &lt;a href="https://twitter.com/_avaakash_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>litmuschaos</category>
      <category>virtualdisk</category>
    </item>
    <item>
      <title>How to run Azure Instance Stop Experiment in LitmusChaos</title>
      <dc:creator>Akash Shrivastava</dc:creator>
      <pubDate>Wed, 25 Aug 2021 13:37:00 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/how-to-run-azure-instance-stop-experiment-in-litmuschaos-m9m</link>
      <guid>https://dev.to/litmus-chaos/how-to-run-azure-instance-stop-experiment-in-litmuschaos-m9m</guid>
      <description>&lt;p&gt;This article is a guide for setting up and running the Azure Instance Stop experiment on LitmusChaos 2.0. The experiment causes the power off of one or more azure instance(s) for a certain chaos duration and then power them on. The broad objective of this experiment is to extend the principles of cloud-native chaos engineering to non-Kubernetes targets while ensuring resiliency for all kinds of targets, as a part of a single chaos workflow for the entirety of a business.&lt;/p&gt;

&lt;p&gt;Currently, the experiment is available only as a technical preview in the chaos hub, so we will have to use the master branch of the chaos hub to access it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;p&gt;To run this experiment, we need a few things beforehand&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An Azure account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Virtual Machine Scale Set (or an Instance only)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Kubernetes cluster with LitmusChaos 2.0 installed (you can follow this blog to set up LitmusChaos 2.0 on AKS — &lt;a href="https://medium.com/litmus-chaos/litmus-in-aks-f8838cfc551f" rel="noopener noreferrer"&gt;Getting Started with LitmusChaos 2.0 in Azure Kubernetes Service&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting up Azure Credentials as Kubernetes Secret
&lt;/h2&gt;

&lt;p&gt;To let LitmusChaos access your Azure instances, you need to set up the Azure credentials as a Kubernetes secret. It is a very simple process: first, install the Azure CLI (if you haven't already) and log in. Then run this command to save the Azure credentials in an &lt;em&gt;azure.auth&lt;/em&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac --sdk-auth &amp;gt; azure.auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create a &lt;em&gt;secret.yaml&lt;/em&gt; file with the following content. Replace the content under &lt;em&gt;azure.auth&lt;/em&gt; with the contents of your own &lt;em&gt;azure.auth&lt;/em&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  azure.auth: |-
    {
      "clientId": "XXXXXXXXX",
      "clientSecret": "XXXXXXXXX",
      "subscriptionId": "XXXXXXXXX",
      "tenantId": "XXXXXXXXX",
      "activeDirectoryEndpointUrl": "XXXXXXXXX",
      "resourceManagerEndpointUrl": "XXXXXXXXX",
      "activeDirectoryGraphResourceId": "XXXXXXXXX",
      "sqlManagementEndpointUrl": "XXXXXXXXX",
      "galleryEndpointUrl": "XXXXXXXXX",
      "managementEndpointUrl": "XXXXXXXXX"
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now run the following command. Remember to change the namespace if you have installed LitmusChaos in any other namespace&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f secret.yaml -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Updating ChaosHub
&lt;/h2&gt;

&lt;p&gt;As the experiment is only available as a technical preview right now, we will have to update the ChaosHub to use the technical preview (master) branch.&lt;/p&gt;

&lt;p&gt;Log in to the Chaos Center and go to the ChaosHub section, select MyHub there, and click on Edit Hub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmilzg03ss619eoyuxgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmilzg03ss619eoyuxgz.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now change the branch to “master”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc5ig1wafhpka7q9ddw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc5ig1wafhpka7q9ddw7.png" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Submit Now and the ChaosHub will now show the Azure Instance Stop experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scheduling the Experiment Workflow
&lt;/h2&gt;

&lt;p&gt;Now move to the Workflows section and click on Schedule a Workflow. Select the Self-Agent (or any other one if you have multiple agents installed) and click on Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsci244ep0ov0qmrntacq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsci244ep0ov0qmrntacq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the third option to create a workflow from experiments using ChaosHub. Click on Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajgqnhy0gbs89jgw8r0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajgqnhy0gbs89jgw8r0j.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next again (or edit the workflow name if you want to). Then, on the Experiments page, click on Add a new Experiment and select the Azure Instance Stop experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxtb8k10cbhq4iaftjy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxtb8k10cbhq4iaftjy1.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on Edit YAML. You now have to add the Instance Name(s) and the Resource Group name to the ChaosEngine environment variables: scroll down to the ChaosEngine artefacts, where you will see the environment variables, and set the values accordingly. If you are injecting chaos on a Scale Set, set SCALE_SET to “enable”. Save the changes and schedule your workflow.&lt;/p&gt;
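
&lt;p&gt;For reference, the relevant part of the ChaosEngine env section looks roughly like this. The variable names follow the azure-instance-stop experiment docs, and the values are placeholders for illustration, so double-check them against the YAML shown in your portal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
  # comma-separated names of the target instances (placeholder value)
  - name: AZURE_INSTANCE_NAME
    value: "my-instance-01"
  # resource group of the instances (placeholder value)
  - name: RESOURCE_GROUP
    value: "my-resource-group"
  # set to "enable" if the instances are part of a Scale Set
  - name: SCALE_SET
    value: "disable"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;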

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufpvkeis6w0uhweji2fj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufpvkeis6w0uhweji2fj.png" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You need to provide the instance name from the Virtual Machine Scale Set section for Azure AKS nodes, not from the AKS node pools section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4htq2e8wx624flle91k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4htq2e8wx624flle91k.png" alt="Azure Virtual Machine Scale Set Instances section" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Azure Virtual Machine Scale Set Instances section&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Observing the Experiment Run
&lt;/h2&gt;

&lt;p&gt;Great, your workflow is now running and you can check it out: click on Go to Workflow and then select your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kzj1i0jp8cd61qsu4cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kzj1i0jp8cd61qsu4cw.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the status of your instance in the Azure Portal to verify that the experiment is working as expected.&lt;/p&gt;
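
&lt;p&gt;Alternatively, you can check the power state from the Azure CLI (the instance and resource group names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prints e.g. "VM stopped" or "VM running" for the target instance
az vm get-instance-view --name my-instance-01 --resource-group my-resource-group --query "instanceView.statuses[?starts_with(code, 'PowerState')].displayStatus" -o tsv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;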

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forxan08vdhbmt0hlzug2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forxan08vdhbmt0hlzug2.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also click on azure-instance-stop to view the experiment logs. After the given chaos duration, the experiment will automatically power on the instance(s), and it will give a pass/fail verdict. In case the experiment fails, verify through the logs and portal that the instances have started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wg0lmhba5gpl3ca62l9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wg0lmhba5gpl3ca62l9.png" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it — you have successfully run the Azure Instance Stop experiment using the LitmusChaos 2.0 Chaos Center.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1aoun7jq3qikncfd7j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1aoun7jq3qikncfd7j2.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we saw how to perform the Azure Instance Stop experiment using LitmusChaos 2.0. You can learn more about this experiment in the &lt;a href="https://litmuschaos.github.io/litmus/experiments/categories/azure/azure-instance-stop/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;. This experiment is one of the many non-Kubernetes experiments in LitmusChaos, alongside experiments for AWS, GCP, and VMware, all aimed at making Litmus a complete Chaos Engineering toolset for every enterprise, regardless of the technology stack used.&lt;/p&gt;

&lt;p&gt;You can join the LitmusChaos community on &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt; and &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;Slack&lt;/a&gt;. The community is very active and tries to solve queries quickly.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this journey and found the blog interesting. You can leave your queries or suggestions (appreciation as well) in the comments below.&lt;/p&gt;

&lt;p&gt;Show your ❤️ with a ⭐ on our &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt;. To learn more about Litmus, check out the &lt;a href="https://docs.litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus documentation&lt;/a&gt;. Thank you! 🙏&lt;/p&gt;

&lt;p&gt;Thank you for reading&lt;/p&gt;

&lt;p&gt;Akash Shrivastava&lt;/p&gt;

&lt;p&gt;Software Engineer at Harness&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/avaakash/" rel="noopener noreferrer"&gt;Linkedin &lt;/a&gt;| &lt;a href="https://github.com/avaakash" rel="noopener noreferrer"&gt;Github &lt;/a&gt;| &lt;a href="https://instagram.com/avaakash" rel="noopener noreferrer"&gt;Instagram &lt;/a&gt;| &lt;a href="https://twitter.com/_avaakash_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>litmuschaos</category>
      <category>azure</category>
      <category>vm</category>
    </item>
    <item>
      <title>Getting Started with LitmusChaos 2.0 in Azure Kubernetes Service</title>
      <dc:creator>Akash Shrivastava</dc:creator>
      <pubDate>Fri, 09 Jul 2021 15:45:00 +0000</pubDate>
      <link>https://dev.to/litmus-chaos/getting-started-with-litmus-2-0-in-azure-kubernetes-service-13f3</link>
      <guid>https://dev.to/litmus-chaos/getting-started-with-litmus-2-0-in-azure-kubernetes-service-13f3</guid>
      <description>&lt;p&gt;This is a quick tutorial on how to get started with LitmusChaos 2.0 in Azure Kubernetes Service (AKS). We will first create an AKS cluster, then install LitmusChaos 2.0 on it, and finally execute a simple pre-defined chaos workflow using LitmusChaos.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LitmusChaos
&lt;/h2&gt;

&lt;p&gt;LitmusChaos is a toolset to do cloud-native chaos engineering. It provides tools to orchestrate chaos on Kubernetes to help SREs find weaknesses in their deployments. SREs use Litmus to run chaos experiments initially in the staging environment and eventually in production to find bugs and vulnerabilities. Fixing the weaknesses leads to increased resilience of the system.&lt;/p&gt;

&lt;p&gt;Litmus takes a cloud-native approach to create, manage and monitor chaos. Chaos is orchestrated using the following Kubernetes Custom Resource Definitions (CRDs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ChaosEngine: A resource to link a Kubernetes application or Kubernetes node to a ChaosExperiment. ChaosEngine is watched by Litmus’ Chaos-Operator which then invokes Chaos-Experiments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ChaosExperiment: A resource to group the configuration parameters of a chaos experiment. ChaosExperiment CRs are created by the operator when experiments are invoked by ChaosEngine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ChaosResult: A resource to hold the results of a chaos experiment. The Chaos-exporter reads the results and exports the metrics into a configured Prometheus server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information you can visit &lt;a href="https://litmuschaos.io/" rel="noopener noreferrer"&gt;litmuschaos.io&lt;/a&gt; or &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;github.com/litmuschaos/litmus&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Azure CLI — &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt" rel="noopener noreferrer"&gt;How to install on Linux/Debian&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubectl — &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;How to install on Linux&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’d rather not install them, you can always use Azure Cloud Shell, which comes with both tools pre-installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an AKS Cluster
&lt;/h2&gt;

&lt;p&gt;The first step to installing LitmusChaos on an AKS Cluster is to have an AKS Cluster. So let’s do that. Open &lt;a href="http://portal.azure.com" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt; and then log in with your account. You will be presented with the home screen. Now search for &lt;strong&gt;Kubernetes services&lt;/strong&gt; and open it.&lt;/p&gt;

&lt;p&gt;To create a cluster, click on the &lt;strong&gt;Create&lt;/strong&gt; option in the menu and then select &lt;strong&gt;Create a Kubernetes cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6psblbhbuwnr8mt7wjws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6psblbhbuwnr8mt7wjws.png" alt="Creating an AKS Cluster" width="800" height="447"&gt;&lt;/a&gt;&lt;em&gt;Creating an AKS Cluster&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now fill in the details of the cluster you want to create. Since Azure doesn’t charge for cluster management, you only pay for the node instances you run. Enter a name for the cluster (it can be anything) and create a new &lt;strong&gt;Resource Group&lt;/strong&gt; if you don’t already have one. You can leave the other settings as they are, or change them to suit your needs if you know what they do. For the &lt;strong&gt;Node Pool&lt;/strong&gt;, select the &lt;strong&gt;B2ms&lt;/strong&gt; size, which has 2 vCPUs and 8 GiB of RAM, and set the &lt;strong&gt;Node Count&lt;/strong&gt; to 1; since we only want to run LitmusChaos, this will suffice. You are free to choose your own configuration, but keeping a minimum of 2 vCPUs and 8 GiB of RAM will help everything run smoothly. Remember to check that the &lt;strong&gt;Scale Method&lt;/strong&gt; is set to &lt;strong&gt;Manual&lt;/strong&gt; to keep the cost in check.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp50ngrhkm4datasl43dq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp50ngrhkm4datasl43dq.png" alt="Configuring AKS cluster" width="800" height="439"&gt;&lt;/a&gt;&lt;em&gt;Configuring AKS cluster&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can skip the rest of the configuration for now and directly click on &lt;strong&gt;Review + Create&lt;/strong&gt;, which will start creating the cluster. It takes around 5–10 minutes, so you can sit back for a while, grab a glass of water, and read about &lt;a href="https://medium.com/litmus-chaos/a-beginners-practical-guide-to-containerisation-and-chaos-engineering-with-litmuschaos-2-0-5f4f3cf2a55d" rel="noopener noreferrer"&gt;ChaosEngineering&lt;/a&gt; and &lt;a href="https://medium.com/litmus-chaos/litmus-2-0-simplifying-chaos-engineering-for-enterprises-5c3d73ca98d6" rel="noopener noreferrer"&gt;LitmusChaos 2.0&lt;/a&gt;.&lt;/p&gt;
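&lt;p&gt;If you prefer the command line over the portal, an equivalent cluster can be created with the Azure CLI. This is a sketch; the resource group name, cluster name, and region below are placeholders.&lt;/p&gt;

```shell
# Create a resource group (name and region are placeholders)
az group create --name litmus-rg --location eastus

# Create a single-node AKS cluster on a B2ms node (2 vCPUs, 8 GiB RAM)
az aks create \
  --resource-group litmus-rg \
  --name litmus-aks \
  --node-count 1 \
  --node-vm-size Standard_B2ms \
  --generate-ssh-keys
```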

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshdtmfv55ayzwviheiby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshdtmfv55ayzwviheiby.png" alt="AKS Cluster Deployment in Progress" width="800" height="425"&gt;&lt;/a&gt;&lt;em&gt;AKS Cluster Deployment in Progress&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The cluster is ready, and you can now install LitmusChaos on it. You can use the Azure Cloud Shell or your local terminal to connect to the cluster; the steps are the same for both. I personally prefer my local system, so I will use that for this tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to AKS Cluster
&lt;/h2&gt;

&lt;p&gt;Open your cluster and click on the &lt;strong&gt;Connect&lt;/strong&gt; button; this will show you two commands to run. Copy them and run them one by one. The first command sets the account for the given subscription ID, and the second fetches the credentials for the cluster.&lt;/p&gt;
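&lt;p&gt;For reference, the two commands look roughly like this; the subscription ID, resource group, and cluster name below are placeholders for the values shown in your portal.&lt;/p&gt;

```shell
# Set the active subscription (ID is a placeholder)
az account set --subscription 00000000-0000-0000-0000-000000000000

# Fetch kubeconfig credentials for the cluster (names are placeholders)
az aks get-credentials --resource-group litmus-rg --name litmus-aks

# Verify that kubectl can reach the cluster
kubectl get nodes
```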

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91yale4gs401jds8hd9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91yale4gs401jds8hd9j.png" alt="Connecting to AKS Cluster" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Connecting to AKS Cluster&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing LitmusChaos
&lt;/h2&gt;

&lt;p&gt;Now that you have the credentials to access the cluster, you can go ahead and install LitmusChaos 2.0 and run some chaos. For the installation, I will be following their &lt;a href="https://litmusdocs-beta.netlify.app/docs/litmus-install-namespace-mode" rel="noopener noreferrer"&gt;docs&lt;/a&gt;. There are two ways to install: one using Helm, the other by applying the manifest file. I will follow the Helm procedure; you can follow the other one by going through the docs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You will need Helm installed on your system. You can refer to the installation guide &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, add the LitmusChaos Helm repository and then confirm that litmuschaos is present in the repository list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add litmuschaos [https://litmuschaos.github.io/litmus-helm/](https://litmuschaos.github.io/litmus-helm/)
helm repo list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famghtw5gg0sn2adtckd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famghtw5gg0sn2adtckd6.png" alt="Adding litmus to helm repo" width="800" height="210"&gt;&lt;/a&gt;&lt;em&gt;Adding litmus to helm repo&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, create the namespace. By default we use &lt;em&gt;litmus&lt;/em&gt; as the namespace name; you can use any name of your choice, just remember to change it in the following commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create ns litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s install LitmusChaos using the helm repository you just added.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install chaos litmuschaos/litmus-2–0–0-beta --namespace=litmus --devel --set portalScope=namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1knqgiq6w2bwybgltpwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1knqgiq6w2bwybgltpwb.png" alt="Creating litmus namespace and installing LitmusChaos" width="800" height="274"&gt;&lt;/a&gt;&lt;em&gt;Creating litmus namespace and installing LitmusChaos&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are using Helm 2, you will have to run this command instead:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install --name chaos litmuschaos/litmus-2–0–0-beta --namespace=litmus --devel --set portalScope=namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The final step is to install the LitmusChaos CRDs&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-portal/litmus-portal-crds.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndidvuk45oa9m36z28y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndidvuk45oa9m36z28y.png" alt="Installing LitmusChaos CRDs" width="800" height="206"&gt;&lt;/a&gt;&lt;em&gt;Installing LitmusChaos CRDs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s verify that all the services are running and that there have been no issues:&lt;/p&gt;
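&lt;p&gt;For example, assuming the &lt;em&gt;litmus&lt;/em&gt; namespace from the earlier steps:&lt;/p&gt;

```shell
# List the pods and services installed by the Litmus chart
kubectl get pods -n litmus
kubectl get svc -n litmus
```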

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yyj6tftpb5vszydb283.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yyj6tftpb5vszydb283.png" alt="LitmusChaos Services" width="800" height="206"&gt;&lt;/a&gt;&lt;em&gt;LitmusChaos Services&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The services are running properly, but there is one more change you need to make: since AKS doesn’t assign public IPs to nodes by default, you need to change the &lt;strong&gt;litmusportal-frontend-service&lt;/strong&gt; to a LoadBalancer service. You can do that by editing the service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit svc litmusportal-frontend-service -n litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At the very end, inside &lt;strong&gt;spec&lt;/strong&gt;, there is &lt;strong&gt;type: NodePort&lt;/strong&gt;; change it to &lt;strong&gt;type: LoadBalancer&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
 clusterIP: xxxxxxx
 externalTrafficPolicy: Cluster
 ports:
 — name: http
 nodePort: xxxxx
 port: 9091
 protocol: TCP
 targetPort: 8080
 selector:
 app.kubernetes.io/component: litmus-2–0–0-beta-frontend
 sessionAffinity: None
 # Change the type here from NodePort to LoadBalancer
 type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then save it and list the services again. The External-IP may show as pending for a minute; run the command again after a minute to get the IP.&lt;/p&gt;
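&lt;p&gt;As an alternative to editing the service interactively, the same change can be made with a one-line patch, and you can then watch for the IP assignment. This is a sketch, not part of the official docs.&lt;/p&gt;

```shell
# Patch the frontend service type to LoadBalancer in one step
kubectl patch svc litmusportal-frontend-service -n litmus \
  -p '{"spec": {"type": "LoadBalancer"}}'

# Watch the service until EXTERNAL-IP changes from pending to an address
kubectl get svc litmusportal-frontend-service -n litmus -w
```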

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii8knwco7ay1i4bhcq9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii8knwco7ay1i4bhcq9c.png" alt="LitmusChaos Services" width="800" height="135"&gt;&lt;/a&gt;&lt;em&gt;LitmusChaos Services&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;external-ip&amp;gt;:9091
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &amp;lt;external-ip&amp;gt; with the External-IP shown for the litmusportal-frontend-service, and then visit the address in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fniqhkfns8l8m2o9522.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fniqhkfns8l8m2o9522.png" alt="LitmusChaos Portal sign-in page" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;LitmusChaos Portal sign-in page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ta-da! We are done with the installation of LitmusChaos 2.0, and now you can run a workflow. Log in to the portal; the default credentials are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;username: admin
password: litmus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It will ask you to set a new password, after which you can log in to the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Other than changing the frontend service to LoadBalancer, there is another way to make it work with NodePort by enabling public IP for individual nodes. I have not covered it in this article, but feel free to check it out &lt;a href="https://docs.microsoft.com/en-in/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-for-your-node-pools" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Chaos Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpl6e6chh92gso92nser.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpl6e6chh92gso92nser.png" alt="LitmusChaos Portal Dashboard Page" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;LitmusChaos Portal Dashboard Page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s see some chaos happening now. LitmusChaos comes with a few predefined workflows, each of which sets up a service and then wreaks havoc on it. We will run the podtato-head workflow, which creates a simple deployment and then injects the pod-delete experiment into it.&lt;/p&gt;

&lt;p&gt;On the dashboard select &lt;strong&gt;Schedule a Workflow&lt;/strong&gt;. In the Workflows dashboard, select the &lt;strong&gt;Self-Agent&lt;/strong&gt; and then click on &lt;strong&gt;Next&lt;/strong&gt;. In the next screen, select &lt;strong&gt;Create a Workflow from Pre-defined Templates&lt;/strong&gt; and then select &lt;strong&gt;podtato-head&lt;/strong&gt; and then click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7fzek3pg59mvodb6cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7fzek3pg59mvodb6cc.png" alt="Scheduling a podtato-head template-based workflow" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Scheduling a podtato-head template-based workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;On the next screen, you can define the &lt;strong&gt;experiment name, description, and namespace&lt;/strong&gt;; leave the default values and click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On this screen, you can tune the workflow by editing the experiment manifest and adding/removing or arranging the experiments in the workflow. The podtato-head template comes with its own defined workflow so simply click on &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ki7pvmul1bqvytdhb5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ki7pvmul1bqvytdhb5w.png" alt="Tuning the Workflow" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Tuning the Workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next screen lets you adjust the weight of each experiment in the reliability score. Since you are running only one experiment, any value will do; when multiple experiments run together, you can set the importance of each according to your requirements to get a meaningful reliability score. For now, click on &lt;strong&gt;Next&lt;/strong&gt; and select &lt;strong&gt;Schedule now&lt;/strong&gt;; you can also create a recurring schedule if you want the experiment to keep running at certain intervals. The final screen confirms the workflow and schedules it. Click on &lt;strong&gt;Finish&lt;/strong&gt; to run the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l2e58448xosrv7mjbjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l2e58448xosrv7mjbjc.png" alt="Workflow created" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Workflow created&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yay! The workflow is created and is running now. Click on &lt;strong&gt;Go to Workflow&lt;/strong&gt; to open the workflow screen, where you can see all your scheduled workflows. Click on the workflow to see its status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5sy1tf0nbmfvwvqcthp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5sy1tf0nbmfvwvqcthp.png" alt="Workflow Dashboard Page" width="800" height="421"&gt;&lt;/a&gt;&lt;em&gt;Workflow Dashboard Page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The workflow will take a few minutes to run, so you can take a break until then. Meanwhile, you can join the &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;LitmusChaos community on Slack&lt;/a&gt; to stay updated with new releases and get help from the community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7xekoa99pnnalcgimeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7xekoa99pnnalcgimeg.png" alt="Workflow Dashboard for the podtato-head workflow" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Workflow Dashboard for the podtato-head workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The workflow run is now complete; you can view the workflow details using either the graph view or the table view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ob1ymqql0j3l70w1ewi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ob1ymqql0j3l70w1ewi.png" alt="Workflow Completed Graph View" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Workflow Completed Graph View&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yh5j1bucuet9p778mfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yh5j1bucuet9p778mfd.png" alt="Workflow Completed Table View" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Workflow Completed Table View&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;View Logs &amp;amp; Results&lt;/strong&gt; to check out the logs and chaos results for the experiment&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnco3vk5ow8m7dstueoio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnco3vk5ow8m7dstueoio.png" alt="Experiment Logs and Results" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Experiment Logs and Results&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And we are done. You created an AKS cluster, installed LitmusChaos 2.0 on it, logged in to the LitmusChaos Portal, and finally scheduled a workflow.&lt;/p&gt;

&lt;p&gt;You can join the LitmusChaos community on &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Github&lt;/a&gt; and &lt;a href="https://www.notepadonline.org/wmtBaRICHQ" rel="noopener noreferrer"&gt;Slack&lt;/a&gt;. The community is very active and tries to solve queries quickly.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this journey and found the blog interesting. You can leave your queries or suggestions (maybe appreciation as well) in the comments below.&lt;/p&gt;

&lt;p&gt;Thank you for reading&lt;/p&gt;

&lt;p&gt;Akash Shrivastava&lt;/p&gt;

&lt;p&gt;Software Engineer at ChaosNative and a final year CSE student&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/avaakash/" rel="noopener noreferrer"&gt;Linkedin &lt;/a&gt;| &lt;a href="https://github.com/avaakash" rel="noopener noreferrer"&gt;Github &lt;/a&gt;| &lt;a href="https://instagram.com/avaakash" rel="noopener noreferrer"&gt;Instagram &lt;/a&gt;| &lt;a href="https://twitter.com/_avaakash_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>chaosengineering</category>
      <category>litmus</category>
    </item>
  </channel>
</rss>
