<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SamKnowsCoding</title>
    <description>The latest articles on DEV Community by SamKnowsCoding (@samknowscoding).</description>
    <link>https://dev.to/samknowscoding</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F862300%2Fbfcc4e16-a583-4dc9-9316-f5b93536dc2a.jpeg</url>
      <title>DEV Community: SamKnowsCoding</title>
      <link>https://dev.to/samknowscoding</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samknowscoding"/>
    <language>en</language>
    <item>
      <title>What is Infrastructure as Code?</title>
      <dc:creator>SamKnowsCoding</dc:creator>
      <pubDate>Sat, 14 May 2022 14:10:25 +0000</pubDate>
      <link>https://dev.to/samknowscoding/what-is-infrastructure-as-code-5fek</link>
      <guid>https://dev.to/samknowscoding/what-is-infrastructure-as-code-5fek</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jtPRtr-o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqg7fo8kcjycsue716ll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jtPRtr-o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqg7fo8kcjycsue716ll.png" alt="Image description" width="880" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code (IaC)
&lt;/h3&gt;

&lt;p&gt;What is Infrastructure as Code?&lt;br&gt;
TL;DR:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Infrastructure as Code describes infrastructure in an executable text format.&lt;/li&gt;
&lt;li&gt;Infrastructure is short-lived: servers are built on demand through automation, used, and then discarded.&lt;/li&gt;
&lt;li&gt;Rather than patching a running container, immutable delivery modifies the container image and redeploys a new container.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Infrastructure as Code is the practice of &lt;em&gt;&lt;strong&gt;describing infrastructure in text format&lt;/strong&gt;&lt;/em&gt;. This is not documentation: it is an executable text format, which is to say, code. You want to be able to configure your infrastructure with a textual description that you can hand to a tool to execute.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration Management Systems
&lt;/h3&gt;

&lt;p&gt;This configuration, a.k.a. code, is stored somewhere and executed later when needed. The tools that accomplish this are called configuration management systems. &lt;em&gt;&lt;strong&gt;These tools, such as Ansible, Puppet, and Chef, allow you to describe your infrastructure in code; they then create that infrastructure and maintain its state.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
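&lt;p&gt;As a toy illustration of what these tools do at their core, the sketch below compares a desired state, declared as data, against the actual state and computes the actions needed to converge. This is a hypothetical Python example, not how Ansible, Puppet, or Chef are actually implemented; all names are made up.&lt;/p&gt;

```python
# Toy sketch of a configuration management system's core loop:
# declare the desired state as data, inspect the actual state,
# and emit only the actions needed to converge. (Hypothetical names.)

desired_state = {
    "nginx": {"installed": True, "running": True},
    "postgres": {"installed": True, "running": False},
}

def converge(desired, actual):
    """Return the actions needed to bring `actual` in line with `desired`."""
    actions = []
    for name, spec in desired.items():
        current = actual.get(name, {"installed": False, "running": False})
        if spec["installed"] and not current["installed"]:
            actions.append(("install", name))
        if spec["running"] and not current["running"]:
            actions.append(("start", name))
        if not spec["running"] and current["running"]:
            actions.append(("stop", name))
    return actions

# nginx is installed but stopped; postgres is missing entirely.
actual_state = {"nginx": {"installed": True, "running": False}}
print(converge(actual=actual_state, desired=desired_state))
```

&lt;p&gt;Running this yields the actions &lt;code&gt;start nginx&lt;/code&gt; and &lt;code&gt;install postgres&lt;/code&gt;; running it again after applying them would yield no actions at all, which is the idempotency these tools aim for.&lt;/p&gt;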

&lt;p&gt;You never want to make manual changes to the software configuration of a system. Manual changes are not reproducible and are extremely error-prone. &lt;em&gt;&lt;strong&gt;You want to use templates and scripts that describe how to install and configure each element&lt;/strong&gt;&lt;/em&gt;, such as systems, devices, software, and users. You can then store this code in Git so that you have a history of all changes. That way, everyone knows which version is the latest and what the infrastructure should look like. Docker, Vagrant, Terraform, and even Kubernetes also let you describe your infrastructure as code, and this code should be checked into version control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ephemeral Immutable Infrastructure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0GrJMk1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zqvsc1djnzr9imo0pba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0GrJMk1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zqvsc1djnzr9imo0pba.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason this is so important is that server drift is a major cause of failure. Over time, servers are updated for a variety of reasons, and not always by the same people, so they drift from their initial configuration. Sometimes the accumulation of these changes causes failures in unpredictable ways. Worse, there are servers that should be identical, yet one of them keeps failing due to a misconfiguration somewhere. &lt;strong&gt;&lt;em&gt;I think servers should be treated as cattle, not pets. There are thousands of cattle, so it is a waste of time to name and care for each one. A pet, on the other hand, is lovingly pampered and nursed back to health when it gets sick.&lt;/em&gt;&lt;/strong&gt; The message here is to not lovingly handcraft servers or spend too much time debugging them when they don't work. You want to be able to replace them with an identical server that works properly. This means you have to think of your infrastructure as ephemeral, or transient: it exists only for as long as you need it, and you remove it when it is no longer in use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Immutable Delivery via Container
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qebJGj9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m3aiboa4hat2v7nyg7uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qebJGj9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m3aiboa4hat2v7nyg7uo.png" alt="Image description" width="880" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Applications are packaged in containers
&lt;/h4&gt;

&lt;p&gt;Docker is a packaging technology that allows us to bring things up and down in a consistent way in an isolated environment called a container. &lt;/p&gt;

&lt;h4&gt;
  
  
  The same container that runs in production can run on your laptop
&lt;/h4&gt;

&lt;p&gt;Docker supports infrastructure-as-code, allowing you to specify how to build images from code called Dockerfiles. These Dockerfiles build the same images the same way every time. Docker then creates a container from that image in the same way each time it is deployed. This means that the same container running in production can be run on the developer's laptop. This is the ultimate development-production parity.&lt;/p&gt;

&lt;h4&gt;
  
  
  Immediate rollback and rolling update
&lt;/h4&gt;

&lt;p&gt;You're not installing the application, seeing if it works, and uninstalling it if it doesn't. You're simply starting a new version of the Docker container. If it starts having problems, you shut it down and bring back the previous version, which is already packaged in its own container. It's literally a matter of seconds. You can use the same approach for containers that start misbehaving: delete the container and create another one to replace it. The new container will be exactly like the old one was on its first day. This is also why you never patch a running container: if you did, the container would eventually die, a replacement would be brought up, and that replacement would be unpatched. Patch the image instead, and redeploy.&lt;/p&gt;

&lt;p&gt;In short, IaC is an important part of the DevOps process: it describes infrastructure in text and then executes that code to create environments. In practice, by combining technologies such as webhooks, automated testing, and CI/CD pipelines, developers can release code faster and more reliably.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>infrastructure</category>
      <category>aws</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Design for failure</title>
      <dc:creator>SamKnowsCoding</dc:creator>
      <pubDate>Sat, 14 May 2022 08:41:08 +0000</pubDate>
      <link>https://dev.to/samknowscoding/design-for-failure-5f3n</link>
      <guid>https://dev.to/samknowscoding/design-for-failure-5f3n</guid>
      <description>&lt;p&gt;Once you design your application as a collection of &lt;strong&gt;stateless microservices&lt;/strong&gt;, there are a lot of moving parts, which means there is a lot of potential for things to go wrong.&lt;br&gt;
Services can occasionally be unresponsive or even break, so you can't always rely on them being available when you need them. Hopefully these events are very transient, but you don't want your application to fail just because some dependent service is running slow or has a lot of network latency on a given day. That's why you need to design for failure at the application level. Since &lt;strong&gt;failure is inevitable&lt;/strong&gt;, you must build your software to resist failure and to scale horizontally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Embrace the failure
&lt;/h3&gt;

&lt;p&gt;Failure will happen; that is why we must design for failure.&lt;br&gt;
Failure is the only constant. We must change our thinking &lt;strong&gt;from how to avoid failure to how to identify failure when it happens, and how to recover from it&lt;/strong&gt;. This is one of the reasons DevOps measurements moved from “mean time to failure” to “mean time to recovery.” It’s not about trying not to fail. It’s about making sure that when failure happens, and it will, you can recover quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Plan to be throttled and retry, degrade gracefully
&lt;/h3&gt;

&lt;p&gt;Plan to be throttled. You pay for a certain quality of service from your backing services in the cloud, and they hold you to that agreement. Let's say you choose a plan that allows 20 database reads per second. When you exceed that limit, the service will throttle you: you will get a &lt;strong&gt;429 Too Many Requests&lt;/strong&gt; error instead of &lt;strong&gt;200 OK&lt;/strong&gt;, and you will need to handle it.&lt;br&gt;
In this case, you would retry. This logic needs to be in your application code. When you retry, you want to back off exponentially on failure. The idea is to degrade gracefully.&lt;br&gt;
Also, if you can, cache where appropriate so you don't always have to make remote calls to these services if the result won't change.&lt;/p&gt;
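&lt;p&gt;A minimal sketch of this throttle-retry-cache flow in Python, assuming a generic &lt;code&gt;fetch&lt;/code&gt; callable that returns a status code and a body; the function names and limits here are hypothetical, not any particular client library's API.&lt;/p&gt;

```python
import time

# Hypothetical sketch: handle throttling (429) with exponential backoff,
# and cache results so unchanged data doesn't cost a remote call.
cache = {}

def read_with_throttle_handling(key, fetch, max_attempts=4, sleep=time.sleep):
    if key in cache:                      # cache where appropriate, so we
        return cache[key]                 # don't re-call for unchanged results
    delay = 1.0
    for attempt in range(max_attempts):
        status, body = fetch(key)
        if status == 200:                 # 200 OK: store and return
            cache[key] = body
            return body
        if status == 429:                 # 429 Too Many Requests: back off
            sleep(delay)
            delay = delay * 2             # exponential backoff
            continue
        raise RuntimeError(f"unexpected status {status}")
    raise RuntimeError("throttled: retries exhausted")
```

&lt;p&gt;The key point is that the 429 is expected and handled in application code, degrading to a slower response rather than an error.&lt;/p&gt;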

&lt;h2&gt;
  
  
  Retry Pattern
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LRd9NN_n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5r9a5lu78mfon3kd3psd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LRd9NN_n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5r9a5lu78mfon3kd3psd.png" alt="Image description" width="880" height="524"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
This allows the application to handle transient failures by transparently retrying failed operations when connecting to a service or network resource. I have heard developers say, "you have to deploy the database before starting my service, because it expects the database to be there at startup." That is a fragile design, unsuited to cloud-native applications. If the database is not there, your application should wait patiently and retry. You must be able to connect and reconnect, and to fail to connect and then connect again. This is how you design robust cloud-native microservices. The key is a retry policy that backs off exponentially, with longer delays between attempts. Rather than retrying 10 times in a row and overwhelming the service, you try once and let it fail. You wait a second and retry. Then you wait 2 seconds, then 4 seconds, then 8 seconds. &lt;strong&gt;Each time you retry, you increase the wait a bit until all the retries have been used up, and then you return an error condition.&lt;/strong&gt; This gives the back-end service time to recover from whatever caused the failure. It could just be a temporary network delay.&lt;/p&gt;
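&lt;p&gt;The backoff schedule above can be captured in a small helper. This is a hedged sketch, not a particular library's API; &lt;code&gt;ConnectionError&lt;/code&gt; stands in for whatever transient failure your client actually raises.&lt;/p&gt;

```python
import time

def call_with_retry(operation, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `operation`, doubling the wait between attempts: 1s, 2s, 4s, 8s..."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries used up -- surface the error condition
            sleep(base_delay * (2 ** attempt))

# Demo with a hypothetical flaky operation that succeeds on the third try.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) >= 3:
        return "ok"
    raise ConnectionError("transient")

print(call_with_retry(flaky, sleep=lambda s: None))  # prints "ok"
```

&lt;p&gt;Injecting &lt;code&gt;sleep&lt;/code&gt; as a parameter keeps the demo (and tests) fast while the real default waits for actual seconds.&lt;/p&gt;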

&lt;h2&gt;
  
  
  Circuit Breaker Pattern
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DeOVTT1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gc18yvqgghpnprrkqp3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DeOVTT1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gc18yvqgghpnprrkqp3q.png" alt="Image description" width="710" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The circuit breaker pattern works like your home's electric circuit breaker. You may have experienced a tripped breaker: you did something that exceeded the circuit's power limit and the lights went out, so you took a flashlight down to the basement and reset the breaker to get the lights back on. The software pattern works the same way. It is used to identify a problem and then do something to avoid a cascading failure, which is when one unavailable service causes other services to fail in turn. &lt;strong&gt;With the circuit breaker pattern, you avoid this by tripping the breaker and routing callers to an alternate path that returns something useful, until the original service is restored and the breaker closes again.&lt;/strong&gt; As long as the breaker is closed, everything flows normally, and the breaker counts failures. Once the failure count reaches a set threshold, the breaker trips to the open state, and all further calls return an error immediately, without even invoking the protected service. After a timeout, the breaker enters a half-open state and allows a trial call through to the service. If that call fails, the breaker goes back to the open state; if it succeeds, the breaker closes again and normal operation resumes.&lt;/p&gt;
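&lt;p&gt;A minimal circuit breaker is a small state machine over the three states just described. The sketch below is illustrative Python under assumed names, not a production implementation (real breakers, such as those in resilience libraries, also handle concurrency and richer failure policies).&lt;/p&gt;

```python
import time

class CircuitBreaker:
    """Sketch of the circuit breaker pattern.
    closed: calls flow normally while failures are counted.
    open: calls fail fast without touching the service.
    half-open: after a timeout, one trial call decides whether to
    close the breaker again (success) or reopen it (failure)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock  # injectable for testing
        self.failures = 0
        self.state = "closed"
        self.opened_at = None

    def call(self, operation):
        if self.state == "open":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow one trial call through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = operation()
        except ConnectionError:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"  # trip the breaker
                self.opened_at = self.clock()
            raise
        self.failures = 0
        self.state = "closed"
        return result
```

&lt;p&gt;Wrapping every remote call in &lt;code&gt;breaker.call(...)&lt;/code&gt; means that once the service starts failing repeatedly, callers fail fast instead of piling up waiting on a dead dependency.&lt;/p&gt;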

&lt;h2&gt;
  
  
  Bulkhead Pattern
&lt;/h2&gt;

&lt;p&gt;The bulkhead pattern can be used to isolate a failed service and limit the scope of the failure. In this pattern, separate thread pools help you survive a failing dependency: traffic for a healthy service is handled by a thread pool that is still active, even when the pool serving a failed database connection is exhausted. The pattern gets its name from the design of bulkheads on ships. The compartments below the waterline are separated by walls called "bulkheads": if the hull is damaged, only one compartment fills with water, and the bulkheads keep the water from reaching other compartments and sinking the ship. &lt;strong&gt;The bulkhead pattern isolates consumers from failing services, preventing cascading failures and allowing the application to retain some functionality when a service fails. Other services and functions of the application continue to work.&lt;/strong&gt;&lt;/p&gt;
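&lt;p&gt;With Python's standard thread pools, the bulkhead idea can be sketched by giving each downstream dependency its own pool. The service names here are hypothetical.&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Bulkhead sketch: each downstream dependency gets its own small thread
# pool. If the "reports" service hangs and exhausts its pool, the
# "payments" pool still has free threads, so payment calls keep working.
# The failure stays in its compartment.
pools = {
    "payments": ThreadPoolExecutor(max_workers=4, thread_name_prefix="payments"),
    "reports": ThreadPoolExecutor(max_workers=4, thread_name_prefix="reports"),
}

def submit(service, fn, *args):
    """Route a call through the bulkhead (thread pool) for its service."""
    return pools[service].submit(fn, *args)

future = submit("payments", lambda amount: f"charged {amount}", 42)
print(future.result())  # handled by the payments pool
```

&lt;p&gt;Sizing each pool caps how many threads a single slow dependency can tie up, which is the whole point of the compartment walls.&lt;/p&gt;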

&lt;p&gt;In general, failure is inevitable, so we design for failure rather than trying to avoid it. Developers need to build in resilience so their systems recover quickly. The retry pattern handles transient failures by retrying failed operations with increasing delays. The circuit breaker pattern fails fast to avoid cascading failures. The bulkhead pattern isolates failed services to limit the scope of a failure.&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>microservices</category>
      <category>programming</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
