<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Divine Odazie</title>
    <description>The latest articles on DEV Community by Divine Odazie (@kikiodazie).</description>
    <link>https://dev.to/kikiodazie</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F250026%2F2539b14c-4f81-477b-91a1-fd4e6cf6b0de.jpeg</url>
      <title>DEV Community: Divine Odazie</title>
      <link>https://dev.to/kikiodazie</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kikiodazie"/>
    <language>en</language>
    <item>
      <title>Can you use just any Kubernetes project in your regulated environment?</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Fri, 11 Apr 2025 13:16:10 +0000</pubDate>
      <link>https://dev.to/kikiodazie/can-you-use-just-any-kubernetes-project-in-your-regulated-environment-54m2</link>
      <guid>https://dev.to/kikiodazie/can-you-use-just-any-kubernetes-project-in-your-regulated-environment-54m2</guid>
      <description>&lt;p&gt;Can you use just any Kubernetes project in your regulated environment?&lt;/p&gt;

&lt;p&gt;We all know you can't.&lt;/p&gt;

&lt;p&gt;This is one of the challenges someone shared on the k8sprojects.com waitlist.&lt;/p&gt;

&lt;p&gt;The person shared that they are new to Kubernetes.&lt;/p&gt;

&lt;p&gt;And restricted to using it in the GovCloud region of AWS at work.&lt;/p&gt;

&lt;p&gt;Now, where can they go to get up-to-date information based on the experience of others?&lt;/p&gt;

&lt;p&gt;This is why we are building k8sprojects.com.&lt;/p&gt;

&lt;p&gt;Tell us what to build: &lt;a href="https://everythingdevops.typeform.com/k8sprojects" rel="noopener noreferrer"&gt;https://everythingdevops.typeform.com/k8sprojects&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>docker</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>k8sprojects.com: The Kubernetes Review Platform</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Mon, 31 Mar 2025 09:00:00 +0000</pubDate>
      <link>https://dev.to/kikiodazie/k8sprojectscom-the-kuberentes-review-platform-4i8l</link>
      <guid>https://dev.to/kikiodazie/k8sprojectscom-the-kuberentes-review-platform-4i8l</guid>
      <description>&lt;p&gt;Trustpilot for Kubernetes projects? Stay with me!&lt;/p&gt;

&lt;p&gt;KubeCon starts tomorrow; we are going to learn about exciting projects.&lt;/p&gt;

&lt;p&gt;With that, I am happy to announce a project I have been working on for a while.&lt;/p&gt;

&lt;p&gt;k8sprojects.com&lt;/p&gt;

&lt;p&gt;The idea is simple.&lt;/p&gt;

&lt;p&gt;A platform for engineers like you to Discover, Validate, and Review new and existing Kubernetes projects.&lt;/p&gt;

&lt;p&gt;Over my years in the cloud native space, I have often found myself searching for reviews of the tools I want to use.&lt;/p&gt;

&lt;p&gt;I find most of those reviews on Reddit.&lt;/p&gt;

&lt;p&gt;But the sad thing is that most are stale, and some leave out context like:&lt;/p&gt;

&lt;p&gt;↳Number of nodes&lt;/p&gt;

&lt;p&gt;↳Type of company. A fintech product is not the same as others&lt;/p&gt;

&lt;p&gt;↳Team size, etc.&lt;/p&gt;

&lt;p&gt;Also, not everyone is on Reddit or wants to be.&lt;/p&gt;

&lt;p&gt;What if there is a platform where engineering context is prioritized?&lt;/p&gt;

&lt;p&gt;Where you can easily share your thoughts through your GitHub account.&lt;/p&gt;

&lt;p&gt;What if there was a review platform built with cloud-native engineers in mind?&lt;/p&gt;

&lt;p&gt;This is what we are building.&lt;/p&gt;

&lt;p&gt;And if you like the idea, we want you to tell us what to build.&lt;/p&gt;

&lt;p&gt;Join the waitlist: &lt;a href="https://everythingdevops.typeform.com/k8sprojects" rel="noopener noreferrer"&gt;https://everythingdevops.typeform.com/k8sprojects&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And let us know what you want to see.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>cloud</category>
      <category>github</category>
    </item>
    <item>
      <title>Optimize AWS Storage Costs with Amazon S3 Lifecycle Configurations</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Wed, 13 Sep 2023 19:49:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/optimize-aws-storage-costs-with-amazon-s3-lifecycle-configurations-53ok</link>
      <guid>https://dev.to/aws-builders/optimize-aws-storage-costs-with-amazon-s3-lifecycle-configurations-53ok</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/optimize-aws-storage-costs-with-amazon-s3-lifecycle-configurations/" rel="noopener noreferrer"&gt;EverythingDevOps&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Data is the lifeblood of any business. Data is used to make decisions, drive innovation, and serve customers. But data can also be expensive to store at scale in the cloud. That's where storage lifecycle configurations come in. &lt;/p&gt;

&lt;p&gt;An Amazon S3 lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. With lifecycle configurations, you can automatically move old data to a lower-cost storage tier with &lt;strong&gt;Transition actions&lt;/strong&gt; or delete it with &lt;strong&gt;Expiration actions&lt;/strong&gt;. This can save you a significant amount of money on your AWS bill over time.&lt;/p&gt;

&lt;p&gt;This article will start by listing the available transition storage tiers in S3 and then explain scenarios where S3 lifecycle configurations can help optimize storage costs. After that, you will learn about the components of an S3 lifecycle configuration and how to create one. Finally, this article will share considerations to keep in mind when using S3 lifecycle configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 storage classes you can move data to
&lt;/h2&gt;

&lt;p&gt;The following are some of the storage classes that you can move objects to from S3 Standard (default storage class for Amazon S3) with Amazon S3 Lifecycle Configurations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard-IA (Infrequent Access):&lt;/strong&gt; This is a lower-cost storage class than S3 Standard. It is a good choice for objects that are accessed less frequently. It has a 30-day minimum storage duration. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Intelligent-Tiering&lt;/strong&gt;: This is an option for data with changing or unknown access patterns. It has no minimum storage duration. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 One Zone-IA:&lt;/strong&gt; This is for when you want to store infrequently accessed data in a single Availability Zone. It has a 30-day minimum storage duration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier Instant Retrieval:&lt;/strong&gt; You transition data to this storage class if you will only access the data about once a quarter (every three months). It has a 90-day minimum storage duration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier Flexible Retrieval&lt;/strong&gt;: You use this storage class to store data accessed about once a year that can tolerate a retrieval time of minutes to hours. It has a 90-day minimum storage duration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier Deep Archive:&lt;/strong&gt; This is the lowest-cost storage class. It is a good choice for objects that are accessed once a year or less, with a retrieval time of hours. It has a 180-day minimum storage duration. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Use cases of Amazon S3 lifecycle configurations
&lt;/h2&gt;

&lt;p&gt;You can use Amazon S3 lifecycle configurations in many scenarios to help manage your storage costs and optimize your usage. Some prominent use cases include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To automatically move old objects to a lower-cost storage class:&lt;/strong&gt; For example, you could create a lifecycle rule that moves objects that have not been accessed in 30 days to S3 Standard-IA. This would save you money on storage costs while still allowing you to access the objects if you need them in the future.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To automatically delete old objects:&lt;/strong&gt; If you have an S3 bucket that contains temporary files your users create, you could create a lifecycle rule that deletes objects that are older than 7 days. This would help you avoid storing unnecessary data in S3 and incurring unnecessary storage costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;To abort incomplete multipart uploads:&lt;/strong&gt; When you upload a large object to Amazon S3, you can use multipart upload to break the object up into smaller parts and upload them separately. This can be helpful for uploading large objects over a slow network connection. However, if you interrupt a multipart upload, the parts will remain in your S3 bucket, and you will continue to be charged for them.&lt;/p&gt;

&lt;p&gt;To avoid this, you can use a lifecycle configuration to abort incomplete multipart uploads after a specified number of days. This will help to free up storage space in your bucket and prevent you from forgetting about the incomplete upload.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;To keep track of your data retention policies:&lt;/strong&gt; You can use lifecycle configurations to ensure that your data is retained for the required amount of time in accordance with your organization's data retention and compliance policies. For example, you could create a lifecycle rule that keeps all logs for 7 years and then deletes them.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Components of an S3 lifecycle configuration
&lt;/h2&gt;

&lt;p&gt;A lifecycle configuration contains a set of rules. A rule is made up of four components: ID, Filters, Status, and Actions. &lt;/p&gt;

&lt;h3&gt;
  
  
  ID
&lt;/h3&gt;

&lt;p&gt;The ID is a unique identifier for the lifecycle rule. This matters because one lifecycle configuration can have up to 1,000 rules, and the ID makes it easier for you to remember what each rule does. &lt;/p&gt;

&lt;h3&gt;
  
  
  Filters
&lt;/h3&gt;

&lt;p&gt;The filters component defines WHICH objects in your bucket you’d like to take action on. You can decide whether to apply actions to every object in a bucket or only some of them. You can filter based on prefix, object tag, or object size if you select a subset of objects. Or, if you wanted to be more specific, you could filter using a combination of these characteristics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Status
&lt;/h3&gt;

&lt;p&gt;To enable and disable each lifecycle rule, you’d use the status component. When evaluating lifecycle settings and determining the most effective rules for your workload, this can be useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Actions
&lt;/h3&gt;

&lt;p&gt;Arguably the most important component of them all, the actions component is where you define WHAT you want to happen to your objects: are you transitioning them to a lower-cost storage class, or are you deleting them?&lt;/p&gt;

&lt;p&gt;There are six main actions you can use: &lt;strong&gt;Transition&lt;/strong&gt;, &lt;strong&gt;Expiration&lt;/strong&gt;, &lt;strong&gt;NoncurrentVersionExpiration&lt;/strong&gt;, &lt;strong&gt;NoncurrentVersionTransition&lt;/strong&gt;, &lt;strong&gt;ExpiredObjectDeleteMarker&lt;/strong&gt;, and &lt;strong&gt;AbortIncompleteMultipartUpload&lt;/strong&gt;. With Transition actions, you can automatically move old data to a lower-cost storage tier, and Expiration actions automate the deletion of your objects in S3.&lt;/p&gt;

&lt;p&gt;If versioning is enabled for your bucket, Transition and Expiration actions only apply to the most recent version of your object. Use the &lt;strong&gt;NoncurrentVersionTransition&lt;/strong&gt; action to transition noncurrent versions of your object. Similarly, use the &lt;strong&gt;NoncurrentVersionExpiration&lt;/strong&gt; action to remove noncurrent versions of your object.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AbortIncompleteMultipartUpload&lt;/strong&gt; action cleans up incomplete multipart uploads. With it, you set the maximum number of days a multipart upload can remain in progress; any upload still incomplete after that many days is aborted and its parts are deleted. &lt;/p&gt;

&lt;p&gt;To learn more about each of the above components, check out the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html" rel="noopener noreferrer"&gt;Amazon S3 lifecycle rules documentation&lt;/a&gt;.&lt;/p&gt;
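
&lt;p&gt;Putting these components together, here is a sketch of what a small lifecycle configuration could look like in the JSON format the S3 API accepts. The rule IDs, prefix, and day counts below are illustrative, not taken from any real bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "Rules": [
    {
      "ID": "expire-temp-files",
      "Filter": { "Prefix": "tmp/" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    },
    {
      "ID": "abort-stale-multipart-uploads",
      "Filter": {},
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each rule carries the four components: an ID, a Filter (a prefix in the first rule, all objects in the second), a Status, and one or more Actions.&lt;/p&gt;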

&lt;h2&gt;
  
  
  Step-by-Step Guide: How to Create a Lifecycle Configuration on an S3 Bucket (DEMO)
&lt;/h2&gt;

&lt;p&gt;Like creating an S3 bucket, there are several ways to create an S3 lifecycle configuration — using the AWS Console, CLI, SDKs, or REST API. In this demo, you will create a lifecycle configuration for an S3 bucket using the AWS Console. &lt;/p&gt;

&lt;p&gt;The S3 lifecycle configuration you will create in this demo will transition new log data from S3 Standard to S3 Glacier Deep Archive after 30 days, to be stored for compliance purposes, and delete it after 7 years.  &lt;/p&gt;

&lt;p&gt;To follow along in this demo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a demo S3 bucket. Learn how to create one &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Download the following sample log data for demo purposes:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/logpai/loghub/blob/master/Linux/Linux_2k.log" rel="noopener noreferrer"&gt;Linux server log data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/logpai/loghub/blob/master/Windows/Windows_2k.log" rel="noopener noreferrer"&gt;Windows server log data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/logpai/loghub/blob/master/Apache/Apache_2k.log" rel="noopener noreferrer"&gt;Apache server log data&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Upload the above three sample datasets into your demo S3 bucket as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaactbqwroztpdoi9ofz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaactbqwroztpdoi9ofz.png" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create the lifecycle configuration, at the top right corner, click on the “&lt;strong&gt;Management”&lt;/strong&gt; tab, as highlighted in the above image. In the “&lt;strong&gt;Management”&lt;/strong&gt; tab, click “&lt;strong&gt;Create lifecycle rule,”&lt;/strong&gt; as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flexp9i08jlrfa8fvvlzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flexp9i08jlrfa8fvvlzu.png" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next page, name your lifecycle rule, select the “&lt;strong&gt;Apply to all objects in the bucket”&lt;/strong&gt; rule scope option, and other rule actions as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbj9v5cwmw2z6qsr0ng9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbj9v5cwmw2z6qsr0ng9.png" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selecting “&lt;strong&gt;Apply to all objects in the bucket”&lt;/strong&gt; means that the lifecycle rule applies to all the log data in the bucket. &lt;/li&gt;
&lt;li&gt;If you select the other option, you will be able to filter objects by prefix, object tags, object size, or whatever combination suits your use case. Learn more about filters &lt;a href="https://aws.amazon.com/blogs/storage/optimize-storage-costs-with-new-amazon-s3-lifecycle-filters-and-actions/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For the “&lt;strong&gt;Lifecycle rule actions,”&lt;/strong&gt; since there are no other versions of the logs in this demo, select the “&lt;strong&gt;Move current versions of objects between storage classes”&lt;/strong&gt; and &lt;strong&gt;“Expire current versions of objects”&lt;/strong&gt; options. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, select the storage class and the number of days (30) after which to transition, as in the image below, and then the number of days (2557, or 7 years) after which to expire the objects once the retention period is over. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmom06i3ipwucu35txes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmom06i3ipwucu35txes.png" width="800" height="709"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can ignore the warning. It pops up because the objects set to transition to Glacier Deep Archive in this demo are small relative to the size of objects you would transition in a real-world scenario. &lt;/p&gt;

&lt;p&gt;The next step is to review the rule's transitions and actions and then create the rule, as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n7rsrrewauklng1sd5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n7rsrrewauklng1sd5n.png" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you create the rule, you should see it listed as in the image below. From this page, you can edit the rule, enable and disable it, and create more rules. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9o0w3s847pfy3mij45y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9o0w3s847pfy3mij45y.png" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;
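
&lt;p&gt;As a side note, the same rule can also be applied outside the console. Here is a rough AWS CLI equivalent of the rule created above; the bucket name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws s3api put-bucket-lifecycle-configuration \
    --bucket my-demo-log-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "log-retention",
        "Filter": {},
        "Status": "Enabled",
        "Transitions": [{ "Days": 30, "StorageClass": "DEEP_ARCHIVE" }],
        "Expiration": { "Days": 2557 }
      }]
    }'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
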

&lt;p&gt;&lt;strong&gt;Cleaning up&lt;/strong&gt;&lt;br&gt;
Ensure you delete the demo lifecycle rule and S3 bucket to avoid incurring unnecessary AWS charges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Considerations for Creating S3 Lifecycle Configurations
&lt;/h2&gt;

&lt;p&gt;While a lifecycle configuration can be powerful, there are several considerations you should keep in mind when creating one: &lt;/p&gt;

&lt;h3&gt;
  
  
  Moving between storage classes
&lt;/h3&gt;

&lt;p&gt;Think of the S3 storage classes as a staircase. S3 Standard is at the top of the staircase, while S3 Glacier Deep Archive is at the bottom, and all of the other storage classes are in between. &lt;/p&gt;

&lt;p&gt;With lifecycle configurations, this staircase only goes one way: down. Once you transition data down the staircase to a lower-cost storage class, you can't move objects back up. For example, let's say you move your data to S3 One Zone-IA. Once your data transitions to that storage class, you can't use a lifecycle configuration to move your data back to S3 Standard or S3 Intelligent-Tiering. &lt;/p&gt;

&lt;h3&gt;
  
  
  Lifecycle configuration costs
&lt;/h3&gt;

&lt;p&gt;Costs follow a similar staircase model. They fall into two categories: storage transition costs and minimum storage duration fees, both of which increase as you move down the staircase. &lt;/p&gt;

&lt;p&gt;For storage transition costs, you are charged $0.01 for every 1,000 lifecycle transition requests when objects are moved from the S3 Standard to the S3 Standard-IA storage class. As you go down all the way to S3 Glacier Deep Archive, this cost increases to as much as $0.05 for every 1,000 transition requests. For example, transitioning 1,000,000 objects to S3 Glacier Deep Archive would cost about 1,000,000 / 1,000 × $0.05 = $50 in transition requests alone. &lt;/p&gt;

&lt;p&gt;As for minimum storage duration fees, most storage classes have a minimum storage duration, and if you delete, overwrite, or transition objects before that period elapses, you are still charged for it. These minimum storage duration periods also increase as you go down the staircase.&lt;/p&gt;

&lt;p&gt;To learn more about the considerations when creating S3 lifecycle configurations, check out &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html" rel="noopener noreferrer"&gt;this documentation&lt;/a&gt; on that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned what you need to get started creating Amazon S3 lifecycle configurations, from their use cases to the key considerations to keep in mind. There is much more to learn about Amazon S3. To do so, explore &lt;a href="https://docs.aws.amazon.com/s3/index.html" rel="noopener noreferrer"&gt;the S3 documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Rapid development on AWS EKS using Garden</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Tue, 23 May 2023 09:42:24 +0000</pubDate>
      <link>https://dev.to/hackmamba/rapid-development-on-aws-eks-using-garden-4o8b</link>
      <guid>https://dev.to/hackmamba/rapid-development-on-aws-eks-using-garden-4o8b</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://hackmamba.io/blog/2023/05/rapid-development-on-aws-eks-using-garden/" rel="noopener noreferrer"&gt;Hackmamba&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the past two decades, massive changes have occurred in software development. From waterfall to agile methodology, silos to DevOps philosophy, on-premise to cloud computing, and so on, these changes allow engineering teams to develop high-quality software at the fastest pace ever.&lt;/p&gt;

&lt;p&gt;"Fastest pace ever?" some developers will question. With all the improvements, they recall the days in which they had to wait before testing their changes due to messy shared environments and their struggles with internal tooling. At best, this is frustrating; at worst, it is a severe drain on developer productivity. &lt;/p&gt;

&lt;p&gt;This article introduces &lt;a href="https://garden.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Garden&lt;/a&gt;, explains how it works, and walks you through how to set up Garden for development on an AWS EKS cluster using an example project. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Garden.io?
&lt;/h2&gt;

&lt;p&gt;Garden is an all-in-one platform that simplifies and speeds up software development by combining rapid development, testing, and DevOps automation. Garden allows you to create realistic cloud-native environments for every stage of your software development lifecycle (SDLC) without worrying about the differences between environments (dev, CI, and prod).&lt;/p&gt;

&lt;p&gt;With Garden, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;program your workflows across all stages of your SDLC,&lt;/li&gt;
&lt;li&gt;develop faster in production-like environments at each stage, accompanied by live reloading,&lt;/li&gt;
&lt;li&gt;write end-to-end tests faster, and&lt;/li&gt;
&lt;li&gt;reduce lead time thanks to smart caching, which dramatically speeds up every step of the process.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does Garden work?
&lt;/h2&gt;

&lt;p&gt;You might ask, “how does Garden achieve all this?” The high-level answer is that Garden builds a &lt;a href="https://docs.garden.io/basics/stack-graph?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Stack Graph&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The Stack Graph is an opinionated graph structure that allows you to describe your whole stack in a consistent and structured way without having to write massive scripts or monolithic configuration files.&lt;/p&gt;

&lt;p&gt;The Stack Graph is based on the idea that all &lt;a href="https://www.blameless.com/blog/devops-workflow" rel="noopener noreferrer"&gt;DevOps workflows&lt;/a&gt; can be fully described in terms of the following four actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;build it&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;deploy it&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;test it&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;run&lt;/strong&gt; (for running ad-hoc tasks)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the Stack Graph, you define each component of your stack independently in terms of the four actions above, using straightforward, understandable YAML declarations and without altering any of your current code. Garden compiles every declaration you make, even those spread over different repositories, into a complete graph of your stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6ctlkidas3m73tbqi1j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6ctlkidas3m73tbqi1j.jpg" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source — &lt;a href="https://github.com/garden-io/garden" rel="noopener noreferrer"&gt;Garden on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Stack Graph makes your workflows portable and reproducible across your entire SDLC. Garden can execute the Stack Graph in any of the environments shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5waez7di5ms3ynhafnup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5waez7di5ms3ynhafnup.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source — &lt;a href="https://youtu.be/3gMJWGV0WE8" rel="noopener noreferrer"&gt;Garden on Youtube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moreover, you can easily add components to your stack without introducing more complexity to your workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpyimalqox1pisnpve5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpyimalqox1pisnpve5v.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source — &lt;a href="https://youtu.be/3gMJWGV0WE8" rel="noopener noreferrer"&gt;Garden on Youtube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about how Garden works, &lt;a href="https://docs.garden.io/basics/how-garden-works" rel="noopener noreferrer"&gt;check out its documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this article’s demo, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Garden CLI installed on your machine — see how to install it &lt;a href="https://docs.garden.io/basics/quickstart#step-1-install-garden" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Basic-to-intermediate understanding of AWS and EKS (Amazon Elastic Kubernetes Service).&lt;/li&gt;
&lt;li&gt;A running AWS EKS cluster. If you don't have one, you can create a demo cluster using &lt;a href="https://gist.github.com/Kikiodazie/4e2a3cdc79821c5e3e7429a2203647a2" rel="noopener noreferrer"&gt;this YAML configuration&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;kubectl&lt;/code&gt; command-line tool configured to communicate with the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Configuring a project for use with Garden
&lt;/h2&gt;

&lt;p&gt;This article will use a simple example project created by Garden. To clone the project and enter its directory, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/garden-io/garden.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;garden/examples/demo-project-start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;demo-project-start&lt;/code&gt; project contains two directories, each with one container service: &lt;code&gt;backend&lt;/code&gt; and &lt;code&gt;frontend&lt;/code&gt;. To configure this project for use with Garden, you must first define a boilerplate Garden project and then a Garden module for each service.&lt;/p&gt;

&lt;p&gt;To create the boilerplate Garden project, use the following helper command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;garden create project &lt;span class="nt"&gt;--skip-comments&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will create a basic boilerplate project configuration — &lt;code&gt;project.garden.yml&lt;/code&gt; — in the current directory, as seen below. This is the project's configuration root. The &lt;code&gt;--skip-comments&lt;/code&gt; flag removes all the comments that reveal all the available options for the configuration. You can omit it to see all of the options.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Project&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-project-start&lt;/span&gt;
&lt;span class="na"&gt;environments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;providers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-kubernetes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every Garden command runs against one of the environments defined in the &lt;code&gt;project.garden.yml&lt;/code&gt; configuration. The above configuration has one environment (&lt;code&gt;default&lt;/code&gt;) and a single provider. A provider is the plugin that tells Garden how to build and deploy to an environment. The two most commonly used providers in Garden are the local Kubernetes provider and the remote Kubernetes provider.&lt;/p&gt;
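&lt;p&gt;For instance, a project that targets both a local cluster for day-to-day work and a shared remote cluster could pair each environment with its own provider. The environment names and context below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: Project
name: demo-project-start
environments:
  - name: local
  - name: remote
providers:
  - name: local-kubernetes
    environments: [local]
  - name: kubernetes
    environments: [remote]
    context: my-remote-context # illustrative kubectl context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;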

&lt;p&gt;To learn more about environments and providers, see the &lt;a href="https://docs.garden.io/using-garden/projects" rel="noopener noreferrer"&gt;Garden Projects documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Later in the article, you will edit this boilerplate config to define a &lt;code&gt;dev&lt;/code&gt; environment, connect to your AWS EKS cluster, etc. &lt;/p&gt;

&lt;p&gt;Next, create &lt;a href="https://docs.garden.io/using-garden/modules" rel="noopener noreferrer"&gt;module&lt;/a&gt; configs (&lt;code&gt;garden.yml&lt;/code&gt;) for each container service, starting with &lt;code&gt;backend&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;backend
&lt;span class="nv"&gt;$ &lt;/span&gt;garden create module &lt;span class="nt"&gt;--skip-comments&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll get a suggestion to make it a &lt;code&gt;container&lt;/code&gt; module. Choose that, and give it the default name as well. Then do the same for the &lt;code&gt;frontend&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;frontend
&lt;span class="nv"&gt;$ &lt;/span&gt;garden create module &lt;span class="nt"&gt;--skip-comments&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
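
&lt;p&gt;After these steps, each service directory contains a minimal module config. The generated &lt;code&gt;garden.yml&lt;/code&gt; should look roughly like the following (the exact contents may vary by Garden version); this sketch shows the &lt;code&gt;backend&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: Module
name: backend
type: container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;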



&lt;p&gt;This is now enough configuration to build the project. But before you can deploy it, you need to configure &lt;code&gt;services&lt;/code&gt; in each module configuration and then connect to a remote EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring services in each module&lt;/strong&gt;&lt;br&gt;
Starting with the &lt;code&gt;backend&lt;/code&gt; container service, open the &lt;code&gt;garden.yml&lt;/code&gt; file and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
        &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;ingresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/hello-backend&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is enough information for Garden to deploy and expose the &lt;code&gt;backend&lt;/code&gt; service. The full module config should look like the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fvm4x4adkqkotzi5g16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fvm4x4adkqkotzi5g16.png" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, for the &lt;code&gt;frontend&lt;/code&gt; service, add the following to its &lt;code&gt;garden.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;ingresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/hello-frontend&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/call-backend&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above mirrors the &lt;code&gt;backend&lt;/code&gt; config and additionally declares a runtime dependency on the &lt;code&gt;backend&lt;/code&gt; service. The full module config should look like the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tzuystbap7dcstj3n7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tzuystbap7dcstj3n7e.png" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to the EKS Cluster and ECR container registry
&lt;/h2&gt;

&lt;p&gt;One of Garden's most powerful features is the ability to build images inside your development cluster, removing the need for a local Docker daemon or a local Kubernetes cluster. To enable in-cluster building, you will need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;configure the remote Kubernetes plugin&lt;/li&gt;
&lt;li&gt;and then configure access to the remote deployment registry for built images. While testing, you can skip this step and use the in-cluster registry Garden already provides; however, keep in mind that it may run into scaling problems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuring the remote Kubernetes plugin&lt;/strong&gt;&lt;br&gt;
To configure the remote Kubernetes plugin, update the project-level configuration file, &lt;code&gt;project.garden.yml&lt;/code&gt;, with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The context for your EKS cluster. You can get the context of your EKS cluster with the following kubectl command:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnitjuypdldn0bl6ffz2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnitjuypdldn0bl6ffz2i.png" width="800" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The hostname for your services.&lt;/li&gt;
&lt;li&gt;The build mode. This article uses &lt;a href="https://docs.garden.io/kubernetes-plugins/advanced/in-cluster-building#kaniko" rel="noopener noreferrer"&gt;kaniko&lt;/a&gt; build mode, which works well for most scenarios.&lt;/li&gt;
&lt;li&gt;The image deployment registry.&lt;/li&gt;
&lt;li&gt;The name(s) and namespace(s) of the ImagePullSecret(s) used by your cluster. This article uses just one ImagePullSecret to authenticate your AWS ECR.&lt;/li&gt;
&lt;li&gt;A TLS secret (optional).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before updating the &lt;code&gt;project.garden.yml&lt;/code&gt;, create the ImagePullSecret. To do so, first create a &lt;code&gt;config.json&lt;/code&gt; file and add the following JSON configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"credHelpers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"&amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecr-login"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;&amp;lt;aws_account_id&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;region&amp;gt;&lt;/code&gt; are placeholders that you need to replace for your registry. &lt;/p&gt;
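&lt;p&gt;For example, with a hypothetical account ID of &lt;code&gt;123456789012&lt;/code&gt; and the &lt;code&gt;us-east-1&lt;/code&gt; region, the file would read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "credHelpers": {
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;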

&lt;p&gt;Next, create the ImagePullSecret in your cluster (you can replace the default namespace; make sure it's correctly referenced in the &lt;code&gt;project.garden.yml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; default create secret generic ecr-config &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.dockerconfigjson&lt;span class="o"&gt;=&lt;/span&gt;./config.json &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubernetes.io/dockerconfigjson
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, update your &lt;code&gt;project.garden.yml&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Project&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-project-start&lt;/span&gt;
&lt;span class="na"&gt;environments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
  &lt;span class="na"&gt;providers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your_eks_cluster_context&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;defaultHostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test.com //for demo purposes&lt;/span&gt;
    &lt;span class="na"&gt;buildMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kaniko&lt;/span&gt; 
    &lt;span class="na"&gt;deploymentRegistry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
    &lt;span class="na"&gt;imagePullSecrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ecr-config&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;defaultEnvironment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above YAML configuration, note that when you specify &lt;code&gt;&amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com&lt;/code&gt; and &lt;code&gt;namespace: test&lt;/code&gt; for the &lt;code&gt;deploymentRegistry&lt;/code&gt; field, a container module named &lt;code&gt;backend&lt;/code&gt; will, after building, be tagged and pushed to &lt;code&gt;&amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/test/backend:v-&amp;lt;module-version&amp;gt;&lt;/code&gt;. That image ID will then be used in Kubernetes manifests when running containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring access to ECR container registry&lt;/strong&gt;&lt;br&gt;
Before you configure access to ECR, create two ECR repositories, one for the &lt;code&gt;backend&lt;/code&gt; container and one for the &lt;code&gt;frontend&lt;/code&gt; container, as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8gdzs8w88n17tm5i0v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8gdzs8w88n17tm5i0v0.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;
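&lt;p&gt;You can create the repositories in the AWS console or, if you prefer the CLI, with commands along these lines (the &lt;code&gt;test/&lt;/code&gt; prefix matches the &lt;code&gt;deploymentRegistry&lt;/code&gt; namespace used in this article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws ecr create-repository --repository-name test/backend
$ aws ecr create-repository --repository-name test/frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;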

&lt;p&gt;To configure access to ECR, grant your cluster's worker node IAM role permission to push to ECR by adding the policy below to each repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowPushPull"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;account-id&amp;gt;:role/&amp;lt;k8s_worker_iam_role&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:BatchGetImage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:BatchCheckLayerAvailability"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:CompleteLayerUpload"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:GetDownloadUrlForLayer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:InitiateLayerUpload"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:PutImage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:UploadLayerPart"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above policy, &lt;code&gt;arn:aws:iam::&amp;lt;account-id&amp;gt;:role/&amp;lt;k8s_worker_iam_role&amp;gt;&lt;/code&gt; is the &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html" rel="noopener noreferrer"&gt;ARN&lt;/a&gt; of the IAM role attached to your &lt;a href="https://everythingdevops.dev/kubernetes-architecture-explained-worker-nodes-in-a-cluster/" rel="noopener noreferrer"&gt;cluster worker nodes&lt;/a&gt;. To find it, open the &lt;strong&gt;Roles&lt;/strong&gt; section of your &lt;strong&gt;IAM&lt;/strong&gt; dashboard and search for your cluster name, as seen in the image below. Edit the policy so that it contains that role's ARN, then copy the policy. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqsjgs3e2hzrj4h7g0vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqsjgs3e2hzrj4h7g0vg.png" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;
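&lt;p&gt;If your worker nodes belong to an EKS managed node group, you can also look up the role ARN from the CLI; the cluster and node group names below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws eks describe-nodegroup \
    --cluster-name &amp;lt;your_cluster_name&amp;gt; \
    --nodegroup-name &amp;lt;your_nodegroup_name&amp;gt; \
    --query 'nodegroup.nodeRole' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;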

&lt;p&gt;To add the above policy, navigate to the &lt;strong&gt;Permissions&lt;/strong&gt; section of each repository as in the image below, click &lt;strong&gt;Edit policy JSON&lt;/strong&gt;, paste the policy, then save. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pheq9b9rqvrc2vt7b8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pheq9b9rqvrc2vt7b8b.png" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After saving, the policy should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo3in0ghnbx0zadm7xiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo3in0ghnbx0zadm7xiq.png" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all that done, you can now deploy and test your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying and testing the Garden project
&lt;/h2&gt;

&lt;p&gt;From the directory containing the &lt;code&gt;project.garden.yml&lt;/code&gt;, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;garden deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your services deploy, similar to the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xygf5kw28y4uvqqwrvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xygf5kw28y4uvqqwrvp.png" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can verify that the images were built and pushed to ECR and that the pods are running using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; demo-project-start-default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbfv82q8it1du2c85kd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbfv82q8it1du2c85kd4.png" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To set up tests, similar to how you configured the &lt;code&gt;services&lt;/code&gt; earlier, open the &lt;code&gt;frontend/garden.yml&lt;/code&gt; config and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unit&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;npm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integ&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;npm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;integ&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above config defines two simple test suites. One runs the unit tests of the &lt;code&gt;frontend&lt;/code&gt; service. The other runs a basic integration test that relies on the &lt;code&gt;frontend&lt;/code&gt; service being up and running.&lt;/p&gt;

&lt;p&gt;To run the tests, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;garden &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the tests pass, similar to that shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax76asuk58424k0oq0a7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax76asuk58424k0oq0a7.png" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all that done, you can move on from this simple example to &lt;a href="https://docs.garden.io/tutorials/your-first-project/4-configure-your-project" rel="noopener noreferrer"&gt;configuring your own project with these steps&lt;/a&gt;, regardless of its complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned about Garden, how it works, and how to configure an existing project to use it for development on an AWS EKS cluster. There is so much more to learn about Garden; to do so, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.garden.io/" rel="noopener noreferrer"&gt;Garden documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://garden.io/blog/retool-developer-satisfaction-score" rel="noopener noreferrer"&gt;Case study: How Retool improved developer satisfaction scores by 50%&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://garden.io/blog/remote-dev-codespaces" rel="noopener noreferrer"&gt;The ultimate remote development experience with GitHub Codespaces and Garden&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://garden.io/blog/environment-as-a-service-tooling" rel="noopener noreferrer"&gt;How Environment-as-a-Service tooling reduces friction across the SDLC&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
      <category>aws</category>
      <category>garden</category>
    </item>
    <item>
      <title>How to restart Kubernetes Pods with kubectl</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Thu, 16 Mar 2023 01:26:46 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-restart-kubernetes-pods-with-kubectl-4g5h</link>
      <guid>https://dev.to/everythingdevops/how-to-restart-kubernetes-pods-with-kubectl-4g5h</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-restart-kubernetes-pods-with-kubectl/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anyone who has used Kubernetes for an extended period of time will know that things don’t always go as smoothly as you’d like. In production, unexpected things happen, and Pods can crash or fail in some unforeseen way. When this happens, you need a reliable way to restart the Pods. &lt;/p&gt;

&lt;p&gt;Restarting a Pod is not the same as restarting a container, as a Pod is not a process but an environment for running container(s). A Pod persists until it finishes execution, is deleted, is &lt;em&gt;evicted&lt;/em&gt; for lack of resources, or its host node fails.&lt;/p&gt;

&lt;p&gt;This article will list 4 scenarios where you might want to restart a Kubernetes Pod and walk you through methods to restart Pods with kubectl.&lt;/p&gt;

&lt;h1&gt;
  
  
  4 scenarios where you might want to restart a Pod
&lt;/h1&gt;

&lt;p&gt;There are several scenarios where you need to restart a Pod. The following are 4 of them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Unexpected errors such as “&lt;strong&gt;Pods stuck in an inactive state&lt;/strong&gt;” (e.g., pending) or “&lt;strong&gt;Out of Memory&lt;/strong&gt;” (which occurs when Pods try to exceed the memory limits set in your manifest file).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To easily upgrade a Pod with a newly-pushed container image if you previously set the PodSpec &lt;code&gt;imagePullPolicy&lt;/code&gt; to &lt;code&gt;Always&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To update configurations and secrets. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To clear a corrupted internal state in the application running in the Pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you’ve seen some scenarios where you might want to restart a Pod. Next, you will learn how to restart Pods with kubectl.&lt;/p&gt;

&lt;h1&gt;
  
  
  Restarting Kubernetes pods with kubectl
&lt;/h1&gt;

&lt;p&gt;kubectl, by design, doesn’t have a direct command for restarting Pods. Because of this, to restart Pods with kubectl, you have to use one of the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restarting Kubernetes Pods by changing the number of replicas with &lt;code&gt;kubectl scale&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Downtimeless restarts with &lt;code&gt;kubectl rollout restart&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Automatic restarts by updating the Pod’s environment variable&lt;/li&gt;
&lt;li&gt;Restarting Pods by deleting them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you learn how to use each of the above methods, ensure you have the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster. The demo in this article was done using &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; — a single-node Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;The kubectl command-line tool configured to communicate with the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For demo purposes, in any desired directory, create a &lt;code&gt;httpd-deployment.yaml&lt;/code&gt; file with &lt;code&gt;replicas&lt;/code&gt; set to &lt;code&gt;2&lt;/code&gt; using the following YAML configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd-pod&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your terminal, change to the directory where you saved the deployment file, and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f httpd-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will create the httpd deployment with two pods. To verify the number of Pods, run the &lt;code&gt;$ kubectl get pods&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6mci31igxffhj5rd4le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6mci31igxffhj5rd4le.png" alt="Creating and verifying an httpd deployment with kubectl" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have the Pods of the httpd deployment running. Next, you will use each of the methods outlined earlier to restart the Pods. &lt;/p&gt;

&lt;h2&gt;
  
  
  Restarting Kubernetes Pods by changing the number of replicas
&lt;/h2&gt;

&lt;p&gt;In this method of restarting Kubernetes Pods, you scale the number of the deployment replicas down to zero, which stops and terminates all the Pods. Then you scale them back up to the desired state, which initializes new pods. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When you set the number of replicas to zero, the Pods stop running, so there will be some application downtime. &lt;/p&gt;

&lt;p&gt;To scale down the httpd deployment replicas you created, run the following &lt;code&gt;kubectl scale&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl scale deployment httpd-deployment --replicas=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will show an output indicating that Pods have been scaled, as shown in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9tpqt1fxcsh1gylttv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9tpqt1fxcsh1gylttv.png" alt="Scaling Pods down" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the Pods were stopped and terminated, run &lt;code&gt;$ kubectl get pods&lt;/code&gt;, and you should get the &lt;strong&gt;“No resources found in default namespace”&lt;/strong&gt; message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujkhrbpuodmwbw6c1ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujkhrbpuodmwbw6c1ab.png" alt="Showing Pods" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To scale the replicas back up, run the same &lt;code&gt;kubectl scale&lt;/code&gt; command, but this time with &lt;code&gt;--replicas=2&lt;/code&gt;. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl scale deployment httpd-deployment --replicas=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, to verify the number of pods running, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And you should see each Pod back up and running after restarting, as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff595jefd8f9sln27g7ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff595jefd8f9sln27g7ci.png" alt="Scaling Pods up" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;
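&lt;p&gt;If you script this method, you can make sure the old Pods are fully gone before scaling back up by waiting on their deletion. A minimal sketch, assuming the &lt;code&gt;httpd-deployment&lt;/code&gt; and its &lt;code&gt;app=httpd&lt;/code&gt; label from earlier:&lt;/p&gt;

```shell
# Scale to zero, wait until the old Pods are fully deleted,
# then scale back up to the desired replica count.
kubectl scale deployment httpd-deployment --replicas=0
kubectl wait --for=delete pod -l app=httpd --timeout=60s
kubectl scale deployment httpd-deployment --replicas=2
```

&lt;p&gt;Note that this still incurs downtime while the replica count is zero.&lt;/p&gt;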

&lt;h2&gt;
  
  
  Downtimeless restarts with Rollout restart
&lt;/h2&gt;

&lt;p&gt;In the previous method, you scaled down the number of replicas to zero to restart the Pods; doing so caused an outage and downtime of the application. To restart without any outage and downtime, use the &lt;code&gt;kubectl rollout restart&lt;/code&gt; command, which restarts the Pods one by one without impacting the deployment.&lt;/p&gt;

&lt;p&gt;To use &lt;code&gt;rollout restart&lt;/code&gt; on your httpd deployment, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout restart deployment httpd-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now to view the Pods restarting, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice in the image below that Kubernetes creates each new Pod first and only moves the corresponding old Pod to &lt;code&gt;Terminating&lt;/code&gt; once the new one reaches &lt;code&gt;Running&lt;/code&gt; status. Because of this approach, there is no downtime with this restart method. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95lb0e11j3f137f2obzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95lb0e11j3f137f2obzb.png" alt="Using kubectl rollout restart" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;
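&lt;p&gt;When triggering this restart from a script or CI pipeline, it helps to block until the rollout has finished. A short sketch using the same deployment:&lt;/p&gt;

```shell
# Trigger the rolling restart, then wait for it to complete;
# "rollout status" exits non-zero if the rollout fails or times out.
kubectl rollout restart deployment httpd-deployment
kubectl rollout status deployment httpd-deployment --timeout=120s
```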

&lt;h2&gt;
  
  
  Automatic restarts by updating the Pod’s environment variable
&lt;/h2&gt;

&lt;p&gt;So far, you’ve learned two ways of restarting Pods in Kubernetes: by changing the number of replicas and by rollout restart. Both methods work, but with each of them you explicitly restarted the Pods. &lt;/p&gt;

&lt;p&gt;In this method, once you update the Pod’s &lt;a href="https://kubebyexample.com/en/concept/environment-variables" rel="noopener noreferrer"&gt;environment variable&lt;/a&gt;, the change will automatically restart the Pods.&lt;/p&gt;

&lt;p&gt;To update the environment variables of the Pods in your httpd deployment, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl set env deployment httpd-deployment DATE=$()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, which adds a &lt;code&gt;DATE&lt;/code&gt; environment variable with an empty (null) value (&lt;code&gt;=$()&lt;/code&gt;) to the Pods, run &lt;code&gt;$ kubectl get pods&lt;/code&gt; and see the Pods restarting, similar to the rollout restart method. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7oxvt6h8emchs0ik3c8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7oxvt6h8emchs0ik3c8.png" alt="Adding an environment variable to Pods" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;
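&lt;p&gt;A variation on this technique is to set the variable to a value that changes on every run, such as the current Unix timestamp, so you can rerun the same command whenever you want another restart. The variable name &lt;code&gt;RESTARTED_AT&lt;/code&gt; below is only an example:&lt;/p&gt;

```shell
# Each run sets a different timestamp, so the Pod template changes
# and Kubernetes performs a rolling restart every time.
kubectl set env deployment httpd-deployment RESTARTED_AT="$(date +%s)"
```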

&lt;p&gt;You can verify that each Pod’s &lt;code&gt;DATE&lt;/code&gt; environment variable is null with the &lt;code&gt;kubectl describe&lt;/code&gt; command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe pod &amp;lt;pod_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, the &lt;code&gt;DATE&lt;/code&gt; variable is empty (null) like in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r0f0r1t9ftg8qtsc70v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r0f0r1t9ftg8qtsc70v.png" alt="Verifying environment variable addition" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Restarting Pods by deleting them
&lt;/h2&gt;

&lt;p&gt;Because the Kubernetes API is declarative, it automatically creates a replacement when you delete a Pod that’s part of a ReplicaSet or Deployment. The ReplicaSet notices the Pod is no longer available because the number of running Pods drops below the target replica count.&lt;/p&gt;

&lt;p&gt;To delete a Pod, use the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete pod &amp;lt;pod_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Though this method works quickly, it is not recommended unless you have a failed or misbehaving Pod or set of Pods. For routine restarts, such as after updating configurations, it is better to use the &lt;code&gt;kubectl scale&lt;/code&gt; or &lt;code&gt;kubectl rollout&lt;/code&gt; commands, which are designed for that use case.&lt;/p&gt;
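&lt;p&gt;If you do need to delete several Pods at once, you can select them by label instead of typing each name, using the &lt;code&gt;app=httpd&lt;/code&gt; label from the deployment above:&lt;/p&gt;

```shell
# Delete every Pod matching the label; the ReplicaSet immediately
# creates replacements to restore the desired replica count.
kubectl delete pods -l app=httpd
```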

&lt;p&gt;To delete all failed Pods for this restart technique, use this command:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete pods --field-selector=status.phase=Failed&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Cleaning up
&lt;/h2&gt;

&lt;p&gt;Clean up the entire setup by deleting the deployment with the command below:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete deployment httpd-deployment&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This article discussed five scenarios where you might want to restart Kubernetes Pods and walked you through four methods of doing so with kubectl. There is much more to learn about kubectl; check out the &lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="noopener noreferrer"&gt;kubectl commands reference&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Deploy a Multi Container Docker Compose Application On Amazon EC2</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Thu, 16 Mar 2023 01:24:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-deploy-a-multi-container-docker-compose-application-on-amazon-ec2-59n2</link>
      <guid>https://dev.to/aws-builders/how-to-deploy-a-multi-container-docker-compose-application-on-amazon-ec2-59n2</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-deploy-a-multi-container-docker-compose-application-on-amazon-ec2/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Container technology streamlined how you build, test, and deploy software, from local environments to the cloud or on-premise data centers. But alongside those benefits came the problem of manually starting and stopping each container when building multi-container applications.&lt;/p&gt;

&lt;p&gt;To solve this problem, Docker Inc created &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. You can use Docker Compose to simplify running multi-container applications with as few as two commands: &lt;code&gt;docker-compose up&lt;/code&gt; and &lt;code&gt;docker-compose down&lt;/code&gt;. &lt;/p&gt;
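&lt;p&gt;For illustration, a minimal (hypothetical) &lt;code&gt;docker-compose.yml&lt;/code&gt; describing two services might look like the sketch below; &lt;code&gt;docker-compose up&lt;/code&gt; starts both containers and &lt;code&gt;docker-compose down&lt;/code&gt; stops and removes them:&lt;/p&gt;

```yaml
# docker-compose.yml: a hypothetical two-service example
version: "3"
services:
  web:
    image: httpd:latest
    ports:
      - "80:80"   # publish the web server on the host
  db:
    image: mongo:latest
```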

&lt;p&gt;In this article, you will learn: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Amazon EC2 is&lt;/li&gt;
&lt;li&gt;How to create and connect to an Amazon EC2 instance&lt;/li&gt;
&lt;li&gt;How to deploy a Docker Compose application on EC2&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along in this article, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of the terminal.&lt;/li&gt;
&lt;li&gt;Some experience with Docker and Docker Compose.&lt;/li&gt;
&lt;li&gt;An AWS account — Learn how to create one &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Amazon EC2?
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Compute Cloud (EC2) provides secure, resizable, and scalable computing capacity for virtually any workload in &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;Amazon Web Services (AWS) Cloud&lt;/a&gt;. By using Amazon EC2, you can eliminate the overhead of investing in hardware up front so you can focus on developing and deploying your applications faster. &lt;/p&gt;

&lt;p&gt;With Amazon EC2, you can launch as many or as few virtual servers (instances) as you need, configure security and networking, and manage storage. Also, Amazon EC2 enables you to easily scale your applications up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.&lt;/p&gt;

&lt;p&gt;To learn more about Amazon EC2, see the &lt;a href="https://aws.amazon.com/ec2" rel="noopener noreferrer"&gt;Amazon EC2 product page&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Creating and connecting to an Amazon EC2 instance
&lt;/h2&gt;

&lt;p&gt;After creating an AWS account, you create and connect to an Amazon EC2 instance with the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select your desired region for the instance&lt;/li&gt;
&lt;li&gt;Navigate to the EC2 console&lt;/li&gt;
&lt;li&gt;Launch the instance &lt;/li&gt;
&lt;li&gt;Connect to the instance from your local computer via SSH&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Selecting your desired region to deploy
&lt;/h3&gt;

&lt;p&gt;To select a region on your AWS console, click the dropdown as annotated on the top right corner of the image below and select your desired region. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa14071vd4dad27g86l3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa14071vd4dad27g86l3a.png" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may ask, “How would I know which region is best for my application?” To know the best region to deploy your application check out this article on “&lt;a href="https://aws.amazon.com/blogs/architecture/what-to-consider-when-selecting-a-region-for-your-workloads/" rel="noopener noreferrer"&gt;What to Consider when Selecting a Region for your Workloads&lt;/a&gt;.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigating to the EC2 console
&lt;/h3&gt;

&lt;p&gt;After selecting your desired AWS region, to go to the EC2 Console, search for “EC2”  in the search box as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqostm4p0u9tx6u85kbs2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqostm4p0u9tx6u85kbs2.png" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you arrive on the EC2 console, locate the &lt;strong&gt;Launch Instance&lt;/strong&gt; button and click on it to start a new EC2 instance launch flow. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k4vc3gt1au509lq46es.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k4vc3gt1au509lq46es.png" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch the instance
&lt;/h3&gt;

&lt;p&gt;In the EC2 instance launch flow, you will be required to configure your instance with the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Application and OS Images (Amazon Machine Image):&lt;/strong&gt; An Amazon Machine Image (AMI) is a template that contains the software configuration (operating system, application server, and applications) required to launch your instance.&lt;/p&gt;

&lt;p&gt;This article will use a Linux Ubuntu Server 22.04 with the configurations seen in the image below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmdc1mk7t5qy91nzmzwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmdc1mk7t5qy91nzmzwu.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Instance type:&lt;/strong&gt; An instance type is a combination of CPU, memory, storage, and networking capacity. There are various EC2 instance types, giving you the flexibility to choose the appropriate mix of resources for your applications.&lt;/p&gt;

&lt;p&gt;This article will use a &lt;strong&gt;t2.micro&lt;/strong&gt; with the CPU and memory seen in the image below. To learn more about Amazon EC2 instance types, check out this &lt;a href="https://aws.amazon.com/ec2/instance-types/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo1erkhwe2i8w6touemm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo1erkhwe2i8w6touemm.png" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Key pair (login):&lt;/strong&gt; You can use a key pair to securely connect to your instance remotely from your local computer. To create a key pair, click on &lt;strong&gt;Create new key pair&lt;/strong&gt; as seen in the image below.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx7joq5176j82xq3qxl0.png" width="800" height="203"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After clicking &lt;strong&gt;Create new key pair&lt;/strong&gt;, you will see a pop-up. In the pop-up, name your key pair and leave the rest of the configurations as default (if your local computer runs Windows, select the &lt;code&gt;.ppk&lt;/code&gt; private key file format), then click &lt;strong&gt;Create key pair&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;After clicking &lt;strong&gt;Create key pair&lt;/strong&gt;, a &lt;code&gt;.pem&lt;/code&gt; private key file will be automatically downloaded to your computer; store it in a secure and accessible location. &lt;strong&gt;You will need it later to connect to your instance.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;To learn more about Amazon EC2 key pairs, check out this &lt;a href="https://docs.aws.amazon.com/console/ec2/key-pairs/create" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexidh44yt00tlkmq9vmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexidh44yt00tlkmq9vmo.png" width="800" height="715"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Network settings:&lt;/strong&gt; You can leave the &lt;strong&gt;Network settings&lt;/strong&gt; as default. The default network setting as in the image below uses:

&lt;ul&gt;
&lt;li&gt;The default &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html" rel="noopener noreferrer"&gt;VPC and subnet&lt;/a&gt; of your AWS account, and enables auto-assigning a public IP address to your instance.&lt;/li&gt;
&lt;li&gt;It also creates a new security group, a set of firewall rules that control the traffic for your instance. The default rule allows SSH traffic from anywhere; this works for this article, but in production use cases, you should set stricter rules. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgzes0kmuj9clp6via0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgzes0kmuj9clp6via0b.png" width="800" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;Configure storage:&lt;/strong&gt; You can leave the storage configuration as default for this article.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xlrdbj20zuwi7nh27y3.png" width="800" height="376"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The last configurations are the advanced settings; you can ignore them, as they are outside the scope of this article. To learn more about them, click the &lt;strong&gt;Info&lt;/strong&gt; text as seen in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c6sc56lniwuot28k62p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c6sc56lniwuot28k62p.png" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After all the above configurations, click on &lt;strong&gt;Launch instance&lt;/strong&gt; and wait a few minutes for your instance to launch, as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9d2gu9la7ztsm5tcf1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9d2gu9la7ztsm5tcf1r.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on &lt;strong&gt;View all Instances&lt;/strong&gt; as in the above image and select the instance as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F991avtf4uw87oljgr8w0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F991avtf4uw87oljgr8w0.png" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect to your Amazon EC2 instance
&lt;/h3&gt;

&lt;p&gt;With your EC2 instance selected, at the top as annotated in the image below, click &lt;strong&gt;Connect.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu37mjz33t7vcypsnu3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu37mjz33t7vcypsnu3l.png" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking &lt;strong&gt;Connect&lt;/strong&gt;, you will see a page with options for connecting to your instance. This article will use the SSH client option.&lt;/p&gt;

&lt;p&gt;Follow the steps on the page, as in the image below, using the &lt;code&gt;.pem&lt;/code&gt; private key that was downloaded when you created your key pair. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxqcwltuthijht3e4pan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxqcwltuthijht3e4pan.png" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After following the above steps, you should be connected to your instance like in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ehwn9un2rmgobbtpfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ehwn9un2rmgobbtpfc.png" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will deploy a demo multi-container Docker Compose application with React frontend, Node.js backend, and MongoDB database containers, and make it accessible on the internet. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying a multi container Docker Compose application on Amazon EC2
&lt;/h2&gt;

&lt;p&gt;Before you can deploy the Docker Compose application, you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, install Docker Engine and Docker Compose on your EC2 instance.&lt;/li&gt;
&lt;li&gt;Then clone this article’s demo todo Docker Compose application from GitHub onto your instance.&lt;/li&gt;
&lt;li&gt;And then deploy the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installing Docker Engine on an Ubuntu EC2 instance&lt;/strong&gt;&lt;br&gt;
The quickest way to install Docker Engine is using the &lt;code&gt;get-docker.sh&lt;/code&gt; bash script from &lt;a href="https://get.docker.com/" rel="noopener noreferrer"&gt;get.docker.com&lt;/a&gt;. Run the command below to download the script on your EC2 instance.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -fsSL https://get.docker.com -o get-docker.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, confirm that it downloaded &lt;code&gt;get-docker.sh&lt;/code&gt;, as in the image below.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv14hqi3iuvcizeke9ugg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv14hqi3iuvcizeke9ugg.png" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run the script, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sh get-docker.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a short while, Docker will be installed on the Ubuntu EC2 instance. To confirm the installation, run &lt;code&gt;docker --version&lt;/code&gt; and you should see the Docker version as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pz5u72rk4dpdsmyuwd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pz5u72rk4dpdsmyuwd8.png" width="800" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you used a different AMI, check out the &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker installation documentation&lt;/a&gt; to learn how to install Docker on your AMI of choice. &lt;/p&gt;
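&lt;p&gt;Note: by default, the Docker daemon is only accessible as root. Optionally, you can add your user to the &lt;code&gt;docker&lt;/code&gt; group so you can run &lt;code&gt;docker&lt;/code&gt; commands without &lt;code&gt;sudo&lt;/code&gt; (log out and back in for the change to take effect):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;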

&lt;p&gt;&lt;strong&gt;Installing Docker Compose on an Ubuntu EC2 instance&lt;/strong&gt;&lt;br&gt;
Now, to install Docker Compose, run the following command to download the release from GitHub:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And then the following command to make it executable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo chmod +x /usr/local/bin/docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that done, when you run &lt;code&gt;$ docker-compose -v&lt;/code&gt;, you should see the version you just installed as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyncnsbymp2s6uzt931cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyncnsbymp2s6uzt931cr.png" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloning the demo Docker Compose application&lt;/strong&gt;&lt;br&gt;
The multi-container todo application you will deploy is one of the &lt;a href="https://github.com/docker/awesome-compose/tree/master/react-express-mongodb" rel="noopener noreferrer"&gt;Awesome Compose GitHub repository samples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To access the application, clone the repository on your instance with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/docker/awesome-compose.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After cloning the project, run the following command to change into the &lt;code&gt;react-express-mongodb&lt;/code&gt; directory where the todo application is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd awesome-compose/react-express-mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvvnjnudbe8kz1mick3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvvnjnudbe8kz1mick3f.png" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once in the project directory, start the containers defined in the &lt;code&gt;compose.yaml&lt;/code&gt; file by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And after Docker pulls the images and creates the network for the containers to communicate with each other, you will see an output like in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v8to8tmklbsv26hlobk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v8to8tmklbsv26hlobk.png" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that all three containers are running, run &lt;code&gt;$ docker ps&lt;/code&gt; and you should see an output similar to the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8acju0ew9077xymn4mbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8acju0ew9077xymn4mbr.png" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will test the deployment to be sure the todo application works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing your deployment
&lt;/h2&gt;

&lt;p&gt;The application you just deployed listens on port &lt;code&gt;3000&lt;/code&gt;, but if you visit port &lt;code&gt;3000&lt;/code&gt; at the public address of your EC2 instance, you won’t be able to view it. This is because the security group settings currently only allow inbound SSH traffic. &lt;/p&gt;

&lt;p&gt;To make your application reachable, on the EC2 console, with your EC2 instance selected, go to the &lt;strong&gt;Security&lt;/strong&gt; tab and then click on the security group as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwpfk5ellxry5awwzhl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwpfk5ellxry5awwzhl9.png" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking on the security group, in the next menu, click on &lt;strong&gt;Edit Inbound rules&lt;/strong&gt; to add the rule that allows inbound traffic through port &lt;code&gt;3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nijtepwn1xxb3ejtui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nijtepwn1xxb3ejtui.png" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then, add the rule as seen in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg7i195djwld5wa7pss0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg7i195djwld5wa7pss0.png" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that done, when you visit &lt;code&gt;your_instance_public_ip:3000&lt;/code&gt;, you should see the todo application as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flazgmdw9g9a0nompbf2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flazgmdw9g9a0nompbf2n.png" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned what Amazon EC2 is, how to create and connect to an Amazon EC2 instance, and how to deploy a Docker Compose application on EC2. &lt;/p&gt;

&lt;p&gt;There is so much more to learn about Amazon EC2. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/training/" rel="noopener noreferrer"&gt;AWS training website&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pluralsight.com/browse/cloud-computing/AWS/global-infrastructure/aws-ec2?aid=7010a000002LUv2AAG&amp;amp;promo=&amp;amp;utm_source=non_branded&amp;amp;utm_medium=digital_paid_search_google&amp;amp;utm_campaign=XYZ_EMEA_Dynamic&amp;amp;utm_content=&amp;amp;cq_cmp=1576650371&amp;amp;gclid=Cj0KCQjwguGYBhDRARIsAHgRm48kbwhFfG_wO2zFQtl9a1bjz2Eg3GTNXojxsLT8XJaxFGDuB-VnLcYaAjGuEALw_wcB" rel="noopener noreferrer"&gt;Amazon EC2 courses on Pluralsight&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/javarevisited/7-best-aws-ec2-amazon-elastic-compute-cloud-online-courses-for-beginners-in-2021-f7a1a55ea719" rel="noopener noreferrer"&gt;7 Best AWS EC2 [Amazon Elastic Compute Cloud] Online Courses for Beginners in 2022&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>Best Practices when using Docker Compose</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Tue, 27 Sep 2022 18:14:49 +0000</pubDate>
      <link>https://dev.to/hackmamba/best-practices-when-using-docker-compose-49m8</link>
      <guid>https://dev.to/hackmamba/best-practices-when-using-docker-compose-49m8</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://hackmamba.io/blog/2022/09/best-practices-when-using-docker-compose/" rel="noopener noreferrer"&gt;Hackmamba&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Container technology has streamlined how you’d build, test, and deploy software from local environments, to on-premise data centers, or the cloud. But while using container technology, a new problem of manually starting and stopping each container arose, making it tedious to build multi-container applications. &lt;/p&gt;

&lt;p&gt;To solve this problem, Docker Inc. created &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. You can use Docker Compose to run multi-container applications with as few as two commands: &lt;code&gt;docker-compose up&lt;/code&gt; and &lt;code&gt;docker-compose down&lt;/code&gt;. But as with every software tool, there are best practices for using it efficiently.&lt;/p&gt;

&lt;p&gt;This article will discuss 4 best practices you should consider when using Docker Compose to orchestrate multi-container Docker applications. The best practices are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Substitute environment variables in Docker Compose files&lt;/li&gt;
&lt;li&gt;If possible, avoid multiple Compose files for different environments&lt;/li&gt;
&lt;li&gt;Use YAML templates to avoid repetition&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;docker-compose up&lt;/code&gt; flags where necessary&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Substitute Environment Variables in Docker Compose Files
&lt;/h2&gt;

&lt;p&gt;Ideally, when defining a Compose file, you would have environment variables with secrets you wouldn’t want to push to any source code management platform like GitHub. &lt;/p&gt;

&lt;p&gt;It is best to configure those environment variables in the shell of the machine where you will deploy the multi-container application, so there are no secret leaks, and then populate them inside the Docker Compose file by substitution, as seen below. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongoDB:
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the above configuration in your Docker Compose file, when you run &lt;code&gt;docker-compose up&lt;/code&gt;, Docker Compose will look for the &lt;code&gt;MONGO_INITDB_ROOT_USERNAME&lt;/code&gt; and &lt;code&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/code&gt; environment variables in the shell and then substitute their values in the file. &lt;/p&gt;
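&lt;p&gt;Compose also supports shell-style defaults and required-variable checks in substitutions, which make missing configuration fail loudly instead of silently; for example (the fallback and error message here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongoDB:
    environment:
      # falls back to "root" if the variable is unset
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME:-root}
      # aborts with an error message if the variable is unset
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD:?password not set}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;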

&lt;p&gt;&lt;strong&gt;How to Add Environment Variables to your Shell&lt;/strong&gt;&lt;br&gt;
Normally, to add a variable to your shell, you would run &lt;code&gt;export VARIABLE=&amp;lt;variable_value&amp;gt;&lt;/code&gt;. But with that method, if the host machine reboots, those environment variables will be lost.&lt;/p&gt;

&lt;p&gt;To add environment variables that would persist through reboots, create an environment file with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And in that file, store your environment variables like in the image below and save the file. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdgc2a709hyzybadsjpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdgc2a709hyzybadsjpf.png" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in the home directory of your production machine, run &lt;code&gt;$ ls -la&lt;/code&gt;, which should show you a &lt;code&gt;.profile&lt;/code&gt; file as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ac6zwtme053tl9kjbc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ac6zwtme053tl9kjbc0.png" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up the profile file with &lt;code&gt;$ vi .profile&lt;/code&gt; and at the end of the file, add this configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set -o allexport; source /&amp;lt;path_to_the_directory_of_.env_file&amp;gt;/.env; set +o allexport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above line sources the &lt;code&gt;.env&lt;/code&gt; file with &lt;code&gt;allexport&lt;/code&gt; enabled, so every variable it defines is exported to your machine’s environment on each login. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0htmufqx4f8mxcxnbrc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0htmufqx4f8mxcxnbrc1.png" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And to ensure the configuration takes effect, log out of your shell session, log back in, and then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printenv 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see your environment variables, as shown in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc40pvnsjwzxvsigmjqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc40pvnsjwzxvsigmjqv.png" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out this &lt;a href="https://docs.docker.com/compose/environment-variables/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to learn more about best practices using environment variables in Docker Compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  If Possible, Avoid Multiple Compose Files for Different Environments
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/production/" rel="noopener noreferrer"&gt;Docker suggests defining an additional Compose file for each specific environment&lt;/a&gt;, e.g., &lt;code&gt;docker-compose-prod.yaml&lt;/code&gt; for your production environment. However, this approach can cause issues, because manually reapplying every modification from one environment to another is error-prone. It also increases the complexity of your development process once you consider CI, staging, or QA environments.&lt;/p&gt;

&lt;p&gt;As a rule, you should try to keep a single Docker Compose file for all environments. But there are cases where environments genuinely differ: you might want to use &lt;a href="https://nodemon.io/" rel="noopener noreferrer"&gt;nodemon&lt;/a&gt; to monitor changes in your Node application during development, or you might use a managed MongoDB database in production but run MongoDB locally in a container for development. &lt;/p&gt;

&lt;p&gt;For these cases, you can use the &lt;code&gt;docker-compose.override.yml&lt;/code&gt; file. As its name implies, the override file will contain configuration overrides for existing or entirely new services in your &lt;code&gt;docker-compose.yaml&lt;/code&gt; file. &lt;/p&gt;

&lt;p&gt;To run your project with the override file, you still run the default &lt;code&gt;docker-compose up&lt;/code&gt; command. Then Docker Compose will automatically merge the &lt;code&gt;docker-compose.yml&lt;/code&gt; and &lt;code&gt;docker-compose.override.yml&lt;/code&gt;  files into one.  Suppose you defined a service in both files; Docker Compose merges the configurations using the rules described in &lt;a href="https://docs.docker.com/compose/extends/#adding-and-overriding-configuration" rel="noopener noreferrer"&gt;Adding and overriding configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can add the override file to your &lt;code&gt;.gitignore&lt;/code&gt; file, so it won't be there when you push code to production. After doing this, if you still see the need to create more Compose files, then go ahead but keep in mind the complexity. &lt;/p&gt;
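&lt;p&gt;As an illustration, a development override that runs the backend under nodemon and mounts the source code might look like the following (the service name, path, and command are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-compose.override.yml
services:
  backend:
    volumes:
      - ./backend:/usr/src/app   # mount source so nodemon sees edits
    command: npx nodemon server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;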

&lt;h2&gt;
  
  
  Use YAML Templates to Avoid Repetition
&lt;/h2&gt;

&lt;p&gt;When a service has options that will repeat in other services, you can create a template from the initial service to reuse in the other services instead of continuously repeating yourself. &lt;/p&gt;

&lt;p&gt;The following illustrates Docker Compose YAML templating:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'
services:
  web: &amp;amp;service_default
    build: .
    init: true
    restart: always
  backend:
    &amp;lt;&amp;lt;: *service_default # inherit the service default definitions
    image: &amp;lt;image_name&amp;gt;
    env_file: .env
    environment:
      XDEBUG_CONFIG: "remote_host=${DOCKER_HOST_NAME_OR_IP}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above YAML configuration, the first service defines &lt;code&gt;restart: always&lt;/code&gt;, which will restart the container automatically if it crashes. Instead of adding &lt;code&gt;restart: always&lt;/code&gt; and other recurring configs you might have to all your services, you can replicate them with &lt;code&gt;&amp;lt;&amp;lt;: *service_default&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Check out this &lt;a href="https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd" rel="noopener noreferrer"&gt;article&lt;/a&gt; to learn more about YAML templating capabilities for Docker Compose.&lt;/p&gt;
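&lt;p&gt;Note that with Compose file format 3.4 and later, you can also place shared fragments under a top-level &lt;code&gt;x-&lt;/code&gt; extension field, which keeps the template out of any individual service definition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

x-service-defaults: &amp;amp;service_default
  init: true
  restart: always

services:
  web:
    &amp;lt;&amp;lt;: *service_default
    build: .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;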

&lt;h2&gt;
  
  
  Use &lt;code&gt;docker-compose up&lt;/code&gt; Flags Where Necessary
&lt;/h2&gt;

&lt;p&gt;To make your development process easier when creating and starting containers, you can use the flags that &lt;code&gt;docker-compose up&lt;/code&gt; provides.&lt;/p&gt;

&lt;p&gt;To see these flags, you can run the command below in your terminal:&lt;br&gt;
&lt;code&gt;$ docker-compose up --help&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbry067sv420e3adoyjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbry067sv420e3adoyjm.png" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, you can see them in the &lt;a href="https://docs.docker.com/engine/reference/commandline/compose_up/" rel="noopener noreferrer"&gt;compose up command line reference&lt;/a&gt;, but viewing them on your terminal is easier during development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article discussed 4 best practices you should consider when using Docker Compose to orchestrate multi-container Docker applications. There are other best practices you can consider; to learn more about them, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.cloud66.com/10-tips-for-docker-compose-hosting-in-production" rel="noopener noreferrer"&gt;10 Tips for Docker Compose Hosting in Production&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nickjanetakis.com/blog/best-practices-around-production-ready-web-apps-with-docker-compose" rel="noopener noreferrer"&gt;Best Practices Around Production Ready Web Apps with Docker Compose&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/factualopinions/docker-compose-tricks-and-best-practices-5e7e43eba8eb" rel="noopener noreferrer"&gt;Docker-compose Tricks and Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Set Environment Variables on a Linux Machine</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Wed, 14 Sep 2022 15:41:38 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-set-environment-variables-on-a-linux-machine-1ojc</link>
      <guid>https://dev.to/everythingdevops/how-to-set-environment-variables-on-a-linux-machine-1ojc</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-set-environment-variables-on-a-linux-machine/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When building software, you start in a development environment (your local computer). You then move through one or more other environments (staging, QA, etc.), and finally to the production environment, where users can use the application. &lt;/p&gt;

&lt;p&gt;While moving through each of these environments, some configuration options may differ. For example, in development you may want to test &lt;a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete" rel="noopener noreferrer"&gt;CRUD&lt;/a&gt; operations against a dummy database whose configuration values differ from those of the live database holding real user data. &lt;/p&gt;

&lt;p&gt;To ensure a seamless workflow and not have to regularly change the database configurations in code when moving to different environments, you can set environment variables for each. &lt;/p&gt;

&lt;p&gt;In this tutorial, you will learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What environment variables are, and&lt;/li&gt;
&lt;li&gt;How to set environment variables on a Linux machine. &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;To follow along in this tutorial, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of the terminal.&lt;/li&gt;
&lt;li&gt;Access to a Linux machine — This article uses &lt;a href="https://ubuntu.com/blog/ubuntu-22-04-lts-released" rel="noopener noreferrer"&gt;Ubuntu 22.04 (LTS) x64&lt;/a&gt; distribution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What are environment variables?
&lt;/h1&gt;

&lt;p&gt;Environment variables are variables whose values are set outside the code of an application. They are typically set through the built-in functionality of an operating system. Environment variables are made up of name and value pairs, and you can create as many as you wish to be available for reference at a point in time.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting environment variables on a Linux machine
&lt;/h1&gt;

&lt;p&gt;To set environment variables on a Linux machine, you would normally run the &lt;code&gt;export&lt;/code&gt; command in your terminal’s shell session with each environment variable’s name and value, like the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ENVIRONMENT_VARIABLE_NAME = &amp;lt;value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if you do only this, when the shell session ends, all the environment variables will be lost, because the &lt;code&gt;export&lt;/code&gt; command exports variables to the shell session’s environment, not to the Linux machine as a whole.  &lt;/p&gt;

&lt;p&gt;To persist environment variables on a Linux machine, in any directory aside from your application’s directory, create an environment file with the vi editor using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command creates and opens a &lt;code&gt;.env&lt;/code&gt; file. To edit the file in the vi editor, press &lt;code&gt;i&lt;/code&gt;, then add your environment variables as in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603799811_Screenshot%2B2022-08-15%2Bat%2B23.49.56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603799811_Screenshot%2B2022-08-15%2Bat%2B23.49.56.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding your environment variables, to save the file, press &lt;code&gt;esc&lt;/code&gt;, then type &lt;code&gt;:wq&lt;/code&gt; and press &lt;code&gt;enter&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603956339_annotely_image%2B48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603956339_annotely_image%2B48.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After saving the file, in the home directory of your Linux machine, run &lt;code&gt;$ ls -la&lt;/code&gt; to view all files, including hidden ones; you should see a &lt;code&gt;.profile&lt;/code&gt; file as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660602926799_annotely_image%2B45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660602926799_annotely_image%2B45.png" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up the profile file with &lt;code&gt;$ vi .profile&lt;/code&gt;,  press &lt;code&gt;i&lt;/code&gt; to edit the file, and at the end of the file, add the following configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set -o allexport; source /&amp;lt;path_to_the_directory_of_.env_file&amp;gt;/.env; set +o allexport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above line sources the &lt;code&gt;.env&lt;/code&gt; file with &lt;code&gt;allexport&lt;/code&gt; enabled, so every environment variable you added to the &lt;code&gt;.env&lt;/code&gt; file is set on the Linux machine at each login.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603418687_annotely_image%2B47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603418687_annotely_image%2B47.png" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To save the configuration, press &lt;code&gt;esc&lt;/code&gt;, then type &lt;code&gt;:wq&lt;/code&gt; and press &lt;code&gt;enter&lt;/code&gt; as you did previously.&lt;/p&gt;
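&lt;p&gt;Before relying on &lt;code&gt;.profile&lt;/code&gt;, you can also try the &lt;code&gt;allexport&lt;/code&gt; pattern directly in a throwaway session (the file name and variables here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printf 'DB_USER=admin\nDB_PORT=5432\n' &amp;gt; demo.env
$ set -o allexport; source ./demo.env; set +o allexport
$ printenv DB_USER
admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;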

&lt;p&gt;To confirm that the configuration took effect and your environment variables have been set, log out of your current shell session, log back in and then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printenv 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, you should see your environment variables, as shown in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660607459982_annotely_image%2B50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660607459982_annotely_image%2B50.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This tutorial explained what environment variables are and showed how to set them on a Linux machine. There is more to learn about environment variables in Linux. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.geeksforgeeks.org/environment-variables-in-linux-unix/" rel="noopener noreferrer"&gt;Environment Variables in Linux/Unix&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/" rel="noopener noreferrer"&gt;How to Set and List Environment Variables in Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.guru99.com/linux-environment-variables.html" rel="noopener noreferrer"&gt;List of Environment Variables in Linux/Unix&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Top 3 Security Risks Facing Infrastructure as Code and their Preventive Measures</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Tue, 30 Aug 2022 08:00:53 +0000</pubDate>
      <link>https://dev.to/hackmamba/top-3-security-risks-facing-infrastructure-as-code-and-their-preventive-measures-2do3</link>
      <guid>https://dev.to/hackmamba/top-3-security-risks-facing-infrastructure-as-code-and-their-preventive-measures-2do3</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://hackmamba.io/blog/2022/08/top-3-security-risks-facing-infrastructure-as-code-and-their-prevention/" rel="noopener noreferrer"&gt;Hackmamba&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before Infrastructure as Code (IaC), managing IT infrastructure was a daunting task. System administrators, operations teams, and developers had to manually configure and manage all the hardware and software required for applications to run. &lt;/p&gt;

&lt;p&gt;With IaC, developers can quickly provision servers with specific operating systems, run containers and Kubernetes clusters, and even integrate third-party services, all using machine-readable templates. &lt;/p&gt;

&lt;p&gt;Through IaC, organizations can build scalable and resilient software faster, reducing cost and addressing inconsistencies between development and production environments. But alongside all these benefits, as with every software process, security risks exist.&lt;/p&gt;

&lt;p&gt;This article discusses the top 3 security risks facing Infrastructure as Code and measures DevOps teams can take to avoid attacks. The top 3 security risks are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Misconfigurations in IaC templates&lt;/li&gt;
&lt;li&gt;Infrastructure drift&lt;/li&gt;
&lt;li&gt;Ghost resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Misconfigurations in IaC Templates
&lt;/h1&gt;

&lt;p&gt;Misconfigurations in an IaC template (such as YAML files, Terraform, or Helm charts) can easily expose an organization’s environment, leaving it vulnerable to attacks.&lt;/p&gt;

&lt;p&gt;According to this &lt;a href="https://unit42.paloaltonetworks.com/cloud-threat-report-intro/" rel="noopener noreferrer"&gt;report by Palo Alto Networks&lt;/a&gt;, nearly 200,000 insecure IaC templates are in use in production environments, and most of these vulnerabilities are due to misconfigurations. On top of that, more than 43% of cloud databases are currently unencrypted and only 60% of cloud storage services have logging enabled.&lt;/p&gt;

&lt;p&gt;Now one might ask: how do these misconfigurations happen, and why at this scale? They happen at this scale because, as more people publish open source boilerplate templates and blog posts, many forget to review them to ensure they conform to IaC security best practices.&lt;/p&gt;

&lt;p&gt;The image below from this &lt;a href="https://youtu.be/kUGAiEONFd0" rel="noopener noreferrer"&gt;talk on&lt;/a&gt; &lt;a href="https://youtu.be/kUGAiEONFd0" rel="noopener noreferrer"&gt;Infrastructure-as-code Security&lt;/a&gt; shows data of misconfigured open source Terraform modules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_86CDF415191435FF5FADD575236ED6DC03B8783C8592F810FB22983735747F0E_1658697619053_Screenshot%2B2022-07-24%2Bat%2B22.20.14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_86CDF415191435FF5FADD575236ED6DC03B8783C8592F810FB22983735747F0E_1658697619053_Screenshot%2B2022-07-24%2Bat%2B22.20.14.png" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And those misconfigured modules were downloaded 10 million times, as seen in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_86CDF415191435FF5FADD575236ED6DC03B8783C8592F810FB22983735747F0E_1658697750695_Screenshot%2B2022-07-24%2Bat%2B22.22.26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_86CDF415191435FF5FADD575236ED6DC03B8783C8592F810FB22983735747F0E_1658697750695_Screenshot%2B2022-07-24%2Bat%2B22.22.26.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although services provisioned with those misconfigurations aren’t necessarily exploitable, they still pose a huge risk. &lt;/p&gt;

&lt;h2&gt;
  
  
  How to Prevent IaC Template Misconfigurations
&lt;/h2&gt;

&lt;p&gt;To prevent IaC template misconfigurations, DevOps teams must scan these templates before they reach production. Scanning IaC templates for misconfigurations pre-production means introducing checks and remediation during the development phase, and it is a fundamental step in a secure DevOps workflow. &lt;/p&gt;

&lt;p&gt;To integrate this step into their DevOps workflow, organizations can use tools like &lt;a href="https://bridgecrew.io/" rel="noopener noreferrer"&gt;Bridgecrew&lt;/a&gt; to track every change in their IaC, scan those changes, and automatically fix misconfigurations before they move to the production environment.&lt;/p&gt;
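
&lt;p&gt;For example, Checkov, the open source scanner behind Bridgecrew, can be run locally against a directory of Terraform templates to catch misconfigurations before they are applied (the directory path here is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install checkov
$ checkov -d ./terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;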

&lt;h1&gt;
  
  
  Infrastructure Drift
&lt;/h1&gt;

&lt;p&gt;In IaC, the concept of &lt;strong&gt;drift&lt;/strong&gt; represents the difference between the originally defined values in a configuration to what’s running in production. A &lt;strong&gt;drift&lt;/strong&gt; can be introduced by external actors (humans or scripts) or the IaC dependency on external data sources. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drifts by&lt;/strong&gt; &lt;strong&gt;E&lt;/strong&gt;&lt;strong&gt;xternal&lt;/strong&gt; &lt;strong&gt;A&lt;/strong&gt;&lt;strong&gt;ctors&lt;/strong&gt;&lt;br&gt;
If an on-call SRE (site reliability engineer) logs on to the Cloud environment and manually creates or modifies resources otherwise controlled by Terraform, they introduce a drift. Also, suppose an external script updates a Kubernetes cluster in a way that conflicts with its CloudFormation definition; in that case, that is a drift as well. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drifts by&lt;/strong&gt; &lt;strong&gt;E&lt;/strong&gt;&lt;strong&gt;xternal&lt;/strong&gt; &lt;strong&gt;D&lt;/strong&gt;&lt;strong&gt;ata&lt;/strong&gt; &lt;strong&gt;S&lt;/strong&gt;&lt;strong&gt;ources&lt;/strong&gt;&lt;br&gt;
If there's any change to the external data source, it will show up as a drift too. For example, if a load balancer only expects to receive traffic from &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;Amazon CloudFront&lt;/a&gt;, the DevOps team may want to restrict ingress to a predefined range of IP addresses. However, that range may be dynamic, and their IaC tool queries it every time it runs.&lt;/p&gt;

&lt;p&gt;When any of the above drifts occurs and is left unmanaged, it can lead to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data breaches&lt;/li&gt;
&lt;li&gt;Application downtime&lt;/li&gt;
&lt;li&gt;Possible deployment failures&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Mitigate Infrastructure Drift
&lt;/h2&gt;

&lt;p&gt;In the above scenarios, the drift caused by external actors is an unwanted by-product of emergencies or broken processes. The drift caused by external data sources is both desired and inevitable. That said, it is clear that drift occurs and teams can’t entirely prevent it.&lt;/p&gt;

&lt;p&gt;So what can teams do? What DevOps teams can do is detect and reconcile drift as it happens. See how the following tools can help teams detect and reconcile drifts: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.bridgecrew.io/docs/drift-detection" rel="noopener noreferrer"&gt;Drift detection with Bridgecrew&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.spacelift.io/concepts/stack/drift-detection" rel="noopener noreferrer"&gt;Drift detection with Spacelift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://snyk.io/product/infrastructure-as-code-security/drift-management/" rel="noopener noreferrer"&gt;Drift management with Snyk&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
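
&lt;p&gt;If your team uses Terraform, the CLI itself also offers a first-pass drift check: running &lt;code&gt;terraform plan&lt;/code&gt; with the &lt;code&gt;-detailed-exitcode&lt;/code&gt; flag exits with a distinct status code when the live infrastructure no longer matches the configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform plan -detailed-exitcode
$ echo $?
# 0 = no changes, 1 = error, 2 = the plan contains changes (possible drift)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;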

&lt;h1&gt;
  
  
  Ghost Resources
&lt;/h1&gt;

&lt;p&gt;Tagging cloud assets during development is critical to ensure compliance and governance in IaC. Failing to tag assets during IaC operations can result in “ghost” resources. These untagged assets are hard to detect and difficult for developers to observe, since their observability may not match that of the rest of the system.&lt;/p&gt;

&lt;p&gt;Ghost assets can go undetected for long periods while consuming resources and creating potential attack vectors for an organization's infrastructure as code. In addition to the implications on security, ghost resources make it very challenging to assess the effect on operations like cost, maintenance, and reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Prevent Ghost Resources
&lt;/h2&gt;

&lt;p&gt;The only way to mitigate ghost resources is careful tagging of every asset you provision, combined with continuous monitoring for untagged resources. &lt;/p&gt;
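
&lt;p&gt;As a sketch of what such monitoring can look like on AWS (assuming a configured AWS CLI), the Resource Groups Tagging API can list resources that carry no tags at all:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws resourcegroupstaggingapi get-resources \
    --query 'ResourceTagMappingList[?length(Tags)==`0`].ResourceARN' \
    --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;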

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This article explained the top 3 security risks facing Infrastructure as Code and measures DevOps teams can take to avoid attacks. &lt;/p&gt;

&lt;p&gt;To learn more about other security risks facing IaC, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/pulse/securing-infrastructure-code-chaitanya-jawale/" rel="noopener noreferrer"&gt;Securing Infrastructure as Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://snyk.io/learn/devops-security/" rel="noopener noreferrer"&gt;DevOps Security best practices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cycode.com/blog/8-best-practices-for-securing-infrastructure-as-code/" rel="noopener noreferrer"&gt;8 Infrastructure as Code Best Practices for Security&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>How to avoid merge commits when syncing a fork</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Mon, 22 Aug 2022 01:19:00 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-avoid-merge-commits-when-syncing-a-fork-3b6f</link>
      <guid>https://dev.to/everythingdevops/how-to-avoid-merge-commits-when-syncing-a-fork-3b6f</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-avoid-merge-commits-when-syncing-a-fork/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whenever you work on open source projects, you usually maintain your copy (a &lt;a href="https://docs.github.com/en/get-started/quickstart/fork-a-repo" rel="noopener noreferrer"&gt;fork&lt;/a&gt;) of the original codebase. To propose changes, you open up a &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests" rel="noopener noreferrer"&gt;Pull Request (PR)&lt;/a&gt;. After you create a PR, there are chances that during its review process, commits will be made to the original codebase, which will require you to sync your fork. &lt;/p&gt;

&lt;p&gt;To sync your fork with the original codebase, ideally, you would use the web UI provided by your Git hosting service or run a &lt;code&gt;git fetch&lt;/code&gt; and &lt;code&gt;git merge&lt;/code&gt; in your terminal, as indicated in this &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork" rel="noopener noreferrer"&gt;GitHub tutorial&lt;/a&gt;. But with a PR open, syncing your fork that way will introduce an unwanted merge commit to your PR. &lt;/p&gt;

&lt;p&gt;In this article, you will learn what merge commits are and how to avoid them with &lt;code&gt;git rebase&lt;/code&gt; when syncing a fork with an original codebase. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a merge commit?
&lt;/h2&gt;

&lt;p&gt;A merge commit is just like any other commit: it captures the state of a repository at a point in time plus the history it evolved from. But one thing is unique about a merge commit: it has at least two parent commits. &lt;/p&gt;

&lt;p&gt;When you create a merge commit, Git automatically merges the histories of two separate commits. This merge commit can cause conflicts and mess up a project’s Git history if present in a merged PR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_4038E27EBB83FC06153F31425FE835219A7D0419F41A63A1B27836B12EF94A13_1659630585735_annotely_image%2B42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_4038E27EBB83FC06153F31425FE835219A7D0419F41A63A1B27836B12EF94A13_1659630585735_annotely_image%2B42.png" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The annotated section in the image above shows a merge commit of two parent commits. Here is the &lt;a href="https://github.com/cilium/cilium/commit/fcc7837499b630df4e576396fd28d44007f77db1" rel="noopener noreferrer"&gt;link to the merge commit&lt;/a&gt; to the image.&lt;/p&gt;

&lt;p&gt;Now you know what a merge commit is. Next, you will learn how to avoid it when syncing your fork with an original codebase.  &lt;/p&gt;
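
&lt;p&gt;You can also spot merge commits in your own repository by their extra parent. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --merges --oneline -3      # list only merge commits
$ git rev-list --parents -n 1 HEAD   # a merge commit prints two (or more) parent hashes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;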

&lt;h2&gt;
  
  
  How to avoid merge commits when syncing a fork in Git
&lt;/h2&gt;

&lt;p&gt;To avoid merge commits, you need to &lt;a href="http://git-scm.com/book/en/Git-Branching-Rebasing" rel="noopener noreferrer"&gt;rebase&lt;/a&gt; the changes from the original remote codebase in your local fork before pushing them to your remote fork by following the steps below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
Create a link with the original remote repository to track and get the changes from the codebase with the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote add upstream https://github.com/com/original/original.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, you will now have two remotes. One for your fork and one for the original codebase. If you run &lt;code&gt;$ git remote -v&lt;/code&gt;, you will see the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin https://github.com/your_username/your_fork.git (fetch)
origin https://github.com/your_username/your_fork.git (push)
upstream https://github.com/original/original.git (fetch)
upstream https://github.com/original/original.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above, &lt;code&gt;upstream&lt;/code&gt; refers to the original repository from which you created the fork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;br&gt;
In this step, you fetch all the branches of the remote upstream with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git fetch upstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;br&gt;
Next, you rebase your fork’s master onto the upstream’s master, replaying your local commits on top of the fetched upstream changes, with &lt;code&gt;git rebase&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase upstream/master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;br&gt;
Then finally, push the updates to your remote fork. Because rebasing rewrites your branch’s history, you may need to force the push with the &lt;code&gt;--force&lt;/code&gt; flag. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push origin master --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;br&gt;
You can skip the &lt;code&gt;git fetch&lt;/code&gt; step by using &lt;code&gt;git pull&lt;/code&gt; (which is &lt;code&gt;git fetch&lt;/code&gt; + &lt;code&gt;git merge&lt;/code&gt;) with the &lt;code&gt;--rebase&lt;/code&gt; flag to replace the &lt;code&gt;git merge&lt;/code&gt; with a rebase. The pull command will be:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull --rebase upstream master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
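
&lt;p&gt;Whichever variant you use, you can confirm that the rebase kept your branch linear by checking that no merge commit appears in its recent history:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --oneline --graph -5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;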
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned about merge commits and how you can avoid them when syncing your fork in Git using &lt;code&gt;git rebase&lt;/code&gt;. There is a lot more to learn about &lt;code&gt;git rebase&lt;/code&gt;. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/docs/git-rebase" rel="noopener noreferrer"&gt;Git rebase official documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.simplilearn.com/what-is-git-rebase-command-article" rel="noopener noreferrer"&gt;What is Git Rebase, and How Do You Use It?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.baeldung.com/git-merge-vs-rebase" rel="noopener noreferrer"&gt;Difference Between git merge and rebase&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building x86 Images on an Apple M1 Chip</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Tue, 16 Aug 2022 15:00:00 +0000</pubDate>
      <link>https://dev.to/everythingdevops/building-x86-images-on-an-apple-m1-chip-3eac</link>
      <guid>https://dev.to/everythingdevops/building-x86-images-on-an-apple-m1-chip-3eac</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/building-x86-images-on-an-apple-m1-chip/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few months ago, while deploying an application in Amazon Elastic Kubernetes Service (EKS), my pods crashed with a  &lt;code&gt;standard_init_linux.go:228: exec user process caused: exec format error&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;After a bit of research, I found out that the error tends to happen when the architecture an image is built on differs from the architecture it is running on. I then remembered that I was building the image on a MacBook with an Apple M1 chip, which is based on the &lt;a href="https://en.wikipedia.org/wiki/ARM_architecture_family" rel="noopener noreferrer"&gt;ARM64&lt;/a&gt; architecture, while the worker nodes in the EKS cluster I deployed to are based on the &lt;a href="https://en.wikipedia.org/wiki/X86" rel="noopener noreferrer"&gt;x86&lt;/a&gt; architecture. &lt;/p&gt;

&lt;p&gt;I had two options to fix the error: create new ARM-based worker nodes or build the image on x86 architecture. I couldn’t create new worker nodes for obvious reasons, so I had to figure out how to build x86 images on my Apple M1 chip.&lt;/p&gt;

&lt;p&gt;In this article, I will walk you through how I built my application’s Docker image with x86 architecture on an Apple M1 chip using &lt;a href="https://docs.docker.com/build/buildx/" rel="noopener noreferrer"&gt;Docker Buildx&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Buildx?
&lt;/h2&gt;

&lt;p&gt;Docker Buildx is a CLI plugin that extends the docker command. Docker Buildx provides the same user experience as &lt;code&gt;docker build&lt;/code&gt; with many new features like the ability to specify the target architecture for which Docker should build the image. These new features are made possible with the help of the &lt;a href="https://github.com/moby/buildkit" rel="noopener noreferrer"&gt;Moby BuildKit&lt;/a&gt; builder toolkit.&lt;/p&gt;

&lt;p&gt;Before you can build x86-64 images on an Apple M1 chip with Docker Buildx, you first need to install Docker Buildx.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Docker Buildx
&lt;/h2&gt;

&lt;p&gt;If you use &lt;a href="https://docs.docker.com/desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; or have Docker version 20.x, Docker Buildx is already included in it, and you don’t need a separate installation. Verify that you have Docker Buildx with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if you are like me and use another tool to get the Docker runtime, install Docker Buildx through its binary with the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ARCH=arm64
$ VERSION=v0.8.2 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above commands set temporary environment variables for the architecture and version of the Docker Buildx binary you will download. See the Docker Buildx &lt;a href="https://github.com/docker/buildx/releases/latest" rel="noopener noreferrer"&gt;releases page on GitHub&lt;/a&gt; for the latest version. &lt;/p&gt;

&lt;p&gt;After setting the temporary environment variables, download the binary with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -LO https://github.com/docker/buildx/releases/download/${VERSION}/buildx-${VERSION}.darwin-${ARCH}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After downloading the binary, create a folder in your home directory to hold Docker CLI plugins with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir -p ~/.docker/cli-plugins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then move the binary to the Docker CLI plugins folder with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mv buildx-${VERSION}.darwin-${ARCH} ~/.docker/cli-plugins/docker-buildx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, make the binary executable with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chmod +x ~/.docker/cli-plugins/docker-buildx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify the installation, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Building x86-64 images on an Apple M1 chip with Docker Buildx
&lt;/h2&gt;

&lt;p&gt;After installing Docker Buildx, you can now easily build your application image to x86-64 on an Apple M1 chip with this command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx build --platform=linux/amd64 -t &amp;lt;image-name&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;buildx&lt;/code&gt; builds the image using the BuildKit engine and does not require the &lt;code&gt;DOCKER_BUILDKIT=1&lt;/code&gt; environment variable to start the builds.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;--platform&lt;/code&gt; flag specifies the target architecture (platform) to build the image for. In this case, &lt;code&gt;linux/amd64&lt;/code&gt;, which is of x86 architecture.&lt;/li&gt;
&lt;li&gt;And the &lt;code&gt;&amp;lt;image-name&amp;gt;&lt;/code&gt; is a placeholder for the image name and tag.&lt;/li&gt;
&lt;/ul&gt;
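
&lt;p&gt;As an aside, &lt;code&gt;buildx&lt;/code&gt; can also target several platforms in one invocation. Note that multi-platform builds typically need a builder instance backed by the &lt;code&gt;docker-container&lt;/code&gt; driver, and the result is usually pushed straight to a registry rather than loaded locally (the builder name and image name below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx create --name multi-builder --use
$ docker buildx build --platform=linux/amd64,linux/arm64 -t &amp;lt;image-name&amp;gt; --push .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;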

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3hpbniki02cp6f9tp66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3hpbniki02cp6f9tp66.png" alt="building the image with buildx" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify that Docker built the image for &lt;code&gt;linux/amd64&lt;/code&gt;, use the &lt;code&gt;docker image inspect &amp;lt;image_name&amp;gt;&lt;/code&gt; command, as you can see in the annotated screenshot above. &lt;/p&gt;

&lt;p&gt;The inspect command will display detailed information about the image in JSON format. Scroll down, and you should see the &lt;code&gt;Architecture&lt;/code&gt; and &lt;code&gt;Os&lt;/code&gt; information as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkh2gxr7kfxommrdob1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkh2gxr7kfxommrdob1w.png" alt="Showing image architecture" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;
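
&lt;p&gt;Alternatively, instead of scrolling through the full JSON output, you can print just those two fields with a Go template (the image name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker image inspect --format '{{.Os}}/{{.Architecture}}' &amp;lt;image-name&amp;gt;
linux/amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;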

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article explored building images based on x86 architecture on an Apple M1 chip using Docker Buildx. There is so much more to learn about Docker Buildx. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubesimplify.com/the-secret-gems-behind-building-container-images-enter-buildkit-and-docker-buildx" rel="noopener noreferrer"&gt;The secret gems behind building container images, Enter: BuildKit &amp;amp; Docker Buildx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408" rel="noopener noreferrer"&gt;Building Multi-Architecture Docker Images With Buildx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://support.circleci.com/hc/en-us/articles/360058095471-How-To-Use-Docker-Buildx-in-Remote-Docker-" rel="noopener noreferrer"&gt;How To Use Docker Buildx in Remote Docker?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Kubernetes Architecture Explained: Worker Nodes in a Cluster</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Mon, 08 Aug 2022 19:07:15 +0000</pubDate>
      <link>https://dev.to/everythingdevops/kubernetes-architecture-explained-worker-nodes-in-a-cluster-5ap3</link>
      <guid>https://dev.to/everythingdevops/kubernetes-architecture-explained-worker-nodes-in-a-cluster-5ap3</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/kubernetes-architecture-explained-worker-nodes-in-a-cluster/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When you deploy Kubernetes, you get a cluster. The cluster consists of one or more worker machines (virtual or physical), called nodes, on which you run your containerized applications in pods.&lt;/p&gt;

&lt;p&gt;For each worker node to run containerized applications, it must contain a &lt;strong&gt;container runtime&lt;/strong&gt; for running the containers, a &lt;strong&gt;kubelet&lt;/strong&gt; to ensure the containers described in its PodSpecs are running and healthy, and the &lt;strong&gt;kube-proxy&lt;/strong&gt; for handling networking. &lt;/p&gt;

&lt;p&gt;In this article, you will learn more about what each of the above components does to enable the running of containerized applications in a Kubernetes cluster. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To properly understand this article, you should have an understanding of the Kubernetes control plane.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container runtime
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;container runtime&lt;/strong&gt; in a worker node is responsible for running containers. The &lt;strong&gt;container runtime&lt;/strong&gt; is also responsible for pulling container images from a repository, monitoring local system resources, isolating system resources for the use of a container, and managing the container lifecycle. &lt;/p&gt;

&lt;p&gt;In Kubernetes, there is support for container runtimes such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://containerd.io/" rel="noopener noreferrer"&gt;containerd&lt;/a&gt;: An industry-standard container runtime with an emphasis on simplicity, robustness, and portability.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cri-o.io/" rel="noopener noreferrer"&gt;CRI-O&lt;/a&gt;: A lightweight container runtime specifically built for Kubernetes.&lt;/li&gt;
&lt;li&gt;And any other implementation of the &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md" rel="noopener noreferrer"&gt;Kubernetes Container Runtime Interface (CRI)&lt;/a&gt; —  a plugin enabling the kubelet to use other container runtimes without recompiling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may wonder why you didn’t see Docker (a major container runtime) in the list above. Docker isn’t there because of the &lt;a href="https://kubernetes.io/blog/2022/02/17/dockershim-faq/" rel="noopener noreferrer"&gt;removal of dockershim&lt;/a&gt; — the component that allows use of Docker as a container runtime — in Kubernetes release v1.24. &lt;/p&gt;

&lt;p&gt;But not to worry, you can still use Docker as a container runtime in Kubernetes using the &lt;a href="https://github.com/Mirantis/cri-dockerd" rel="noopener noreferrer"&gt;cri-dockerd&lt;/a&gt; adapter. cri-dockerd provides a &lt;a href="https://stackoverflow.com/questions/2116142/what-is-a-shim" rel="noopener noreferrer"&gt;shim&lt;/a&gt; for Docker Engine that lets you control Docker via the Kubernetes CRI.&lt;/p&gt;

&lt;h2&gt;
  
  
  kubelet
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; is the primary Kubernetes node agent. It is responsible for running the containers of the pods scheduled to its node, and it runs on every machine in the cluster (control plane nodes, too). &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; for each node keeps a watch on pod resources in the control plane’s &lt;strong&gt;kube-apiserver.&lt;/strong&gt; Whenever the &lt;strong&gt;kube-scheduler&lt;/strong&gt; assigns a pod to a node, the &lt;strong&gt;kubelet&lt;/strong&gt; for that node reads the PodSpec (a YAML or JSON object that describes a pod) and instructs the container runtime, using the CRI, to spin up containers that satisfy that spec. The &lt;strong&gt;container runtime&lt;/strong&gt; will then pull the container images if they aren’t present on the node and start them. &lt;/p&gt;
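&lt;p&gt;As a quick illustration, here is a minimal pod manifest of the kind the &lt;strong&gt;kubelet&lt;/strong&gt; acts on (the pod name and image here are hypothetical, not from any specific cluster):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25  # pulled by the container runtime if not already on the node
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the &lt;strong&gt;kube-scheduler&lt;/strong&gt; assigns this pod to a node, that node’s &lt;strong&gt;kubelet&lt;/strong&gt; reads the spec and asks the &lt;strong&gt;container runtime&lt;/strong&gt;, over the CRI, to pull the image and start the container.&lt;/p&gt;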

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns9w24saclgsqohfuk1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns9w24saclgsqohfuk1l.png" alt="kubelet working" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is important to note that the &lt;strong&gt;kubelet&lt;/strong&gt; is a Kubernetes component that does not run in a container. The &lt;strong&gt;kubelet&lt;/strong&gt; and the &lt;strong&gt;container runtime&lt;/strong&gt; are installed and run directly on the machine that forms the node in the cluster, which is how they are able to manage the node’s containerized workloads. &lt;/p&gt;

&lt;h2&gt;
  
  
  kube-proxy
&lt;/h2&gt;

&lt;p&gt;Like the &lt;strong&gt;kubelet&lt;/strong&gt;, &lt;strong&gt;kube-proxy&lt;/strong&gt; runs on every node in the cluster, but unlike the kubelet, kube-proxy typically runs in a Kubernetes pod as a part of a Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;DaemonSet&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kube-proxy&lt;/strong&gt; implements essential functionality for Kubernetes Services: it maintains the network rules that allow network sessions inside or outside your cluster to communicate with your pods.&lt;/p&gt;

&lt;p&gt;To dig deep into kube-proxy’s role, you must understand how &lt;strong&gt;Service&lt;/strong&gt; resources work in Kubernetes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service in Kubernetes&lt;/strong&gt;&lt;br&gt;
A &lt;strong&gt;Service&lt;/strong&gt; provides a stable IP address for connecting to pods. There are &lt;strong&gt;Service&lt;/strong&gt; resources in Kubernetes because without them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a client application needs to connect to server pods in a cluster, it would need to retrieve and maintain a list of each pod’s IP address, which burdens the client.&lt;/li&gt;
&lt;li&gt;It would also be hard to maintain those connections, since pods in Kubernetes come and go due to scaling, updates, or hardware failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2exvo6voa8au8t0qeclk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2exvo6voa8au8t0qeclk.png" alt="Service working" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the above setup using a &lt;strong&gt;Service&lt;/strong&gt; resource, the client just needs the address of the &lt;strong&gt;Service&lt;/strong&gt;, and the rest is taken care of automatically. &lt;/p&gt;

&lt;p&gt;How that’s taken care of automatically is that when you create a &lt;strong&gt;Service&lt;/strong&gt; with a &lt;code&gt;selector&lt;/code&gt; that references a &lt;code&gt;label&lt;/code&gt; applied to a set of pods, the &lt;strong&gt;Endpoints controller&lt;/strong&gt; in the &lt;strong&gt;kube-controller-manager&lt;/strong&gt; creates an &lt;strong&gt;Endpoints&lt;/strong&gt; resource containing those pods’ IP addresses. The addresses are maintained in the &lt;strong&gt;Endpoints&lt;/strong&gt; resource; if they change, it is updated automatically and the client’s requests are routed accordingly. &lt;/p&gt;
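&lt;p&gt;As a sketch, a &lt;strong&gt;Service&lt;/strong&gt; with such a &lt;code&gt;selector&lt;/code&gt; could look like this (the name, label, and ports are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-service     # hypothetical Service name
spec:
  selector:
    app: web            # matches pods carrying the label app: web
  ports:
  - port: 80            # port clients connect to on the Service IP
    targetPort: 8080    # container port on the selected pods
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Any pod labeled &lt;code&gt;app: web&lt;/code&gt; is picked up, and its IP address lands in the &lt;strong&gt;Endpoints&lt;/strong&gt; resource for &lt;code&gt;web-service&lt;/code&gt;.&lt;/p&gt;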

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F981gqgvqco1rfvtozlql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F981gqgvqco1rfvtozlql.png" alt="Endpoints controller working" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above image, you might get the impression that a &lt;strong&gt;Service&lt;/strong&gt; is a literal proxy, like Nginx, that load balances requests to backends. But that’s not how it works in modern Kubernetes clusters. &lt;/p&gt;

&lt;p&gt;Now with your understanding of &lt;strong&gt;Service&lt;/strong&gt; resources, let’s dig into how &lt;strong&gt;kube-proxy&lt;/strong&gt; implements essential functionalities for Kubernetes services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How kube-proxy implements functionality for Kubernetes Service resources&lt;/strong&gt;&lt;br&gt;
As mentioned above, the &lt;strong&gt;Endpoints controller&lt;/strong&gt; in the &lt;strong&gt;kube-controller-manager&lt;/strong&gt; manages &lt;strong&gt;Endpoints&lt;/strong&gt; resources and the associations between Services and pods. &lt;/p&gt;

&lt;p&gt;Because &lt;strong&gt;kube-proxy&lt;/strong&gt; runs on each node in the cluster, it can watch &lt;strong&gt;Service&lt;/strong&gt; and &lt;strong&gt;Endpoints&lt;/strong&gt; resources. Whenever they change, &lt;strong&gt;kube-proxy&lt;/strong&gt; updates the rules in iptables, a network packet filtering utility that allows rules to be set in the network stack of the Linux kernel. &lt;/p&gt;

&lt;p&gt;Now, when a client pod sends a request to a &lt;strong&gt;Service’s&lt;/strong&gt; IP, the Linux kernel routes the request to one of the pods’ IPs according to the rules &lt;strong&gt;kube-proxy&lt;/strong&gt; has set.&lt;/p&gt;
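&lt;p&gt;The iptables rules &lt;strong&gt;kube-proxy&lt;/strong&gt; maintains look roughly like the sketch below. The chain names, IPs, ports, and probability here are illustrative; real rules use generated hash suffixes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Traffic to the Service's virtual IP jumps to a per-Service chain
-A KUBE-SERVICES -d 10.96.0.50/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE
# The per-Service chain picks one backend endpoint chain
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-POD2
# Each endpoint chain rewrites the destination to that pod's real IP and port
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.7:8080
-A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.9:8080
&lt;/code&gt;&lt;/pre&gt;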

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: kube-proxy supports alternatives to iptables for managing network packet routing, but iptables is the most common and is suitable for most installations.&lt;/p&gt;

&lt;p&gt;Also, note that with iptables, the pod selection is random; it is not &lt;a href="https://avinetworks.com/glossary/round-robin-load-balancing/" rel="noopener noreferrer"&gt;round robin&lt;/a&gt; or another common load balancing strategy. For those strategies, you will need IPVS (IP Virtual Server), an alternative to iptables that implements a load balancing layer in the Linux kernel. &lt;/p&gt;
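&lt;p&gt;Switching kube-proxy from iptables to IPVS is typically done through its configuration. A minimal sketch of the relevant &lt;code&gt;KubeProxyConfiguration&lt;/code&gt; fragment (assuming a kubeadm-style setup) might be:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # use IPVS instead of the default iptables mode
ipvs:
  scheduler: "rr"   # rr = round robin; IPVS supports other schedulers too
&lt;/code&gt;&lt;/pre&gt;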

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u6s521vknqco4t6wkwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u6s521vknqco4t6wkwa.png" alt="kube-proxy working" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is also important to point out that a &lt;strong&gt;Service’s&lt;/strong&gt; IP is a “virtual IP”, so if you ping it, you won’t get a response the way you would when pinging a pod’s IP, which is an actual endpoint on the network. &lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Service’s&lt;/strong&gt; IP is essentially a key in the iptables rules that give network packet routing instructions to the host’s kernel. The important part is that a client pod can use the &lt;strong&gt;Service&lt;/strong&gt; IP as it usually would and get the expected behavior, as if it were calling an actual pod IP.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article explained the roles of the &lt;strong&gt;container runtime&lt;/strong&gt;, &lt;strong&gt;kubelet&lt;/strong&gt;, and &lt;strong&gt;kube-proxy&lt;/strong&gt; components in a Kubernetes worker node. There is so much more to learn about these three node components. &lt;/p&gt;

&lt;p&gt;To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.aquasec.com/cloud-native-academy/container-security/container-runtime/" rel="noopener noreferrer"&gt;3 Types of Container Runtime and the Kubernetes Connection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mo4tech.com/kubelet-create-pod-principle-in-depth-analysis.html" rel="noopener noreferrer"&gt;Kubelet create pod principle in-depth analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://betterprogramming.pub/k8s-a-closer-look-at-kube-proxy-372c4e8b090" rel="noopener noreferrer"&gt;K8s: A Closer Look at Kube-Proxy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
