<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shrihari Haridass</title>
    <description>The latest articles on DEV Community by Shrihari Haridass (@shrihariharidass).</description>
    <link>https://dev.to/shrihariharidass</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1061173%2F4fa73337-94f8-4c05-a563-46f44c0a4edc.png</url>
      <title>DEV Community: Shrihari Haridass</title>
      <link>https://dev.to/shrihariharidass</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shrihariharidass"/>
    <language>en</language>
    <item>
      <title>Mastering Infrastructure with CloudPosse Atmos and Terraform</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Fri, 09 Aug 2024 09:05:52 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/mastering-infrastructure-with-cloudposse-atmos-and-terraform-4088</link>
      <guid>https://dev.to/shrihariharidass/mastering-infrastructure-with-cloudposse-atmos-and-terraform-4088</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today's blog, we’ll dive deep into learning and hands-on practice with CloudPosse Atmos. I'm not sure how familiar people are with this technology, but I’m confident that after reading this and visiting &lt;a href="https://atmos.tools/introduction/" rel="noopener noreferrer"&gt;Atmos' official documentation&lt;/a&gt;, you’ll be intrigued by it. In this blog, I've included some content from the official sources, but I've also added my own insights and projects. I hope you enjoy this technical blog, which I’m presenting in a slightly different format today. It might be a bit longer compared to my previous posts, but I only publish blogs about topics I have hands-on experience with. I hope you find it valuable—let’s get started!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-1-. What is SweetOps?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SweetOps is a methodology for building modern, secure infrastructure on top of Amazon Web Services (AWS). It provides a toolset, library of reusable Infrastructure as Code (IaC), and opinionated patterns to help you bootstrap robust cloud native architectures. Built in an Open Source first fashion by Cloud Posse, it is utilized by many high performing startups to ensure their cloud infrastructure is an advantage instead of a liability. In short, SweetOps makes working in the DevOps world Sweet!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who is this for?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SweetOps is for DevOps or platform engineering teams that want an opinionated way to build software platforms in the cloud. If the following sounds like you, then SweetOps is what you're looking for:&lt;/p&gt;

&lt;p&gt;-1-. You're on AWS&lt;br&gt;
-2-. You're using Terraform as your IaC tool&lt;br&gt;
-3-. Your platform needs to be secure and potentially requires passing compliance audits (PCI, SOC2, HIPAA, HITRUST, FedRAMP, etc.)&lt;br&gt;
-4-. You don't want to reinvent the wheel&lt;/p&gt;

&lt;p&gt;With SweetOps you can implement the following complex architectural patterns with ease:&lt;/p&gt;

&lt;p&gt;-1-. An AWS multi-account Landing Zone built on strong, well-established principles including Separation of Concerns and Principle of Least Privilege (POLP).&lt;br&gt;
-2-. Multi-region, globally available application environments with disaster recovery capabilities.&lt;br&gt;
-3-. Foundational AWS-focused security practices that make complex compliance audits a breeze.&lt;br&gt;
-4-. Microservice architectures that are ready for massive scale running on Docker and Kubernetes.&lt;br&gt;
-5-. Reusable service catalogs and components to promote reuse across an organization and accelerate adoption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-2-. What is Atmos, and what is it used for?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloudposse Atmos is an open-source tool for managing and orchestrating infrastructure and applications on cloud providers like AWS, GCP, and Azure. It's designed to simplify and automate the process of provisioning, deploying, and managing infrastructure and applications in a cloud-agnostic way.&lt;/p&gt;

&lt;p&gt;Atmos provides a declarative configuration language that allows you to define your infrastructure and applications in a human-readable format, and then automates the provisioning and deployment process using Terraform and other tools.&lt;/p&gt;

&lt;p&gt;Some key features of Cloudposse Atmos include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative configuration language&lt;/li&gt;
&lt;li&gt;Cloud-agnostic architecture&lt;/li&gt;
&lt;li&gt;Support for multiple cloud providers (AWS, GCP, Azure)&lt;/li&gt;
&lt;li&gt;Integration with Terraform and other tools&lt;/li&gt;
&lt;li&gt;Automated provisioning and deployment&lt;/li&gt;
&lt;li&gt;Support for microservices and containerized applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cloudposse Atmos is often used by DevOps teams and cloud engineers to streamline their infrastructure and application management workflows, and to promote infrastructure-as-code (IaC) practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-3-. Launch a t2.micro EC2 instance, update the machine, and follow the commands below to install Atmos&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update and install utilities

sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y apt-utils curl

# Add the Cloud Posse repository

curl -1sLf 'https://dl.cloudsmith.io/public/cloudposse/packages/cfg/setup/bash.deb.sh' | sudo bash

# Update package lists

sudo apt-get update

# Install atmos version 1.84.0

sudo apt-get install -y atmos=1.84.0-1

# Verify installation

atmos version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, if you clone the Cloud Posse repository, make sure you run the clone command as a single line. With that, Atmos is successfully installed on our system.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8cu5fkox4ciziyfk5y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8cu5fkox4ciziyfk5y1.png" alt="Image description" width="800" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6qcg9vpcml4ifjwewc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6qcg9vpcml4ifjwewc7.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-4-. After that, we need to create components, stacks, and some files. Make sure you follow the folder structure below, as it must be organized this way. Before creating the structure, create a main folder, and then set up the following structure within it for better understanding.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3lkglcn3noub4zenttz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3lkglcn3noub4zenttz.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3t0zo56noqt1mwodjhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3t0zo56noqt1mwodjhy.png" alt="Image description" width="747" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-5-. Now, we will start writing the Terraform files within that folder structure. Let’s get started!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-6-. First, add the following code to the &lt;code&gt;atmos.yaml&lt;/code&gt; file. We need to configure &lt;code&gt;atmos.yaml&lt;/code&gt; for our project.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;base_path: "./"

components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"

logs:
  file: "/dev/stderr"
  level: Info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To configure Atmos for your project, we created a file called &lt;code&gt;atmos.yaml&lt;/code&gt; that specifies where Atmos can find the Terraform components and Atmos stacks. Almost everything in Atmos can be configured through this file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-7-. After that, navigate to the &lt;code&gt;weather&lt;/code&gt; component to write the Terraform code/module. First, create a &lt;code&gt;variables.tf&lt;/code&gt; file and add the following code to it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;you’ll see that everything is just plain Terraform (HCL) with nothing specific to Atmos. That’s intentional: we want to demonstrate that Atmos works seamlessly with plain Terraform. Atmos introduces conventions around how you use Terraform with its framework, which will become more evident in the subsequent lessons.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;variables.tf&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "stage" {
  description = "Stage where it will be deployed"
  type        = string
}

variable "location" {
  description = "Location for which the weather is reported."
  type        = string
  default     = "Los Angeles"
}

variable "options" {
  description = "Options to customize the output."
  type        = string
  default     = "0T"
}

variable "format" {
  description = "Format of the output."
  type        = string
  default     = "v2"
}

variable "lang" {
  description = "Language in which the weather is displayed."
  type        = string
  default     = "en"
}

variable "units" {
  description = "Units in which the weather is displayed."
  type        = string
  default     = "m"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make the best use of Atmos, ensure your root modules are highly reusable by accepting parameters, allowing them to be deployed multiple times without conflicts. This also usually means provisioning resources with unique names.&lt;/p&gt;
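&lt;p&gt;One simple way to achieve this is to derive resource names from the input variables. The following HCL sketch shows the pattern (the &lt;code&gt;main.tf&lt;/code&gt; below applies the same idea to its cache file):&lt;/p&gt;

```hcl
# Sketch: embed the stage in every resource name so the same root
# module can be instantiated once per environment without conflicts.
variable "stage" {
  type = string
}

locals {
  # e.g. "cache.dev.txt" in the dev stack, "cache.prod.txt" in prod
  cache_filename = format("cache.%s.txt", var.stage)
}
```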

&lt;blockquote&gt;
&lt;p&gt;main.tf&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;main.tf&lt;/code&gt; file is where the main implementation of your component resides. This is where you define all the business logic for what you're trying to achieve: the core functionality of your root module. If this file becomes too large or complex, you can break it into multiple files in a way that makes sense. However, that can also be a red flag, indicating that the component is trying to do too much and should be broken down into smaller components.&lt;br&gt;
In this example, we define a local variable that creates a URL from the variable inputs we receive. We also set up a data source to perform an HTTP request to that endpoint and retrieve the current weather. Additionally, we write this output to a file to demonstrate a stateful resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  url = format("https://wttr.in/%v?%v&amp;amp;format=%v&amp;amp;lang=%v&amp;amp;u=%v",
    urlencode(var.location),
    urlencode(var.options),
    urlencode(var.format),
    urlencode(var.lang),
    urlencode(var.units),
  )
}

data "http" "weather" {
  url = local.url
  request_headers = {
    User-Agent = "curl"
  }
}

# Now write this to a file (as an example of a resource)
resource "local_file" "cache" {
  filename = "cache.${var.stage}.txt"
  content  = data.http.weather.response_body
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;versions.tf&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;versions.tf&lt;/code&gt; file is where provider pinning is typically defined. Provider pinning increases the stability of your components and ensures consistency between deployments in multiple environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.0.0" # you can use a newer version as well

  required_providers {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;outputs.tf&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;outputs.tf&lt;/code&gt; file is where, by convention in Terraform, you define any outputs you want to expose from your root module. Outputs are crucial for passing state between root modules and can be used with &lt;a href="https://atmos.tools/core-concepts/components/terraform/remote-state" rel="noopener noreferrer"&gt;remote state&lt;/a&gt; or the &lt;a href="https://atmos.tools/core-concepts/stacks/templates/functions" rel="noopener noreferrer"&gt;Atmos function to retrieve the state of other components&lt;/a&gt;. In object-oriented terms, think of outputs as the 'public' attributes of the module, intended to be accessed by other modules. This convention helps maintain clarity and organization within your Terraform configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "weather" {
  value = data.http.weather.response_body
}

output "url" {
  value = local.url
}

output "stage" {
  value       = var.stage
  description = "Stage where it was deployed"
}

output "location" {
  value       = var.location
  description = "Location of the weather report."
}

output "lang" {
  value       = var.lang
  description = "Language in which the weather is displayed."
}

output "units" {
  value       = var.units
  description = "Units in which the weather is displayed."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;The sequence of defining these files does not matter&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-8-. Now, let's configure our stack for deployment.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components:
  terraform:
    &amp;lt;name-of-component&amp;gt;:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To specify which component to use, set the &lt;code&gt;metadata.component&lt;/code&gt; property to the path of the component's directory, relative to the &lt;code&gt;components.base_path&lt;/code&gt; defined in &lt;code&gt;atmos.yaml&lt;/code&gt;. In our case, the &lt;code&gt;components.base_path&lt;/code&gt; is &lt;code&gt;components/terraform&lt;/code&gt;, so you can simply specify &lt;code&gt;weather&lt;/code&gt; as the path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components:
  terraform:
    station:
      metadata:
        component: weather
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;-9-. Next, go to &lt;code&gt;/stacks/catalog&lt;/code&gt; and create a file named &lt;code&gt;station.yaml&lt;/code&gt;.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components:
  terraform:
    station:
      metadata:
        component: weather
      vars:
        location: Los Angeles
        lang: en
        format: ''
        options: '0'
        units: m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;-10-. Next, we’ll define the environment-specific configurations for our Terraform root module. We’ll create a separate file for each environment and stage. In our case, we have three environments: &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When Atmos processes this stack configuration, it will first import and deep-merge all the variables from the imported files, then apply the inline configuration. While the order of keys in a YAML map doesn’t affect behavior, lists are strictly ordered, so the sequence of &lt;code&gt;imports&lt;/code&gt; is important.&lt;/p&gt;
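&lt;p&gt;For example, using the values from this post: the imported catalog file sets a default location, the environment stack overrides it inline, and the deep-merge yields the union with the inline value winning (illustrative only):&lt;/p&gt;

```yaml
# catalog/station.yaml (imported first) sets:
#   vars:
#     location: Los Angeles
#     lang: en
#
# deploy/dev.yaml (inline, applied last) sets:
#   vars:
#     location: India
#
# Effective configuration after the deep-merge:
vars:
  location: India   # inline override wins
  lang: en          # inherited from the import
```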

&lt;p&gt;&lt;strong&gt;-11-. Define the configuration for the &lt;code&gt;dev&lt;/code&gt; environment.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;dev&lt;/code&gt; stack configuration, Atmos first processes the &lt;code&gt;imports&lt;/code&gt; in the order defined. It then applies the global &lt;code&gt;vars&lt;/code&gt; specified in the top-level section. Include only those &lt;code&gt;vars&lt;/code&gt; in the globals that are applicable to every single component in the stack. For variables that aren't universally applicable, define them on a per-component basis.&lt;/p&gt;

&lt;p&gt;For example, by setting &lt;code&gt;var.stage&lt;/code&gt; to &lt;code&gt;dev&lt;/code&gt; at a global level, we assume that every component in this stack will have a stage variable.&lt;/p&gt;

&lt;p&gt;Finally, in the component-specific configuration for &lt;code&gt;station&lt;/code&gt;, set the fine-tuned parameters for this environment. Everything else will be inherited from its baseline configuration. There are no strict rules about where to place configurations; organize them in a way that makes logical sense for your infrastructure’s data model.&lt;/p&gt;

&lt;p&gt;To accomplish this, go to &lt;code&gt;/stacks/deploy&lt;/code&gt; and create a file named &lt;code&gt;dev.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  stage: dev

import:
  - catalog/station

components:
  terraform:
    station:
      vars:
        location: India
        lang: en
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;-12-. In this demo, we will focus on the &lt;code&gt;dev&lt;/code&gt; environment only. If you want to create configurations for &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;, you can refer to the official Atmos documentation for guidance.&lt;/strong&gt;&lt;/p&gt;
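&lt;p&gt;For reference, a &lt;code&gt;staging&lt;/code&gt; stack would follow the same shape as the &lt;code&gt;dev.yaml&lt;/code&gt; above. This is a sketch, not from the original walkthrough; adjust the vars to taste:&lt;/p&gt;

```yaml
# stacks/deploy/staging.yaml (hypothetical, mirrors dev.yaml)
vars:
  stage: staging

import:
  - catalog/station

components:
  terraform:
    station:
      vars:
        location: India
        lang: en
```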

&lt;p&gt;&lt;strong&gt;-13-. After completing all the steps, the final file and folder structure should look like this:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ppn5y2p05vk9rr0xwki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ppn5y2p05vk9rr0xwki.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-14-. Now that we have written all the modules in Terraform, we still need to install Terraform itself. Let’s proceed with the installation.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-1-. sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y gnupg software-properties-common

-2-. wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg &amp;gt; /dev/null

-3-. gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

-4-. echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list

-5-. sudo apt update

-6-. sudo apt-get install terraform

-7-. terraform version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvte2s4yub93bwbbifru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvte2s4yub93bwbbifru.png" alt="Image description" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-15-. Now, let’s deploy the module using the command below. Make sure to run this command from the root folder where the &lt;code&gt;atmos.yaml&lt;/code&gt; file is located.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;atmos terraform apply myapp -s dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmbdm7p5728asduotsrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmbdm7p5728asduotsrr.png" alt="Image description" width="800" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-16-. Here, you can view either a graph or a report of the weather for India. Additionally, if you need to run the &lt;code&gt;init&lt;/code&gt; or &lt;code&gt;plan&lt;/code&gt; commands, you can execute them as well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-17-. Atmos can change how you think about the Terraform code you write to build your infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you design cloud architectures with Atmos, you will first break them apart into pieces called components. Then, you will implement Terraform "root modules" for each of those components. Finally, you will compose your components in any way you like using stacks, without having to write any glue code or messy templates for code generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, we’ve walked through the process of setting up and configuring Atmos with Terraform. We started by creating the necessary directory structure and defining the &lt;code&gt;atmos.yaml&lt;/code&gt; configuration file. We then wrote the Terraform code for our components and environment-specific configurations.&lt;/p&gt;

&lt;p&gt;We covered the steps to install Terraform and deploy the module, ensuring that everything was executed from the correct directory. By the end of this guide, you should have a solid understanding of how Atmos integrates with Terraform and how to manage infrastructure using this framework.&lt;/p&gt;

&lt;p&gt;Feel free to explore Atmos further by consulting the official documentation for additional environments like &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt;. With this setup, you’re well-equipped to manage and scale your infrastructure effectively.&lt;/p&gt;

&lt;p&gt;Thank you for following along, and I hope you found this tutorial helpful!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://atmos.tools/introduction/" rel="noopener noreferrer"&gt;Getting Started with Atmos | atmos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://atmos.tools/" rel="noopener noreferrer"&gt;Hello from atmos | atmos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cloudposse" rel="noopener noreferrer"&gt;Cloud Posse (github.com)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.cloudposse.com/fundamentals/introduction/" rel="noopener noreferrer"&gt;Introduction | The Cloud Posse Developer Hub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>infrastructureascode</category>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Monitor your AWS EC2/Workspace with Datadog</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Mon, 08 Jul 2024 10:30:19 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/how-to-monitor-your-aws-ec2workspace-with-datadog-15jd</link>
      <guid>https://dev.to/shrihariharidass/how-to-monitor-your-aws-ec2workspace-with-datadog-15jd</guid>
      <description>&lt;p&gt;-1- Log in to your AWS and Datadog accounts. In this example, I will configure AWS Workspace. If you have an EC2 instance, you can follow these steps too. My OS is Ubuntu.&lt;/p&gt;

&lt;p&gt;-2-. After deploying an EC2 instance or logging into an AWS Workspace, proceed to update the machine.&lt;/p&gt;

&lt;p&gt;-3-. Next, navigate to Datadog → Integrations → Agent → Ubuntu and install the Datadog agent on your host machine. This agent will send metrics to Datadog and does not integrate directly with any AWS services.&lt;/p&gt;

&lt;p&gt;-4-. Then click on 'Select API Key,' create a new key, and give it a name. Below that, you will see a command to install the Datadog agent on your system. Copy that command and run it on your server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0ez0wb1qcw9wit4rf59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0ez0wb1qcw9wit4rf59.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

DD_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX DD_SITE="datadoghq.com"  bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;-5-. The command will configure and install the Datadog agent on your machine.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo usermod -a -G docker dd-agent
systemctl status datadog-agent
datadog-agent version
hostname


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can run the above command. Since I have Docker installed on my system, I've added the Datadog agent to the Docker group to monitor Docker. After running the command, check the service status, Datadog version, and finally, the hostname. If you're using an EC2 instance, it will display the instance ID; for a Workspace, it will show the Workspace ID.&lt;/p&gt;

&lt;p&gt;-6-. After installing the agent, navigate to your Datadog dashboard, go to 'Infrastructure,' and search for your instance ID or workspace ID. You will now see your host machine on the default dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7cd6ko7qup6vhpbyeo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7cd6ko7qup6vhpbyeo9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq00wxn3qtkjyt568fyf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq00wxn3qtkjyt568fyf7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. If you notice, initially, you'll see only a few metrics. However, you can explore other options such as 'Host Info,' 'Containers,' 'Processes,' 'Network,' 'Logs,' and more. By default, you can view host info and metrics on the dashboard. If you want to see 'Processes' and 'Logs,' you'll need to enable them in the Datadog configuration file.&lt;/p&gt;

&lt;p&gt;-8-. To enable viewing 'Processes' in the Datadog dashboard, you'll need to edit the configuration file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo vi /etc/datadog-agent/datadog.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following line to your configuration file to enable 'Processes' monitoring.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process_config:
  process_collection:
    enabled: true



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi000mwyzzhmoqwp2cbyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi000mwyzzhmoqwp2cbyx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9-. After adding the line to the configuration file, restart the Datadog service. Then, navigate to Datadog, click on 'Processes,' and you'll now see the dashboard displaying Processes, PID, total CPU, and RSS memory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo systemctl restart datadog-agent


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xcmtfxwc4sxfq03og6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xcmtfxwc4sxfq03og6p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. Now that Processes are visible, you can expand the dashboard by clicking 'Open in Live Process' on the right-hand side for a larger view. Next, let's configure 'Logs.'&lt;/p&gt;

&lt;p&gt;-11-. To configure 'Logs,' set &lt;code&gt;logs_enabled: true&lt;/code&gt; in the Datadog configuration file (the setting is present but commented out by default).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

vi /etc/datadog-agent/datadog.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9muf4ty2vf5x0ce4h78s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9muf4ty2vf5x0ce4h78s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
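&lt;p&gt;For reference, after uncommenting, the relevant line in &lt;code&gt;datadog.yaml&lt;/code&gt; should look roughly like this (a sketch; the surrounding comments vary by agent version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
## Enable log collection for the Datadog Agent
logs_enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;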

&lt;p&gt;-12-. Create a new directory for system logs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo mkdir /etc/datadog-agent/conf.d/system_logs.d


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;-13-. Create the conf.yaml file in the new directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo vi /etc/datadog-agent/conf.d/system_logs.d/conf.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;-14-. Add the log collection configuration to the conf.yaml file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

logs:
  - type: file
    path: /var/log/syslog
    service: syslog
    source: syslog
    sourcecategory: system

  - type: file
    path: /var/log/auth.log
    service: auth
    source: auth
    sourcecategory: system

  - type: file
    path: /var/log/kern.log
    service: kernel
    source: kernel
    sourcecategory: system

  - type: file
    path: /var/log/messages
    service: messages
    source: messages
    sourcecategory: system

  # Add other log files as needed


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;-15-. After making the changes, save and exit the file. Typically, you can use &lt;code&gt;:wq&lt;/code&gt; to save and exit, but &lt;code&gt;:x&lt;/code&gt; also works.&lt;/p&gt;

&lt;p&gt;-16-. Ensure that the Datadog Agent user has the appropriate permissions to read the log files. Afterward, restart the agent.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo usermod -a -G adm dd-agent
sudo systemctl restart datadog-agent


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
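&lt;p&gt;To double-check that the group change took effect, you can list a user's groups; after the &lt;code&gt;usermod&lt;/code&gt; above, &lt;code&gt;adm&lt;/code&gt; should appear for the &lt;code&gt;dd-agent&lt;/code&gt; user (shown here with the current user purely for illustration):&lt;/p&gt;

```shell
# List the group memberships of a user. After 'sudo usermod -a -G adm dd-agent'
# you would run 'id -nG dd-agent' and expect 'adm' in the output.
id -nG "$(whoami)"
```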

&lt;p&gt;-17-. Now return to Datadog and open the 'Logs' tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf9qet7heahctp2ok5pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf9qet7heahctp2ok5pd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-18-. Finally, bring all these different metrics together: create a new dashboard and consolidate the separate views into one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4zqeaf3l1411rtv4h5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4zqeaf3l1411rtv4h5k.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-19-. Of course, if you want to monitor additional services or metrics, feel free to explore the official &lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;Datadog website&lt;/a&gt; and refer to the documentation.&lt;/p&gt;

&lt;p&gt;-20-. In conclusion, this guide has demonstrated how to effectively monitor your AWS EC2 or Workspace using Datadog. We covered installing the Datadog agent, configuring metrics like Processes and Logs, and creating a unified dashboard. It's important to note that Datadog is a paid tool, so if you're practicing as a single user or student, be mindful of potential costs. However, Datadog offers a 14-day trial period, allowing you to explore its features for free during this time. Remember, depending on your specific use case, you can further explore Datadog to unlock additional metrics and services tailored to your needs. For more detailed options, visit the official Datadog website and consult their documentation.&lt;/p&gt;

</description>
      <category>datadog</category>
      <category>monitoring</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Importance of Security Groups (SGs) and Network Access Control Lists (NACLs) in AWS</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Thu, 25 Apr 2024 09:41:33 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/importance-of-security-groups-sgs-and-network-access-control-lists-nacls-in-aws-2h17</link>
      <guid>https://dev.to/shrihariharidass/importance-of-security-groups-sgs-and-network-access-control-lists-nacls-in-aws-2h17</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If I'm mistaken or if you disagree with any point, please feel free to share your thoughts. My aim is to highlight the importance of these concepts for newcomers and individuals starting their journey in the cloud.&lt;/p&gt;

&lt;p&gt;-1-. So let's assume you're going to the office. When you enter the office premises, firstly, the security guard identifies your identity and allows you to proceed. Then, as you enter the building, another guard verifies your identity before you swipe your card to gain entry. Of course, swiping your card marks your attendance, but it also verifies that you are indeed the right person to enter the office.&lt;/p&gt;

&lt;p&gt;-2-. Now, you are part of the cybersecurity analyst team, and you have a lot of responsibilities. For this, you have a separate team to monitor security. When you enter the office, everyone can access it, but the security room is accessible only to those who have the authority. This adds another level of security. Your card includes these additional steps, so when you enter that room, it will check if you are an authorized person to access the room, ensuring that not everyone can access this area.&lt;/p&gt;

&lt;p&gt;-3-. Now, these last two security steps are the most important and play a crucial role in the organization, right?&lt;/p&gt;

&lt;p&gt;-4-. Now, the point I'm making is, consider this: You have an application inside your VPC, which consists of various components like IGW, subnets, ELB, route tables, SG, NACL, NAT gateway, etc. Now, let's relate all of these to the story I mentioned earlier. What I'm trying to convey is that, just like in the story where multiple security checkpoints were in place before reaching a secure area, in your VPC environment, there are similar security measures. The last two security points I mentioned - subnet-level security and instance-level security - act as crucial checkpoints. These ensure that user requests are securely directed to your application or server after passing through these layers of protection.&lt;/p&gt;

&lt;p&gt;-5-. Of course, before reaching the subnet and instance level security, there are several other security checks you can implement. However, in today's blog, I'm focusing more on Security Groups, which operate on the instance level, and Network Access Control Lists (NACLs), which work on subnet-level security. I'll delve into how you can set up these security measures in your organization or application layer.&lt;/p&gt;

&lt;p&gt;-6-. In AWS, security is a shared responsibility between AWS and the user. AWS follows the principle of denying all inbound traffic by default when launching instances. If you want to allow inbound traffic, you must explicitly open the required ports. However, opening ports to all traffic can pose security risks. It's essential to restrict access to specific IP addresses or employ other secure methods to ensure a safer environment.&lt;/p&gt;

&lt;p&gt;-7-. As a DevOps engineer or AWS admin, managing Security Groups (SGs) for all instances individually can be time-consuming. However, Network Access Control Lists (NACLs), which operate at the subnet level, offer a solution. By defining port and traffic rules at the subnet level, changes automatically apply to all instances within that subnet. This not only reduces administrative tasks but also enhances security by providing consistent rules across multiple instances.&lt;/p&gt;

&lt;p&gt;-8-. In this blog, we'll walk through the process of creating a Virtual Private Cloud (VPC) on AWS. Within this VPC, we'll set up an EC2 instance and deploy a Python application. Throughout the tutorial, we'll explore the configuration of Security Groups (SGs) and Network Access Control Lists (NACLs) to better understand their roles in securing our infrastructure.&lt;/p&gt;

&lt;p&gt;-9-. Let's begin by logging into your AWS account and navigating to the 'VPC' service, then click on 'Create VPC'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F838vgod7i1mfj4ybepmv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F838vgod7i1mfj4ybepmv.PNG" alt="Image description" width="708" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. Next, select the 'VPC and more' option, as we're currently focusing on creating a VPC. Provide a name for the VPC, specify the CIDR block, choose the Availability Zone (AZ), select subnets including private subnets, and set up a VPC endpoint as an S3 gateway. Finally, click on 'Create VPC' to complete the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hch6pom8cfkahpds3hi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hch6pom8cfkahpds3hi.png" alt="Image description" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-11-. You'll also have a visual representation of your VPC, subnets, route tables, and network connections, which helps you understand how traffic flows within the VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndsbbsyqv1ufrj25wmkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndsbbsyqv1ufrj25wmkb.png" alt="Image description" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknv3xae1c2v0afyc9ziv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknv3xae1c2v0afyc9ziv.png" alt="Image description" width="800" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12-. After creating the VPC, navigate to the EC2 service and launch a t2.micro instance with default settings. Simply choose the VPC you created and any public subnet, then proceed to launch the instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d0xdfmz5f68bkptb80d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d0xdfmz5f68bkptb80d.png" alt="Image description" width="604" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-13-. After launching the instance, wait for it to start. Once it's running, log in to your server. Since I am using an Ubuntu image, update your machine to ensure you have the latest version of Python installed.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flutntzpe54krakoan8ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flutntzpe54krakoan8ef.png" alt="Image description" width="502" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-14-. Run the following command to start a simple Python server:&lt;br&gt;
&lt;code&gt;python3 -m http.server 8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fpfvz0pjqxr6dizyf5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fpfvz0pjqxr6dizyf5s.png" alt="Image description" width="660" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-15-. If you copy the public IP and try to access the application using port 8080, you may find it inaccessible. As I mentioned earlier, AWS implements mutual security measures, so by default, all traffic is denied when you launch a server. To address this, navigate to the "Security" tab and click on your "SG-Name". You'll notice that by default, only port 22 is allowed for server login.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrr6lk38rl1ktkivdz0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrr6lk38rl1ktkivdz0q.png" alt="Image description" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-16-. To access the application, click on the Security Group (SG) associated with your instance. Then, select "Edit Inbound Rules" and allow port 8080 for all traffic. Keep in mind that opening all traffic is not recommended for production environments; instead, you can choose to allow traffic only from specific IP addresses. However, for study purposes, we're opening all traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff81gjc26pyt0wnec30ap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff81gjc26pyt0wnec30ap.png" alt="Image description" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-17-. After saving the rule, copy the public IP address and access it on port 8080. You should now be able to see your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6cofskapzofp4jxev05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6cofskapzofp4jxev05.png" alt="Image description" width="453" height="312"&gt;&lt;/a&gt;&lt;/p&gt;
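&lt;p&gt;Before blaming the Security Group, it can help to confirm the server responds locally on the instance. A quick sketch, assuming &lt;code&gt;curl&lt;/code&gt; is installed; swap &lt;code&gt;localhost&lt;/code&gt; for the public IP when testing through the SG from your workstation:&lt;/p&gt;

```shell
# Start the demo server in the background, then check the HTTP status code.
# Replace localhost with the instance's public IP to test through the SG.
python3 -m http.server 8080 >/dev/null 2>&1 &
pid=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080   # expect 200
kill "$pid"
```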

&lt;p&gt;-18-. Up until now, you've witnessed the importance of Security Groups at the instance level in providing security. Of course, we opened all traffic for study purposes, but I must reiterate that opening all traffic is not recommended. You should allow access only from specific IP addresses to enhance security. Keep this in mind as you proceed.&lt;/p&gt;

&lt;p&gt;-19-. In an organization with numerous instances running across different environments like dev, prod, and uat, managing each instance's security group policies individually becomes impractical. Developers may also launch instances for their purposes and define their own rules, potentially conflicting with organizational security policies.&lt;/p&gt;

&lt;p&gt;To address this challenge, defining rules at the subnet level becomes crucial. By setting up Network Access Control Lists (NACLs) at the subnet level, you can control the traffic entering and leaving that subnet. This way, regardless of how many instances are launched within the subnet, they all adhere to the defined rules, ensuring consistent and manageable security across your infrastructure.&lt;/p&gt;

&lt;p&gt;-20-. Let's navigate back to the VPC console and select "Network ACLs". Then, open the VPC that you created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujcsaay9uev4yztipbzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujcsaay9uev4yztipbzy.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-21-. In the "Inbound Rules" section, you'll notice that rules start from Rule 100 and end with an asterisk (*). Rules are evaluated in ascending order by rule number, and the asterisk entry is the catch-all default that applies only when no numbered rule matches. To modify the inbound rules, let's first remove Rule 100. Then, click on "Add New Rule" and set it to deny port 8080.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaxku6v7fklkd56kx7wm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaxku6v7fklkd56kx7wm.png" alt="Image description" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9emxu0qdth2yzkizrvzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9emxu0qdth2yzkizrvzx.png" alt="Image description" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;
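&lt;p&gt;The evaluation order matters: here is a tiny shell sketch of a hypothetical rule table (not an AWS API call) showing that the lowest-numbered matching rule decides and everything after it is ignored:&lt;/p&gt;

```shell
# Hypothetical NACL rule table: "rule_number action port".
# AWS evaluates numbered rules in ascending order; the first match wins.
rules='200 deny 22
100 allow 22'
port=22
printf '%s\n' "$rules" | sort -n | awk -v p="$port" '$3 == p { print $2; exit }'
# prints "allow": rule 100 matches first, so rule 200 is never consulted
```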

&lt;p&gt;-22-. Now, if you refresh the application page, you'll notice that the application is no longer accessible on port 8080. Although we allowed port 8080 at the instance level, modifying the rule at the subnet level denied traffic for all instances within that subnet. Even if someone launches an instance and allows traffic on port 8080, it won't work due to the NACL policy we set up. This demonstrates the importance of NACLs in VPC from a security standpoint, as they provide an additional layer of control over traffic flow within subnets, ensuring consistent security policies across instances.&lt;/p&gt;

&lt;p&gt;-23-. Exactly, by modifying the NACL rules to allow traffic, the application will be accessible again. This flexibility in NACL rules allows you to adjust security measures as needed while maintaining control over traffic flow within your VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm993tobwrr9xufyez636.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm993tobwrr9xufyez636.png" alt="Image description" width="732" height="831"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-24-. In the NACL group, if I set Rule 100 to deny port 8080 and Rule 200 to allow traffic for port 8080, will our application be accessible or not? The answer is within this blog. Feel free to try it out and let me know.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Terraform &amp; HashiCorp Vault Integration: Seamless Secrets Management</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Sat, 23 Mar 2024 04:00:00 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/terraform-hashicorp-vault-integration-seamless-secrets-management-4jkk</link>
      <guid>https://dev.to/shrihariharidass/terraform-hashicorp-vault-integration-seamless-secrets-management-4jkk</guid>
      <description>&lt;p&gt;&lt;strong&gt;what is Hashicorp vault.?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine HashiCorp Vault as a secure digital vault for all your sensitive information like passwords, API keys, and encryption certificates. It acts like a central location to store, access, and manage these secrets. Here's why it's useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Strong Security&lt;/strong&gt;: Vault encrypts your secrets and controls access with different permission levels. This minimizes the risk of unauthorized access and keeps your sensitive information safe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Management&lt;/strong&gt;: No more scattered secrets! Vault keeps everything in one place, making it easier to control and audit who can access what.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can store various secrets, from database credentials to cloud API keys. Vault also integrates with different tools and platforms you might already be using.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, using Vault helps organizations improve security, simplify secret management, and streamline access control for critical information. While Vault has a free open-source version with limited features, most businesses opt for the paid edition with additional functionalities like advanced auditing and disaster recovery.&lt;/p&gt;

&lt;p&gt;-1-. In today's demo, we will explore Terraform Vault integration with a real example and troubleshooting. Let's get started!&lt;/p&gt;

&lt;p&gt;-2-. Launch one EC2 instance with basic configuration and access it.&lt;/p&gt;

&lt;p&gt;-3-. Then update your machine and install GPG as well.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install gpg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;-4-. Then download the signing key to a new keyring.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;-5-. Then verify the key's fingerprint.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;-6-. Clone the Vault repository from GitHub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $GOPATH/src/github.com/hashicorp &amp;amp;&amp;amp; cd $_
git clone https://github.com/hashicorp/vault.git
cd vault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgasrjbiwm9c75hxnn6fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgasrjbiwm9c75hxnn6fs.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. Then install Vault.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo snap install vault&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8t0rx5uu15dwnocejk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8t0rx5uu15dwnocejk2.png" alt="Image description" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-8-. To start Vault, you can use the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vault server -dev -dev-listen-address="0.0.0.0:8200"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ehlf4feipjv3ge7wt1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ehlf4feipjv3ge7wt1t.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9-. You may need to set the following environment variable. To do so, open a second terminal session (the dev server occupies the first), then copy the command below and run it there.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export VAULT_ADDR='http://0.0.0.0:8200'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fisg276qczyug9ct1bvro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fisg276qczyug9ct1bvro.png" alt="Image description" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;
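&lt;p&gt;As a side note, the Vault CLI in your second session, and later the Terraform Vault provider, both read the same environment variables. The token value below is a placeholder; use the dev root token printed when the server started:&lt;/p&gt;

```shell
# VAULT_ADDR and VAULT_TOKEN are honored by the Vault CLI and by
# Terraform's Vault provider. The token here is a placeholder value.
export VAULT_ADDR='http://0.0.0.0:8200'
export VAULT_TOKEN='dev-root-token-placeholder'
echo "$VAULT_ADDR"
```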

&lt;p&gt;-10-. Then go to your EC2 instance's security group and open port 8200 to access the Vault UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42swuab876a6w6c6ybdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42swuab876a6w6c6ybdu.png" alt="Image description" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-11-. And then copy the public IP and paste it into your browser's address bar followed by port 8200. You will see the login page appear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vqaf72zo7y5dn8a51kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vqaf72zo7y5dn8a51kw.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12-. To log in as root, select 'Token' as the method. Copy the root token that was printed in your terminal when you started Vault and paste it in. After that, you will see the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy4p3aonx975pqczpmkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy4p3aonx975pqczpmkf.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmgt1yidd0bz0bbvrti5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmgt1yidd0bz0bbvrti5.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-13-. Then, in this demo, we will store the secret as 'Key &amp;amp; value'. To do so, click on 'Secret Engines', then select 'KV' which stands for 'Key/value', and click on 'Next'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwe9nfbhzcyr8pzg161s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwe9nfbhzcyr8pzg161s.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-14-. Next, in the following window, provide a path and click 'Next' to proceed. You will notice that no credentials have been created or stored yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynovk7biy0kar8mdayn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynovk7biy0kar8mdayn7.png" alt="Image description" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfworo9hj9bij1mkwp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfworo9hj9bij1mkwp3.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-15-. Now, click on 'Create Secret' and input your secret. I will use 'Username &amp;amp; password' as an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufmvcklerhvf99hfgpt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufmvcklerhvf99hfgpt0.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrlbam3x6iqycm0tt8xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrlbam3x6iqycm0tt8xd.png" alt="Image description" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;
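&lt;p&gt;If you prefer the CLI, the same KV setup can be sketched with a few commands. The mount path 'kv' and secret name 'test-secret' are the example values used later in the Terraform code, and this assumes VAULT_ADDR and VAULT_TOKEN are already exported for your Vault server.&lt;/p&gt;

```shell
# Enable a KV v2 secrets engine at the path "kv"
vault secrets enable -path=kv kv-v2

# Store an example username/password pair as the secret "test-secret"
vault kv put kv/test-secret username=admin password=examplepassword

# Read it back to confirm it was stored
vault kv get kv/test-secret
```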

&lt;p&gt;-16-. Then, to access these credentials from Terraform, you need to assign policies and roles, similar to IAM in AWS. Go to 'Access', click on 'Enable New Method', select 'AppRole' as the authentication method, and enable it. We will use this AppRole to authenticate Terraform (or whichever tool you are using).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favpvhm17rrovw8dsb01x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favpvhm17rrovw8dsb01x.png" alt="Image description" width="732" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvjn92osoyaomhztkzac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvjn92osoyaomhztkzac.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;
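&lt;p&gt;On the CLI, enabling the AppRole auth method is a short sketch (assuming you are logged in with a token that can manage auth methods):&lt;/p&gt;

```shell
# Enable the AppRole auth method at its default path
vault auth enable approle

# Confirm it shows up in the list of enabled auth methods
vault auth list
```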

&lt;p&gt;-17-. Then, open a second terminal session on your machine and run the following commands in it.&lt;/p&gt;

&lt;p&gt;-18-. Because HashiCorp Vault's UI does not support this part of the setup, go to the terminal and do it from the CLI; it's very easy. Before creating the role, we need to create a policy for it. This policy enables Terraform to access the 'kv' and 'secret' paths in Vault.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault policy write terraform - &amp;lt;&amp;lt;EOF
path "*" {
  capabilities = ["list", "read"]
}

path "secrets/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "kv/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "secret/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "auth/token/create" {
capabilities = ["create", "read", "update", "list"]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
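&lt;p&gt;You can confirm the policy was written correctly with a quick read-back:&lt;/p&gt;

```shell
# Print the stored policy to verify it matches what we wrote
vault policy read terraform
```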



&lt;p&gt;-19-. Now, create a role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault write auth/approle/role/terraform \
    secret_id_ttl=10m \
    token_num_uses=10 \
    token_ttl=20m \
    token_max_ttl=30m \
    secret_id_num_uses=40 \
    token_policies=terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rqbal2tv9sa7096h1j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rqbal2tv9sa7096h1j4.png" alt="Image description" width="800" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-20-. Similar to access keys in AWS, Vault gives us a Role ID and a Secret ID. These are sensitive credentials, so do not share them with anyone.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault read auth/approle/role/terraform/role-id

vault write -f auth/approle/role/terraform/secret-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
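&lt;p&gt;Before wiring these into Terraform, you can verify the credentials work by logging in with them directly. This is a sketch; ROLE_ID and SECRET_ID are placeholders for the values returned by the commands above:&lt;/p&gt;

```shell
# Exchange the Role ID and Secret ID for a Vault token
vault write auth/approle/login \
    role_id="ROLE_ID" \
    secret_id="SECRET_ID"
```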



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrjlwvhhznobynv5on0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrjlwvhhznobynv5on0s.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-21-. Now that we are done with Vault, the next step is to write the Terraform project and check whether Terraform is able to read the secret from Vault.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#main.tf

provider "aws" {
  region = "us-east-1"
}

provider "vault" {
  address = "http://&amp;lt;your-ip_address&amp;gt;:8200"
  skip_child_token = true

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id = "&amp;lt;your-role_id&amp;gt;"
      secret_id = "&amp;lt;your-secret_id&amp;gt;"
    }
  }
}

data "vault_kv_secret_v2" "example" {
  mount = "kv" // change it according to your mount
  name  = "test-secret" // change it according to your secret
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-22-. Then, to check if Terraform is able to retrieve the credentials from Vault, run the 'terraform init' and 'terraform apply' commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5s05dfyit15if9ies7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5s05dfyit15if9ies7q.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-23-. Now, let's add a block to create an EC2 instance and apply it again. You will see that the value read from Vault appears in the EC2 instance's tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#main.tf

provider "aws" {
  region = "us-east-1"
}

provider "vault" {
  address = "http://&amp;lt;your-ip_address&amp;gt;:8200"
  skip_child_token = true

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id = "&amp;lt;your-role_id&amp;gt;"
      secret_id = "&amp;lt;your-secret_id&amp;gt;"
    }
  }
}

data "vault_kv_secret_v2" "example" {
  mount = "kv" // change it according to your mount
  name  = "test-secret" // change it according to your secret
}

resource "aws_instance" "example" {
  ami = "ami-0c7217cdde317cfec"
  instance_type = "t2.micro"

  tags = {
    secret = data.vault_kv_secret_v2.example.data["username"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7ghn712sg8r7fbi6sd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7ghn712sg8r7fbi6sd7.png" alt="Image description" width="537" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d3e6now55x2o9azw0jn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d3e6now55x2o9azw0jn.png" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk2fpy92u4t3j8hhqzzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk2fpy92u4t3j8hhqzzk.png" alt="Image description" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-24-. If you encounter the following error while running the 'terraform apply' command, recreate the 'role-id' and 'secret-id' with the same commands mentioned in step 20 (the Secret ID expires after the short TTL we set). Then paste the new credentials into the code. This will solve the problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wqq9w0fuksap8g1kh8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wqq9w0fuksap8g1kh8f.png" alt="Image description" width="473" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>security</category>
      <category>devops</category>
      <category>secret</category>
    </item>
    <item>
      <title>Automating ECR Image Notifications in Slack with EventBridge and Lambda.</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Sat, 17 Feb 2024 03:51:00 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/automating-ecr-image-notifications-in-slack-with-eventbridge-and-lambda-2078</link>
      <guid>https://dev.to/shrihariharidass/automating-ecr-image-notifications-in-slack-with-eventbridge-and-lambda-2078</guid>
      <description>&lt;p&gt;This blog covers how you can enable notifications in Slack whenever new Docker images are pushed into an AWS ECR repo. This is achieved with the assistance of AWS services such as EventBridge and Lambda. Additionally, to send notifications on Slack automatically, a Slack Webhook is also required.&lt;/p&gt;

&lt;p&gt;-1-. As a first step, create an account on Slack, then navigate to the &lt;a href="https://api.slack.com/apps"&gt;Slack API: Applications | Slack&lt;/a&gt;. Click on 'Create an App,' and choose the option to create it from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb3bmd2gohoecoq8n1qf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb3bmd2gohoecoq8n1qf.png" alt="Image description" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-2-. Provide an 'App Name,' choose your workspace created to receive notifications, and then click on 'Create App.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kmq986l9kx491xj2je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kmq986l9kx491xj2je.png" alt="Image description" width="528" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3-. On the left side menu bar, locate the 'Incoming Webhook' option and click on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z2t7hvkswovszbo471.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z2t7hvkswovszbo471.png" alt="Image description" width="647" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-4-. Activate the incoming webhook by clicking on the 'On' toggle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vuho5ml581eyj28a71v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vuho5ml581eyj28a71v.png" alt="Image description" width="706" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-5-. To add the webhook to the workspace, click on 'Add New Webhook to Workspace.' A pop-up will appear asking for Slack channel permissions; click 'Allow' to enable the reception of incoming webhooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvzzix9vvlof7x7k5xfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvzzix9vvlof7x7k5xfa.png" alt="Image description" width="554" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6-. Once done, you will see a generated 'Webhook URL.' Copy this URL and save it in Notepad or any other text editor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrqaeiswcemcsf1il1tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrqaeiswcemcsf1il1tl.png" alt="Image description" width="677" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. Next, log in to your AWS account and navigate to 'AWS Lambda.' Create a new function, choose 'Python 3.12' as the runtime, and then create the function. Since Lambda's runtime doesn't include the 'requests' package, you'll need to prepare the code and its dependencies on your local machine where Python is already installed. You can use the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import requests

def lambda_handler(event, context):
    # Print the entire event to CloudWatch Logs for debugging
    print(json.dumps(event))

    # Access imageDigest and repositoryName dynamically
    image_digest = event.get('imageDigest') or event.get('detail', {}).get('imageDigest')
    repo_name = event.get('repositoryName') or event.get('detail', {}).get('repositoryName')

    if not image_digest or not repo_name:
        return {
            'statusCode': 400,
            'body': json.dumps('Invalid event structure. Missing required keys.')
        }

    slack_webhook_url = 'Your Slack URL'

    message = f"New image pushed to ECR: {repo_name}:{image_digest}"
    requests.post(slack_webhook_url, json={'text': message})

    return {
        'statusCode': 200,
        'body': json.dumps('Notification sent to Slack!')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-8-. After running the command 'pip install requests' in your project directory, multiple dependency files will be generated. Zip all these files together with your function code, excluding the Slack webhook file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F377ze6dxp1vymianpcgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F377ze6dxp1vymianpcgh.png" alt="Image description" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9-. Navigate back to the Lambda function, and on the right-hand side, click on 'Upload from' and choose '.zip.' Select your zipped file and upload your code. In the test event, set the key-value pairs accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn3hhzctfjd414fg1gu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn3hhzctfjd414fg1gu4.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmkjnb0t1z8fnldzp0fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmkjnb0t1z8fnldzp0fw.png" alt="Image description" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. Finally, deploy the function.&lt;/p&gt;

&lt;p&gt;-11-. Now, go to 'EventBridge,' and click on 'Create Rule.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc36m2yeiv1edv0sj6t0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc36m2yeiv1edv0sj6t0m.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12-. Enter a name and description for the rule, then click on 'Next.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqx603n4nlxq4j3zh4cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqx603n4nlxq4j3zh4cy.png" alt="Image description" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-13-. Now, an important step is to configure the 'Event Pattern.' Select the value as 'my image' and click on 'Next.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10z7un7dsne91onek5lm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10z7un7dsne91onek5lm.png" alt="Image description" width="800" height="972"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-14-. Choose the target as 'Lambda,' select your function, click 'Next,' and create the rule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb20algozkfi6yfh9c516.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb20algozkfi6yfh9c516.png" alt="Image description" width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmz48eb96f7mf2ikq0h4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmz48eb96f7mf2ikq0h4.png" alt="Image description" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-15-. Now, go to Amazon ECR, create a public or private repository, push an image into that repository, and you will see a notification received in your Slack workspace as configured.&lt;/p&gt;
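&lt;p&gt;For a private repository, the push itself can be sketched like this (ACCOUNT_ID, the region, and the repository and image names are placeholders for your own values):&lt;/p&gt;

```shell
# Authenticate Docker with your private ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image for ECR and push it
docker tag myapp:latest ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
```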

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqsv2d9kq1atqs9uuy6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqsv2d9kq1atqs9uuy6a.png" alt="Image description" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtpwgcxtl15x2magmlfn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtpwgcxtl15x2magmlfn.png" alt="Image description" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-16-. So, from now on, whenever you push a new image to ECR, EventBridge and Lambda will automatically send a notification to your Slack group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, this is how you can configure notifications for ECR images. By leveraging EventBridge and Lambda along with Slack webhooks, you can receive automatic notifications in your Slack workspace whenever a new image is pushed to your ECR repository.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>slack</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS CodePipeline</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Wed, 07 Feb 2024 05:20:39 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/aws-codepipeline-157o</link>
      <guid>https://dev.to/shrihariharidass/aws-codepipeline-157o</guid>
      <description>&lt;p&gt;-1-. Now, all stages have been properly configured, and it is running smoothly. However, we are currently executing it manually. Therefore, as the final step in this series, we will configure AWS CodePipeline and explore its functionalities.&lt;/p&gt;

&lt;p&gt;-2-. Navigate to 'CodePipeline' and then click on 'Create Pipeline'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0ggejojb90chgbl1lcu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0ggejojb90chgbl1lcu.png" alt="Image description" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3-. Next, provide a name for your pipeline, select the pipeline type, choose 'New Role', and then click on 'Next'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz9eoffepysg6hvjxyd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz9eoffepysg6hvjxyd6.png" alt="Image description" width="800" height="989"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-4-. Next, in the following window, select 'CodeCommit' as the source provider since our code is stored there. Choose the 'Repository Name' where your code resides, and then select the branch from which you want to build and deploy your application. For the detection option, choose 'AWS CodePipeline', as it is essential for tracking changes in the repository. This ensures that any new changes trigger our pipeline. Finally, click on 'Next'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb3mhxhhvh3aetsrlys8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb3mhxhhvh3aetsrlys8.png" alt="Image description" width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-5-. In the next stage, the 'Build Stage,' select the 'Build Provider,' indicating where you want to build your code. Choose your region, specify your project name, and then click on 'Next'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi25vrlu316ifnbsjidq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi25vrlu316ifnbsjidq6.png" alt="Image description" width="800" height="733"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6-. In the deploy stage, select the 'Deploy Provider' and specify the application name and deployment group where we have configured the deployment. Finally, click on 'Next'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04sz6c6qang5mir8r93m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04sz6c6qang5mir8r93m.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. Review your pipeline settings and configurations, ensuring they are accurate. Then, click on 'Create Pipeline' to initiate the process, which will fetch your code from CodeCommit, build it, and deploy it on EC2.&lt;/p&gt;

&lt;p&gt;-8-. Here, you can observe that our pipeline has run successfully, with all stages completing without errors. This indicates that our application has been deployed. You can verify this by checking the latest commit that we made, which triggered the pipeline automatically. To confirm the deployment, you can make changes to the 'index.html' file and see if the changes reflect correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ktb4aq5ex4uz8d11ith.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ktb4aq5ex4uz8d11ith.png" alt="Image description" width="775" height="1125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qp1rmol5zh7pzwq3sgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qp1rmol5zh7pzwq3sgw.png" alt="Image description" width="492" height="237"&gt;&lt;/a&gt;&lt;/p&gt;
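&lt;p&gt;If you ever need to re-run the pipeline without pushing a new commit, you can also trigger it manually from the CLI (a sketch; 'my-pipeline' is a placeholder for your pipeline name):&lt;/p&gt;

```shell
# Manually start a new execution of an existing pipeline
aws codepipeline start-pipeline-execution --name my-pipeline
```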

&lt;p&gt;-9-. AWS CI/CD services like AWS CodePipeline are a critical component of DevOps practices. Building pipelines in AWS enables automated, efficient deployment of applications, ensuring faster delivery and higher reliability. Configuring a code pipeline end to end, as we did here, is an essential skill for modern software development and deployment.&lt;/p&gt;

&lt;p&gt;This is the last part of our 'AWS DevOps' series. We covered topics such as 'Introduction to AWS DevOps,' 'AWS CodeCommit,' 'AWS CodeBuild,' 'AWS CodeDeploy,' and 'AWS CodePipeline.' We delved into each step in detail, exploring their importance. Make sure to try this out, and after you're finished, remember to terminate or delete the pipeline to avoid unnecessary billing. Please share, like, and let me know in the comment box your thoughts for the next series or any questions you may have. Happy learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsdevops</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automating Dockerfile Vulnerability Scanning in GitHub Actions Using Snyk and CodeQL</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Fri, 02 Feb 2024 18:30:00 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/automating-dockerfile-vulnerability-scanning-in-github-actions-using-snyk-and-codeql-2630</link>
      <guid>https://dev.to/shrihariharidass/automating-dockerfile-vulnerability-scanning-in-github-actions-using-snyk-and-codeql-2630</guid>
      <description>&lt;p&gt;&lt;strong&gt;What will be covered in this blog?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Building secure software is like building a sturdy house - you wouldn't wait until it's finished to check for termites, right? That's where Dockerfile scanning comes in. It's like checking your construction plans for weak spots before hammering any nails.&lt;/p&gt;

&lt;p&gt;-&amp;gt; Think of Snyk and CodeQL as your security inspectors. They scan your Dockerfile, a blueprint for your container image, and point out any hidden vulnerabilities, like rickety doors or leaky windows.&lt;/p&gt;

&lt;p&gt;-&amp;gt; By integrating this scanning into your GitHub Actions, like an automated construction manager, you catch these issues early, even before building the image. This saves you time, money, and headaches later on, because fixing a broken door after the house is built is much harder than preventing it in the first place!&lt;/p&gt;

&lt;p&gt;-&amp;gt; This is what DevSecOps is all about - baking security into every step of building software, not just the final inspection. By automating Dockerfile scanning with Snyk and CodeQL, you build stronger, more secure software with confidence and peace of mind, just like a well-built house that can withstand any storm.&lt;/p&gt;

&lt;p&gt;-&amp;gt; So, in this setup, we use Snyk to scan the Dockerfile and output the results in SARIF format, which we then upload to GitHub code scanning.&lt;/p&gt;

&lt;p&gt;-&amp;gt; SARIF (Static Analysis Results Interchange Format) is an &lt;strong&gt;OASIS Standard that defines an output file format.&lt;/strong&gt; The SARIF standard is used to streamline how static analysis tools share their results.&lt;/p&gt;
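
&lt;p&gt;For context, a SARIF file is just structured JSON. Below is a heavily trimmed sketch of what an uploaded report contains; the tool name matches this setup, but the rule id and messages are illustrative placeholders, not real Snyk output:&lt;/p&gt;

```json
{
  "version": "2.1.0",
  "runs": [
    {
      "tool": {
        "driver": {
          "name": "Snyk Container",
          "rules": [
            { "id": "SNYK-EXAMPLE-1", "shortDescription": { "text": "Example vulnerability" } }
          ]
        }
      },
      "results": [
        {
          "ruleId": "SNYK-EXAMPLE-1",
          "level": "error",
          "message": { "text": "Vulnerable package found in the base image" },
          "locations": [
            { "physicalLocation": { "artifactLocation": { "uri": "Dockerfile" } } }
          ]
        }
      ]
    }
  ]
}
```

&lt;p&gt;GitHub code scanning reads the "results" array and renders each entry as an alert in the Security tab.&lt;/p&gt;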

&lt;p&gt;(1). Go to 'Snyk,' create your account, then navigate to the Dashboard. In the left-side menu bar, click on your account name, and select 'Account Settings' from the dropdown menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo96jgs8yfigx5jzlg4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo96jgs8yfigx5jzlg4w.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(2). Once inside, proceed to the 'General' tab. You will find the 'Generate Auth Token' option; click on it, then copy the token. Save it in Notepad. In my case, I have already created that token.&lt;/p&gt;

&lt;p&gt;(3). Next, navigate to your GitHub account and create a repository with your desired name. Create a 'Dockerfile'; for demonstration, you can use the following example Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use an official Node.js runtime as a parent image
FROM node:14

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the application
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Save this file.&lt;/p&gt;

&lt;p&gt;(4). Proceed to 'Settings' → 'Secrets and variables' → 'Actions.' Click on 'New repository secret,' provide the name 'SNYK_TOKEN,' paste the token, and save it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8spmf6kmw4bwwycth72r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8spmf6kmw4bwwycth72r.png" alt="Image description" width="345" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9a7qpwneru78bhwq1bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9a7qpwneru78bhwq1bp.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(5). Navigate to the 'Actions' tab, search for 'Snyk,' and select 'Snyk Container.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwh5q66cvv5urqkuvgvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwh5q66cvv5urqkuvgvz.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(6). It will generate a '.yaml' workflow file. You can customize this script based on your requirements, but for this demo the default is fine. Now 'Commit' the changes.&lt;/p&gt;
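
&lt;p&gt;For reference, the generated workflow looks roughly like the sketch below: it builds the image, scans it with Snyk using the SNYK_TOKEN secret, and uploads the SARIF report to code scanning. The image name and branch name are placeholders; adjust them to your repository:&lt;/p&gt;

```yaml
name: Snyk Container
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the Docker image
        run: docker build -t your/image-to-test .
      - name: Run Snyk to check the image for vulnerabilities
        uses: snyk/actions/docker@master
        continue-on-error: true
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          image: your/image-to-test
          args: --file=Dockerfile
      - name: Upload result to GitHub Code Scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: snyk.sarif
```

&lt;p&gt;Note the "continue-on-error: true" on the scan step: it lets the workflow proceed to the upload step even when vulnerabilities are found, so the findings still reach the Security tab.&lt;/p&gt;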

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1p975nqhjnp14o2nwgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1p975nqhjnp14o2nwgq.png" alt="Image description" width="800" height="846"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(7). Navigate to the 'Actions' tab and click on the pipeline; your pipeline will be triggered automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42su9uwe8coy809dceeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42su9uwe8coy809dceeu.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(8). Now you can see that my job has run successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv43czwdoiwmd1kyze17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv43czwdoiwmd1kyze17.png" alt="Image description" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(9). Now, go to the 'Snyk Dashboard.' Note that when creating an account on Snyk, you can connect it with your 'GitHub' account for access to your repository. Alternatively, you can go to the 'Projects' option and click on 'Add Project.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3pk8b3qn02t0kljn831.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3pk8b3qn02t0kljn831.png" alt="Image description" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(10). Select the 'GitHub' option, then choose your repository and click on 'Add selected repos.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ax2awindb6wms5qife.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ax2awindb6wms5qife.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(11). Here, you will find your project report and the scan report for the 'Dockerfile.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkiheld13x9xjrwugyqdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkiheld13x9xjrwugyqdy.png" alt="Image description" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23v3gcukn23zeggeiqkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23v3gcukn23zeggeiqkn.png" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(12). Now, return to your GitHub account and navigate to the 'Security' tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5r55iza8eqrvtet73k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5r55iza8eqrvtet73k5.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(13). Click on 'Code Scanning,' and you will find a detailed report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabpyr9xh8z3arje3kgh2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabpyr9xh8z3arje3kgh2.png" alt="Image description" width="800" height="844"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz302yqz6otq9bsyfsab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz302yqz6otq9bsyfsab.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(14). You can also clone my repo:&lt;/p&gt;

&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shri2904"&gt;
        shri2904
      &lt;/a&gt; / &lt;a href="https://github.com/shri2904/snyk-blog"&gt;
        snyk-blog
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Description 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
snyk-blog&lt;/h1&gt;
&lt;p&gt;Description&lt;/p&gt;
&lt;/div&gt;

  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shri2904/snyk-blog"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



</description>
      <category>docker</category>
      <category>devsecops</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AWS CodeDeploy</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Wed, 31 Jan 2024 08:55:34 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/aws-codedeploy-2nk8</link>
      <guid>https://dev.to/shrihariharidass/aws-codedeploy-2nk8</guid>
      <description>&lt;p&gt;Hi, guys! I hope you are practicing AWS DevOps and enhancing your skills. Last week, we covered the AWS CodeBuild service, and today we are moving on to the next part, which is CodeDeploy. After building your code, we need to deploy it, right? With the help of the CodeDeploy service, we can automate our deployment process as well. &lt;/p&gt;

&lt;p&gt;AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises.&lt;/p&gt;

&lt;p&gt;-1-. Let's deploy our code with the AWS CodeDeploy service. To deploy an application using CodeDeploy, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to CodeDeploy.&lt;/li&gt;
&lt;li&gt;Click on "Create Application".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxmrsyx9uickvm0xnti1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxmrsyx9uickvm0xnti1.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-2-. Then, provide an "application name" and select the compute platform as "EC2." Finally, click on "Create."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoprylf161p0h5nhw333.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoprylf161p0h5nhw333.png" alt="Image description" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3-. Since we are using EC2 in this walkthrough, the built application has to be deployed onto that server. The set of servers you deploy to is called a "deployment group." Click on "Create Deployment Group." A deployment group typically contains more than one server; if you want to deploy the application to multiple servers simultaneously, you create a deployment group that includes all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3389g9rd1iem6umv8815.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3389g9rd1iem6umv8815.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-4-. Then, provide a deployment group name, create a service role, and select the deployment type. In the service role, attach the necessary access policies. If needed, you can attach those policies to your role to ensure the deployment is done successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwco4y7upy524syxj96v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwco4y7upy524syxj96v.png" alt="Image description" width="800" height="891"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eqb08kzzdsvd8bx8n9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eqb08kzzdsvd8bx8n9u.png" alt="Image description" width="800" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-5-. Then, create a basic EC2 instance and go back to our CodeDeploy group. We need the EC2 instance because our application is deployed on that server. In the Environment Configuration, select your EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu432ksa9to24j6w800cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu432ksa9to24j6w800cy.png" alt="Image description" width="682" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvl9u84b1zkx4v6juoxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvl9u84b1zkx4v6juoxj.png" alt="Image description" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6-. Select the "never" value for installing the AWS CodeDeploy agent, disable the load balancer, and click on "Create Deployment Group." For the time being, I've chosen the "never" option, but we may need to change this setting later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy1fp3xu2j0h4ac0sphh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy1fp3xu2j0h4ac0sphh.png" alt="Image description" width="800" height="843"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. I chose the "never" option because the agent install that the console performs can hit a Ruby version mismatch on newer Ubuntu releases. The agent must be running on the EC2 instance for CodeDeploy to communicate with it, so we'll install it manually instead: connect to the instance and run the following script, which downloads the agent package and patches its Ruby dependency to resolve the version mismatch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash 
# This installs the CodeDeploy agent and its prerequisites on Ubuntu 22.04.  
sudo apt-get update 
sudo apt-get install ruby-full ruby-webrick wget -y
cd /tmp 
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/releases/codedeploy-agent_1.3.2-1902_all.deb 
# wget cmd end here
mkdir codedeploy-agent_1.3.2-1902_ubuntu22 
dpkg-deb -R codedeploy-agent_1.3.2-1902_all.deb codedeploy-agent_1.3.2-1902_ubuntu22
# dpkg end here
sed 's/Depends:.*/Depends:ruby3.0/' -i ./codedeploy-agent_1.3.2-1902_ubuntu22/DEBIAN/control
# sed end here
dpkg-deb -b codedeploy-agent_1.3.2-1902_ubuntu22/
sudo dpkg -i codedeploy-agent_1.3.2-1902_ubuntu22.deb
systemctl list-units --type=service | grep codedeploy
sudo service codedeploy-agent status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feygo7ta71xc856f0d38x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feygo7ta71xc856f0d38x.png" alt="Image description" width="671" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-8-. Now, if you recall, just as we used the &lt;code&gt;buildspec.yaml&lt;/code&gt; for building the code, we need an &lt;code&gt;appspec.yaml&lt;/code&gt; file for deploying the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/install_nginx.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_nginx.sh
      timeout: 300
      runas: root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-9-. After that, create a "scripts" folder, and within it, create two new files named "install_nginx.sh" and "start_nginx.sh." Push these files to your CodeCommit repository. The file contents are as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---------------------install_nginx.sh---------------------
#!/bin/bash

sudo apt-get update
sudo apt-get install -y nginx

----------------------start_nginx.sh-------------------
#!/bin/bash

sudo service nginx start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
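
&lt;p&gt;If you prefer the terminal, the two hook files above can be created in one go; a minimal sketch (the paths match the hooks referenced in appspec.yaml):&lt;/p&gt;

```shell
# Create the scripts folder and both hook scripts referenced by appspec.yaml.
mkdir -p scripts
printf '#!/bin/bash\nsudo apt-get update\nsudo apt-get install -y nginx\n' > scripts/install_nginx.sh
printf '#!/bin/bash\nsudo service nginx start\n' > scripts/start_nginx.sh

# Hooks must be executable, since CodeDeploy runs them as programs on the instance.
chmod +x scripts/install_nginx.sh scripts/start_nginx.sh
```

&lt;p&gt;After this, commit and push the folder as described above.&lt;/p&gt;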



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02wmvkbsuute5wfxckga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02wmvkbsuute5wfxckga.png" alt="Image description" width="452" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. Afterward, we need to build this code so that our latest code is stored in "S3." Navigate to "Build" and initiate the build process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqcd76d841qkw3srcm1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqcd76d841qkw3srcm1e.png" alt="Image description" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-11-. Now, go to "Deployment" again and click on "Create Deployment." Choose the "Revision type," select the file type, and then click on "Create Deployment." You will see the deployment process starting. Also, choose the S3 location where our "Zip" file is stored; this is crucial. Click on that zip file, and it will show the details of the zip. Copy the zip URL and paste it in the revision location.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzb6zffqttoj6uywlxfr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzb6zffqttoj6uywlxfr.png" alt="Image description" width="593" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk1ldtfogjkoi6dtuz7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk1ldtfogjkoi6dtuz7r.png" alt="Image description" width="800" height="721"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrppkwvn3llyyext36gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrppkwvn3llyyext36gl.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12-. But if you click on "Events," you will notice that all are pending because the EC2 instance doesn't have the permission to retrieve artifacts from S3. Additionally, the EC2 instance lacks permission to access the CodeDeploy service. Let's create one more role in IAM to address this issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbzxq0dmeqyu59mmuwme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbzxq0dmeqyu59mmuwme.png" alt="Image description" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau90trinjnmd4blbi4cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau90trinjnmd4blbi4cm.png" alt="Image description" width="766" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-13-. Now, we need to grant these permissions to our instance. Go to the EC2 dashboard and select your instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwek890u4y9uztbzkl1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwek890u4y9uztbzkl1f.png" alt="Image description" width="673" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd7ullretq7gaiudneta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd7ullretq7gaiudneta.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-14-. Now, connect to the instance and restart the CodeDeploy agent service. After that, check the CodeDeploy dashboard, and you should see that all steps have run successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service codedeploy-agent restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z6vqujg42bex4b183gi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z6vqujg42bex4b183gi.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fardhr8j1yjq2ub5mincz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fardhr8j1yjq2ub5mincz.png" alt="Image description" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-15-. Now, go to EC2, copy your public IP, and paste it into the browser. You should see the Nginx page, indicating that your deployment was successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5i73e64un7qtubzldcv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5i73e64un7qtubzldcv.png" alt="Image description" width="397" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is how deployment with AWS CodeDeploy works.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsdevops</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS CodeBuild</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Wed, 24 Jan 2024 05:39:32 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/aws-codebuild-npm</link>
      <guid>https://dev.to/shrihariharidass/aws-codebuild-npm</guid>
      <description>&lt;p&gt;Today, we will explore the AWS CodeBuild service, which is the next phase after CodeCommit. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. You don't need to provision, manage, and scale your own build servers. I hope you have read the previous blog about the CodeCommit service.&lt;/p&gt;

&lt;p&gt;So, let's get started!&lt;/p&gt;

&lt;p&gt;-1-. Now, let's build our code. To do that, go to CodeBuild and click on 'Create build project.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflgtfzhzw9okyarnuqv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflgtfzhzw9okyarnuqv2.png" alt="Image description" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-2-. In the project configuration settings, provide the 'Project Name,' then select the source provider as 'AWS CodeCommit.' After that, choose the 'Branch.' Next, select the OS as 'Ubuntu,' runtime as 'Standard,' and use the latest image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19w8yplkhu3qepdpaah7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19w8yplkhu3qepdpaah7.png" alt="Image description" width="800" height="821"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h9rl8h8ht4650w1v1el.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h9rl8h8ht4650w1v1el.png" alt="Image description" width="622" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3-. Now, we need a 'Service Role.' CodeBuild requires access to other services; for instance, during a build it needs permission to fetch code from the repository. If you look closely, the console creates this role for you automatically. After that, you will see the 'Buildspec' option, which is important because the buildspec is the configuration file we need to create.&lt;/p&gt;

&lt;p&gt;-4-. To create the file, let's go to VS Code, create a file named 'buildspec.yml,' save it, and push it to the master branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  install:
    commands:
      - echo Installing NGINX
      - sudo apt-get update
      - sudo apt-get install nginx -y
  build:
    commands:
      - echo Build started on `date`
      - cp index.html /var/www/html/
  post_build:
    commands:
      - echo configuring NGINX

artifacts:
  files:
    - '**/*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
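&lt;p&gt;Before pushing, you can sanity-check what these phases will do by running roughly the same commands locally. Here is a minimal dry run; it uses a temporary directory as a stand-in for '/var/www/html' (which would require root access and an actual NGINX install), and the file contents are placeholders:&lt;/p&gt;

```shell
# Local dry run of the buildspec phases, using a temp dir
# as a stand-in for /var/www/html (no root/NGINX needed).
set -e
workdir=$(mktemp -d)
cd "$workdir"

# The "source" file that the build phase copies.
echo "Hello from CodeBuild" | tee index.html

# install phase: on CodeBuild this runs apt-get to install NGINX;
# here we only create the target directory NGINX would provide.
webroot="$workdir/var/www/html"
mkdir -p "$webroot"

# build phase
echo "Build started on $(date)"
cp index.html "$webroot/"

# post_build phase
echo "configuring NGINX"
ls "$webroot"
```

On CodeBuild itself, the same commands run inside the Ubuntu standard image, with the repository contents as the working directory.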



&lt;p&gt;-5-. If you save the file as 'buildspec.yml,' you don't need to provide the file name; it will be fetched automatically from that branch. After that, click on 'Create build project.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7m2n4sk8h493qy1jdab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7m2n4sk8h493qy1jdab.png" alt="Image description" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh6oqm0uie33od4i6vva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh6oqm0uie33od4i6vva.png" alt="Image description" width="800" height="658"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6-. Next, click on the 'Start Build' button to initiate the build. You can see that the build is successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbkgoeb85eiqok3z1guw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbkgoeb85eiqok3z1guw.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj1eemqi9jslbx06ntth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj1eemqi9jslbx06ntth.png" alt="Image description" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7-. If you want to store the 'Artifacts,' you can edit the configuration and set the location to 'S3'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i97yxwywyoippvk4zmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i97yxwywyoippvk4zmd.png" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-8-. Choose 'S3' as the type. If you already have a bucket, you can provide its name; otherwise, create one in S3. Within that bucket, create a folder where we will store our artifacts in '.zip' format, and set 'Artifacts packaging' to 'ZIP.' Copy that location, paste it in, and click on 'Update artifacts.' This way, the next time we build, our artifacts will be stored in S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff438to9ghhgzxnmz81ix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff438to9ghhgzxnmz81ix.png" alt="Image description" width="800" height="1092"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9-. Run the build again to check whether the artifacts are stored in S3. Before starting the build, go to IAM → Roles, find our build role, and attach an S3 access policy (for example, the 'AmazonS3FullAccess' managed policy). This step is necessary to upload the artifacts; otherwise, you will encounter an 'Access Denied' error. Here, I've attached that policy, and our build is successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggd8eas15fmodmtwwzn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggd8eas15fmodmtwwzn1.png" alt="Image description" width="481" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lru69hzkrmk5kjr2s1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lru69hzkrmk5kjr2s1b.png" alt="Image description" width="598" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. Now, go to the S3 location to check whether the artifacts are uploaded. (By mistake, I pasted the wrong location path, which is why the path is so long.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2g83hm53lgy2p5rxxda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2g83hm53lgy2p5rxxda.png" alt="Image description" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, this is how we can build our code using the CodeBuild service. This series is designed for those who are just starting with AWS DevOps, which is why I am covering the basic pipeline steps. Once this series concludes, we will create a real-world pipeline, and at that point we can skip the basics. I hope you enjoyed today's part.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsdevops</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS CodeCommit</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Wed, 17 Jan 2024 04:40:00 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/aws-codecommit-3lpb</link>
      <guid>https://dev.to/shrihariharidass/aws-codecommit-3lpb</guid>
      <description>&lt;p&gt;As you can see in the title, today we are covering the first service of AWS DevOps, which is 'CodeCommit.' I hope you've read the introduction part of this series; if not, please check it out. AWS CodeCommit is similar to other centralized code repositories like GitHub, GitLab, GitBucket, and so on; the only difference is that it's an AWS private repository that you can use. For more information, you can read about it on &lt;a href="https://aws.amazon.com/codecommit/."&gt;AWS CodeCommit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So today, we'll learn how to create a repository in CodeCommit, create a user for it, provide necessary permissions to that user using the IAM service, clone that repository into our local machine, make some changes, and push it back to the repository. Additionally, we'll explore how to create branches, merge them in CodeCommit, and cover some other important options. Let's get started!&lt;/p&gt;
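&lt;p&gt;Under the hood, this whole cycle is plain Git. The sketch below shows the clone → edit → commit → push flow end to end; since the CodeCommit clone URL is account-specific (it follows the pattern https://git-codecommit.REGION.amazonaws.com/v1/repos/REPO-NAME), a local bare repository stands in for the remote, and the user name and email are placeholders:&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the CodeCommit remote; in practice you would use
# the repository's HTTPS 'Clone URL' from the console instead.
git init --bare origin.git

# Clone, add a file, commit, and push -- same flow as with CodeCommit.
git clone origin.git demo-repo
cd demo-repo
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "Hello CodeCommit" | tee index.html
git add index.html
git commit -m "Add index.html"
git push origin HEAD                      # push the current branch
```

With a real CodeCommit URL, the push would prompt for the HTTPS Git credentials we generate in IAM below.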

&lt;p&gt;-1-. Begin by searching for the 'CodeCommit' service. As you know, the first step involves pushing code into the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJptwylm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k91ayy3ljciqje4p8tv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJptwylm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k91ayy3ljciqje4p8tv0.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-2-. Next, navigate to 'Repository,' where we push our code. To do this, click on 'Create Repository.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--edxxcmWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uel4ljh9nfol6pkrbje2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--edxxcmWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uel4ljh9nfol6pkrbje2.png" alt="Image description" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, provide a name and description for the repository. You'll notice 'CodeGuru,' which is similar to 'SonarQube' for code scanning; we don't need it at this point, so proceed directly to 'Create repository.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BtRa_Dhq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fnyv7tt12kbbwjg18j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BtRa_Dhq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fnyv7tt12kbbwjg18j3.png" alt="Image description" width="800" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3-. You'll encounter a warning indicating that using the root user is not recommended. To address this, let's create another user in 'IAM' to enable the use of services in AWS DevOps. Proceed to 'IAM' now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gQPxt0mm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yuldq7gybi7fsvt5ynxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gQPxt0mm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yuldq7gybi7fsvt5ynxw.png" alt="Image description" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-4-. In 'IAM,' navigate to 'Users,' click on 'Add Users,' and provide a name for the user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tQUbs-Pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4p3uycygugq9r4d5w4j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tQUbs-Pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4p3uycygugq9r4d5w4j7.png" alt="Image description" width="769" height="795"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--e3INORjk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2882pm3idqgsdorwarj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--e3INORjk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2882pm3idqgsdorwarj.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-5-. In the 'Set Permissions' window, click 'Next.' On the 'Review' page, you'll notice that we currently have only one permission, which is the ability to change the password. Proceed to click 'Create User.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7jVbDAN5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wo4qjxxum1pkve3wdxio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7jVbDAN5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wo4qjxxum1pkve3wdxio.png" alt="Image description" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ivNRCSIy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8cihegrpuhwnv1w7uar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ivNRCSIy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8cihegrpuhwnv1w7uar.png" alt="Image description" width="625" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6-. Save this password, as it is displayed only once, or download the '.csv' file.&lt;/p&gt;

&lt;p&gt;-7-. Now that our user is created, click on it. We want to grant permissions for 'CodeCommit,' so first, navigate inside it and then click on 'Security Credentials.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ipxY81Z8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3xzlnaolyuaeq04aa7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ipxY81Z8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3xzlnaolyuaeq04aa7o.png" alt="Image description" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-8-. Scroll down, and you'll find 'HTTPS Git credentials for AWS CodeCommit.' Click on 'Generate Credentials.' Here, you will see your generated credentials; you can also download them, and with these credentials, you can access the repository over HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1GuMhcJS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oozdtr9iajlzxboox8vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1GuMhcJS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oozdtr9iajlzxboox8vn.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RHa4zITz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/195vrvki3dxewdqux1xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RHa4zITz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/195vrvki3dxewdqux1xw.png" alt="Image description" width="627" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9-. Return to the 'CodeCommit' window, click on 'Clone URL,' and choose the 'HTTPS' option. Create a folder, open your preferred code editor (like VS Code), and clone the repository to your local computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GOWoWuKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mswh91bdkv7hsuolbfpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GOWoWuKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mswh91bdkv7hsuolbfpz.png" alt="Image description" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10-. While cloning, a pop-up will appear asking you to enter the Git credentials created for 'CodeCommit.' Simply copy and paste them in this window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tEPMlfQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q5p3415htv3k7h9ua9if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tEPMlfQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q5p3415htv3k7h9ua9if.png" alt="Image description" width="478" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-11-. If the cloning process is not working, we might have missed setting up IAM permissions for cloning and committing. Go back to the 'Users' section, select the user, and navigate to the 'Permissions' tab. Since there is currently only one permission, click on 'Add permission' → 'Attach policies directly' → 'AWSCodeCommitPowerUsers.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u5Efe7tL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilzlilzza2p1s94y3vha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u5Efe7tL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilzlilzza2p1s94y3vha.png" alt="Image description" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2mjR969q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwekjgc61bk1o428zvxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2mjR969q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwekjgc61bk1o428zvxo.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12-. Now, try the process again in 'VS Code.' This time, you should see a 'Clone successful, empty repository' message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BLE11cQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38vnn7qi4xzulh7chkze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BLE11cQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38vnn7qi4xzulh7chkze.png" alt="Image description" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-13-. Next, add an 'index.html' file with some sample text. After making changes, commit and push this code into our repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S2gqokhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv7tgszozpwq5am8gwpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S2gqokhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv7tgszozpwq5am8gwpu.png" alt="Image description" width="800" height="886"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_fZQY-K4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vl43cdz80v568a759hah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_fZQY-K4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vl43cdz80v568a759hah.png" alt="Image description" width="733" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-14-. Navigate to 'CodeCommit' → 'Repository,' and refresh. You should now see our files in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XDBD-2nz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yecxlad6rs6lxkx15hkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XDBD-2nz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yecxlad6rs6lxkx15hkf.png" alt="Image description" width="635" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-15-. This is how you can connect your local machine to AWS CodeCommit using IAM credentials, similar to connecting to GitHub. Now, let's create another branch, push the code, and check the 'Branches' section in CodeCommit to see the newly created 'dev' branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CLA3qDpN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r5t25xemskc632snyqzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CLA3qDpN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r5t25xemskc632snyqzs.png" alt="Image description" width="800" height="844"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cYHyTiqX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sy3q9kewvnieqn0c6ws2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cYHyTiqX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sy3q9kewvnieqn0c6ws2.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;
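&lt;p&gt;Locally, creating and publishing the 'dev' branch comes down to two Git commands. A self-contained sketch (a local bare repository again stands in for the CodeCommit remote, and the identity and file contents are placeholders):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init --bare origin.git        # stand-in for the CodeCommit remote
git clone origin.git work
cd work
git config user.email "you@example.com"
git config user.name "Your Name"
echo "first version" | tee index.html
git add index.html
git commit -m "Initial commit"
git push origin HEAD

# Create the 'dev' branch, commit a change, and publish the branch.
git checkout -b dev
echo "change from dev" | tee index.html
git commit -am "Update index.html on dev"
git push -u origin dev
```

After the final push, 'dev' appears under 'Branches' in the CodeCommit console, just like in the screenshots above.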

&lt;p&gt;-16-. Now, let's merge the changes into 'Master.' Click on the branch, create a pull request, provide a name, create the pull request, merge it using fast-forward, and delete the source branch after merging. Refresh the 'Master' branch to see your changes successfully merged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xsRn0F2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqylara1oinytky56vgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xsRn0F2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqylara1oinytky56vgl.png" alt="Image description" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4T_L2KyI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skegkph4tliclqkfl50f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4T_L2KyI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skegkph4tliclqkfl50f.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yriJ9fz7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kglzoucli82uryl4v4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yriJ9fz7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kglzoucli82uryl4v4s.png" alt="Image description" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tmjo2WAo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhavpnfrwkjr31cucx87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tmjo2WAo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhavpnfrwkjr31cucx87.png" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z0mMb4G6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3esbpailgtr5gglvcawt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z0mMb4G6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3esbpailgtr5gglvcawt.png" alt="Image description" width="621" height="278"&gt;&lt;/a&gt;&lt;/p&gt;
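&lt;p&gt;The console's fast-forward merge plus 'delete source branch' corresponds to the following Git sequence. This sketch builds a tiny repository first so it is self-contained; the branch name is read from HEAD rather than hard-coded, since it is 'master' in this series but may differ in your setup:&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "v1" | tee index.html
git add index.html
git commit -m "Initial commit"
base=$(git symbolic-ref --short HEAD)     # 'master' in this series

# Work on 'dev', then fast-forward merge it back and delete it,
# mirroring the pull-request merge in the CodeCommit console.
git checkout -b dev
echo "v2" | tee index.html
git commit -am "Update index.html"
git checkout "$base"
git merge --ff-only dev
git branch -d dev
```

'--ff-only' refuses the merge unless it can fast-forward, which matches the merge strategy chosen in the console here.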

&lt;p&gt;-17-. So, this is how you can use the 'CodeCommit' service.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsdevops</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Connect AWS Application Composer in VS Code</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Tue, 16 Jan 2024 09:39:18 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/how-to-connect-aws-application-composer-in-vs-code-5a7n</link>
      <guid>https://dev.to/shrihariharidass/how-to-connect-aws-application-composer-in-vs-code-5a7n</guid>
      <description>&lt;p&gt;As you know, last month, AWS announced AWS Application Composer in VS Code, allowing you to use it within VS Code and work seamlessly. However, when I started working on it, I found it challenging to integrate because it was recently announced, and there were no video documentation available. Many steps were missing at a high level. After troubleshooting and spending hours on it, I finally managed to make it work.&lt;/p&gt;

&lt;p&gt;So, I'm sharing a step-by-step document on how you can connect AWS Application Composer to VS Code locally and sync up. I hope you find it helpful. Let's get started!&lt;/p&gt;

&lt;p&gt;-1- So, let's begin by logging into the AWS account.&lt;/p&gt;

&lt;p&gt;-2- Next, search for the 'Application Composer' service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco8n8mbnr8t7ke144a3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco8n8mbnr8t7ke144a3r.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-3- After opening, you'll find information about the 'Application Composer.' You can then click on 'Create project.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp567z5oqjfi08zr6arf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp567z5oqjfi08zr6arf8.png" alt="Image description" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-4- Once you click on 'Create project,' a blank canvas dashboard will open. On the right-hand side, you will see the 'Menu' option. Click on it, and you will see multiple options; select 'Activate local sync.'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbu4htsxtga81xo5c54js.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbu4htsxtga81xo5c54js.png" alt="Image description" width="278" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-5- Next, select the folder from your local machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7evkezulgx8jbeoy9h51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7evkezulgx8jbeoy9h51.png" alt="Image description" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-6- I created a new folder named 'application-compose' and selected it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5f3stmtepz0in4ei697.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5f3stmtepz0in4ei697.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-7- After selecting the folder, you will see the 'Activate' button. Click on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsa2xalf0pavxdjkmkyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsa2xalf0pavxdjkmkyf.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-8- Open VS Code, go to the Extensions view, and search for 'AWS Toolkit.' Install and configure it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzy6m93kfqjs01tasuxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzy6m93kfqjs01tasuxh.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-9- Click on 'Sign In to get Started,' and another window will appear. Sign in using your 'Builder ID.' If you haven't created one, you can do so at the time of signing in, and you'll be ready to use AWS services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ps61qh23ndpivg3nhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ps61qh23ndpivg3nhz.png" alt="Image description" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-10- Now, navigate to the folder you created.&lt;/p&gt;

&lt;p&gt;-11- Now you will see the 'template.yaml' file there; it is currently empty since we haven't created anything yet. Additionally, on the right side you'll notice a small 'Application Composer' icon. Clicking it opens the same canvas you saw in the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj54q4oxqimcxnubzmpi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj54q4oxqimcxnubzmpi4.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-12- From there, you can start working on your project. This is how you can use AWS Application Composer in VS Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5e0q3d3zb38qmny9m7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5e0q3d3zb38qmny9m7s.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>serverless</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Introduction to AWS DevOps</title>
      <dc:creator>Shrihari Haridass</dc:creator>
      <pubDate>Thu, 11 Jan 2024 10:41:05 +0000</pubDate>
      <link>https://dev.to/shrihariharidass/introduction-to-aws-devops-4b15</link>
      <guid>https://dev.to/shrihariharidass/introduction-to-aws-devops-4b15</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS provides a set of flexible services designed to enable companies to more rapidly and reliably build and deliver products using AWS and DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance.&lt;/p&gt;

&lt;p&gt;DevOps is a methodology. When we talk about AWS DevOps, the main components are:&lt;/p&gt;

&lt;p&gt;A. CodeCommit&lt;br&gt;
B. CodeBuild&lt;br&gt;
C. CodeDeploy&lt;br&gt;
D. CodePipeline&lt;/p&gt;

&lt;p&gt;But as we delve deeper, the journey begins with code. How do you deploy that code to servers automatically, in a continuous cycle? That is the essence of the journey.&lt;/p&gt;

&lt;p&gt;If you are a DevOps engineer, especially one focused on AWS DevOps, you should be familiar with the following services. I'll discuss each service in depth, so don't worry about that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8M622AkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bah4hi059g76pf6so5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8M622AkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bah4hi059g76pf6so5a.png" alt="Credit: https://aws.amazon.com/blogs/architecture/category/developer-tools/aws-codepipeline/ " width="776" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IAM&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Suppose you want to grant a particular user access to a specific DevOps service. AWS has over 400 services; with IAM you can grant access to exactly the services a user needs, and those services in turn need access to certain resources. All of these rules are defined in IAM.&lt;/p&gt;
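&lt;p&gt;As a concrete, purely illustrative example, an IAM identity-based policy is just a JSON document. The sketch below builds one in Python; the region, account ID, and repository name are placeholders, not real resources:&lt;/p&gt;

```python
import json

# Illustrative identity-based policy granting a user read/write access to a
# single CodeCommit repository. The ARN below is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["codecommit:GitPull", "codecommit:GitPush"],
            "Resource": "arn:aws:codecommit:us-east-1:111122223333:my-app-repo",
        }
    ],
}

# Serialize it the way you would hand it to IAM.
policy_document = json.dumps(policy, indent=2)
print(policy_document)
```

&lt;p&gt;A document like this, attached to a user or role, grants exactly those CodeCommit permissions and nothing more, which is the principle of least privilege in practice.&lt;/p&gt;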

&lt;blockquote&gt;
&lt;p&gt;KMS &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;KMS, or Key Management Service, comes into play when you need to encrypt or decrypt data. It creates and manages the encryption keys, and other services hand their encryption and decryption operations off to KMS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Artifacts&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you build your code, the build output needs to be stored somewhere. These outputs are called artifacts, and they are typically stored in S3.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CodeCommit&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;CodeCommit is similar to GitHub or Bitbucket; the main difference is that it is owned and managed by AWS. It serves as a private, central Git repository where developers push their code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CodeBuild &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After the code is pushed to the repository, the next steps are building and testing it, which form the core of the Continuous Integration (CI) process. Once a developer pushes code, CodeBuild runs the build, and the resulting output is saved to S3 as artifacts. These artifacts can then be deployed at any time.&lt;/p&gt;
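&lt;p&gt;CodeBuild reads its instructions from a 'buildspec.yml' file at the root of the repository. Here is a minimal, illustrative example for a Node.js project; the npm commands and the 'dist' output directory are assumptions about the project layout:&lt;/p&gt;

```yaml
# Illustrative buildspec: install, test, build, then package everything
# under dist/ as the artifact CodeBuild uploads to S3.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npm test
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: dist
```

&lt;p&gt;The 'artifacts' section is what connects CodeBuild to the artifact story above: whatever matches there is zipped and pushed to the S3 bucket you configured.&lt;/p&gt;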

&lt;blockquote&gt;
&lt;p&gt;CodeDeploy &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;CodeDeploy fetches the build artifacts from S3 and deploys them onto compute or serverless services. This marks the next phase of the AWS DevOps workflow. I'll elaborate on these aspects in the next part, as they fall under the Continuous Deployment (CD) process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CodePipeline&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After completing both CI and CD processes, the next step is to automate the workflow by building a pipeline. CodePipeline comes into play for this purpose. It essentially creates stages for you, following the sequence of source → build → deploy.&lt;/p&gt;
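&lt;p&gt;The source → build → deploy sequence can be sketched as a pipeline declaration. The structure below mirrors the general shape of a CodePipeline definition, but the pipeline name, role ARN, bucket, and action details are illustrative placeholders, not a complete, deployable configuration:&lt;/p&gt;

```python
# Illustrative pipeline declaration following source -> build -> deploy.
pipeline = {
    "name": "my-app-pipeline",
    "roleArn": "arn:aws:iam::111122223333:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "my-app-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [{"name": "Source", "actionTypeId": {
                "category": "Source", "owner": "AWS",
                "provider": "CodeCommit", "version": "1"}}],
        },
        {
            "name": "Build",
            "actions": [{"name": "Build", "actionTypeId": {
                "category": "Build", "owner": "AWS",
                "provider": "CodeBuild", "version": "1"}}],
        },
        {
            "name": "Deploy",
            "actions": [{"name": "Deploy", "actionTypeId": {
                "category": "Deploy", "owner": "AWS",
                "provider": "CodeDeploy", "version": "1"}}],
        },
    ],
}

# The stage order is the automation: each stage hands its output to the next.
stage_order = [stage["name"] for stage in pipeline["stages"]]
print(stage_order)  # -> ['Source', 'Build', 'Deploy']
```

&lt;p&gt;Each stage passes its output artifacts to the next, which is how the pipeline turns the separate CI and CD steps into one automated flow.&lt;/p&gt;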

&lt;blockquote&gt;
&lt;p&gt;EC2&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Amazon EC2 provides secure, resizable compute capacity in the cloud. If you need direct control over the servers that run your application, EC2 is a good fit, and it is often the simplest choice for small-scale applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ECS&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Amazon ECS is a fully managed container orchestration service that is ideal for scaling applications, especially those handling high traffic. In AWS DevOps, it is commonly used to run and scale containerized applications efficiently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lambda&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS Lambda allows you to run code without the need for provisioning or managing servers. In the realm of AWS DevOps, Lambda is frequently employed for various purposes such as automating tasks, handling serverless functions, and orchestrating workflows seamlessly.&lt;/p&gt;
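&lt;p&gt;At its core, a Lambda function is just a handler that receives an event and a context. The minimal Python handler below is illustrative, and because it has no AWS dependencies it can be invoked locally for a quick check:&lt;/p&gt;

```python
import json

def handler(event, context):
    """Minimal illustrative Lambda handler: echoes back a greeting.

    On AWS, Lambda calls this function with the event payload and a
    runtime context object; locally we can call it directly.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation; the context is unused here, so None is fine for a check.
response = handler({"name": "DevOps"}, None)
print(response["body"])  # -> {"message": "Hello, DevOps!"}
```

&lt;p&gt;Because there are no servers to provision, the whole deployable unit is this function plus its dependencies, which is what makes Lambda convenient for automation tasks and glue code in a DevOps workflow.&lt;/p&gt;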

&lt;p&gt;This concludes the first part of AWS DevOps, covering essential services commonly used and taught. In the upcoming segments of the DevOps series, we will delve into more services integral to AWS DevOps. Stay tuned for the new series, and thank you for the positive response to my Azure DevOps series. Your support is appreciated, and I look forward to the next part.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
