<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amash Ansari</title>
    <description>The latest articles on DEV Community by Amash Ansari (@iamamash).</description>
    <link>https://dev.to/iamamash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2027219%2F99e16dbb-1a47-417b-a9ef-93bce3433136.jpg</url>
      <title>DEV Community: Amash Ansari</title>
      <link>https://dev.to/iamamash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamamash"/>
    <language>en</language>
    <item>
      <title>Shell Scripting Mini Project</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Thu, 12 Sep 2024 18:15:00 +0000</pubDate>
      <link>https://dev.to/iamamash/shell-scripting-mini-project-1bn0</link>
      <guid>https://dev.to/iamamash/shell-scripting-mini-project-1bn0</guid>
      <description>&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;p&gt;This project reports detailed server information: the current user, the date and time, server uptime, recent login activity, disk utilization, RAM utilization, and the top CPU-consuming processes. It is written as a shell script and executed with bash. I've also backed up the script as a zip file, and a few extra commands are used purely to beautify the output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Linux&lt;/li&gt;
&lt;li&gt;Shell Scripting&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Things to Remember
&lt;/h2&gt;

&lt;p&gt;Before diving into the project, it's worth briefly reviewing the commands the script relies on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;whoami&lt;/strong&gt; - allows Linux users to see the currently logged-in user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;echo&lt;/strong&gt; - is used to display the text passed in as an argument.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;echo -e&lt;/strong&gt; - allows escape sequences within its operands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;date&lt;/strong&gt; - displays the current time and date (of the system) in the given FORMAT.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;xargs&lt;/strong&gt; - builds command lines from standard input; in this script it collapses the whitespace in command output into single spaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;awk&lt;/strong&gt; - a pattern-scanning and text-processing utility that lets a programmer write tiny but effective programs as a series of statements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;top&lt;/strong&gt; - a commonly used tool for displaying system-performance information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;last&lt;/strong&gt; - used to display a list of users who have previously logged in to the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;uptime&lt;/strong&gt; - an important metric that shows how long the system has been running or how much time has passed since the system last rebooted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;df -h&lt;/strong&gt; - displays information about total space and available space on a file system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;free -h&lt;/strong&gt; - displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.&lt;/li&gt;
&lt;/ul&gt;
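&lt;p&gt;To get a feel for how these commands combine, here's a minimal sketch (not part of the original project) that greets the user with a compact date stamp. The &lt;code&gt;date&lt;/code&gt; field positions assume the default GNU/Linux output format, and &lt;code&gt;xargs&lt;/code&gt; collapses the padded spacing around single-digit days:&lt;/p&gt;

```shell
#!/bin/bash
# Greet the current user with a DD/Mon/YYYY date stamp.
user=$(whoami)
stamp=$(date | xargs | awk '{print $3"/"$2"/"$6}')
echo "Hello ${user}, today is ${stamp}"
```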

&lt;h2&gt;
  
  
  Script
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a shell script.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; vim server_details.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the shebang line at the top, which specifies the shell that will be used to execute the script.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; #!/bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Write the script however you like, as long as it follows valid shell syntax.&lt;/li&gt;
&lt;li&gt;The final script looks something like this. 👇
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

RED="31"
GREEN="32"
YELLOW="33"
MAGENTA="35"
GRAY="90"
WHITE="97"
ITALICRED="\e[3;${RED}m"
ITALICYELLOW="\e[3;${YELLOW}m"
ITALICWHITE="\e[3;${WHITE}m"
ITALICGREEN="\e[3;${GREEN}m"
BOLDGRAY="\e[1;${GRAY}m"
BOLDYELLOW="\e[1;${YELLOW}m"
BOLDMAGENTA="\e[1;${MAGENTA}m"
ENDCOLOR="\e[0m"


echo -e "${BOLDMAGENTA}\n---------------------------WELCOME----------------------------------\n${ENDCOLOR}"
echo -e "${BOLDYELLOW}\nHey $(whoami), Welcome to the server details corner :-)\n${ENDCOLOR}"

echo -e "${BOLDGRAY}----------------------------***----------------------------------${ENDCOLOR}"
echo -e "${ITALICWHITE}        Current date and Time is : $(date | xargs | awk '{print $3"/"$2"/"$6,$1,$4,$5}')${ENDCOLOR}"
echo -e "${BOLDGRAY}----------------------------***----------------------------------\n${ENDCOLOR}"

echo -e "${BOLDYELLOW}******************************************************************${ENDCOLOR}"
echo -e "${BOLDGRAY}Server Uptime is   : ${ENDCOLOR} ${ITALICWHITE} $(uptime) ${ENDCOLOR}"
echo -e  "${BOLDGRAY}\nLast login details :${ENDCOLOR} ${ITALICWHITE} \n$(last -a | head -5) ${ENDCOLOR}"
echo -e "${BOLDYELLOW}******************************************************************${ENDCOLOR}"

echo -e "${ITALICRED}\n###################################################################${ENDCOLOR}"
echo -e "${ITALICYELLOW}    Disk Space available :${ENDCOLOR} ${ITALICWHITE} $(df -h | xargs | awk '{print $11"/"$9}')${ENDCOLOR}"
echo -e "${BOLDGRAY}\n------------------------------------------------------------------\n${ENDCOLOR}"

echo -e "${ITALICYELLOW}    RAM utilization      :${ENDCOLOR} ${ITALICWHITE} $(free -h | xargs | awk '{print $10"/"$8}')${ENDCOLOR}"
echo -e "${ITALICRED}##################################################################${ENDCOLOR}"

echo -e "${BOLDMAGENTA}\n----------------------------***----------------------------------\n${ENDCOLOR}"
echo -e "${ITALICYELLOW}Top CPU Processes running : ${ENDCOLOR}${ITALICWHITE}"  &amp;amp;&amp;amp; top -b -n 1 | head -10
echo -e "${BOLDMAGENTA}\n----------------------------***---------------------------------\n${ENDCOLOR}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: I've used some variables here just to beautify the output; they're optional. &lt;code&gt;${}&lt;/code&gt; is used to reference variables inside strings. If you want to learn more about styling text in bash scripts, &lt;a href="https://dev.to/ifenna__/adding-colors-to-bash-scripts-48g4"&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
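&lt;p&gt;For reference, the colour variables in the script boil down to one pattern; a minimal, self-contained sketch:&lt;/p&gt;

```shell
#!/bin/bash
# The pattern: \e[STYLE;COLORm starts styling, \e[0m resets it.
# Styles: 1 = bold, 3 = italic. Colours: 31 = red, 32 = green, 33 = yellow.
RED="31"
BOLDRED="\e[1;${RED}m"
ENDCOLOR="\e[0m"
echo -e "${BOLDRED}This prints in bold red${ENDCOLOR}"
```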

&lt;h2&gt;
  
  
  Output 🎉
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kw1nb63yyhu8vimnwy6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kw1nb63yyhu8vimnwy6.jpg" alt="Image description" width="800" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>bash</category>
      <category>linux</category>
    </item>
    <item>
      <title>Scaling Infrastructure Across Environments with Terraform</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Wed, 11 Sep 2024 17:42:50 +0000</pubDate>
      <link>https://dev.to/iamamash/scaling-infrastructure-across-environments-with-terraform-4m6</link>
      <guid>https://dev.to/iamamash/scaling-infrastructure-across-environments-with-terraform-4m6</guid>
      <description>&lt;p&gt;In this post, we'll explore how to set up Multi-Environment Infrastructure through a step-by-step project. We'll use modules to write organized and reusable code, allowing us to create multiple environments using a single setup. We'll enhance the security and reliability of our infrastructure by integrating our Terraform state file, &lt;code&gt;terraform.tfstate&lt;/code&gt; with a remote backend (S3 bucket). In addition, we'll implement state locking and log the results in a DynamoDB table.&lt;/p&gt;

&lt;p&gt;This post covers advanced Terraform concepts, so it's best if you have a good grasp of Terraform fundamentals and intermediate concepts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You should be familiar with AWS services like EC2, S3, and DynamoDB.&lt;/li&gt;
&lt;li&gt;It's crucial to have a strong understanding of Terraform basics.&lt;/li&gt;
&lt;li&gt;Ensure you have AWS CLI installed on your system. If it's not already installed, you can set it up by &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;clicking here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're new to Terraform basics, you can get started with this beginner-friendly guide by &lt;a href="https://noobdevblog.hashnode.dev/elevating-infrastructure-terraform-in-action" rel="noopener noreferrer"&gt;clicking here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the Project's Implementation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;We'll complete this project by writing clean, organized code and keeping related configurations together in separate folders. This approach makes the project easier to understand and manage.&lt;/li&gt;
&lt;li&gt;We'll make a new folder to store module configurations. In this folder, we'll create modules for EC2, S3, DynamoDB, and a Variable module to manage configuration settings.&lt;/li&gt;
&lt;li&gt;We're doing this to write the configurations once and use them to create resources in various environments, such as development, production, and testing. The advantage is that we don't have to write separate configurations for each environment. With modular code, one configuration can create resources for multiple environments.&lt;/li&gt;
&lt;li&gt;Last but not least, we'll also create a main file where we'll put our discussed modules to use and create multiple infrastructure environments.&lt;/li&gt;
&lt;/ol&gt;
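&lt;p&gt;The folder layout described above can be sketched in the shell like this. The folder and file names are assumptions based on this post; only the &lt;code&gt;modules&lt;/code&gt; folder and &lt;code&gt;var-module.tf&lt;/code&gt; are named explicitly later in the text:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical project layout: one modules folder plus the root configs.
mkdir -p terraform-multi-env/modules
cd terraform-multi-env
touch modules/var-module.tf modules/ec2.tf modules/s3.tf modules/dynamodb.tf
touch main.tf terraform.tf providers.tf remote-backend.tf
ls modules
```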

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8e5dxs7k1bgu6upjm9f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8e5dxs7k1bgu6upjm9f.jpg" alt="Image description" width="246" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Modules
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start by setting up a dedicated 'modules' folder to house all your modules. Open this folder with your code editor (I'll be using VS Code in this post). Then, create separate files with the .tf extension to define your EC2, S3 bucket, and DynamoDB table configurations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhldml7affj4h21mpa5o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhldml7affj4h21mpa5o.jpg" alt="Image description" width="372" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the variables file and define the variables as outlined below. These variables will be used by the resource configurations in our modules.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Variable for multi-environments
  variable "env" {
    description = "This is the environment value for multiple-environments of our infrastructure"
    type        = string
  }

  # Variable for giving ami
  variable "ami" {
    description = "This is the ami value for EC2"
    type        = string
  }

  # Variable for giving instance type
  variable "instance_type" {
    description = "This is the instance_type for our infrastructure"
    type        = string
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, open the EC2 file and paste the following configuration. This will set up the EC2 instance for multi-environments.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # EC2 Instance configuration for multi-environments
  resource "aws_instance" "ec2" {
    ami           = var.ami    # Using ami variable
    instance_type = var.instance_type # Using instance_type variable
    tags = {
      Name = "${var.env}-instance" # Using env variable to give instance name
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Next, open the S3 file and paste the following configuration. This will create the S3 bucket for multiple environments.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # S3 bucket configuration for multi-environments
  resource "aws_s3_bucket" "module-bucket" {
    bucket = "${var.env}-amash-bucket" # Using env variable to give bucket name
    tags = {
      Name = "${var.env}-amash-bucket"
    }
  } # NOTE: Bucket name should be unique

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Afterward, open the DynamoDB file and insert the following configuration. This will establish the DynamoDB table for various environments.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# DynamoDB configuration for multi-environments
  resource "aws_dynamodb_table" "dynamodb-table" {
    name         = "${var.env}-table" 
    billing_mode = "PAY_PER_REQUEST" # Give billing mode
    hash_key     = "userID"
    attribute {
      name = "userID" # Name of table attribute
      type = "S" # Type of table attribute i.e. string
    }
    tags = {
      Name = "${var.env}-table" # Using env variable to give table name
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it for the modules. Now, we'll employ these modules in various infrastructure environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Main Configurations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Begin by crafting separate &lt;code&gt;.tf&lt;/code&gt; files to define your main configurations, such as Terraform settings, providers, and the backend. Here's how it should look: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m1w0uub4dzva8b5em1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m1w0uub4dzva8b5em1.jpg" alt="Image description" width="374" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, open the &lt;code&gt;remote-backend.tf&lt;/code&gt; file and insert the following configuration. This will link the state file, &lt;code&gt;terraform.tfstate&lt;/code&gt; with the remote backend. It will also implement state locking during updates and display logs in a tabular format.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Remote backend variable for S3 bucket
  variable "state_bucket_name" {
    default = "demoo-state-bucket"
  }

  # Remote backend variable for DynamoDB table
  variable "state_table_name" {
    default = "demoo-state-table"
  }

  # Variable for giving aws-region
  variable "aws-region" {
    default = "us-east-1"
  }

  # Backend resources for S3 bucket
  resource "aws_s3_bucket" "state_bucket" {
    bucket = var.state_bucket_name 
    tags = {
      Name = var.state_bucket_name # Using state_bucket_name variable to give a bucket name
    }
  }

  resource "aws_dynamodb_table" "state_table" {
    name         = var.state_table_name
    billing_mode = "PAY_PER_REQUEST" # Give any billing mode
    hash_key     = "LockID" # This key will serve as the lock for the infrastructure state

    attribute {
      name = "LockID" # Name of table attribute
      type = "S" # Type of table attribute i.e. string
    }
    tags = {
      Name = var.state_table_name # Using state_table_name variable to give a table name
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Next, open the &lt;code&gt;terraform.tf&lt;/code&gt; file and add the configuration below. It defines Terraform's high-level behavior, including the required provider and the remote backend.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Terraform block
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~&amp;gt; 5.0"
      }
    }

  # Remote backend configuration
    backend "s3" {
      bucket         = "demoo-state-bucket" # S3 Bucket name
      key            = "terraform.tfstate" # File that we intend to store remotely
      region         = "us-east-1" # Give any region
      dynamodb_table = "demoo-state-table" # DynamoDB table name
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After that, open the providers.tf file and insert the following configuration. This is used to specify the region for creating resources.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; # Provider configuration for selecting aws region
  provider "aws" {
    region = var.aws-region # Using aws-region variable from remote-backend file
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;At last, open the main file and add the following configuration. This file utilizes all the modules we've created so far to set up different environments of our infrastructure, including development, production, and testing.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Created development environment of infrastructure
  module "dev" {
    source        = "./modules" # Giving path to the modules
    env           = "dev" # Passing the environment variable value, which we've defined in the 'var-module.tf' file within the 'modules' folder
    ami           = "ami-053b0d53c279acc90" # Passing the ami variable value, which we've defined in the 'var-module.tf' file within the 'modules' folder
    instance_type = "t2.micro" # Passing the instance_type variable value, which we've defined in the 'var-module.tf' file within the 'modules' folder
  }

  # Created production environment of infrastructure
  module "prod" {
    source        = "./modules"
    env           = "prod"
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }

  # Created testing environment of infrastructure
  module "test" {
    source        = "./modules"
    env           = "test"
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
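&lt;p&gt;Because each module call passes a different &lt;code&gt;env&lt;/code&gt; value, the &lt;code&gt;${var.env}&lt;/code&gt; interpolations in the modules produce distinct resource names per environment. The effect, sketched in plain shell:&lt;/p&gt;

```shell
#!/bin/bash
# Each environment gets its own instance, bucket, and table name.
for env in dev prod test; do
  echo "${env}-instance ${env}-amash-bucket ${env}-table"
done
```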



&lt;p&gt;Our next step is to run these scripts with Terraform and bring our hard work to a successful outcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Remember
&lt;/h2&gt;

&lt;p&gt;We're just a few steps away from making this project functional, but there are some important points to grasp first. Initially, we'll comment out the &lt;code&gt;backend "s3" {}&lt;/code&gt; block in the &lt;code&gt;terraform.tf&lt;/code&gt; file and apply the configuration with &lt;code&gt;terraform apply&lt;/code&gt;. After that, we'll uncomment the block and run &lt;code&gt;terraform apply&lt;/code&gt; again. This two-step process is necessary because Terraform initializes the backend before any other configuration: our backend uses the S3 bucket and DynamoDB table defined in this same configuration, and referencing them before they exist would cause an error. So we first create the S3 bucket and DynamoDB table, and only then use them to store the state of our infrastructure.&lt;/p&gt;
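&lt;p&gt;The comment/uncomment toggle itself can be scripted; below is a sketch using GNU &lt;code&gt;sed&lt;/code&gt; on a throwaway file (&lt;code&gt;terraform.tf.demo&lt;/code&gt; is a stand-in for the real &lt;code&gt;terraform.tf&lt;/code&gt;, and the Terraform commands are shown as comments, not executed):&lt;/p&gt;

```shell
#!/bin/bash
# Two-phase bootstrap sketch; 'terraform.tf.demo' is a throwaway stand-in.
printf 'backend "s3" {\n  bucket = "demoo-state-bucket"\n}\n' > terraform.tf.demo

# Phase 1: comment out the backend block, then create the bucket and table:
sed -i 's/^/# /' terraform.tf.demo
# terraform init ; terraform apply

# Phase 2: uncomment the block, re-init to migrate state, and apply again:
sed -i 's/^# //' terraform.tf.demo
# terraform init ; terraform apply
```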

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4b90lns0hoki20pobvf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4b90lns0hoki20pobvf.gif" alt="Image description" width="220" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform in Action
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Execute &lt;code&gt;terraform init&lt;/code&gt; to initialize the folder with Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5y4tzmch4t7frjmv5ms.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5y4tzmch4t7frjmv5ms.jpg" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, run &lt;code&gt;terraform validate&lt;/code&gt; to check the code syntax, and then execute &lt;code&gt;terraform plan&lt;/code&gt; to preview the changes that will occur if you apply this configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsdijblcepqu16qsal6t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsdijblcepqu16qsal6t.jpg" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finally, run &lt;code&gt;terraform apply&lt;/code&gt; to apply the configuration, and wait for Terraform to provision the multiple environments for you (don't forget to comment out the &lt;code&gt;backend "s3" {}&lt;/code&gt; block in the &lt;code&gt;terraform.tf&lt;/code&gt; file first).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq42zedc4rjfhzmf6s83w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq42zedc4rjfhzmf6s83w.jpg" alt="Image description" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the changes have been applied successfully, uncomment the &lt;code&gt;backend "s3" {}&lt;/code&gt; block in the &lt;code&gt;terraform.tf&lt;/code&gt; file. Remember to run &lt;code&gt;terraform init&lt;/code&gt; first, because adding a backend changes Terraform's behavior and it needs to install the necessary plugins. Then apply the latest configuration with &lt;code&gt;terraform apply&lt;/code&gt;, as discussed above.
&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyzqkw1vyx4y2d9uuwjq.jpg" alt="Image description" width="800" height="368"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After some time, you'll see all the EC2 instances, S3 buckets, and DynamoDB tables in the AWS console, provisioned with a single command. With the addition of a remote backend and state locking, this showcases the power of Terraform: you can make infrastructure respond to your commands effortlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've embarked on a journey to establish Multi-Environment Infrastructure through a comprehensive step-by-step project. By leveraging modules, we've structured our code for reusability, simplifying the creation of multiple environments from a single configuration.&lt;/p&gt;

&lt;p&gt;We've taken steps to enhance the security and reliability of our infrastructure by seamlessly integrating our Terraform state file (&lt;code&gt;terraform.tfstate&lt;/code&gt;) with a remote backend hosted on an S3 bucket. Moreover, we've prioritized infrastructure stability by implementing state locking and recording logs in a DynamoDB table. This project showcases the power of Terraform, empowering us to manage infrastructure with precision and ease.&lt;/p&gt;

&lt;p&gt;Here is the project &lt;a href="https://github.com/iamamash/Terraform-Multi-Environment-Infra-Project" rel="noopener noreferrer"&gt;link🔗 Click&lt;/a&gt; to access it.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Effortless Deployment of Project with Ansible &amp; Nginx on AWS</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Tue, 10 Sep 2024 18:30:24 +0000</pubDate>
      <link>https://dev.to/iamamash/effortless-deployment-of-project-with-ansible-nginx-on-aws-10di</link>
      <guid>https://dev.to/iamamash/effortless-deployment-of-project-with-ansible-nginx-on-aws-10di</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Before you start, you should already know how to write Ansible playbooks well.&lt;/li&gt;
&lt;li&gt;You should have some basic knowledge of Nginx.&lt;/li&gt;
&lt;li&gt;You should be comfortable with AWS EC2.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're new to AWS EC2, check out &lt;a href="https://noobdevblog.hashnode.dev/aws-ec2-basics-beginners-guide" rel="noopener noreferrer"&gt;this&lt;/a&gt; beginner-friendly guide. For more information about Ansible, &lt;a href="https://noobdevblog.hashnode.dev/ansible-fundamentals-simplified" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the EC2 Instances
&lt;/h2&gt;

&lt;p&gt;Set up two EC2 instances for this project: one as the control (master) node, where Ansible runs, and the other as the managed (worker) node. The control node uses playbooks to configure the managed node and automate the entire deployment process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuguf5ylc94gjbj2e6rxa.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuguf5ylc94gjbj2e6rxa.jpg" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's write the playbook
&lt;/h2&gt;

&lt;p&gt;What we're going to do is create a playbook that installs Nginx and starts the service on the worker node. This playbook will be set up and controlled by the control node. Here's the playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-
  name: This is a simple HTML project
  hosts: servers             #Inventory group the playbook runs against
  become: yes                #Giving sudo privileges
  tasks:
    - name: nginx-install    #This installs nginx via apt
      apt:
        name: nginx
        state: latest

    - name: nginx-start
      service:                #This will start and enable the nginx service
        name: nginx
        state: started
        enabled: yes

    - name: deploy-app
      copy:
        src: ../index.html     #Give path to the file
        dest: /var/www/html/   #Nginx will serve our file from this specific location

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our next task is straightforward: run this playbook from the control node's terminal and let Ansible configure and deploy the application end to end with a single command.&lt;/p&gt;

&lt;p&gt;If you'd like to access my "index.html" file, just &lt;a href="https://github.com/iamamash/Ansible-Project/blob/main/index.html" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execute the playbook
&lt;/h2&gt;

&lt;p&gt;To run the playbook, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook &amp;lt;playbook-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see some output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpp1zdokp7pdw4ukpnyj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpp1zdokp7pdw4ukpnyj.jpg" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Verify the Deployment
&lt;/h2&gt;

&lt;p&gt;Finally, ensure the project is up and running by accessing it on the default Nginx port, which is 80.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixntkydhf8hireevcs6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixntkydhf8hireevcs6.jpg" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Grafana 101: A Beginner’s Guide to the Powerful Dashboard Tool</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Mon, 09 Sep 2024 18:02:33 +0000</pubDate>
      <link>https://dev.to/iamamash/grafana-101-a-beginners-guide-to-the-powerful-dashboard-tool-298n</link>
      <guid>https://dev.to/iamamash/grafana-101-a-beginners-guide-to-the-powerful-dashboard-tool-298n</guid>
      <description>&lt;p&gt;Grafana is a powerful tool that helps visualize and monitor data from various sources, making complex information easy to understand through customizable dashboards and graphs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm9sju46jhqbhk680ljo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm9sju46jhqbhk680ljo.jpg" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grafana's key use cases include monitoring systems, analyzing data, and creating visualizations. Its benefits lie in providing real-time insights, simplifying complex data into easy-to-understand visuals, and aiding in decision-making by spotting trends or issues quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Terminologies
&lt;/h2&gt;

&lt;p&gt;Before diving into Grafana, it's important to know a few key terms: Observability, which is about understanding a system from the outside; Monitoring, which checks how things are doing; Logging, which keeps records of events; and Alerting, which warns you about problems.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Monitoring in Grafana involves keeping an eye on systems or data in real time. It helps track how things are performing, like checking if a website is running smoothly or how much CPU a server is using.&lt;/li&gt;
&lt;li&gt;Logging in Grafana is like keeping a diary for a computer system. It records events, errors, or important actions that happen, helping to understand what's been going on and troubleshoot any issues later on.&lt;/li&gt;
&lt;li&gt;Alerting in Grafana is like having a watchdog. It's there to notify you if something goes wrong or needs attention in your system. It sends warnings or messages when specific conditions you set are met, so you can take action quickly.&lt;/li&gt;
&lt;li&gt;Observability in Grafana refers to the ability to understand what's happening within a system through monitoring, logging, and tracing. It involves collecting data and using visualizations to gain insights into system performance, health, and behavior to effectively troubleshoot and optimize it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzss2aeyskknbbzcrnzk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzss2aeyskknbbzcrnzk.jpg" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Grafana's Inside Story
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Graphite and its agent?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Graphite is a tool used to store and graph time-series data. A graphite agent (like Carbon) collects data and sends it to Graphite for storage. Grafana, on the other hand, is a visualization tool that can pull data from Graphite and create visuals like charts or graphs.&lt;/p&gt;

&lt;p&gt;The Graphite agent collects data from different sources, like servers or applications, and sends it to Graphite. This data might be about server performance, website traffic, or any other metric you want to track. Graphite then stores this data in a way that Grafana can understand.&lt;/p&gt;

&lt;p&gt;To display this data in Grafana, you connect Grafana to your Graphite database. Grafana queries Graphite for the data and creates visual representations, such as graphs or charts, allowing you to monitor and analyze the information. Essentially, Graphite acts as the storage, the agent gathers the data, and Grafana showcases this data visually for easy understanding and analysis.&lt;/p&gt;
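&lt;p&gt;As a small illustration (assuming a Carbon listener on its default port, localhost:2003, and a hypothetical metric name), a data point can be pushed using Carbon's plaintext protocol:&lt;/p&gt;

```shell
# Carbon's plaintext protocol is one line per data point:
#   metric.path value unix-timestamp
METRIC="server01.cpu.load 0.42 $(date +%s)"
# Assumes Carbon is listening on localhost:2003 (its default port).
echo "$METRIC" | nc -w1 localhost 2003 || echo "Carbon not reachable"
```

&lt;p&gt;Grafana can then query the &lt;code&gt;server01.cpu.load&lt;/code&gt; series from Graphite and chart it.&lt;/p&gt;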

&lt;p&gt;&lt;strong&gt;Loki and Promtail?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Grafana, Loki is a system used for logging and storing logs. It's designed to handle huge amounts of log data efficiently. Promtail, on the other hand, is a collector: it gathers logs from different sources (like files or applications) and sends them to Loki for storage.&lt;/p&gt;

&lt;p&gt;Think of Loki as a big library where all the logs are stored, and Promtail is like a librarian that collects logs from different places and organizes them neatly in that library so you can easily find and use them later on. This helps in analyzing and troubleshooting issues by having all the logs in one accessible place.&lt;/p&gt;
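&lt;p&gt;For a concrete picture, a minimal Promtail configuration looks roughly like this (a sketch assuming Loki is running locally on its default port, 3100):&lt;/p&gt;

```yaml
server:
  http_listen_port: 9080        # Promtail's own HTTP port

positions:
  filename: /tmp/positions.yaml # tracks how far each file has been read

clients:
  - url: http://localhost:3100/loki/api/v1/push  # where Loki is assumed to run

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*log  # which log files to ship
```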

&lt;h2&gt;
  
  
  How do Loki and Promtail differ from Graphite?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Loki and Promtail:&lt;/strong&gt; They're focused on handling and storing logs, like text-based records of events or activities, making it easier to search through and analyze this information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graphite:&lt;/strong&gt; It's more about storing and visualizing numeric, time-based data, like performance metrics or sensor readings, in graphs or charts.&lt;/p&gt;

&lt;p&gt;Loki and Promtail are for logs, while Graphite is for numeric data and graphs. They handle different types of information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbs8mufbhxe44uun8re8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbs8mufbhxe44uun8re8.jpg" alt="Image description" width="600" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Grafana's Editions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Grafana Cloud:&lt;/strong&gt; It's a managed service offered by Grafana Labs that provides hosting for Grafana, Prometheus, and other related tools. It's like a ready-to-use platform where you can access Grafana without managing servers or infrastructure yourself. It's convenient for quick setup and management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana Enterprise:&lt;/strong&gt; This is a version of Grafana designed for larger organizations or teams with more advanced needs. It offers additional features, support, and customization options compared to the open-source version. It's like a souped-up version of Grafana with extra capabilities tailored for bigger setups.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prometheus
&lt;/h2&gt;

&lt;p&gt;Prometheus is a tool for monitoring computing systems and alerting on problems. It collects data from various sources, like servers or applications, to help you understand their performance.&lt;/p&gt;

&lt;p&gt;Think of it as a health tracker for your systems—it monitors things like CPU usage, memory, or website response times. This data helps spot issues early and sets off alarms when things aren't working as they should, allowing you to fix problems before they become bigger. Overall, it helps maintain system health and performance.&lt;/p&gt;
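&lt;p&gt;A minimal &lt;code&gt;prometheus.yml&lt;/code&gt; sketch shows the idea: scrape one target on a fixed interval (the node_exporter target below is an assumption, not part of a default install):&lt;/p&gt;

```yaml
global:
  scrape_interval: 15s      # how often Prometheus pulls metrics

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']  # assumes node_exporter runs here
```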

</description>
      <category>monitoring</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>An introduction to AWS Lambda for beginners</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Sun, 08 Sep 2024 17:58:49 +0000</pubDate>
      <link>https://dev.to/iamamash/an-introduction-to-aws-lambda-for-beginners-1koo</link>
      <guid>https://dev.to/iamamash/an-introduction-to-aws-lambda-for-beginners-1koo</guid>
      <description>&lt;h2&gt;
  
  
  Understanding AWS Lambda
&lt;/h2&gt;

&lt;p&gt;In the realm of cloud computing, AWS Lambda emerges as a revolutionary service offered by Amazon Web Services (AWS). But what exactly is Lambda? It's a serverless computing platform that allows you to run code without managing servers. Think of Lambda as your assistant, ready to execute your code whenever needed, eliminating the complexities of server setup and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Inner Workings of AWS Lambda
&lt;/h2&gt;

&lt;p&gt;Imagine Lambda as a responsive friend awaiting your call. You upload your code to Lambda and specify triggers that prompt its execution. These triggers could be events like file uploads, database modifications, or even scheduled time intervals. When triggered, Lambda instantly springs into action, executes your code, and then returns to its restful state. It's an efficient system that operates on a pay-as-you-go basis, charging only for the time your code runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx84b9j0gnmn2dr985g7t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx84b9j0gnmn2dr985g7t.jpg" alt="Image description" width="460" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of AWS Lambda
&lt;/h2&gt;

&lt;p&gt;Let's unwrap the benefits that make Lambda an attractive choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Simplicity:&lt;/strong&gt; No more server management headaches. Lambda handles the infrastructure, allowing you to focus solely on crafting your code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective:&lt;/strong&gt; Pay only for your code's compute time. There are no charges for idle server time, making it a cost-efficient solution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Scaling:&lt;/strong&gt; Whether your application has ten users or a million, Lambda automatically scales to meet demand without manual intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Applications of AWS Lambda
&lt;/h2&gt;

&lt;p&gt;The versatility of Lambda extends across various domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event-Driven Actions:&lt;/strong&gt; Use Lambda to respond to incoming messages, image uploads, or database modifications. The code executes instantly upon event occurrence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend Services:&lt;/strong&gt; Develop APIs or backend logic without the hassle of managing servers. Lambda simplifies the process by handling the infrastructure complexities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Processing:&lt;/strong&gt; Process and transform data with ease. Lambda's capabilities allow for seamless data handling and computation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8930ahcfh47ixzqpqnyx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8930ahcfh47ixzqpqnyx.jpg" alt="Image description" width="300" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Initiating Your AWS Lambda Journey
&lt;/h2&gt;

&lt;p&gt;Getting started with Lambda is an exciting endeavor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Account Setup:&lt;/strong&gt; Begin by signing up for an AWS account if you haven't already.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Navigating to Lambda:&lt;/strong&gt; Access the AWS Management Console, select "Services," and click on "Lambda" under the "Compute" section.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Function:&lt;/strong&gt; Dive into Lambda by creating your first function. Provide a name, select a runtime environment (like Node.js, Python, etc.), and upload your code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuring Triggers:&lt;/strong&gt; Choose triggers that activate your function; these could be events like file uploads, API calls, or scheduled times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing and Monitoring:&lt;/strong&gt; Experiment with your function within Lambda, monitor its performance, and refine it as necessary.&lt;/li&gt;
&lt;/ul&gt;
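&lt;p&gt;The console steps above can also be sketched with the AWS CLI (the function name and role ARN below are hypothetical; the role must be a real IAM role that Lambda can assume):&lt;/p&gt;

```shell
# Hypothetical identifiers; substitute your own function name and role ARN.
FUNCTION_NAME="hello-lambda"
ROLE_ARN="arn:aws:iam::123456789012:role/lambda-basic-execution"

# Package the handler and create the function (requires the AWS CLI and
# valid credentials; function.zip must contain lambda_function.py with a
# lambda_handler function):
#   zip function.zip lambda_function.py
#   aws lambda create-function --function-name "$FUNCTION_NAME" \
#       --runtime python3.12 \
#       --handler lambda_function.lambda_handler \
#       --zip-file fileb://function.zip \
#       --role "$ROLE_ARN"
#
# Invoke it once and inspect the response:
#   aws lambda invoke --function-name "$FUNCTION_NAME" response.json
echo "Ready to create $FUNCTION_NAME with role $ROLE_ARN"
```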

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, AWS Lambda stands as a game-changer in the landscape of cloud computing. Its ability to streamline code execution without the hassle of server management empowers developers and businesses alike. With its cost-effectiveness, scalability, and diverse applications, Lambda opens doors to a world of seamless computing experiences. Begin your Lambda journey today, and witness firsthand the transformative power of serverless computing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0glx85e3z02b31kngkta.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0glx85e3z02b31kngkta.jpg" alt="Image description" width="500" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>opensource</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Setting Up an AWS EKS Cluster Using Terraform: A Beginner-Friendly Guide</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Sat, 07 Sep 2024 17:47:24 +0000</pubDate>
      <link>https://dev.to/iamamash/setting-up-an-aws-eks-cluster-using-terraform-a-beginner-friendly-guide-225d</link>
      <guid>https://dev.to/iamamash/setting-up-an-aws-eks-cluster-using-terraform-a-beginner-friendly-guide-225d</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) simplifies running Kubernetes on AWS without having to install or operate your own Kubernetes control plane. In this guide, I’ll walk you through creating an EKS cluster using Terraform — an Infrastructure as Code (IaC) tool that helps automate provisioning.&lt;/p&gt;

&lt;p&gt;By the end of this post, you'll have a fully operational Kubernetes cluster running in AWS. Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Make sure you have the following installed before proceeding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: To interact with AWS services from your terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt;: To manage AWS infrastructure as code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You also need an AWS account with sufficient permissions to create VPCs, EKS clusters, and security groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Configure AWS Credentials
&lt;/h2&gt;

&lt;p&gt;First, configure the AWS CLI with your access keys by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter your AWS &lt;code&gt;Access Key ID&lt;/code&gt;, &lt;code&gt;Secret Access Key&lt;/code&gt;, &lt;code&gt;Default region name&lt;/code&gt;, and &lt;code&gt;Default output format&lt;/code&gt;. Ensure that the IAM user or role you are using has the required permissions to create EKS clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create a VPC Using Terraform
&lt;/h2&gt;

&lt;p&gt;Create a file named &lt;code&gt;vpc.tf&lt;/code&gt; to define the network setup. This configuration will create a Virtual Private Cloud (VPC) with both private and public subnets and enable DNS support.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_availability_zones"&lt;/span&gt; &lt;span class="s2"&gt;"azs"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/vpc/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5.13.0"&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_name&lt;/span&gt;
  &lt;span class="nx"&gt;cidr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;

  &lt;span class="nx"&gt;azs&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_availability_zones&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;names&lt;/span&gt;
  &lt;span class="nx"&gt;private_subnets&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"10.0.2.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;public_subnets&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.101.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"10.0.102.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;enable_nat_gateway&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;single_nat_gateway&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_hostnames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_support&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;                                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_name&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/${var.eks_name}"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;private_subnet_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/${var.eks_name}"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/internal-elb"&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;public_subnet_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/${var.eks_name}"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/elb"&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Define Variables
&lt;/h2&gt;

&lt;p&gt;Create a file &lt;code&gt;variables.tf&lt;/code&gt; to define reusable variables like VPC name, CIDR block, and EKS cluster name. This way, you can easily adjust these values without modifying the core configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS region"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"vpc_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"VPC name"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"vpc_cidr"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"VPC CIDR"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"eks_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS EKS Cluster name"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"sg_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security group name"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws-eks-sg"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Create Security Groups
&lt;/h2&gt;

&lt;p&gt;Now, let's set up security groups that control network access to the EKS cluster. Note that the ingress rule below is locked to a single &lt;code&gt;/32&lt;/code&gt; address; replace it with your own public IP. Create a file &lt;code&gt;security-groups.tf&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"eks-sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sg_name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"eks-sg-ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"allow inbound traffic from eks"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  &lt;span class="nx"&gt;from_port&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;to_port&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;-1&lt;/span&gt;
  &lt;span class="nx"&gt;security_group_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks-sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"49.43.153.70/32"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"eks-sg-egress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"allow outbound traffic to eks"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"egress"&lt;/span&gt;
  &lt;span class="nx"&gt;from_port&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;to_port&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;-1&lt;/span&gt;
  &lt;span class="nx"&gt;security_group_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks-sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Set Up the EKS Cluster
&lt;/h2&gt;

&lt;p&gt;With the network and security setup complete, create a file &lt;code&gt;eks.tf&lt;/code&gt; to define the EKS cluster and node groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"eks"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/eks/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 20.0"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_name&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.30"&lt;/span&gt;

  &lt;span class="nx"&gt;enable_irsa&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnets&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cluster&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-eks-cluster"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# EKS Managed Node Group(s)&lt;/span&gt;
  &lt;span class="nx"&gt;eks_managed_node_group_defaults&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;ami_type&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AL2_x86_64"&lt;/span&gt;
    &lt;span class="nx"&gt;instance_types&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks-sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;eks_managed_node_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;node_group&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;min_size&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="nx"&gt;max_size&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
      &lt;span class="nx"&gt;desired_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Create Terraform and Provider Files
&lt;/h2&gt;

&lt;p&gt;You also need the following Terraform and provider files for proper setup:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;terraform.tf&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;provider.tf&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;output.tf&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"cluster_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS EKS Cluster ID"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"cluster_endpoint"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS EKS Cluster Endpoint"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_endpoint&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"cluster_security_group_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security group ID of the control plane in the cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_security_group_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS region"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"oidc_provider_arn"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc_provider_arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Initialize Terraform
&lt;/h2&gt;

&lt;p&gt;Run the following command to initialize Terraform and download the necessary providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 8: Validate and Plan
&lt;/h2&gt;

&lt;p&gt;Before applying the configuration, validate it and preview the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform validate
terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 9: Apply the Terraform Configuration
&lt;/h2&gt;

&lt;p&gt;Create your EKS cluster and the associated VPC with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm the prompt by typing &lt;code&gt;yes&lt;/code&gt;. This process will take approximately 15 minutes to complete. Once done, you’ll have your AWS EKS cluster up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 10: Verify the Cluster in AWS Console
&lt;/h2&gt;

&lt;p&gt;Head to the &lt;strong&gt;AWS Management Console&lt;/strong&gt;, navigate to &lt;strong&gt;EKS&lt;/strong&gt;, and confirm that your cluster is listed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 11: Configure Cluster Access (Manual Step)
&lt;/h2&gt;

&lt;p&gt;After the EKS cluster is ready, you need to configure access for your IAM users or roles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the AWS Console, navigate to &lt;strong&gt;EKS &amp;gt; Your Cluster &amp;gt; Configuration &amp;gt; Access&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Add role&lt;/strong&gt; or &lt;strong&gt;Add user&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Provide the &lt;strong&gt;IAM Principal ARN&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;Type&lt;/strong&gt; (Role or User), provide a &lt;strong&gt;Username&lt;/strong&gt;, and select a &lt;strong&gt;Policy Name&lt;/strong&gt; (e.g., Admin or ViewOnly).&lt;/li&gt;
&lt;li&gt;Define the &lt;strong&gt;Access Scope&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add Policy&lt;/strong&gt; and finalize the configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This step grants users or roles permissions to interact with the Kubernetes cluster using IAM-based authentication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gqsebtgg6coyv1kq72z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gqsebtgg6coyv1kq72z.png" alt="Image description" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ksggfusw0qfb3vxmtbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ksggfusw0qfb3vxmtbj.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 12: Verify Nodes in the Compute Tab
&lt;/h2&gt;

&lt;p&gt;To check if the nodes are up and running:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;Compute&lt;/strong&gt; tab of the cluster in the AWS Console.&lt;/li&gt;
&lt;li&gt;Ensure that the worker nodes are visible.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1aa9yvq8nmevcj27pe02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1aa9yvq8nmevcj27pe02.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 13: Access the Cluster via CLI
&lt;/h2&gt;

&lt;p&gt;Once the cluster is set up, access it using the AWS CLI by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;cluster-name&amp;gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;aws-region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command configures your &lt;code&gt;kubectl&lt;/code&gt; context to use the new EKS cluster.&lt;/p&gt;
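&lt;p&gt;By default, &lt;code&gt;update-kubeconfig&lt;/code&gt; names the new context after the cluster ARN. As a rough sketch (the region, account ID, and cluster name below are made-up placeholders), you can construct that context name yourself and switch to it explicitly:&lt;/p&gt;

```shell
# Hypothetical values -- substitute your own region, account ID, and cluster name.
AWS_REGION="us-west-1"
ACCOUNT_ID="123456789012"
CLUSTER_NAME="my-eks-cluster"

# aws eks update-kubeconfig names the kubectl context after the cluster ARN by default.
CONTEXT="arn:aws:eks:${AWS_REGION}:${ACCOUNT_ID}:cluster/${CLUSTER_NAME}"
echo "${CONTEXT}"
# kubectl config use-context "${CONTEXT}"   # switch to it once the kubeconfig is updated
```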

&lt;h2&gt;
  
  
  Step 14: Clean Up Resources
&lt;/h2&gt;

&lt;p&gt;To avoid unexpected charges, delete the resources you created when they’re no longer needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You’ve successfully set up an AWS EKS cluster using Terraform! This guide provides a simplified approach to provisioning cloud infrastructure, ensuring that you can quickly get started with Kubernetes on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhur3ag5wrgevww7z1kqo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhur3ag5wrgevww7z1kqo.jpg" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to customize the configuration as per your needs, and remember to clean up any resources when you’re done to avoid unnecessary costs. Happy Terraforming!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>aws</category>
    </item>
    <item>
      <title>A Beginner's Guide to DevOps: Everything You Need to Know</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Fri, 06 Sep 2024 17:38:18 +0000</pubDate>
      <link>https://dev.to/iamamash/a-beginners-guide-to-devops-everything-you-need-to-know-fk0</link>
      <guid>https://dev.to/iamamash/a-beginners-guide-to-devops-everything-you-need-to-know-fk0</guid>
      <description>&lt;p&gt;If you're new to the world of software development or IT, you may have heard the term &lt;strong&gt;DevOps&lt;/strong&gt; being thrown around. But what is it, really? Why is everyone talking about it, and how does it transform the way we build, deploy, and maintain software?&lt;/p&gt;

&lt;p&gt;In this post, we'll break down the essentials of DevOps and guide you through the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is DevOps?&lt;/li&gt;
&lt;li&gt;Why is DevOps important?&lt;/li&gt;
&lt;li&gt;Pros of adopting DevOps.&lt;/li&gt;
&lt;li&gt;How to get started with DevOps (Roadmap).&lt;/li&gt;
&lt;li&gt;Best practices in DevOps.&lt;/li&gt;
&lt;li&gt;Who can learn DevOps?&lt;/li&gt;
&lt;li&gt;Core skills needed in DevOps.&lt;/li&gt;
&lt;li&gt;An overview of DevSecOps and MLOps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is DevOps?
&lt;/h2&gt;

&lt;p&gt;At its core, &lt;strong&gt;DevOps&lt;/strong&gt; is a set of practices and cultural philosophies that bring together &lt;strong&gt;development (Dev)&lt;/strong&gt; and &lt;strong&gt;operations (Ops)&lt;/strong&gt; teams to work more efficiently and collaboratively. The goal is to automate and streamline the software development lifecycle (SDLC), from coding to deployment and monitoring, fostering a culture of continuous integration (CI) and continuous delivery (CD).&lt;/p&gt;

&lt;p&gt;In simpler terms, DevOps is about breaking down silos between teams, automating processes, and ensuring faster and more reliable software delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why DevOps?
&lt;/h2&gt;

&lt;p&gt;The traditional software development process often involved separate teams for coding, testing, and deploying. This could lead to long delays, miscommunications, and a lack of accountability when issues arose. DevOps solves these problems by unifying the development and operations teams, speeding up the delivery process and improving overall quality.&lt;/p&gt;

&lt;p&gt;Here are some reasons why organizations adopt DevOps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Faster Time to Market&lt;/strong&gt;: By automating tasks and improving communication, DevOps helps deliver new features and fixes faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Collaboration&lt;/strong&gt;: DevOps promotes a culture of shared responsibility, making teams more cohesive and efficient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Quality&lt;/strong&gt;: Through continuous integration and testing, DevOps ensures software is regularly tested and improved, leading to fewer bugs and issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Higher Stability&lt;/strong&gt;: With monitoring and feedback loops in place, issues are identified and fixed faster, resulting in more stable systems.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Pros of DevOps
&lt;/h2&gt;

&lt;p&gt;Adopting DevOps offers a wide range of benefits for both organizations and teams:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Increased Efficiency&lt;/strong&gt;: Automated pipelines reduce manual tasks, freeing up developers and operations staff to focus on high-value work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Costs&lt;/strong&gt;: By streamlining processes, DevOps reduces unnecessary overhead and errors, lowering operational costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security&lt;/strong&gt;: Through practices like Infrastructure as Code (IaC) and automated security testing, security becomes integrated into the development pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Recovery&lt;/strong&gt;: In case of failure, DevOps practices allow for quicker issue detection and recovery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: DevOps encourages practices that make scaling infrastructure and applications seamless.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to DevOps: A Roadmap for Beginners
&lt;/h2&gt;

&lt;p&gt;Ready to dive into DevOps? Here’s a simple roadmap to guide your journey:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Learn Version Control&lt;/strong&gt;: Start with learning Git and GitHub for version control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand CI/CD Pipelines&lt;/strong&gt;: Tools like Jenkins, GitLab CI, or GitHub Actions are essential to automate building, testing, and deploying code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Master Containerization&lt;/strong&gt;: Tools like Docker help package applications into containers, ensuring consistency across environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn Infrastructure as Code (IaC)&lt;/strong&gt;: Tools like Terraform or AWS CloudFormation allow you to manage your infrastructure in a version-controlled, code-based way.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore Cloud Providers&lt;/strong&gt;: Learn about cloud platforms like AWS, Azure, or Google Cloud to host and scale applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up Monitoring and Logging&lt;/strong&gt;: Tools like Prometheus, Grafana, and ELK Stack help monitor and track your system's performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand Configuration Management&lt;/strong&gt;: Tools like Ansible and Chef are used to automate infrastructure configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Best Practices in DevOps
&lt;/h2&gt;

&lt;p&gt;To ensure success in DevOps, consider following these best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automation First&lt;/strong&gt;: Automate repetitive tasks, from testing to deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborative Culture&lt;/strong&gt;: Foster open communication and collaboration between teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Integration and Continuous Delivery (CI/CD)&lt;/strong&gt;: Ensure your code is always tested and ready for deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Feedback Loops&lt;/strong&gt;: Always monitor your systems and applications to catch issues early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security by Design&lt;/strong&gt;: Integrate security into the development process, not as an afterthought.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Who Can Learn DevOps?
&lt;/h2&gt;

&lt;p&gt;DevOps isn't limited to developers or operations professionals—anyone interested in improving software delivery can learn DevOps. Whether you’re a software engineer, a systems administrator, a network engineer, or even someone just entering the tech industry, DevOps skills can be beneficial.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of coding and scripting.&lt;/li&gt;
&lt;li&gt;Familiarity with Linux or any other operating system.&lt;/li&gt;
&lt;li&gt;Understanding of networking and cloud platforms is helpful but not mandatory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core Skills Needed in DevOps
&lt;/h2&gt;

&lt;p&gt;Here are the most important skills to focus on when learning DevOps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Coding and Scripting&lt;/strong&gt;: Knowledge of programming languages (Python, Bash, etc.) is essential for automating tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version Control&lt;/strong&gt;: Expertise in Git and repository hosting services like GitHub or GitLab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Integration/Continuous Deployment (CI/CD)&lt;/strong&gt;: Tools like Jenkins, CircleCI, and GitLab CI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerization&lt;/strong&gt;: Knowledge of Docker and Kubernetes for orchestrating containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;: Skills in tools like Terraform, Ansible, or CloudFormation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Logging&lt;/strong&gt;: Using tools like Prometheus, Grafana, or ELK for tracking system health.&lt;/li&gt;
&lt;/ol&gt;
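
&lt;p&gt;To give a taste of the scripting skill above, here is a minimal, purely illustrative Bash snippet (the log lines are inline sample data, not from any real system) that counts error entries in a log — the kind of small automation DevOps work is built from:&lt;/p&gt;

```shell
# Minimal automation sketch: count ERROR lines in sample log data.
# The log content is inline stand-in data for illustration only.
set -euo pipefail

ERROR_COUNT="$(printf '%s\n' \
  '2024-09-06 10:00:01 INFO  service started' \
  '2024-09-06 10:05:12 ERROR database connection refused' \
  '2024-09-06 10:05:13 INFO  retrying' \
  '2024-09-06 10:06:40 ERROR timeout talking to cache' | grep -c 'ERROR')"

echo "errors found: ${ERROR_COUNT}"
```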

&lt;h2&gt;
  
  
  DevSecOps and MLOps: Expanding the DevOps Ecosystem
&lt;/h2&gt;

&lt;h3&gt;
  
  
  DevSecOps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps&lt;/strong&gt; integrates security practices into the DevOps process. Rather than addressing security late in the development cycle, DevSecOps ensures that security is part of every phase—from design to deployment. This includes automating security checks like vulnerability scans, code analysis, and compliance checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  MLOps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;MLOps&lt;/strong&gt; (Machine Learning Operations) focuses on deploying and maintaining machine learning models in production. Just as DevOps streamlines software development, MLOps does the same for machine learning by automating model deployment, monitoring, and retraining. It combines DevOps practices with data science to ensure efficient, scalable, and repeatable workflows.&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;DevOps is more than just a set of tools—it's a cultural shift that improves collaboration, automation, and efficiency in the software development lifecycle. Whether you're an aspiring engineer or a seasoned professional, learning DevOps can open doors to faster software delivery, better quality, and more secure applications. As technology evolves, incorporating practices like DevSecOps and MLOps will become increasingly important in creating reliable and scalable systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fengcqp0ks8yejxo9cxx6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fengcqp0ks8yejxo9cxx6.jpg" alt="Image description" width="330" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, dive in, start learning, and embrace the future of software development with DevOps!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>softwaredevelopment</category>
      <category>ai</category>
    </item>
    <item>
      <title>DevOpsifying a Go Web Application: An End-to-End Guide</title>
      <dc:creator>Amash Ansari</dc:creator>
      <pubDate>Thu, 05 Sep 2024 18:55:47 +0000</pubDate>
      <link>https://dev.to/iamamash/devopsifying-a-go-web-application-an-end-to-end-guide-17bm</link>
      <guid>https://dev.to/iamamash/devopsifying-a-go-web-application-an-end-to-end-guide-17bm</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post, I will guide you through the process of DevOpsifying a Go-based web application. We will cover everything from containerizing the application with Docker to deploying it on a Kubernetes cluster (AWS EKS) using Helm, setting up continuous integration with GitHub Actions, and automating deployments with ArgoCD. By the end of this tutorial, you'll have a fully operational, CI/CD-enabled Go web application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting this project, ensure you meet the following prerequisites:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Account:&lt;/strong&gt; You need an active AWS account to create and manage your EKS cluster for deploying the Go-based application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DockerHub Account:&lt;/strong&gt; You should have a DockerHub account to push your Docker images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic DevOps Knowledge:&lt;/strong&gt; Familiarity with DevOps concepts and practices is essential, including understanding CI/CD pipelines, containerization, orchestration, and cloud deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Helm:&lt;/strong&gt; Basic knowledge of Helm, the Kubernetes package manager, will be required to package and deploy your application.&lt;/p&gt;

&lt;p&gt;By meeting these prerequisites, you'll be well-prepared to follow the steps in this guide and successfully DevOpsify your Go-based application!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Getting the Source Code
&lt;/h2&gt;

&lt;p&gt;To get started with the project, you'll need to clone the source code from the GitHub repository. Use the following command to clone the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/iam-veeramalla/go-web-app-devops.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This repository contains all the necessary files and configurations to set up and deploy the Go-based application using the DevOps practices described in this guide. Once cloned, follow the steps below to containerize, deploy, and manage the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Containerizing the Go Web Application
&lt;/h2&gt;

&lt;p&gt;The first step is to containerize our Go application. We will use a multistage Dockerfile to build the Go application and create a lightweight production-ready image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM golang:1.22.5 as build

WORKDIR /app

COPY go.mod .

RUN go mod download

COPY . .

RUN go build -o main .

FROM gcr.io/distroless/base

WORKDIR /app

COPY --from=build /app/main .

COPY --from=build /app/static ./static

EXPOSE 8080

CMD ["./main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Commands to Build and Push Docker Image:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
docker build -t iamamash/go-web-app:latest .
docker push iamamash/go-web-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this Dockerfile, the first stage uses the Golang image to build the application. The second stage uses a distroless base image, which is much smaller and more secure, containing only the necessary files to run our Go application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Deploying on Kubernetes with AWS EKS
&lt;/h2&gt;

&lt;p&gt;Next, we will deploy our containerized application to a Kubernetes cluster. Here’s how you can set up your cluster and deploy your app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an EKS Cluster:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --name demo-cluster --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deployment Configuration (deployment.yaml):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-web-app
  template:
    metadata:
      labels:
        app: go-web-app
    spec:
      containers:
      - name: go-web-app
        image: iamamash/go-web-app:latest
        ports:
        - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Service Configuration (service.yaml):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: go-web-app
  type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ingress Configuration (ingress.yaml):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-web-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: go-web-app.local
    http:
      paths: 
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-web-app
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Apply the configurations using kubectl:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setting Up Nginx Ingress Controller:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An Ingress controller in Kubernetes manages external access to services within the cluster, typically handling HTTP and HTTPS traffic. It provides centralized routing, allowing you to define rules for how traffic should reach your services. In this project, we use the Nginx Ingress controller to efficiently manage and route traffic to our Go-based application deployed in the Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/aws/deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Packaging with Helm
&lt;/h2&gt;

&lt;p&gt;To manage our Kubernetes resources more effectively, we package our application using Helm, a package manager for Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Helm Chart:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm create go-web-app-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating the chart, replace everything inside the templates directory with your &lt;code&gt;deployment.yaml&lt;/code&gt;, &lt;code&gt;service.yaml&lt;/code&gt;, and &lt;code&gt;ingress.yaml&lt;/code&gt; files.&lt;/p&gt;
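&lt;p&gt;A rough sketch of that swap is shown below. A scratch directory stands in for the real repo so the snippet is self-contained; the &lt;code&gt;k8s/&lt;/code&gt; and &lt;code&gt;helm/go-web-app-chart/&lt;/code&gt; paths are assumptions based on the repository layout:&lt;/p&gt;

```shell
# Sketch: replace Helm's generated templates with our own manifests.
# A scratch directory stands in for the real repo here, so this runs anywhere.
set -euo pipefail

WORKDIR="$(mktemp -d)"
mkdir -p "$WORKDIR/k8s" "$WORKDIR/helm/go-web-app-chart/templates"

# Stand-ins for the manifests written in Step 3 and for Helm's generated defaults.
touch "$WORKDIR/k8s/deployment.yaml" "$WORKDIR/k8s/service.yaml" "$WORKDIR/k8s/ingress.yaml"
touch "$WORKDIR/helm/go-web-app-chart/templates/hpa.yaml"

# The actual swap: clear the generated templates, then copy our manifests in.
rm -f "$WORKDIR"/helm/go-web-app-chart/templates/*
cp "$WORKDIR"/k8s/*.yaml "$WORKDIR/helm/go-web-app-chart/templates/"

ls "$WORKDIR/helm/go-web-app-chart/templates"
```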

&lt;p&gt;Update &lt;code&gt;values.yaml&lt;/code&gt;: The &lt;code&gt;values.yaml&lt;/code&gt; file will contain dynamic values, like the Docker image tag. This tag will be updated automatically based on the GitHub Actions run ID, ensuring each deployment is unique.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default values for go-web-app-chart.
replicaCount: 1

image:
  repository: iamamash/go-web-app
  pullPolicy: IfNotPresent
  tag: "10620920515" # Will be updated by CI/CD pipeline

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Helm Deployment:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f k8s/.
helm install go-web-app helm/go-web-app-chart
kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Continuous Integration with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;To automate the build and deployment of our application, we set up a CI/CD pipeline using GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions Workflow (.github/workflows/cicd.yaml):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI/CD

on:
  push:
    branches:
      - main
    paths-ignore:
      - 'helm/**'
      - 'README.md'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Go 1.22
      uses: actions/setup-go@v5
      with:
        go-version: 1.22

    - name: Build
      run: go build -o go-web-app

    - name: Test
      run: go test ./...

  push:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Login to DockerHub
      uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}

    - name: Build and Push action
      uses: docker/build-push-action@v6
      with:
        context: .
        file: ./Dockerfile
        push: true
        tags: ${{ secrets.DOCKERHUB_USERNAME }}/go-web-app:${{github.run_id}}

  update-newtag-in-helm-chart:
    runs-on: ubuntu-latest
    needs: push
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
      with:
        token: ${{ secrets.TOKEN }}

    - name: Update tag in Helm chart
      run: |
        sed -i 's/tag: .*/tag: "${{github.run_id}}"/' helm/go-web-app-chart/values.yaml

    - name: Commit and push changes
      run: |
        git config --global user.email "ansari2002ksp@gmail.com"
        git config --global user.name "Amash Ansari"
        git add helm/go-web-app-chart/values.yaml
        git commit -m "Updated tag in Helm chart"
        git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To securely store sensitive information like DockerHub credentials and Personal Access Tokens (PAT) in GitHub, you can use GitHub Secrets. To create a secret, navigate to your repository on GitHub, go to &lt;strong&gt;Settings &amp;gt; Secrets and variables &amp;gt; Actions &amp;gt; New repository secret&lt;/strong&gt;. Here, you can add secrets like &lt;code&gt;DOCKERHUB_USERNAME&lt;/code&gt;, &lt;code&gt;DOCKERHUB_TOKEN&lt;/code&gt;, and &lt;code&gt;TOKEN&lt;/code&gt;. Once added, these secrets can be accessed in your GitHub Actions workflows using &lt;code&gt;${{ secrets.SECRET_NAME }}&lt;/code&gt; syntax, ensuring that your sensitive data is securely managed during the CI/CD process.&lt;/p&gt;
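&lt;p&gt;To see exactly what the workflow's &lt;code&gt;sed&lt;/code&gt; step does to &lt;code&gt;values.yaml&lt;/code&gt;, here is a local dry run against a small snippet of the file. The run ID below is a made-up stand-in for &lt;code&gt;github.run_id&lt;/code&gt;:&lt;/p&gt;

```shell
# Local dry run of the workflow's tag-bump step.
# NEW_TAG stands in for the real GitHub Actions run ID (github.run_id).
set -euo pipefail

NEW_TAG="12345678901"

# Same substitution the workflow performs on helm/go-web-app-chart/values.yaml.
UPDATED="$(printf 'image:\n  repository: iamamash/go-web-app\n  tag: "10620920515"\n' \
  | sed "s/tag: .*/tag: \"$NEW_TAG\"/")"

echo "$UPDATED"
```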

&lt;h2&gt;
  
  
  Step 6: Continuous Deployment with ArgoCD
&lt;/h2&gt;

&lt;p&gt;Finally, we implement continuous deployment using ArgoCD to automatically deploy the application whenever changes are pushed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install ArgoCD:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
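&lt;p&gt;If your cluster cannot provision an external LoadBalancer (for example, a local kind or Minikube cluster), a port-forward is a quick alternative for reaching the UI (a sketch; the local port &lt;code&gt;8080&lt;/code&gt; is an arbitrary choice):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Forward local port 8080 to the argocd-server service's HTTPS port
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Then open https://localhost:8080 in your browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;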



&lt;p&gt;&lt;strong&gt;Set up the ArgoCD project:&lt;/strong&gt; To access the ArgoCD UI, you first need the external IP of the node where ArgoCD is running. You can obtain it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, get the port number at which the ArgoCD server is running using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have the external IP and port number, you can access the ArgoCD UI by navigating to &lt;code&gt;http://&amp;lt;node-external-IP&amp;gt;:&amp;lt;port&amp;gt;&lt;/code&gt;. For example, if the external IP is &lt;code&gt;54.161.25.151&lt;/code&gt; and the port number is &lt;code&gt;30498&lt;/code&gt;, the URL to access the ArgoCD UI would be &lt;code&gt;http://54.161.25.151:30498&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To log in to the ArgoCD UI for the first time, use the default username &lt;code&gt;admin&lt;/code&gt;. The password can be retrieved from the ArgoCD secrets using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit secret argocd-initial-admin-secret -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the encoded password from the &lt;code&gt;data.password&lt;/code&gt; field and decode it using &lt;code&gt;base64&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo &amp;lt;encoded-password&amp;gt; | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if the encoded password is &lt;code&gt;kjasdfbSNLnlkaW==&lt;/code&gt;, decoding it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo kjasdfbSNLnlkaW== | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;will print the actual password. Note that some shells (zsh, for example) append a &lt;strong&gt;%&lt;/strong&gt; to output that lacks a trailing newline; this symbol is not part of the password, so exclude it when logging in.&lt;/p&gt;
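&lt;p&gt;The two manual steps above can also be combined into a single command (a sketch using &lt;code&gt;kubectl&lt;/code&gt;'s JSONPath output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract the password field and decode it in one go
kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath="{.data.password}" | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;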

&lt;p&gt;Now, after accessing the ArgoCD UI, since both ArgoCD and the application are in the same cluster, you can create a project. To do this, click on the &lt;strong&gt;"New App"&lt;/strong&gt; button and fill in the required fields, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;App Name: a name for your application.&lt;/li&gt;
&lt;li&gt;Sync Policy: choose between manual and automatic synchronization.&lt;/li&gt;
&lt;li&gt;Self-Heal: enable this option if you want ArgoCD to automatically revert any drift from the desired state.&lt;/li&gt;
&lt;li&gt;Repository URL: the URL of the GitHub repository where your application code resides.&lt;/li&gt;
&lt;li&gt;Path: the path to the Helm chart within your repository.&lt;/li&gt;
&lt;li&gt;Destination: the cluster URL and the namespace where you want the application deployed.&lt;/li&gt;
&lt;li&gt;Helm Values: the appropriate values.yaml file for your Helm chart.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After filling in these details, click on &lt;strong&gt;"Create"&lt;/strong&gt; and wait for ArgoCD to create the project. ArgoCD will pick up the Helm chart and deploy the application to the Kubernetes cluster for you. You can verify the deployment using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all you need to do!&lt;/p&gt;
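&lt;p&gt;As an alternative to clicking through the UI, the same application can be defined declaratively as an ArgoCD &lt;code&gt;Application&lt;/code&gt; manifest and applied with &lt;code&gt;kubectl apply&lt;/code&gt; (a sketch; the repository URL, branch, and destination namespace are placeholders to adapt to your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/&amp;lt;your-username&amp;gt;/go-web-app
    targetRevision: main
    path: helm/go-web-app-chart
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;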

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully DevOpsified your Go web application. This end-to-end guide covered containerizing your application with Docker, deploying it with Kubernetes and Helm, automating builds with GitHub Actions, and setting up continuous deployments with ArgoCD. You are now ready to manage your Go application with full CI/CD capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh8jkp7ukoz292rjf6pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh8jkp7ukoz292rjf6pd.png" alt="Go-Web-App" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to leave your comments and feedback below! Happy DevOpsifying!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;For a detailed video guide on deploying Go applications on AWS EKS, check out this &lt;a href="https://youtu.be/HGu9sgoHaJ0?si=rAlvBobTZCoT-E1_" rel="noopener noreferrer"&gt;video&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
