<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hussein Alamutu</title>
    <description>The latest articles on DEV Community by Hussein Alamutu (@husseinalamutu).</description>
    <link>https://dev.to/husseinalamutu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F893331%2Fef7560bf-6b3a-4b4f-a658-e7e3d5230458.jpeg</url>
      <title>DEV Community: Hussein Alamutu</title>
      <link>https://dev.to/husseinalamutu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/husseinalamutu"/>
    <language>en</language>
    <item>
      <title>Demystifying For Loops in Terraform: A Practical Guide</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Thu, 15 Jun 2023 12:56:39 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/demystifying-for-loops-in-terraform-a-practical-guide-bh9</link>
      <guid>https://dev.to/husseinalamutu/demystifying-for-loops-in-terraform-a-practical-guide-bh9</guid>
      <description>&lt;p&gt;Terraform is a popular infrastructure as a code tool that allows you to define and manage cloud resources. &lt;/p&gt;

&lt;p&gt;One of the powerful features of Terraform is the ability to use &lt;code&gt;for&lt;/code&gt; loops. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;for&lt;/code&gt; loops in Terraform provide a powerful mechanism for iterating over a list of items and performing actions or creating resources dynamically. &lt;/p&gt;

&lt;p&gt;This capability allows for efficient and flexible infrastructure provisioning, reducing manual effort and enhancing scalability.&lt;/p&gt;

&lt;p&gt;In this guide, I walk you through the syntax and benefits of for loops in Terraform, providing practical insights and examples to help you leverage their power effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding For Loops in Terraform
&lt;/h2&gt;

&lt;p&gt;To leverage for loops, you start by defining a variable that represents the collection of items you want to iterate over. Then, you use either the &lt;code&gt;for_each&lt;/code&gt; or &lt;code&gt;count&lt;/code&gt; expression to control the iteration. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;count&lt;/code&gt; expression is suitable for iterating a fixed number of times, while the &lt;code&gt;for_each&lt;/code&gt; expression is particularly useful when you have a dynamic set of items, such as a map or set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But, what do I mean by the map and set?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;for_each&lt;/code&gt; map is like a &lt;a href="https://www.w3schools.com/python/python_dictionaries.asp"&gt;dictionary&lt;/a&gt; in Python, where each key is mapped to a value, while a set is similar to a &lt;a href="https://www.w3schools.com/python/python_lists.asp"&gt;list&lt;/a&gt;, except that it is an unordered collection of unique values.&lt;/p&gt;
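&lt;p&gt;For instance, a set works well when you only have unique names with no extra attributes attached to them. Here is a minimal sketch (the variable and bucket names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical example: iterating over a set of strings
variable "team_names" {
  type    = set(string)
  default = ["alpha", "beta", "gamma"]
}

resource "aws_s3_bucket" "team_buckets" {
  for_each = var.team_names

  # For a set, each.key and each.value are the same string
  bucket = "example-team-bucket-${each.value}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;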

&lt;p&gt;Here's an example of a Terraform code snippet that demonstrates the use of &lt;code&gt;for_each&lt;/code&gt; and &lt;code&gt;count&lt;/code&gt; expressions in different use cases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dynamic set of items using for_each
variable "dynamic_items" {
  type    = map(string)
  default = {
    ubuntu_server  = "ami-01dd271720c1ba44f"
    windows_server = "ami-0274fd9e256dea7b1"
    rhel_server    = "ami-013d87f7217614e10"
  }
}

resource "aws_instance" "servers" {
  for_each = var.dynamic_items

  ami           = each.value
  instance_type = "t2.micro"

  tags = {
    Name = each.key
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;for_each&lt;/code&gt; expression is used to create AWS EC2 instances based on the dynamic set of items defined in the &lt;code&gt;dynamic_items&lt;/code&gt; variable. Each item in the map represents a server name and its corresponding Amazon Machine Image (AMI) ID. An &lt;code&gt;aws_instance&lt;/code&gt; resource will be created for each server in the map, and the instance tags will be set using the &lt;code&gt;each.key&lt;/code&gt; iterator.&lt;/p&gt;

&lt;p&gt;On the other hand, the &lt;code&gt;count&lt;/code&gt; expression is used to create AWS EBS volumes iteratively based on the fixed count defined in the &lt;code&gt;fixed_count&lt;/code&gt; variable. The &lt;code&gt;aws_ebs_volume&lt;/code&gt; resource will be created three times, with each volume having a different name specified using the &lt;code&gt;count.index&lt;/code&gt; variable (DataVolume-1, DataVolume-2, DataVolume-3).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fixed number of iterations using count
variable "fixed_count" {
  type    = number
  default = 3
}

resource "aws_ebs_volume" "data_volume" {
  count             = var.fixed_count
  availability_zone = "us-west-1a"
  size              = 100
  volume_type       = "gp2"

  tags = {
    Name = "DataVolume-${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using for loops instead of manually defining multiple resource blocks or modules, you can dynamically generate resources. This eliminates the need for repetitive code and reduces the chances of errors.&lt;/p&gt;

&lt;p&gt;They also enable easy scalability, allowing you to adapt to changing requirements and handle larger infrastructure deployments. Whether you need to create multiple EC2 instances, provision multiple subnets, or configure multiple security groups, for loops provide a convenient mechanism to scale your infrastructure.&lt;/p&gt;

&lt;p&gt;In addition to automation, they also support conditional logic: you can incorporate &lt;code&gt;if&lt;/code&gt; conditions and logical expressions within for loops to control the iteration process and make dynamic decisions based on specific criteria.&lt;/p&gt;
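&lt;p&gt;As a brief illustration of such a conditional (the variable and values below are made up), a &lt;code&gt;for&lt;/code&gt; expression can use an &lt;code&gt;if&lt;/code&gt; clause to filter the items it iterates over:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical example: keep only the production environments
variable "environments" {
  type    = list(string)
  default = ["prod-a", "staging-b", "prod-c"]
}

locals {
  # Evaluates to ["prod-a", "prod-c"]
  prod_envs = [for env in var.environments : env if substr(env, 0, 4) == "prod"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;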

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;p&gt;When working with for loops in Terraform, maintaining code readability and organization is essential for ensuring long-term maintainability and collaboration. Here are some best practices to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use meaningful variable names: Choose descriptive names for your variables to enhance code readability. This makes it easier for others (and yourself) to understand the purpose and context of the loop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add comments: Include comments within your code to provide explanations and document the intention behind the for loop. This helps others grasp the logic and purpose of the iteration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Indentation and formatting: Consistently apply proper indentation and formatting to your for loops. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Limit loop complexity: Avoid overly complex for loops with nested iterations or extensive conditional statements. Excessive complexity can make code difficult to understand and maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test and validate: Before deploying for loops in production environments, thoroughly test and validate their functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The End -
&lt;/h2&gt;

&lt;p&gt;In summary, for loops in Terraform empower you to automate resource creation and configuration. The inclusion of conditional statements also allows you to adapt to different scenarios, making your infrastructure setups more dynamic and adaptable to changing requirements.&lt;/p&gt;

&lt;p&gt;For a more in-depth tutorial, take a look at this &lt;a href="https://www.cloudbolt.io/terraform-best-practices/terraform-for-loops/"&gt;article&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>cicd</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to automate the installation of web servers on AWS EC2 using Terraform and Github Actions</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Sun, 30 Apr 2023 22:58:26 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/how-to-automate-the-installation-of-web-servers-on-aws-ec2-using-terraform-and-github-actions-558o</link>
      <guid>https://dev.to/husseinalamutu/how-to-automate-the-installation-of-web-servers-on-aws-ec2-using-terraform-and-github-actions-558o</guid>
      <description>&lt;p&gt;As more and more businesses are moving their applications to the cloud, the need to automate the deployment and management of infrastructure has become increasingly important. One of the common tasks in cloud computing is setting up web servers on virtual instances, and managing them, which can be a tedious and error-prone task if done manually.&lt;/p&gt;

&lt;p&gt;This is where automation tools like Terraform and Github Actions come in handy. &lt;/p&gt;

&lt;p&gt;Terraform is an open-source infrastructure as code software tool that allows developers to manage their infrastructure using code, and is particularly useful for managing cloud resources such as AWS EC2 instances. &lt;/p&gt;

&lt;p&gt;Github Actions, on the other hand, is a powerful continuous integration and continuous deployment (CI/CD) tool that can be used to automate the deployment of code changes to cloud environments.&lt;/p&gt;

&lt;p&gt;In this article, I will teach you how to use Terraform and Github Actions to automate the installation of web servers (Apache and NGINX) on AWS EC2 instances. By automating this process, we can save time and reduce the risk of human error, while also allowing us to easily replicate our infrastructure across multiple instances. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Be familiar with AWS and its services&lt;/li&gt;
&lt;li&gt;Understand how Github works&lt;/li&gt;
&lt;li&gt;Know what web servers are&lt;/li&gt;
&lt;li&gt;Familiarity with Terraform &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Setting up AWS and Terraform&lt;/li&gt;
&lt;li&gt;Creating the Github repository and setting up Github Actions&lt;/li&gt;
&lt;li&gt;Writing the Terraform code&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  I. Setting up AWS Access Key and Secret Key
&lt;/h2&gt;

&lt;p&gt;To get started with automating the installation of web servers on AWS EC2 using Terraform and Github Actions, you need to set up an AWS account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating an AWS account and setting up access keys&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you don't have an AWS account already, create an AWS account by visiting the &lt;a href="//aws.amazon.com"&gt;AWS website&lt;/a&gt; and following the instructions to sign up. &lt;/p&gt;

&lt;p&gt;Once you have an account, you will need to create an access key that Terraform can use to interact with your AWS account. To do this, go to the AWS console and navigate to the IAM service. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g_BmSAEp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxaqsrbv8u205ips3a80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g_BmSAEp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxaqsrbv8u205ips3a80.png" alt="Hussein Alamutu navigated to IAM page" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the "Access management" section on the left hand pane, select "Users" and create a new user. Give the user a name and choose "Programmatic access" as the access type. Next, assign the user to a group that has the necessary permissions or directly attach the permission policies necessary for managing EC2 instances, load balancers, and auto scaling, then create the user. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QCFcK5Pb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ginsb229w96nio021ty3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QCFcK5Pb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ginsb229w96nio021ty3.png" alt="Hussein Alamutu creating an IAM User" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, you will be prompted to download the access keys for the user. Keep these keys safe as they will be used to configure Github Actions and Terraform.&lt;/p&gt;

&lt;p&gt;If you got stuck in any part, you should watch &lt;a href="https://www.youtube.com/watch?v=HuE-QhrmE1c"&gt;this short tutorial&lt;/a&gt; on how to create AWS access keys. You can also follow AWS guide on creating access keys &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  II. Creating the Github repository and setting up Github Actions
&lt;/h2&gt;

&lt;p&gt;Once we have set up our AWS account, the next step is to create a Github repository for our project and set up Github Actions to automate the deployment of the web server on AWS EC2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a new Github repository for the project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the Github website and sign in to your account. Create a new repository by clicking on the "New" button in the top right corner of the screen. Give your repository a name, choose any other settings that you prefer, and click on the "Create repository" button to create the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OzU8gLn4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rl80o1qvvun6td5my2at.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OzU8gLn4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rl80o1qvvun6td5my2at.png" alt="Create a new github repository - hussein alamutu" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring Github Actions to automatically deploy the web server on AWS EC2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Github Actions allows us to automate various tasks in our repository, including the deployment of our web server on AWS EC2. &lt;/p&gt;

&lt;p&gt;To set up Github Actions, go to the "Actions" tab in your repository and click on the "Set up a workflow yourself" button. This will allow you to create a new workflow file that defines the actions you want to perform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c1J3RgVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmqn5vm1hkddbxk8nxt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c1J3RgVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmqn5vm1hkddbxk8nxt3.png" alt="Setup a workflow" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the workflow file, you can define the triggers for when the actions should be executed, such as on pushes to a specific branch or on a schedule. Then, you can define the steps that should be executed as part of the workflow. &lt;/p&gt;

&lt;p&gt;For this example:&lt;/p&gt;

&lt;p&gt;Define the triggers to be when you make a push to your main branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, define a step to configure AWS credentials using the access keys you created earlier, and another step to run Terraform commands to deploy the web server on an EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# configuring AWS access keys
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this part of the code to work, you need to define the repository secrets whose values are referenced in the Github Actions code above.&lt;/p&gt;

&lt;p&gt;Secrets are encrypted and can be securely stored in Github so that they can be used in your workflow without exposing sensitive information.&lt;/p&gt;

&lt;p&gt;To do this, navigate to your Github repository for this task and click on the "Settings" tab, then navigate to the "Secrets and variables" tab in the left pane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IxCYbJd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bez5719u380zp8x55pcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IxCYbJd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bez5719u380zp8x55pcn.png" alt="Secrets and variables tab" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, click on "Actions" in the dropdown menu from the "Secrets and variables" tab, then click on the green button in the middle of the page, labelled "New repository secret".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1cblEWgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h40ppr1xq1tfj8bkylm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1cblEWgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h40ppr1xq1tfj8bkylm.png" alt="Creating a secret environment variable" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This takes you to a new page to configure your secret environment variables used in your Github Actions workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IyS-tlaE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4y1namins2j14tii2bh1.p%250Ang" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IyS-tlaE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4y1namins2j14tii2bh1.p%250Ang" alt="Defining the key name and variable" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use AWS_ACCESS_KEY_ID as the name, then take the access key you got from AWS in step one, copy and paste it into the "Secret*" box, and click on "Add secret". Now repeat the same process for AWS_SECRET_ACCESS_KEY.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring the job for the Github workflow to deploy the web server on an EC2 Instance&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# configuring steps to be run, for terraform various commands 
jobs:
  tf:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: hashicorp/setup-terraform@v2
    - name: Terraform fmt
      id: fmt
      run: terraform fmt -check
    - name: Terraform Init
      id: init
      run: terraform init
    - name: Terraform Plan
      id: plan
      run: terraform plan
    - name: Terraform Apply
      id: apply
      run: terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have defined your workflow, save the file and Github Actions will automatically start running the defined steps whenever the triggers are met (a push to the main branch). You can monitor the progress of your workflow in the "Actions" tab and view the logs to troubleshoot any issues that may arise.&lt;/p&gt;

&lt;p&gt;With Github Actions configured, you now have an automated process for deploying web servers on AWS EC2 instances using Terraform, making it easier to manage your infrastructure and ensure consistent deployments across environments.&lt;/p&gt;

&lt;p&gt;Note: Your final Github Actions code should look like this,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform AWS

# setting up triggers
on:
  push:
    branches: main

# setting up the access key to aws-cli
env:
   AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
   AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

jobs:
  tf:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: hashicorp/setup-terraform@v2
    - name: Terraform fmt
      id: fmt
      run: terraform fmt -check
    - name: Terraform Init
      id: init
      run: terraform init
    - name: Terraform Plan
      id: plan
      run: terraform plan
    - name: Terraform Apply
      id: apply
      run: terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  III. Writing Terraform code
&lt;/h2&gt;

&lt;p&gt;In this section I teach you how to write Terraform code to configure an EC2 instance and install a web server on it. &lt;/p&gt;

&lt;p&gt;Writing Terraform code to configure the EC2 instance and install the web server:&lt;/p&gt;

&lt;p&gt;To begin, create a main.tf file, where you define the resources that will be created. First, define the provider block, in this case the AWS provider; you can get it from the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs"&gt;official terraform documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, define the two EC2 instances, one for NGINX and the other for Apache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "nginx" {
    ami = "ami-014d05e6b24240371"
    instance_type = "t2.micro"
    **user_data = file("nginx.sh")**
    tags = { 
    Name  = "NGINX"
  }
}

resource "aws_instance" "apache" {
    ami = "ami-014d05e6b24240371"
    instance_type = "t2.micro"
    **user_data = file("apache.sh")**
    tags = { 
        Name  = "APACHE"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;resource&lt;/strong&gt; block is used to define the type of resource you need Terraform to provision for you in AWS, in this case &lt;strong&gt;aws_instance&lt;/strong&gt;. The next label is an identifier, in case you need to reference the resource in other parts of your code, e.g. "aws_instance.apache.instance_type" or "aws_instance.nginx.ami".&lt;/p&gt;
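&lt;p&gt;For example (a minimal sketch, assuming the two instances above), you can use those identifiers in output blocks to print each server's public IP address once the resources are created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the public IP of each web server after "terraform apply"
output "nginx_public_ip" {
  value = aws_instance.nginx.public_ip
}

output "apache_public_ip" {
  value = aws_instance.apache.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;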

&lt;p&gt;Next, to get the AMI (Amazon Machine Image) ID for your main.tf instance resource configuration, navigate to the EC2 dashboard on AWS and click on "AMI Catalog" under the "Images" dropdown in the left pane. Choose any AMI ID of your choice, then copy and paste it into your instance configuration; in this case I make use of an Ubuntu image, which has a free tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nCGQtEM4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mdbhotb1htslgzo7ssn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nCGQtEM4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mdbhotb1htslgzo7ssn9.png" alt="Getting ami ID for AWS ec2 Instance" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;instance_type&lt;/strong&gt; is used to select the appropriate combination of CPU, memory, and networking resources. The &lt;strong&gt;t2.micro&lt;/strong&gt; instance type has an attached free tier, so I recommend you use it. Learn more about &lt;a href="https://www.geeksforgeeks.org/amazon-ec2-instance-types/"&gt;instance types&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;user_data&lt;/strong&gt; in the code is used to run a shell script that installs and configures NGINX or Apache. For this example, I created two files, apache.sh and nginx.sh, with bash commands to install Apache and NGINX respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Follow these steps to create your user data's file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file with the name nginx.sh in your parent folder, copy and paste the below code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# install nginx
apt-get update
apt-get -y install nginx

# make sure nginx is started
service nginx start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create another file with the name apache.sh in your parent folder, copy and paste the below code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# install apache
apt-get update
apt-get -y install apache2

# make sure apache is started
service apache2 start 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, your complete main.tf file should look like this; you can also get access to the code &lt;a href="https://github.com/husseinalamutu/terraform-githubactions-article"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
        source = "hashicorp/aws"
        version = "~&amp;gt;4.0"
    }
  }
}

provider "aws" {
  region = "us-west-1"
}

resource "aws_instance" "nginx" {
    ami = "ami-014d05e6b24240371"
    instance_type = "t2.micro"
    user_data = file("nginx.sh")
    tags = { 
    Name  = "NGINX"
  }
}

resource "aws_instance" "apache" {
    ami = "ami-014d05e6b24240371"
    instance_type = "t2.micro"
    user_data = file("apache.sh")
    tags = { 
        Name  = "APACHE"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these resources defined, we can run "terraform init" to initialize our project, "terraform plan" to see what resources will be created, and "terraform apply" to create the resources on AWS. &lt;/p&gt;

&lt;p&gt;The Github Actions Workflow created in section II contains the above mentioned commands, which will be triggered to run in succession when you git push to your repo's main branch.&lt;/p&gt;
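&lt;p&gt;In practice, a typical run from your local clone looks like this (the workflow filename here is an assumption; yours may differ depending on what you named it in the Github editor):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage the Terraform code, the user data scripts, and the workflow file
git add main.tf nginx.sh apache.sh .github/workflows/main.yml

git commit -m "Provision NGINX and Apache web servers"

# Pushing to main triggers the Github Actions workflow
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;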

&lt;p&gt;&lt;strong&gt;Bonus: Using variables and modules to make the code more modular and reusable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As our infrastructure becomes more complex, it becomes more difficult to manage all the configuration options in a single file. To make the code more modular and reusable, we can use variables and modules.&lt;/p&gt;

&lt;p&gt;Variables allow us to define values that can be passed into our Terraform code at runtime. We can define variables for the instance type, the key pair, and any other configurable settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  default = "t2.micro"
}

variable "key_pair" {
  default = "my-key-pair"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modules, on the other hand, allow us to encapsulate groups of resources into reusable units that can be easily shared and reused across different Terraform projects. We can create a module for configuring an EC2 instance with a web server installed, including all of the necessary resources and configuration options.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "web_server" {
  source = "./modules/web_server"

  instance_type = var.instance_type
  key_pair      = var.key_pair
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we have created a module in the "modules/web_server" directory, which defines the resources and configuration options required for the web server. The module can be shared and reused across different Terraform projects, making it easier to manage and maintain the infrastructure.&lt;/p&gt;
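&lt;p&gt;As a rough sketch (this module body is hypothetical and simply mirrors the earlier standalone configuration), a minimal modules/web_server/main.tf might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# modules/web_server/main.tf (hypothetical sketch)
variable "instance_type" {
  type = string
}

variable "key_pair" {
  type = string
}

resource "aws_instance" "web" {
  ami           = "ami-014d05e6b24240371"
  instance_type = var.instance_type
  key_name      = var.key_pair

  # path.module resolves relative to the module's own directory
  user_data = file("${path.module}/nginx.sh")

  tags = {
    Name = "WebServer"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;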

&lt;p&gt;In conclusion, writing Terraform code to configure an EC2 instance and install a web server is a powerful way to manage infrastructure in a declarative way. By using variables and modules, we can make our Terraform code more modular and reusable, allowing us to manage more complex infrastructure with ease.&lt;/p&gt;

&lt;p&gt;That's it, folks. If you are interested in seeing more Terraform tasks related to this, you can check out &lt;a href="https://github.com/husseinalamutu"&gt;my Github&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>githubactions</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Understanding Virtual CPUs: Exploring the Differences Between Cores and Threads</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Thu, 13 Apr 2023 23:09:07 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/understanding-virtual-cpus-exploring-the-differences-between-cores-and-threads-33ph</link>
      <guid>https://dev.to/husseinalamutu/understanding-virtual-cpus-exploring-the-differences-between-cores-and-threads-33ph</guid>
      <description>&lt;p&gt;In today's world, computers are more powerful than ever before. However, the speed and performance of a computer are often limited by its hardware components, such as its CPU. &lt;/p&gt;

&lt;p&gt;In recent years, virtual CPUs have emerged as a solution to this problem, enabling faster and more efficient computing.&lt;/p&gt;

&lt;p&gt;Virtual CPU technology is the creation of multiple logical CPUs from a single physical CPU.&lt;/p&gt;

&lt;p&gt;This allows for more efficient use of hardware resources, and can significantly improve the performance of computers and servers. &lt;/p&gt;

&lt;p&gt;In this article, we will explore the concept of virtual CPUs, and explain the differences between cores and threads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual CPU
&lt;/h2&gt;

&lt;p&gt;A virtual CPU, also known as a vCPU, is a portion of a physical CPU that is allocated to a virtual machine. Virtual machines are software programs that mimic the behavior of a physical computer, and are commonly used in cloud computing and virtualization. By creating virtual CPUs within a virtual machine, it is possible to run multiple virtual machines on a single physical server.&lt;/p&gt;

&lt;p&gt;The virtual CPUs within a virtual machine can be configured to use a different number of cores and threads, depending on the requirements of the application being run. A core is a physical processing unit within a CPU, while a thread is a virtual processing unit that can be created within a core. By using multiple cores and threads, it is possible to improve the performance of the virtual machine, and ensure that it can handle multiple tasks at once.&lt;/p&gt;
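&lt;p&gt;Tying this back to infrastructure as code, some platforms let you choose the core/thread split explicitly. For example, the Terraform AWS provider exposes a cpu_options block on aws_instance (a hedged sketch; this block is only supported on certain instance types, and the AMI ID here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "vcpu_demo" {
  ami           = "ami-014d05e6b24240371" # illustrative AMI ID
  instance_type = "c5.xlarge"             # 4 vCPUs by default (2 cores x 2 threads)

  # Request 2 cores with 1 thread each: 2 vCPUs, hyper-threading disabled
  cpu_options {
    core_count       = 2
    threads_per_core = 1
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;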

&lt;p&gt;One of the key benefits of virtual CPUs is that they enable greater efficiency and resource utilization, which in turn reduces the cost of running a data center or cloud computing service. &lt;/p&gt;

&lt;p&gt;Additionally, virtual CPUs enable faster and more efficient processing of data, which can help to reduce latency and improve performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cores vs Threads
&lt;/h2&gt;

&lt;p&gt;Virtual CPUs are a technology that allows for more efficient use of hardware resources and can significantly improve the performance of computers and servers. &lt;/p&gt;

&lt;p&gt;However, to fully understand the benefits of virtual CPUs, it's important to first understand the differences between CPU cores and threads, and how they impact the performance of virtual machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cores&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CPU cores are physical processing units within a CPU that are responsible for executing instructions. Most CPUs today have multiple cores, which enables them to execute multiple instructions simultaneously. This allows for faster and more efficient processing of data, and is particularly important in applications that require high levels of computing power, such as gaming, video editing, and scientific computing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Threads, on the other hand, are virtual processing units that can be created within a core. &lt;/p&gt;

&lt;p&gt;Threads allow for parallel processing of data within a core, which can significantly improve the performance of applications that require multiple tasks to be executed simultaneously. By using threads, it is possible to maximize the use of CPU cores, and achieve faster and more efficient processing of data.&lt;/p&gt;
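&lt;p&gt;On a Linux machine, you can inspect this topology yourself. The sketch below assumes a Linux system with the coreutils &lt;code&gt;nproc&lt;/code&gt; command; the &lt;code&gt;lscpu&lt;/code&gt; call is optional and only produces output where util-linux is installed.&lt;/p&gt;

```shell
# Count the logical CPUs (hardware threads) the OS can schedule on
nproc

# If util-linux is available, show the full topology:
# sockets, cores per socket, and threads per core
lscpu 2>/dev/null | grep -iE 'socket|core|thread' || true
```

&lt;p&gt;On a CPU with four cores and two threads per core, &lt;code&gt;nproc&lt;/code&gt; would report eight logical CPUs.&lt;/p&gt;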

&lt;p&gt;&lt;strong&gt;Impact of Cores and Threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When it comes to virtual CPUs, the number of cores and threads allocated to a virtual machine can have a significant impact on its performance. &lt;/p&gt;

&lt;p&gt;Virtual machines with multiple cores and threads can handle more tasks simultaneously, which can result in faster and more efficient processing of data. However, allocating too many cores and threads can also result in overhead, and can reduce the efficiency of the virtual machine.&lt;/p&gt;

&lt;p&gt;One of the key advantages of virtual CPUs is that they enable flexible allocation of resources. &lt;/p&gt;

&lt;p&gt;For example, applications that require high levels of computing power, such as video editing or scientific computing, can benefit from more cores and threads, while applications that require less computing power, such as web browsing or email, may only require a few cores and threads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Virtual CPUs in Cloud Computing and Virtualization
&lt;/h2&gt;

&lt;p&gt;Virtual CPUs have become an essential technology for cloud computing and virtualization. By enabling multiple virtual CPUs to be created from a single physical CPU, virtualization can provide many benefits, including improved performance, greater efficiency, and more flexibility. &lt;/p&gt;

&lt;p&gt;This section explores the advantages of virtual CPUs in cloud computing and virtualization.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reduced Cost&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Virtual CPUs enable more efficient allocation of resources, which helps to reduce the cost of running a data center or cloud computing service while ensuring that resources are used effectively and applications perform as efficiently as possible.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Greater Efficiency&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By creating multiple virtual machines on a single physical server, it is possible to make more efficient use of hardware resources. This can help to reduce the cost of running a data center or cloud computing service, as fewer physical servers are required to run the same number of applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;More Flexibility&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, virtual CPUs provide more flexibility in cloud computing and virtualization. With virtual machines, it is possible to isolate applications and improve their security. This helps to reduce the risk of security breaches and protect sensitive data.&lt;/p&gt;

&lt;p&gt;Virtual CPUs also enable greater flexibility in resource allocation. By adjusting the number of cores and threads allocated to a virtual machine, it is possible to optimize its performance. This can help to ensure that resources are used effectively, and that applications perform as efficiently as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of Virtual CPUs
&lt;/h2&gt;

&lt;p&gt;Virtual CPUs offer many benefits, but they also have limitations that can impact performance. In this section, I will discuss some of the limitations of virtual CPUs and their impact on performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Overhead&lt;/strong&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Virtualization software adds overhead to the system. The overhead is the additional processing power required to manage the virtual machines. &lt;/p&gt;

&lt;p&gt;This overhead can be significant, especially for applications that require low latency and high performance. Overhead can lead to performance degradation, as the processing power required to manage the virtual machines can reduce the amount of processing power available to applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Resource Allocation Issues&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resource allocation can also be a challenge when using virtual CPUs. In some cases, it can be difficult to allocate the appropriate amount of resources to virtual machines. If a virtual machine is allocated too few resources, it can result in poor performance. On the other hand, if a virtual machine is allocated too many resources, it can lead to wasted resources and unnecessary costs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;I/O Bottlenecks&lt;/strong&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Virtualization software can create I/O bottlenecks, which can impact performance. I/O bottlenecks occur when multiple virtual machines compete for the same physical resources, such as network bandwidth or disk I/O. This can lead to slow response times and poor performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Licensing&lt;/strong&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another limitation of virtual CPUs is licensing. Some software vendors require licensing based on the number of physical CPUs in a system. If a virtual machine is running on a system with multiple virtual CPUs, it can be challenging to determine how many physical CPUs the software vendor requires for licensing purposes.&lt;/p&gt;

&lt;p&gt;The impact of these limitations on performance can be significant. Overhead can reduce the processing power available to applications, leading to slower response times and degraded performance. Resource allocation issues can result in poor performance if a virtual machine is not allocated enough resources. I/O bottlenecks can lead to slow response times and poor performance, especially for applications that require high I/O throughput. Licensing can also impact performance, as software vendors may require licensing based on the number of physical CPUs in a system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I started by introducing the concept of virtual CPUs and how they differ from physical CPUs. I then discussed the differences between cores and threads and how they impact virtual machine performance. I also explained the advantages of virtual CPUs in cloud computing and virtualization, as well as their applications.&lt;/p&gt;

&lt;p&gt;Likewise, I also discussed the limitations of virtual CPUs, such as overhead and resource allocation issues, which can impact performance.&lt;/p&gt;

&lt;p&gt;In conclusion, virtual CPUs play a critical role in modern computing environments. They allow multiple virtual machines to run on a single physical server, reducing hardware costs and improving resource utilization.&lt;/p&gt;

&lt;p&gt;Virtual CPUs offer many benefits, but they also have limitations that can impact performance. By understanding these limitations and how they impact performance, we can ensure that virtual machines perform as expected, and make the most out of this powerful technology.&lt;/p&gt;

&lt;p&gt;Congratulations! You have reached the end of the guide.&lt;/p&gt;

&lt;p&gt;I’m currently in search of paid technical writing gigs related to cloud &amp;amp; devops; if you have one, reach out or refer me.&lt;/p&gt;

&lt;p&gt;Send me an e-mail via &lt;a href="mailto:husseinalamutu@gmail.com"&gt;husseinalamutu@gmail.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>networking</category>
      <category>linux</category>
      <category>cloud</category>
    </item>
    <item>
      <title>The How, Why and When of Github Actions.</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Sat, 08 Apr 2023 11:50:23 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/the-how-why-and-when-of-github-actions-7lo</link>
      <guid>https://dev.to/husseinalamutu/the-how-why-and-when-of-github-actions-7lo</guid>
<description>&lt;p&gt;DevOps is a broad field, with what seems like a never-ending pool of tools to choose from.&lt;/p&gt;

&lt;p&gt;Github Actions is one of those tools. If you want to automate your code pipeline from integration to delivery without complicating things, Github Actions comes in handy.&lt;/p&gt;

&lt;p&gt;By the end of this article, you will understand why Github Actions is used in DevOps, when to use it, and how to get started with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In order to get the most out of this article, you need to have a basic understanding of &lt;a href="https://www.freecodecamp.org/news/introduction-to-git-and-github/amp/"&gt;Github&lt;/a&gt; and the &lt;a href="https://www.guru99.com/ci-cd-pipeline.html"&gt;continuous integration and continuous delivery (CI/CD)&lt;/a&gt; pipeline. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Github Actions?
&lt;/h2&gt;

&lt;p&gt;Github Actions is a CI/CD tool that allows you to automate your workflow by creating custom scripts that can be triggered by different events. These events can include things like code changes, pull requests, or other Github activities.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note: I will be using workflows a lot as we move along.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the context of Github Actions, a workflow refers to a set of automated steps or processes that are triggered by an event, such as a code push or pull request, in a Github repository. &lt;/p&gt;

&lt;p&gt;A workflow consists of one or more jobs, and each job can have one or more steps that define specific actions to be executed, such as running tests, building a project, or deploying code to a server. &lt;/p&gt;

&lt;p&gt;The workflow is defined in a YAML file that is committed to the repository, and Github automatically executes the workflow when the defined event, such as a code push or a merged pull request, occurs. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Github Actions was first launched in 2018, and since then it has been adopted by a large number of organizations and individuals. Github Actions is based on &lt;a href="https://www.redhat.com/en/topics/automation/what-is-yaml"&gt;YAML&lt;/a&gt; syntax and uses &lt;a href="https://www.docker.com/resources/what-container/"&gt;Docker containers&lt;/a&gt; to run your code in a variety of different environments(Linux, Windows or MacOS).&lt;/p&gt;

&lt;p&gt;Github Actions offers several advantages over other CI/CD tools. &lt;/p&gt;

&lt;p&gt;These advantages include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Integration with Github: Github Actions is built into Github, which is very popular among programmers as a code hosting platform for version control and collaboration, so it is easy to set up and use if you're already familiar with Github.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: Github Actions is suitable for small teams and can scale to meet the needs of large organisations. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility: Github Actions is highly customizable, which means you can create workflows that fit your specific needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Event-based workflows: With Github Actions, you can set up workflows to run when specific events occur, which can help you automate your workflow more efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time-based workflows: You can set up workflows to run at a specific time or on a recurring schedule, rather than being triggered by an event in a Github repository. This is useful for performing tasks like automated backups, running tests on a regular basis, or performing routine maintenance on a server or application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost-effectiveness: Github Actions can be a more cost-effective solution than other CI/CD tools, as it doesn't require any additional infrastructure or third-party services. Additionally, Github Actions provides a wide range of pre-built actions that you can use to get started quickly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
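&lt;p&gt;As an illustration of the time-based workflows mentioned above, here is a minimal sketch of a scheduled trigger. The cron expression, workflow name, and job are illustrative; Github evaluates the schedule in UTC.&lt;/p&gt;

```yaml
name: Nightly maintenance

on:
  schedule:
    # Runs every day at 02:00 UTC (cron fields: minute hour day month weekday)
    - cron: '0 2 * * *'

jobs:
  maintenance:
    runs-on: ubuntu-latest
    steps:
      - name: Run routine task
        run: echo "Running scheduled maintenance"
```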

&lt;h2&gt;
  
  
  When to use Github Actions?
&lt;/h2&gt;

&lt;p&gt;Github Actions is a powerful tool for automating continuous integration and deployment workflows, but it's not always the best choice for every situation. When considering whether to use Github Actions for your project, it's important to evaluate your needs and compare Github Actions to other CI/CD tools on the market.&lt;/p&gt;

&lt;p&gt;This section explores when it's appropriate to use Github Actions, how to evaluate whether it's the right choice for your project, and some examples of successful use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing between Github Actions and other CI/CD tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When it comes to choosing between Github Actions and other CI/CD tools, there are a few factors to consider. One key factor is the level of customization and control you need over your workflows. Github Actions offers a high level of customisation and control, allowing you to define complex workflows and use a wide range of third-party integrations. &lt;/p&gt;

&lt;p&gt;However, other tools may offer more specialised features or integrations that better meet your specific needs.&lt;/p&gt;

&lt;p&gt;Another factor to consider is the level of expertise and resources required to set up and maintain your CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;Github Actions is relatively easy to set up and use, especially if you're already familiar with Git and Github, while other tools may require more expertise or specialised knowledge to configure and maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluating when Github Actions is appropriate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To determine whether Github Actions is appropriate for your project, it's important to evaluate your needs and goals. Here are a few questions to consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;What level of automation and testing do you need for your project?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are your deployment goals, and how frequently do you need to deploy code?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What level of customisation and control do you need over your workflows?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What level of expertise and resources do you have available to set up and maintain your CI/CD pipeline?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you need a high level of automation and testing, frequent deployments, and a high level of customisation and control over your workflows, Github Actions may be a good choice. &lt;/p&gt;

&lt;p&gt;However, if you have more specialised needs or require a higher level of expertise to set up and maintain your pipeline, other tools may be a better fit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples of successful use cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are many successful use cases for Github Actions, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Automating testing and deployment for web applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building and deploying Docker images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running &lt;a href="https://www.linkedin.com/posts/hussein-alamutu_programming-python-javascript-activity-7049618058159529984-ANcq?utm_source=share&amp;amp;utm_medium=member_ios"&gt;linters&lt;/a&gt; and other code quality checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automating documentation generation and publishing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In each of these cases, Github Actions provides a powerful and flexible tool for automating complex workflows and ensuring that code is tested, built, and deployed reliably and efficiently.&lt;/p&gt;

&lt;p&gt;In conclusion, Github Actions can be a powerful tool for automating CI/CD workflows, but it's important to evaluate your needs and compare Github Actions to other tools before making a decision. &lt;/p&gt;

&lt;p&gt;By carefully considering your requirements and goals, you can choose the tool that best meets your needs and ensures that your development process is efficient, reliable, and scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The How of GitHub Actions
&lt;/h2&gt;

&lt;p&gt;This section explores how to set up Github Actions, including a step-by-step guide, an explanation of workflows and actions, and examples of YAML syntax.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide to Setting up Github Actions&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new repository in Github, or select an existing repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this step, I am using an existing repository (devops_hands-on).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zPx_CXEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujzpqyxcx9vv89njc7vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zPx_CXEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujzpqyxcx9vv89njc7vn.png" alt="Selecting an existing repository" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the "Actions" tab in your repository and click "set up a workflow yourself" to create a new YAML file for your workflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should see something similar to what's in the image below; Github will suggest some sample workflows depending on what's in your repository. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TogcUl_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z8iti5q5rwxl7wxh3ppm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TogcUl_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z8iti5q5rwxl7wxh3ppm.png" alt="Setting up a workflow on Github actions" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define your workflow by specifying the event that will trigger your workflow, the jobs that will be executed, and the steps that will be performed within each job.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is what the workflow YAML file looks like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8959o0zL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp9qru9qfcf2ycs4gg30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8959o0zL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp9qru9qfcf2ycs4gg30.png" alt="Creating a workflow" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Test your workflow by committing changes to your repository and observing the actions taken by Github.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This image shows what goes on underneath the specified workflow: anytime I push to my repo or merge a pull request, this workflow runs and performs the action "echo Hello, world!" along with the multi-line script defined in the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZqRsQhBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/837jqfk5kix70sd5vtrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZqRsQhBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/837jqfk5kix70sd5vtrh.png" alt="What the workflow does underneath" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining Workflows and Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;workflow&lt;/strong&gt; in Github Actions is a set of automated steps or processes that are triggered by an event, such as a code push or pull request, in a Github repository. &lt;/p&gt;

&lt;p&gt;A workflow can consist of one or more jobs, and each &lt;strong&gt;job&lt;/strong&gt; can have one or more steps that define specific actions to be executed, such as running tests, building a project, or deploying code to a server.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;action&lt;/strong&gt; is a specific step that is executed within a job in a workflow. Actions can be defined using pre-existing actions from the Github Marketplace, or by writing custom actions in Docker containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples of YAML Syntax&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are some examples of YAML syntax that you might use when defining your workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specifying a trigger for your workflow:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: [ main ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you push any changes to your main branch, your workflow fires up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defining a job that runs a command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Build project
        run: npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines a single job in a Github Actions workflow called "build". The job runs on the latest version of the Ubuntu operating system, as specified by the runs-on key.&lt;/p&gt;

&lt;p&gt;The job consists of three steps: &lt;/p&gt;

&lt;p&gt;The first step, named "Checkout code", uses the actions/checkout action to fetch the latest version of the repository code. This action is commonly used in Github Actions workflows as a first step to ensure that the code being built or tested is up-to-date.&lt;/p&gt;

&lt;p&gt;The second step, named "Install dependencies", runs the npm install command to install any dependencies required by the project. This step assumes that the project is a Node.js application and uses the Node Package Manager (npm) to install the dependencies.&lt;/p&gt;

&lt;p&gt;The third and final step, named "Build project", runs the npm run build command to build the project. This step assumes that the project has a build script defined in its package.json file, and that running npm run build will produce a compiled version of the project ready for deployment.&lt;/p&gt;

&lt;p&gt;Overall, this code defines a simple Github Actions job that checks out the latest code, installs dependencies, and builds the project. &lt;/p&gt;

&lt;p&gt;This is a common workflow for many types of projects and can serve as a starting point for more complex workflows that incorporate testing, deployment, and other tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defining a job that uses a pre-existing action:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Deploy to server
        uses: easingthemes/ssh-deploy@v2.1.1
        with:
          server: ${{ secrets.SERVER }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.KEY }}
          port: ${{ secrets.PORT }}
          source: "dist/"
          dest: "/var/www/my-app"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines a Github Actions job called "deploy" that runs on the latest version of the Ubuntu operating system.&lt;/p&gt;

&lt;p&gt;The job consists of two steps. &lt;/p&gt;

&lt;p&gt;The first step, named "Checkout code", uses the actions/checkout action to fetch the latest version of the repository code. This step is similar to the first step in the previous example, and ensures that the code being deployed is up-to-date.&lt;/p&gt;

&lt;p&gt;The second step, named "Deploy to server", uses the easingthemes/ssh-deploy action to deploy the code to a server. This action allows for secure SSH-based deployment of a project to a remote server. The action takes several parameters that are stored as secrets in Github, including the server's ip address, the username to log in with, and the path to the SSH key file.&lt;/p&gt;

&lt;p&gt;The action also specifies the source and destination paths for the deployment. In this case, it is deploying the contents of the "dist/" directory to the "/var/www/my-app" directory on the remote server.&lt;/p&gt;

&lt;p&gt;Overall, this code defines a simple Github Actions job that deploys a project to a remote server using SSH-based deployment. This is a common workflow for many types of projects that require deployment to a remote server, and can be modified to suit different deployment scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What a final workflow looks like&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final workflow for Github Actions will depend on the specific needs and goals of your project. &lt;/p&gt;

&lt;p&gt;However, to give you an idea of what a final workflow might look like, here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and Deploy

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Build project
        run: npm run build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Deploy to server
        run: ssh user@server "cd /var/www &amp;amp;&amp;amp; git pull"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow is triggered by a push to the main branch or a pull request on the main branch. &lt;/p&gt;

&lt;p&gt;It consists of two jobs: a build job that installs dependencies and builds the project, and a deploy job that deploys the project to a server. The deploy job has a dependency on the build job, meaning it won't be executed until the build job has completed successfully.&lt;/p&gt;

&lt;p&gt;In the deploy job, the workflow sets up an SSH connection to the server using a private key stored as a secret in Github. It then uses SSH to run a command on the server that pulls the latest changes from the Github repository.&lt;/p&gt;

&lt;p&gt;This is just one example of a Github Actions workflow, and yours will likely look different depending on your specific needs. &lt;/p&gt;

&lt;p&gt;However, by following the steps outlined in this article, you can create powerful and flexible automation for your Github repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Best practices for Github Actions
&lt;/h2&gt;

&lt;p&gt;Github Actions is a powerful tool that can help automate many aspects of software development, from continuous integration and deployment to testing and monitoring. &lt;/p&gt;

&lt;p&gt;However, with great power comes great responsibility, and it's important to follow best practices when writing Github Actions workflows to ensure that they are efficient, effective, and secure. &lt;/p&gt;

&lt;p&gt;This section covers some of the best practices for Github Actions that you should keep in mind when creating workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep workflows simple and focused.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the best practices for writing Github Actions workflows is to keep them simple and focused on a specific task. This means that each workflow should only do one thing, such as running tests or deploying code, and should not try to do too many things at once. This approach makes it easier to understand and maintain the workflow, and reduces the risk of errors or failures.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use caching to speed up workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Github Actions provides caching functionality that can be used to speed up workflows by caching files and dependencies between workflow runs. &lt;/p&gt;

&lt;p&gt;This can significantly reduce the time it takes to run a workflow, especially if the workflow requires downloading and installing dependencies or compiling code. To use caching, you can add a caching step to your workflow that specifies which files or directories to cache.&lt;/p&gt;
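&lt;p&gt;Here is a minimal sketch of such a caching step, using the official &lt;code&gt;actions/cache&lt;/code&gt; action; the cache path and key shown assume a Node.js project managed with npm.&lt;/p&gt;

```yaml
steps:
  - name: Checkout code
    uses: actions/checkout@v2
  - name: Cache npm dependencies
    uses: actions/cache@v3
    with:
      path: ~/.npm
      # The cache is invalidated whenever the lockfile changes
      key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
      restore-keys: |
        ${{ runner.os }}-npm-
  - name: Install dependencies
    run: npm install
```

&lt;p&gt;On a cache hit, npm resolves dependencies from the restored local cache instead of downloading them again.&lt;/p&gt;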

&lt;ul&gt;
&lt;li&gt;Use reusable actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Github Actions allows you to create and reuse actions, which are reusable steps that can be used in multiple workflows. &lt;/p&gt;

&lt;p&gt;Reusing actions can help simplify your workflows and make them more modular, as you can reuse the same action in multiple workflows rather than writing the same steps multiple times. You can create your own actions or use actions from the Github Marketplace.&lt;/p&gt;
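&lt;p&gt;For example, a reusable action can be defined as a composite action in its own repository. The sketch below assumes a Node.js build; the action name and steps are illustrative.&lt;/p&gt;

```yaml
# action.yml at the root of the action's repository
name: 'Install and build'
description: 'Installs dependencies and builds a Node.js project'
runs:
  using: 'composite'
  steps:
    - name: Install dependencies
      run: npm install
      shell: bash
    - name: Build project
      run: npm run build
      shell: bash
```

&lt;p&gt;Other workflows can then reference it with a step like &lt;code&gt;uses: your-org/your-action@v1&lt;/code&gt; (names illustrative) instead of repeating these steps everywhere.&lt;/p&gt;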

&lt;ul&gt;
&lt;li&gt;Keep secrets and sensitive information secure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Github Actions allows you to store secrets and sensitive information as encrypted variables, which can be used in your workflows. However, it's important to keep these secrets and sensitive information secure, as they can be accessed by anyone with access to your repository. &lt;/p&gt;

&lt;p&gt;To keep them secure, you should only grant access to authorised users, limit the scope of the secrets and sensitive information, and avoid storing them in plain text in your workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test and validate workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Github Actions workflows should be thoroughly tested and validated before they are deployed to production. This includes testing the workflows on different platforms and environments, validating the inputs and outputs of the workflows, and ensuring that the workflows are reliable and error-free. &lt;/p&gt;

&lt;p&gt;You can validate your workflows by triggering them on a test branch and inspecting the logs of each step before relying on them in production.&lt;/p&gt;

&lt;p&gt;In conclusion, Github Actions is a powerful tool that can help automate many aspects of software development, but it's important to follow best practices when writing workflows to ensure that they are efficient, effective, and secure. &lt;/p&gt;

&lt;p&gt;By keeping workflows simple and focused, using caching, reusing actions, keeping secrets and sensitive information secure, and testing and validating workflows, you can create robust and reliable workflows that help streamline your development process.&lt;/p&gt;

&lt;p&gt;Congratulations! You have reached the end of the guide. &lt;/p&gt;

&lt;p&gt;I’m currently in search of paid technical writing gigs related to cloud &amp;amp; devops, if you have one, &lt;a href="mailto:husseinalamutu@gmail.com"&gt;reach out&lt;/a&gt; or refer me.&lt;/p&gt;

&lt;p&gt;You can check out - &lt;a href="https://husseinalamutu.github.io"&gt;My portfolio&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>cloud</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Bash vs Python Scripting: A Simple Practical Guide</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Mon, 20 Mar 2023 16:14:00 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/bash-vs-python-scripting-a-simple-practical-guide-16in</link>
      <guid>https://dev.to/husseinalamutu/bash-vs-python-scripting-a-simple-practical-guide-16in</guid>
      <description>&lt;p&gt;&lt;strong&gt;Bash&lt;/strong&gt; and &lt;strong&gt;Python&lt;/strong&gt; are two popular scripting languages used for automation and system administration tasks. &lt;/p&gt;

&lt;p&gt;This article aims to provide a simple, practical guide for beginners to understand the differences between Bash and Python scripting and when to use each one. &lt;/p&gt;

&lt;p&gt;By the end of this article, readers will have a better understanding of the basics of Bash and Python scripting, as well as their strengths and weaknesses in different scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial you need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to a Linux machine (Ubuntu, CentOS, etc.) so you can run Bash commands in a terminal, since Bash is Unix-based. If you don’t have one, &lt;a href="https://www.javatpoint.com/git-bash" rel="noopener noreferrer"&gt;Git Bash&lt;/a&gt; or Windows Subsystem for Linux (&lt;a href="https://www.xda-developers.com/how-to-install-wsl-2-windows/" rel="noopener noreferrer"&gt;WSL&lt;/a&gt;) will do.&lt;/li&gt;
&lt;li&gt;A Python installation. Whether on Linux, Windows, or macOS, Python often comes pre-installed; you can confirm this by running &lt;code&gt;python --version&lt;/code&gt; on your system's command-line interface. If Python exists, you will get its version number; if not, you can install it by following this &lt;a href="https://python.land/installing-python" rel="noopener noreferrer"&gt;Python guide&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that being said, let’s dive into the guide properly. Fasten your seatbelt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6h43g0ke0fop8il6cco.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6h43g0ke0fop8il6cco.jpg" alt="Bash script been open on hussein alamutu’s ubuntu command line"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash Scripting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Bash &amp;amp; Bash Scripting?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bash is a command-line shell used on Linux, macOS, and other Unix-like operating systems, and Bash scripting is commonly used for automation, system administration, and software deployment tasks.&lt;/p&gt;

&lt;p&gt;Bash scripts are easy to write and execute, and they can perform complex operations using a few lines of code. Bash provides many built-in commands, such as "echo" (to print text), "cd" (to change directory), "ls" (to list files), and "grep" (to search for a pattern match), that can be used in scripts. &lt;/p&gt;

&lt;p&gt;Bash scripts can be used to manipulate files, perform backups, and configure system settings, among other tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basics of Bash Scripting
&lt;/h2&gt;

&lt;p&gt;Here's an explanation of the basics of Bash scripting:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Variables:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Bash, a variable is used to store a value, such as a number or a string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variables can be declared and assigned a value using the equals sign (=).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example, x=10 assigns the value 10 to the variable x.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
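&lt;p&gt;The points above can be sketched in a few lines (the variable names are my own, not from the article):&lt;/p&gt;

```shell
# Note: no spaces around '=' when assigning, and '$' to read a value back.
x=10
name="Hussein"
echo "x is $x and name is $name"   # prints: x is 10 and name is Hussein
```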

&lt;p&gt;&lt;strong&gt;2. Conditionals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Bash, conditionals are used to make decisions based on a certain condition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The if statement is used to check whether a condition is true, and the else statement is used to specify what to do if the condition is false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

x=10
if [ $x -gt 5 ]
then
    echo "x is greater than 5"
else
    echo "x is less than or equal to 5"
fi


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Loops:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Bash, loops are used to iterate over a sequence of values or to repeat a block of code a certain number of times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The for loop is used to iterate over a sequence of values, while the while loop is used to repeat a block of code as long as a certain condition is true.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Using a for loop to iterate over a list of values
fruits=("apple" "banana" "cherry")
for fruit in "${fruits[@]}"
do
    echo "$fruit"
done

# Using a while loop to repeat a block of code
x=0
while [ $x -lt 10 ]
do
    echo "$x"
    x=$((x+1))
done


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4. Functions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Bash, functions are used to encapsulate a block of code that can be called repeatedly with different inputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions are defined using the function keyword, and they can have inputs and outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Defining a function to calculate the area of a rectangle
function calculate_rectangle_area {
    width=$1
    height=$2
    area=$((width * height))
    echo $area
}

# Calling the function with different inputs
echo $(calculate_rectangle_area 3 4) # Output: 12
echo $(calculate_rectangle_area 5 6) # Output: 30


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are the basic building blocks of Bash scripting, and they can be combined to create more complex scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Bash Scripts
&lt;/h2&gt;

&lt;p&gt;Here are some examples of Bash scripts for common tasks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. File manipulation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A script that renames all files in a directory with a specific extension to have a new prefix.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A script that searches for a particular string in a file and replaces it with a new value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A script that compresses all files in a directory into a single archive file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check this github repo for sample scripts of the above &lt;a href="https://github.com/husseinalamutu/devops_hands-on/tree/main/bashScripting" rel="noopener noreferrer"&gt;bash scripting&lt;/a&gt;.&lt;/p&gt;
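&lt;p&gt;As a quick sketch of the first task, here is a minimal self-contained demo (the directory, file names, and "backup_" prefix are illustrative assumptions):&lt;/p&gt;

```shell
# Demo: rename every .txt file in demo_rename/ to carry a "backup_" prefix,
# leaving files with other extensions untouched.
mkdir -p demo_rename
touch demo_rename/notes.txt demo_rename/todo.txt demo_rename/image.png

for file in demo_rename/*.txt; do
  [ -e "$file" ] || continue                     # nothing matched the glob
  mv "$file" "demo_rename/backup_$(basename "$file")"
done

ls demo_rename
```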

&lt;p&gt;&lt;strong&gt;2. System administration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A script that backs up all files in a directory to a remote server using secure copy (SCP).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A script that monitors system logs for a particular error and sends an email alert when it occurs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A script that automates the installation of software packages and updates on multiple servers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the system administration part, here's an example of a Bash script that automates the installation of software packages and updates on multiple servers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#!/bin/bash

# List of servers to update
servers=("server1" "server2" "server3")

# Software packages to install
packages=("apache2" "mysql-server" "php")

# Update package lists on all servers
for server in "${servers[@]}"; do
  ssh "$server" "sudo apt-get update"
done

# Install packages on all servers
for server in "${servers[@]}"; do
  ssh "$server" "sudo apt-get install -y ${packages[*]}"
done


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this script, we first define a list of servers to update and a list of packages to install or update. We then loop through each server in the list and run the "apt-get update" command to update the system packages. We then loop through each package in the list and install or update it using the "apt-get install" command. &lt;/p&gt;

&lt;p&gt;The -y option is used with apt-get install to automatically answer "yes" to any prompts during the installation process.&lt;/p&gt;

&lt;p&gt;Note that this script assumes that you have SSH access to the servers and that you have sudo privileges to install software packages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65yzz29wngbhvgw8q1aj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65yzz29wngbhvgw8q1aj.jpeg" alt="Coding with a python book on the table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Python Scripting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Python &amp;amp; Python Scripting?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python is a general-purpose programming language used for a wide range of applications, including web development, data analysis, AI, and machine learning. Python provides a clear, concise syntax that is easy to read and write, and it has a large standard library and many third-party modules that can be used to perform complex operations.&lt;/p&gt;

&lt;p&gt;Python scripting is commonly used for automation, data processing, and scientific computing tasks. Python scripts can be used to scrape data from websites, process large datasets, and automate repetitive tasks, among other things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basics of Python Scripting
&lt;/h2&gt;

&lt;p&gt;Here's an explanation of the basics of Python scripting:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Variables:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Python, a variable is used to store a value, such as a number or a string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variables can be declared and assigned a value using the equals sign (=).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example, x = 10 assigns the value 10 to the variable x.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
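&lt;p&gt;The points above can be sketched in a few lines (the variable names are my own, not from the article):&lt;/p&gt;

```python
# Assignment with '='; the type (int, str) is inferred from the value.
x = 10
name = "Hussein"
print(f"x is {x} and name is {name}")  # prints: x is 10 and name is Hussein
```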

&lt;p&gt;&lt;strong&gt;2. Conditionals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Python, conditionals are used to make decisions based on a certain condition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The if statement is used to check whether a condition is true, and the else statement is used to specify what to do if the condition is false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

x = 10
if x &amp;gt; 5:
    print("x is greater than 5")
else:
    print("x is less than or equal to 5")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Loops:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Python, loops are used to iterate over a sequence of values, such as a list or a string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The for loop is used to iterate over a sequence of values, while the while loop is used to repeat a block of code as long as a certain condition is true.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Using a for loop to iterate over a list
fruits = ["apple", "banana", "cherry"]
for fruit in fruits:
    print(fruit)

# Using a while loop to repeat a block of code
x = 0
while x &amp;lt; 10:
    print(x)
    x += 1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4. Functions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Python, functions are used to encapsulate a block of code that can be called repeatedly with different inputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions are defined using the def keyword, and they can have inputs and outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Defining a function to calculate the area of a rectangle
def calculate_rectangle_area(width, height):
    area = width * height
    return area

# Calling the function with different inputs
print(calculate_rectangle_area(3, 4)) # Output: 12
print(calculate_rectangle_area(5, 6)) # Output: 30


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are the basic building blocks of Python scripting, and they can be combined to create more complex programs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python Modules &amp;amp; How to Use Them in Scripts
&lt;/h2&gt;

&lt;p&gt;Python modules are pre-written pieces of code that can be imported into a script to add functionality such as working with files, processing data, sending emails, and more. Here are some common Python modules and how to use them in scripts:&lt;/p&gt;

&lt;p&gt;1. &lt;code&gt;os&lt;/code&gt; module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This module provides a way to interact with the underlying operating system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions like &lt;code&gt;os.chdir()&lt;/code&gt; to change the current working directory, &lt;code&gt;os.mkdir()&lt;/code&gt; to create a new directory, and &lt;code&gt;os.path.exists()&lt;/code&gt; to check if a file or directory exists can be used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import os

# Change the current working directory
os.chdir('/path/to/new/directory')

# Create a new directory
os.mkdir('new_directory')

# Check if a file exists
if os.path.exists('/path/to/file'):
    print('File exists')
else:
    print('File does not exist')


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2. &lt;code&gt;datetime&lt;/code&gt; module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This module provides a way to work with dates and times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions like &lt;code&gt;datetime.datetime.now()&lt;/code&gt; to get the current date and time, &lt;code&gt;datetime.timedelta()&lt;/code&gt; to calculate the difference between two dates or times, and &lt;code&gt;datetime.datetime.strptime()&lt;/code&gt; to convert a string to a date or time object can be used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import datetime

# Get the current date and time
current_time = datetime.datetime.now()
print(current_time)

# Calculate the difference between two dates or times
time_diff = datetime.timedelta(days=1)
yesterday = current_time - time_diff
print(yesterday)

# Convert a string to a date or time object
date_string = '2023-03-20'
date_object = datetime.datetime.strptime(date_string, '%Y-%m-%d')
print(date_object)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3. &lt;code&gt;csv&lt;/code&gt; module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This module provides a way to read and write comma-separated values (CSV) files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions like &lt;code&gt;csv.reader()&lt;/code&gt; to read a CSV file, and &lt;code&gt;csv.writer()&lt;/code&gt; to write to a CSV file, can be used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import csv

# Read a CSV file
with open('data.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)

# Write to a CSV file
with open('output.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Age', 'City'])
    writer.writerow(['Alice', '25', 'New York'])
    writer.writerow(['Bob', '30', 'San Francisco'])



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4. &lt;code&gt;shutil&lt;/code&gt; module: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The shutil module provides a higher-level interface for working with files and directories than the os module. It provides functions for copying, moving, and deleting files and directories. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import os
import shutil

# Copy a file from one directory to another
shutil.copy("source/file.txt", "destination/file.txt")

# Move a file from one directory to another
shutil.move("source/file.txt", "destination/file.txt")

# Delete a file
os.remove("file.txt")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;5. &lt;code&gt;requests&lt;/code&gt; module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The requests module provides a way to send HTTP requests and handle responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can use it to download files, interact with web APIs, and scrape web pages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example usage:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import requests

# Download a file
url = "https://example.com/file.txt"
response = requests.get(url)
with open("file.txt", "wb") as file:
    file.write(response.content)

# Get data from a web API
url = "https://api.example.com/data"
response = requests.get(url, headers={"Authorization": "Bearer YOUR_TOKEN"})
data = response.json()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are just a few examples of common Python modules and their usage. There are many other modules available that can help you accomplish various tasks in your scripts. You can search for them in the Python Package Index (PyPI) or through the Python documentation. &lt;/p&gt;

&lt;h2&gt;
  
  
  More Python Scripts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Web Scraping&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Python is a popular language for web scraping, as it provides easy-to-use libraries like &lt;code&gt;BeautifulSoup&lt;/code&gt; and &lt;code&gt;requests&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here's an example script that scrapes the top headlines from the BBC News website:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import requests
from bs4 import BeautifulSoup

url = "https://www.bbc.com/news"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

headlines = soup.find_all("h3", class_="gs-c-promo-heading__title")
for headline in headlines:
    print(headline.text)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This script uses the requests library to send an HTTP GET request to the BBC News website and fetch the HTML content. It then uses the BeautifulSoup library to parse the HTML and extract the headlines using a CSS selector.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Data Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Python is a popular language for data analysis, with many powerful libraries such as &lt;code&gt;pandas&lt;/code&gt;, &lt;code&gt;numpy&lt;/code&gt;, and &lt;code&gt;matplotlib&lt;/code&gt; available for use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here's an example Python script for data analysis using the &lt;code&gt;pandas&lt;/code&gt; library:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import pandas as pd
import matplotlib.pyplot as plt

# Load data from a CSV file
data = pd.read_csv('data.csv')

# Print the first five rows of the data
print(data.head())

# Compute descriptive statistics
print(data.describe())

# Compute the correlation matrix
corr_matrix = data.corr()
print(corr_matrix)

# Plot a histogram of one of the variables
data['variable_name'].hist()

# Plot a scatter plot of two variables
data.plot.scatter(x='variable1', y='variable2')

# Group data by a categorical variable and compute summary statistics
grouped_data = data.groupby('category')['variable'].agg(['mean', 'std', 'count'])
print(grouped_data)

# Plot a bar chart of the means for each category
grouped_data['mean'].plot(kind='bar')

# Save a plot to a file
plt.savefig('output.png')


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This script loads data from a CSV file, computes descriptive statistics and a correlation matrix, plots a histogram and a scatter plot of two variables, groups data by a categorical variable and computes summary statistics, plots a bar chart of the means for each category, and saves a plot to a file.&lt;/p&gt;

&lt;p&gt;You can modify this script to analyse your own data by replacing the file name, variable names, and categorical variable with the appropriate names for your data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash vs Python: A Fair Comparison
&lt;/h2&gt;

&lt;p&gt;This section covers various aspects that should be considered while comparing Bash and Python scripting languages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A. Syntax and readability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The syntax of the Bash scripting language is more complex and harder to read and understand than Python's. Bash uses various special characters and symbols to represent different actions, which can make scripts harder to read and maintain. In contrast, Python's syntax is more straightforward, using indentation to denote code blocks, with a clean, easy-to-read style that is accessible to beginners and experts alike.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;B. Functionality and capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python provides a wide range of libraries and modules that allow for a wide range of functionalities, such as data analysis, web development, artificial intelligence, and more. On the other hand, Bash scripting is mostly used for automating system-level tasks and commands, such as file management and system administration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;C. Performance and execution time:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For short system-level tasks, Bash scripts often start and finish faster than Python scripts: the shell is typically already running, and most of the work is done by calling native system utilities directly. Python, on the other hand, has to start its interpreter and import any required modules before doing useful work, which adds overhead to short scripts. For longer-running or computation-heavy tasks, however, Python frequently performs better, and it has several modules that can be used to optimize code and improve performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;D. Portability and compatibility:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bash is available on most Unix-based systems and can be executed on any system that supports the Bash shell. In contrast, Python scripts may require the installation of Python and its dependencies on each system where the script will be executed, making it less portable. Additionally, Bash scripts may be more compatible with other shell commands and utilities used in the Unix shell environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E. Security and safety:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python provides better error handling and a more secure execution environment than Bash. Bash scripts are more susceptible to shell injection attacks and other security vulnerabilities. Python's built-in security features, such as the capability to handle exceptions, prevent code injection, and other security features make it a more secure option for scripting tasks.&lt;/p&gt;
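&lt;p&gt;To make the injection risk concrete, here is a small sketch of my own (the hostile string is made up); passing arguments as a list keeps user input from ever being parsed by a shell:&lt;/p&gt;

```python
import subprocess

user_input = "notes.txt; rm -rf ~"  # hostile input a user might supply

# Unsafe pattern (left commented out): with shell=True the whole string is
# parsed by the shell, so the "; rm -rf ~" part would actually execute.
# subprocess.run(f"ls {user_input}", shell=True)

# Safer: the argument list goes straight to the program, never through a
# shell, so the hostile string is treated as literal text.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout.strip())  # the string is printed literally; nothing runs
```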

&lt;p&gt;&lt;strong&gt;F. Maintenance and scalability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python's object-oriented approach, modular structure, and extensive libraries make it easier to maintain and scale than Bash. In contrast, Bash scripts can be more difficult to maintain as they tend to be more complex and lack the object-oriented structure that Python provides. Python scripts are more scalable as they can be easily extended and modified using libraries and modules.&lt;/p&gt;

&lt;p&gt;Overall, these points highlight the key differences between Bash and Python scripting languages, making it easier for you to choose the appropriate language for your specific use case.&lt;/p&gt;

&lt;p&gt;Congratulations! You have reached the end of the guide. Also, I’m currently in search of paid technical writing gigs related to cloud &amp;amp; devops; if you have one, reach out or refer me.&lt;/p&gt;

&lt;p&gt;You can check out - &lt;a href="https://husseinalamutu.github.io"&gt;My portfolio&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>bash</category>
      <category>python</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Serverless and its not so Server-less Nature</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Mon, 08 Aug 2022 16:23:00 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/serverless-and-its-not-so-server-less-nature-4n4e</link>
      <guid>https://dev.to/husseinalamutu/serverless-and-its-not-so-server-less-nature-4n4e</guid>
      <description>&lt;p&gt;If you have ever wondered what serverless means and why it was built, then you are at the right place, in this article, I would share a bit of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what serverless is (with a little dive into history),&lt;/li&gt;
&lt;li&gt;and, my favorite part - debunking some myths that have to do with serverless.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What you should know&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While this article might not be so technical, it requires you to have at least background knowledge of cloud computing.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Serverless?
&lt;/h3&gt;

&lt;p&gt;Serverless has been gaining increasing interest over time, and it seems like a very promising approach. It's not a single technology or architecture; serverless is a bunch of solutions that promise great benefits in administering your application, e.g. a low entry barrier, cost efficiency, high availability, and scalability.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Serverless came to be, the evolution
&lt;/h3&gt;

&lt;p&gt;Some people claim that serverless is the next stage of evolution in cloud computing. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How true is that?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To know the answer to this, I need to take you back to the era before the great wall of china was built, I'm kidding. &lt;/p&gt;

&lt;p&gt;Before the era of the cloud, to build applications you had to think about so many concerns (e.g. capacity management, installing hardware, managing servers, physical security, software updates, virtualization, etc.), and all of these distracted you from the software development goal itself.&lt;/p&gt;

&lt;p&gt;In 2006, Amazon launched Infrastructure as a Service, with which, using just a few clicks and an API request, I can provision a Linux-based server somewhere in Northern California while eating Jollof rice in my hometown in Nigeria. Yeah, it's that easy. Now all I have to focus on is software provisioning, OS management, scaling, etc.&lt;/p&gt;

&lt;p&gt;In just a small amount of time, cloud computing became very popular and everyone wanted to hop on the train, from Google and Microsoft creating their own IaaS service to traditional hosting companies also leveraging the cloud, the software development landscape started to change drastically.&lt;/p&gt;

&lt;p&gt;Around 2013 things got even better: containers were introduced, so let's call it Container as a Service (CaaS). Containers helped DevOps teams accelerate the software development, test, and production cycle, allowing us to package our applications in containers and run them on an &lt;a href="https://dev.to/husseinalamutu/a-simple-guide-to-container-orchestration-with-kubernetes-3439"&gt;orchestration platform like Kubernetes&lt;/a&gt;. CaaS reduced the amount of time spent on infrastructure, but not entirely: we still had to provision infrastructure, manage auto-scaling, monitor and log events, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Serverless was created.
&lt;/h3&gt;

&lt;p&gt;With all the immense benefits the cloud has to offer, serverless took it even further. Serverless was introduced around 2014 and was mostly pioneered by Amazon. &lt;/p&gt;

&lt;p&gt;The serverless promise is that you don't have to think about managing infrastructure: just define your desired resources using a FaaS (Function as a Service) offering such as Azure Functions, and your service provider (e.g. Azure) will take it up from there.&lt;/p&gt;

&lt;p&gt;However, while serverless has great benefits, it also has some hurdles: new technologies, tools, and architectural patterns to be learnt. &lt;/p&gt;

&lt;p&gt;Some companies that currently use serverless are Coca-Cola, Netflix, CodePen, Nordstrom, etc. Coca-Cola was able to cut the cost of processing vending machine transactions by 99% when it switched to serverless. As I said, it's a promising tech.&lt;/p&gt;

&lt;p&gt;Now to my favourite part...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3e2he19f7o2lqowlss5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3e2he19f7o2lqowlss5.jpg" alt="Busting serverless myth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Some Myths about serverless.
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Serverless and it's not so serverless tech&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Myth: for the language gurus reading this, linguistically, serverless means without servers. Well, that name doesn't tell the full story. It is just like when cloud computing was first introduced and we asked questions like: are there computers or servers in the cloud? &lt;/p&gt;

&lt;p&gt;The answer is no: the servers are on earth, but they can be accessed from anywhere.&lt;/p&gt;

&lt;p&gt;Truth: there are still servers in the background running the backend services. Serverless just means you can focus on your work and let a cloud provider handle provisioning the servers and infrastructure so they run seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why was it named serverless then?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Personal opinion: buzzwords like these (cloud, serverless) are more of a marketing tactic. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. It takes more than one click&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Myth: Deploy your code and get it running with just one click.&lt;/p&gt;

&lt;p&gt;Truth: It takes several processes to get your application to run on a serverless platform. There are technologies you still need to master, configurations and designs you need to make.&lt;/p&gt;

&lt;p&gt;Using &lt;a href="https://azure.microsoft.com/en-us/solutions/serverless/" rel="noopener noreferrer"&gt;Azure&lt;/a&gt; as a case study. With Azure Functions, you can provision a fully managed compute platform for processing data, integrating systems, and building simple APIs and microservices. These automate administrative tasks from development through to deployment and maintenance.&lt;/p&gt;

&lt;p&gt;An Azure Functions app is described by a file-like manual that tells Azure what services should be provisioned, i.e. the number of servers, CPU size, the database to be integrated, event handlers or triggers, etc.&lt;/p&gt;
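As a sketch of what such a "manual" looks like in practice, here is a minimal function.json binding file for an HTTP-triggered Azure Function. The shape follows the Azure Functions bindings schema; the binding names and auth level shown are illustrative choices, not taken from a real app:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Azure reads this file to know which trigger should wake the function up and where its response goes; the servers behind it never appear in the file at all, which is the whole point.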

&lt;p&gt;Serverless reduces a lot of relative friction for the developer in terms of going through required steps that add no direct value to their workflow, like building a container for every new code release or monitoring, and logging.&lt;/p&gt;

&lt;p&gt;If you would like me to write an ultimate guide on deploying with serverless or functions, let me know in the comment section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Serverless is suitable for all applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One major drawback of serverless computing is its non-persistent state or response latency.&lt;/p&gt;

&lt;p&gt;Response latency is the time between when a request is initiated and when a program responds. In the cloud, serverless computing is not continually running; it gets powered down between requests. &lt;/p&gt;

&lt;p&gt;Learn more about serverless response latency and its work-around &lt;a href="https://dev.to/aws-builders/the-hidden-serverless-latency-58p4"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Phew!&lt;/p&gt;

&lt;p&gt;Those are some myths debunked. Hopefully, now you have a better understanding of what serverless is and its historical background.&lt;/p&gt;

&lt;p&gt;Congratulations! You have gotten to the end of the guide. Also, I’m currently in search of paid technical writing gigs related to cloud &amp;amp; devops, if you got one, reach out or refer me.&lt;/p&gt;

&lt;p&gt;You can check out - &lt;a href="//husseinalamutu.github.io"&gt;My portfolio.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>devops</category>
      <category>azure</category>
    </item>
    <item>
      <title>A Simple Guide to Container Orchestration with Kubernetes</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Wed, 27 Jul 2022 08:55:38 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/a-simple-guide-to-container-orchestration-with-kubernetes-3439</link>
      <guid>https://dev.to/husseinalamutu/a-simple-guide-to-container-orchestration-with-kubernetes-3439</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;You'll get the most out of this guide if your desire is to learn container orchestration with Kubernetes.&lt;/p&gt;

&lt;p&gt;The world of container orchestration is complex and ever-changing, but you can easily understand the basics, and that basic knowledge can make a big difference. There are also free Kubernetes courses widely available on the web, including guides like this.&lt;/p&gt;

&lt;p&gt;Combine the information gathered from all these guides and courses with some practice and you are well on your way to becoming proficient in container orchestration.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you'll learn
&lt;/h3&gt;

&lt;p&gt;This guide focuses on an intermediate topic in Linux system administration. After reading through, you are expected to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;    What is container orchestration&lt;/li&gt;
&lt;li&gt;    What is Kubernetes and&lt;/li&gt;
&lt;li&gt;    Container orchestration with Kubernetes using AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The basics of container orchestration
&lt;/h3&gt;

&lt;p&gt;Container orchestration is part of a CI/CD pipeline in DevOps; it is used to automate the management, deployment, scaling and networking of containers. &lt;/p&gt;

&lt;p&gt;In the CI/CD process, there are several container orchestration platforms; the two major ones are Kubernetes and Docker Swarm. This guide will focus on Kubernetes.&lt;/p&gt;

&lt;p&gt;If you are familiar with CI/CD or DevOps, you should already know what containers are: lightweight, isolated environments that package an application together with its dependencies. They can be used to run a small microservice application and even large applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Orchestration?&lt;/strong&gt;&lt;br&gt;
Orchestration is the automated management of the lifecycle of our application&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orchestration helps us automate our deployment process for continuous deployment&lt;/li&gt;
&lt;li&gt;Orchestration helps us handle complicated workflows in deploying our application&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Understanding how containers work
&lt;/h3&gt;

&lt;p&gt;Containers are like standardized packaging for microservices applications, bundling all needed application code and dependencies.&lt;/p&gt;

&lt;p&gt;Prior to containerization, the traditional way of building and deploying code was to run your software on a physical server, where you had to install or use an existing operating system and also install the software's dependencies.&lt;/p&gt;

&lt;p&gt;Meanwhile, containers are similar to the virtual machines (VMs) you create on your local machine when building applications; with a VM you are able to create a replica of your local machine, which allows you to run different versions of software dependencies. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5eceq0kg3tjt78mq63k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5eceq0kg3tjt78mq63k.png" alt="Virtualization vs Containerization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In simpler words, using virtual machines, I am able to build two different applications that require two different versions of, let's say, Node, i.e. Node v17.2.0 and Node v18.6.0. If you build the two apps directly on your local machine, it's impossible to use two different Node versions at once.&lt;/p&gt;

&lt;p&gt;Now, the best part, while containers are similar to VM, containers are way cooler. Containerization enables you to deploy multiple applications using the same operating system on a single virtual machine or server.&lt;/p&gt;
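The two-Node-versions scenario above can be sketched with one Dockerfile per application, each pinning its own runtime. The Node image tags are real Docker Hub tags; the file layout (`package.json`, `index.js`) is illustrative:

```dockerfile
# App A pins Node 17.2.0; App B's Dockerfile would instead start FROM node:18.6.0-alpine.
FROM node:17.2.0-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]
```

Both containers can then run side by side on the same host, each with its own Node version, which is exactly what a bare local machine cannot do.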
&lt;h3&gt;
  
  
  What is Kubernetes?
&lt;/h3&gt;

&lt;p&gt;Kubernetes is a container orchestration system packed with features for automating an application’s deployment; it easily scales applications and ships new code.&lt;/p&gt;

&lt;p&gt;Well, read what &lt;a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="noopener noreferrer"&gt;Kubernetes' docs&lt;/a&gt; have to say&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Kubernetes consists of two major components for the continuous deployment process, pods and services. &lt;/p&gt;

&lt;p&gt;Pods are abstractions of one or more containers and are also ephemeral. It's not uncommon for a deployment to involve a few containers deployed together, hence the grouping into pods.&lt;/p&gt;

&lt;p&gt;Services are an abstraction of a set of pods to expose them through a network. Applications are often deployed with multiple replicas to help with load balancing and horizontal scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load Balancing: handling traffic by distributing it across different endpoints&lt;/li&gt;
&lt;li&gt;Horizontal Scaling: handling increased traffic by creating additional replicas so that traffic can be divided across the replicas&lt;/li&gt;
&lt;li&gt;Replica: a redundant copy of a resource often used for backups or load balancing&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  A practical overview
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
This practical aspect assumes you have a previous knowledge on creating images and containers with docker and basic knowledge on cloud computing with AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will need&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A command line interface&lt;/li&gt;
&lt;li&gt;A docker hub account&lt;/li&gt;
&lt;li&gt;A docker image and container&lt;/li&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Time to get dirty, wear your gloves, get your tools ready, let's get started.&lt;/p&gt;
&lt;h3&gt;
  
  
  Installing Kubernetes
&lt;/h3&gt;

&lt;p&gt;There are many services that can be used to set up Kubernetes, but this guide will focus on using AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why AWS?&lt;/strong&gt;&lt;br&gt;
Setting up Kubernetes from scratch is a bit complicated, but AWS makes it easier. This guide will walk you through how to create a Kubernetes cluster using AWS and how to create a node group on AWS.&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster consists of a set of nodes that run containerized applications. When you deploy Kubernetes you get a cluster, and every cluster has at least one worker node.&lt;/li&gt;
&lt;li&gt;Nodes are the worker machines that host pods; they are managed by a control plane, which handles scheduling of the pods across the nodes in the cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Getting our container ready
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Underlying Process&lt;/strong&gt;&lt;br&gt;
Docker images are loaded from the container registry into Kubernetes pods. Access to the pods are exposed to consumers through a service.&lt;/p&gt;

&lt;p&gt;To deploy an application on Kubernetes, there are two basic configuration files that need to be created. They can be written in either YAML or JSON, but YAML is the most used. YAML is a great way to define our configurations because it is simple and efficient.&lt;/p&gt;

&lt;p&gt;The first configuration is the "deployment.yaml" file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: simple-node
        image: YOUR_DOCKER_HUB/simple-node
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the second one "service.yaml" file;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this file is available the next thing to do is to create a Kubernetes cluster using Elastic Kubernetes Service on AWS. Once logged into your AWS console, you can easily find it by typing into the search bar &lt;strong&gt;eks&lt;/strong&gt;, then click on it.&lt;/p&gt;

&lt;p&gt;The next step is to create an EKS cluster, navigate to the cluster section in the left panel, click on create cluster, &lt;/p&gt;

&lt;p&gt;But before that, open a new tab, search for IAM, now...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create EKS Cluster IAM role:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the Roles tab in the Identity and Access Management (IAM) dashboard in the AWS Console&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Create role&lt;/li&gt;
&lt;li&gt;    Select type of trusted entity:&lt;/li&gt;
&lt;li&gt;        Choose EKS as the use case&lt;/li&gt;
&lt;li&gt;        Select EKS-Cluster&lt;/li&gt;
&lt;li&gt;        Click Next: Permissions&lt;/li&gt;
&lt;li&gt;    Click Next: Tags&lt;/li&gt;
&lt;li&gt;    Click Next: Review&lt;/li&gt;
&lt;li&gt;    Give the role a name, e.g. EKSClusterRole&lt;/li&gt;
&lt;li&gt;    Click Create role&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create an SSH Pair&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Key pairs tab in the EC2 Dashboard&lt;/li&gt;
&lt;li&gt;    Click Create key pair&lt;/li&gt;
&lt;li&gt;        Give the key pair a name, e.g. mykeypair&lt;/li&gt;
&lt;li&gt;        Select RSA and .pem&lt;/li&gt;
&lt;li&gt;    Click Create key pair&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create an EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Clusters tab in Amazon EKS dashboard in the AWS Console&lt;/li&gt;
&lt;li&gt;Click Create cluster&lt;/li&gt;
&lt;li&gt;Specify: a unique Name (e.g. MyEKSCluster), Kubernetes Version (e.g. 1.21), Cluster Service Role (select the role you created above, e.g. EKSClusterRole)&lt;/li&gt;
&lt;li&gt;Click Next&lt;/li&gt;
&lt;li&gt;In Specify networking look for Cluster endpoint access, click the Public radio button&lt;/li&gt;
&lt;li&gt;Click Next and Next&lt;/li&gt;
&lt;li&gt;In Review and create, click Create&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It may take 5-15 minutes for the EKS cluster to be created.&lt;/p&gt;

&lt;p&gt;Troubleshooting: If you get a message like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot create cluster the targeted availability zone does not currently have sufficient capacity to support the cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;choose another availability zone and try again. You can set the availability zone in the upper right corner of your AWS console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create a Node Group&lt;/strong&gt;&lt;br&gt;
Now, go back to the opened IAM tab,&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Create EKS Cluster Node IAM role&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the IAM Roles tab, click Create role&lt;/li&gt;
&lt;li&gt;Select type of trusted entity:&lt;/li&gt;
&lt;li&gt;Choose EC2 as the use case&lt;/li&gt;
&lt;li&gt;Select EC2&lt;/li&gt;
&lt;li&gt;Click Next: Permissions&lt;/li&gt;
&lt;li&gt;In Attach permissions policies, search for each of the following and check the box to the left of the policy to attach it to the role: "AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy".&lt;/li&gt;
&lt;li&gt;Click Next: Tags&lt;/li&gt;
&lt;li&gt;Click Next: Review&lt;/li&gt;
&lt;li&gt;Give the role a name, e.g. NodeRole&lt;/li&gt;
&lt;li&gt;Click Create role&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, go back to the Cluster tab...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the node&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the Compute tab in the newly-created cluster&lt;/li&gt;
&lt;li&gt;Click Add Node Group&lt;/li&gt;
&lt;li&gt;Specify: a unique Name (e.g. MyNodeGroup)&lt;/li&gt;
&lt;li&gt;Cluster Service Role (select the role you created above, e.g. NodeRole)&lt;/li&gt;
&lt;li&gt;Create and specify SSH key for node group&lt;/li&gt;
&lt;li&gt;In Node Group compute configuration, set instance type to t3.micro and disk size to 4* to minimize costs&lt;/li&gt;
&lt;li&gt;In Node Group scaling configuration, set the number of nodes to 2&lt;/li&gt;
&lt;li&gt;Click Next&lt;/li&gt;
&lt;li&gt;In Node Group network configuration, toggle on Configure SSH access to nodes&lt;/li&gt;
&lt;li&gt;Select the EC2 pair created above (e.g. mykeypair)&lt;/li&gt;
&lt;li&gt;Select All&lt;/li&gt;
&lt;li&gt;Click Next&lt;/li&gt;
&lt;li&gt;Review the configuration and click "Create"&lt;/li&gt;
&lt;/ol&gt;
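For readers who prefer the command line over the console, the cluster and node group above can also be described declaratively and created with &lt;code&gt;eksctl create cluster -f cluster.yaml&lt;/code&gt;. This is a sketch under the eksctl.io/v1alpha5 config schema; the names mirror the console steps above and the region is an assumed placeholder:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: MyEKSCluster
  region: us-east-1   # pick the region you used in the console
  version: "1.21"
nodeGroups:
  - name: MyNodeGroup
    instanceType: t3.micro
    desiredCapacity: 2        # matches the "number of nodes" scaling setting
    ssh:
      allow: true
      publicKeyName: mykeypair
```

eksctl creates the IAM roles, VPC networking and node group for you, which replaces most of the manual clicking described above.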

&lt;p&gt;At this point, we have a Kubernetes cluster set up and understand how YAML files can be created to handle the deployment of pods and expose them to consumers. &lt;/p&gt;

&lt;p&gt;Moving forward, the Kubernetes command-line tool (kubectl), will be used to interact with our cluster. The YAML files that we created will be loaded through this tool.&lt;/p&gt;
&lt;h3&gt;
  
  
  Interacting With Your Cluster
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" rel="noopener noreferrer"&gt;Install kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Set up &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html" rel="noopener noreferrer"&gt;aws-iam-authenticator&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Set up &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="noopener noreferrer"&gt;kubeconfig&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The next step is to load the Infrastructure as Code (IaC) configurations.&lt;/p&gt;

&lt;p&gt;The YAML files to load are the deployment.yaml and service.yaml files. Load them into your Amazon EKS cluster using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 1: Deploy resources&lt;/p&gt;

&lt;p&gt;Send YAML files to Kubernetes to create resources. This will create the number of replicas of the specified image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and create the service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f service.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Confirm deployment&lt;/p&gt;

&lt;p&gt;Verify that the resources have been created:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and check to see that they were created correctly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe services&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get more metadata about the cluster, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl cluster-info dump&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;By loading these configuration files to the Kubernetes cluster, you have set up Kubernetes to pull your created Docker image from your DockerHub's account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you ran into some issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check to see if the node group was successfully created in your EKS cluster. If it wasn't, Kubernetes won't have enough pods set up.&lt;/li&gt;
&lt;li&gt;If you get an error message about not being able to pull your Docker images, confirm that your Docker Hub repo is set to public.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What Next?
&lt;/h3&gt;

&lt;p&gt;The next step is to secure and tune Kubernetes services for production. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing Cost&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can start with &lt;strong&gt;configuring the clusters&lt;/strong&gt; to greatly reduce costs (i.e. specifying the number of replicas to be created and minimizing the resources used, such as CPU).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Consciousness&lt;/strong&gt;&lt;br&gt;
Make sure your Kubernetes clusters are secured from those with malicious intent; you can do that by configuring who has access to the Kubernetes pods and services.&lt;/p&gt;

&lt;p&gt;Applications deployed for production differ greatly from the ones for development. In production, the application is no longer running in an isolated environment. It has to be configured with least-privilege access so that only expected traffic can reach it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preparing for Scaling and Availability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the Kubernetes service is able to handle the number/size of user requests and the application is responsive i.e. able to be used when needed.&lt;/p&gt;

&lt;p&gt;One way to ascertain this prior to releasing your application to a production environment is to use a load testing mechanism, which simulates a large number of requests to our application and helps us set a baseline understanding of its limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling Backend Requests with Reverse Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A reverse proxy is a single interface that forwards requests from the frontend (i.e. the user of our application) and appears to the user as the origin of the responses. &lt;/p&gt;

&lt;p&gt;An API gateway functions as a reverse proxy that accepts API requests from users, fetches the requested services and returns the right result. &lt;/p&gt;

&lt;p&gt;Nginx is a web server that can be used as a reverse proxy, configurations can be specified with an &lt;code&gt;nginx.conf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing the Kubernetes Cluster using Reverse Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, check the pods to see what is running, using:&lt;br&gt;
&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, check the services to find the entry point for accessing the pod:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe services&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the output you will find the service Name (i.e. alamz-app-svc) and Type (ClusterIP), which means that the service is only accessible within the cluster.&lt;/p&gt;

&lt;p&gt;Now, create an &lt;strong&gt;nginx.conf&lt;/strong&gt; file. A sample nginx.conf file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;events {
}
http {
    server {
        listen 8080;
        location /api/ {
            proxy_pass http://alamz-app-svc:8080/;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Nginx service listens for requests at port 8080. Any requests with endpoints prefixed with /api/ will be redirected to the Kubernetes service alamz-app-svc. alamz-app-svc is a name that our Kubernetes cluster recognizes internally.&lt;/p&gt;

&lt;p&gt;Put this in your Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:alpine

COPY nginx.conf /etc/nginx/nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;nginx.conf&lt;/strong&gt; sets up the service to listen for requests that come in on port 8080 and forward any requests to the API endpoint on to the &lt;code&gt;alamz-app-svc&lt;/code&gt; service in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the YAML files for the Reverse Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The setup is similar to the previous YAML files created in this guide. The basic functionality is to create a single pod named reverseproxy and configure it with resource limits.&lt;/p&gt;

&lt;p&gt;Your "deployment.yaml" file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: reverseproxy
  name: reverseproxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: reverseproxy
    spec:
      containers:
      - image: YOUR_DOCKER_HUB/simple-reverse-proxy
        name: reverseproxy
        imagePullPolicy: Always          
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "1024Mi"
            cpu: "500m"       
        ports:
        - containerPort: 8080
      restartPolicy: Always

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your "service.yaml" file should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  labels:
    service: reverseproxy
  name: reverseproxy-svc
spec:
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    service: reverseproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deploying Reverse Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, we are ready to deploy our reverse proxy. Deploy the reverse proxy using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f reverseproxy_deployment.yaml
kubectl apply -f reverseproxy_service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kubectl is the command line tool used to interact with our Kubernetes cluster, and the YAML files are the Infrastructure as Code (IaC) that specify the configuration for our reverse proxy. &lt;/p&gt;

&lt;h3&gt;
  
  
  Gracias Amigos
&lt;/h3&gt;

&lt;p&gt;Congrats! If you have carefully followed up to this point, gracias amigo! While I am fully aware this guide isn't an all-inclusive one, I made sure to point you in the right direction, where you can get the information that wasn't included.&lt;/p&gt;

&lt;p&gt;Meanwhile, I am putting this guide out here to serve as the MVP(minimum viable product)... what? Is this now a product management class? Well.. No. I am just a Cloud DevOps Engineer with a Product Design background that somehow fell in love with writing. Okay, enough of me. &lt;/p&gt;

&lt;p&gt;Let me cut to the chase: if there is anything you want me to go deeper on, or any information you feel I skipped, please make sure to inform me, so I can include it in the next release. &lt;/p&gt;

&lt;p&gt;Okay, that's it fellow techies. See Ya! Ehhmm, one sec, to supplement this guide I will write a short article that talks about securing clusters and containers, and resource quota management.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>Linux: Managing RPM Packages with YUM</title>
      <dc:creator>Hussein Alamutu</dc:creator>
      <pubDate>Tue, 19 Jul 2022 20:21:23 +0000</pubDate>
      <link>https://dev.to/husseinalamutu/linux-managing-rpm-packages-with-yum-406c</link>
      <guid>https://dev.to/husseinalamutu/linux-managing-rpm-packages-with-yum-406c</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Package managers for Linux are basically tools or software applications that allow users to install, uninstall, update, configure and manage software packages on Linux. There are various &lt;a href="https://www.digitalocean.com/community/tutorials/package-management-basics-apt-yum-dnf-pkg"&gt;package managers&lt;/a&gt; for Linux. However, this article will focus on YUM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Target audience
&lt;/h3&gt;

&lt;p&gt;This article assumes you have basic or intermediate knowledge of &lt;a href="https://www.geeksforgeeks.org/interesting-facts-about-linux/"&gt;Linux&lt;/a&gt;, &lt;a href="https://linuxize.com/post/rpm-command-in-linux/"&gt;RPM&lt;/a&gt; and/or navigating the &lt;a href="https://ubuntu.com/tutorials/command-line-for-beginners"&gt;Linux CLI&lt;/a&gt;. If you have no previous knowledge of these topics, I recommend you read about them before reading this.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you'll learn
&lt;/h3&gt;

&lt;p&gt;This article focuses on an intermediate topic in Linux system administration. After reading through, you are expected to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is YUM and how to install it&lt;/li&gt;
&lt;li&gt;Difference between YUM and RPM&lt;/li&gt;
&lt;li&gt;Managing software with the YUM command&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What you'll need
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A computer running Fedora or some other version of Red Hat-based Linux&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is YUM and how to install it
&lt;/h3&gt;

&lt;p&gt;The Yellowdog Updater, Modified (YUM) is a command line package management utility for Linux operating systems; it is used for managing Linux RPM software packages. &lt;/p&gt;

&lt;p&gt;While there are differences between YUM and RPM, YUM still uses the RPM package format.&lt;/p&gt;

&lt;p&gt;Look at what Wikipedia has to say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Under the hood, YUM depends on RPM, which is a packaging standard for digital distribution of software, which automatically uses hashes and digital signatures to verify the authorship and integrity of said software.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;YUM evolved from the Yellowdog Updater (YUP). YUP was created between 1999 and 2001 to serve as a back-end engine for Yellow Dog Linux's graphical installer. &lt;/p&gt;

&lt;p&gt;Now that you know a little history of YUM, let's get down to using it. Fedora should come with YUM pre-installed as a default package; run the following command to confirm whether YUM is installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;which yum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but just in case you don't have YUM installed on your system, you can install it with...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install &lt;/span&gt;yum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the previous command again to confirm that yum is installed. Before diving deeper into using the YUM package manager, get a grasp of the similarities and differences between YUM and RPM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Difference between YUM and RPM
&lt;/h3&gt;

&lt;p&gt;Both YUM and RPM are package managers. The biggest drawback of RPM is that it cannot resolve package dependencies; for this, among many other reasons, YUM was created. &lt;/p&gt;

&lt;p&gt;It differs from RPM in various ways, to mention a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YUM resolves package dependencies automatically &lt;/li&gt;
&lt;li&gt;It can install multiple versions of a package&lt;/li&gt;
&lt;li&gt;It automatically upgrades packages&lt;/li&gt;
&lt;li&gt;With YUM, you can go back to previous versions of a package&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Package management with YUM
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;yum help&lt;/code&gt; - Displays list of YUM commands and options&lt;br&gt;
&lt;code&gt;yum install &amp;lt;package_name&amp;gt;&lt;/code&gt; - Installs whatever package name was given in the command.&lt;br&gt;
&lt;code&gt;yum update&lt;/code&gt; - Updates the package&lt;br&gt;
&lt;code&gt;yum downgrade&lt;/code&gt; - Returns the package to an earlier version&lt;br&gt;
&lt;code&gt;yum remove&lt;/code&gt; - Removes the package and its dependencies&lt;br&gt;
&lt;code&gt;yum info&lt;/code&gt; - Displays information about the package.&lt;br&gt;
&lt;code&gt;yum list&lt;/code&gt; - List package names&lt;/p&gt;
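YUM discovers packages through repository definitions stored under /etc/yum.repos.d/. A minimal repo file follows the standard INI layout shown below; the repo id, baseurl and GPG key URL are illustrative placeholders, not a real mirror:

```ini
# /etc/yum.repos.d/example.repo (illustrative)
[example-repo]
name=Example Repository
baseurl=https://repo.example.com/fedora/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.example.com/RPM-GPG-KEY-example
```

With a file like this in place, `yum install <package_name>` can locate the package, pull in its dependencies from the repository and verify signatures against the listed GPG key.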

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By reading this article, you should have learnt about the YUM package manager, how to use it and its common commands. I urge you to actively run the commands yourself, to make sure the learning sticks.&lt;/p&gt;

&lt;p&gt;However, YUM has been replaced by Dandified YUM (DNF), which means YUM is no longer the primary package manager in Fedora. For more information on DNF, check out the following articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjF45mX2oT5AhXHxqQKHWPADksQFnoECAMQAw&amp;amp;url=https%3A%2F%2Fdocs.fedoraproject.org%2Fen-US%2Fquick-docs%2Fdnf%2F&amp;amp;usg=AOvVaw1ogY33WCDUXpkky0sp_TNa"&gt;Using the DNF Package Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjF45mX2oT5AhXHxqQKHWPADksQFnoECB0QAQ&amp;amp;url=https%3A%2F%2Fopensource.com%2Farticle%2F18%2F8%2Fguide-yum-dnf&amp;amp;usg=AOvVaw22K8sVuIsisLCizE7avdfk"&gt;A Quick Guide to DNF for YUM users&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwiijc6KgYX5AhUY_hoKHXf2CnQQFnoECAcQAQ&amp;amp;url=https%3A%2F%2Fwww.linode.com%2Fdocs%2Fguides%2Fdnf-package-manager%2F&amp;amp;usg=AOvVaw1O-BM-vCocbYqr71W4312x"&gt;Using DNF to Manage Packages in CentOS/RHEL 8 and Fedora&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>bash</category>
      <category>devops</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
