<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Esther Nnolum</title>
    <description>The latest articles on DEV Community by Esther Nnolum (@esthernnolum).</description>
    <link>https://dev.to/esthernnolum</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1486197%2F514b96ef-4832-4d44-966e-a5dcfb83b341.jpeg</url>
      <title>DEV Community: Esther Nnolum</title>
      <link>https://dev.to/esthernnolum</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/esthernnolum"/>
    <language>en</language>
    <item>
      <title>Cut Costs by Automating EC2 Start/Stop with AWS Lambda</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 10 Aug 2025 13:11:29 +0000</pubDate>
      <link>https://dev.to/esthernnolum/cut-costs-by-automating-ec2-startstop-with-aws-lambda-3ige</link>
      <guid>https://dev.to/esthernnolum/cut-costs-by-automating-ec2-startstop-with-aws-lambda-3ige</guid>
      <description>&lt;p&gt;Every hour an unused EC2 instance runs is money slipping through the cracks, but with a little smart automation, you can turn that waste into predictable savings. Managing cloud costs is important, especially when dev or test servers run outside of business hours. A great way to save is to automatically stop EC2 instances after work hours, and start them again in the morning.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn how to do this using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;EC2 tags to control which instances are affected&lt;/li&gt;
&lt;li&gt;EventBridge Scheduler&lt;/li&gt;
&lt;li&gt;IAM roles&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Before you begin, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;Basic understanding of EC2 and AWS Console&lt;/li&gt;
&lt;li&gt;EC2 instances you want to automate&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 1: Tag Your EC2 Instances
&lt;/h4&gt;

&lt;p&gt;This step is important if you want to stop only the instances you choose, not every instance.&lt;br&gt;
Go to your EC2 instances (select the instance, open the Tags tab, and click Manage Tags) and add a tag:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key: &lt;em&gt;AutoShutdown&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Value: &lt;em&gt;Yes&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tells our automation to act only on the tagged instances.&lt;/p&gt;
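&lt;p&gt;If you have many instances, tagging through the console gets tedious. As a rough sketch (the instance IDs below are placeholders), the same tag can be applied with boto3:&lt;/p&gt;

```python
def auto_shutdown_tags():
    """The tag the stop/start Lambdas filter on."""
    return [{"Key": "AutoShutdown", "Value": "Yes"}]

def tag_instances(instance_ids, region="us-east-1"):
    """Apply the tag to a batch of instances (requires AWS credentials)."""
    import boto3  # imported lazily so auto_shutdown_tags stays usable offline
    ec2 = boto3.client("ec2", region_name=region)
    ec2.create_tags(Resources=instance_ids, Tags=auto_shutdown_tags())

# Hypothetical usage:
# tag_instances(["i-0123456789abcdef0", "i-0fedcba9876543210"])
```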
&lt;h4&gt;
  
  
  Step 2: Create Lambda Function to Stop EC2 Instances
&lt;/h4&gt;

&lt;p&gt;Go to AWS Console → Lambda → Create Function&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: StopEC2Instances&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.x&lt;/li&gt;
&lt;li&gt;Permissions: Create a new role with basic Lambda permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ki8spjbolkslr5advc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ki8spjbolkslr5advc.png" alt=" " width="800" height="1080"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create function and replace the default code with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get all running EC2 instances with AutoShutdown = Yes
    response = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:AutoShutdown', 'Values': ['Yes']},
            {'Name': 'instance-state-name', 'Values': ['running']}
        ]
    )

    instances_to_stop = []
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            instances_to_stop.append(instance['InstanceId'])

    if instances_to_stop:
        print(f"Stopping instances: {instances_to_stop}")
        ec2.stop_instances(InstanceIds=instances_to_stop)
    else:
        print("No instances to stop.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Update IAM Permissions
&lt;/h4&gt;

&lt;p&gt;Go to IAM → Roles → find your Lambda role (StopEC2Instances-role-xxxx) → attach this policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StopInstances",
                "ec2:StartInstances"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
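&lt;p&gt;The same policy can be attached as an inline policy from code. This is a minimal sketch; the role name is a placeholder for whatever name AWS generated for your Lambda role:&lt;/p&gt;

```python
import json

def ec2_stop_start_policy():
    """The inline policy from above, as a Python dict."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StopInstances",
                "ec2:StartInstances",
            ],
            "Effect": "Allow",
            "Resource": "*",
        }],
    }

def attach_policy(role_name):
    """Attach it to the Lambda role (requires AWS credentials and IAM permissions)."""
    import boto3  # imported lazily so the policy helper stays usable offline
    boto3.client("iam").put_role_policy(
        RoleName=role_name,  # e.g. "StopEC2Instances-role-xxxx" (placeholder)
        PolicyName="EC2StopStart",
        PolicyDocument=json.dumps(ec2_stop_start_policy()),
    )
```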



&lt;h4&gt;
  
  
  Step 4: Schedule the Stop Function with EventBridge Scheduler
&lt;/h4&gt;

&lt;p&gt;Go to Amazon EventBridge → Create Schedule&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schedule name: StopEC2AtNight&lt;/li&gt;
&lt;li&gt;Occurrence: Recurring schedule&lt;/li&gt;
&lt;li&gt;Time zone: "select-your-timezone"&lt;/li&gt;
&lt;li&gt;Schedule type: Cron-based schedule&lt;/li&gt;
&lt;li&gt;Cron expression: &lt;code&gt;cron(0 19 ? * MON-FRI *)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means 7:00 PM in the specified time zone, Monday to Friday.&lt;br&gt;
N.B.: Replace "select-your-timezone" with your preferred time zone.&lt;/p&gt;
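&lt;p&gt;The console steps above can also be sketched against the EventBridge Scheduler API. This is a minimal sketch, assuming you already have the Lambda ARN and a scheduler execution role ARN (both placeholders here); the default time zone is likewise only an example:&lt;/p&gt;

```python
def weekday_cron(hour):
    """EventBridge Scheduler cron for HH:00, Monday to Friday."""
    return f"cron(0 {hour} ? * MON-FRI *)"

def create_stop_schedule(lambda_arn, role_arn, timezone="Europe/London"):
    """Create the 7:00 PM weekday stop schedule (ARNs and time zone are placeholders)."""
    import boto3  # imported lazily so weekday_cron stays usable offline
    scheduler = boto3.client("scheduler")
    scheduler.create_schedule(
        Name="StopEC2AtNight",
        ScheduleExpression=weekday_cron(19),  # 7:00 PM
        ScheduleExpressionTimezone=timezone,
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={"Arn": lambda_arn, "RoleArn": role_arn},
    )
```

&lt;p&gt;The same helper with &lt;code&gt;weekday_cron(7)&lt;/code&gt; covers the morning start schedule in Step 6.&lt;/p&gt;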
&lt;h4&gt;
  
  
  Step 5: Create Lambda Function to Start EC2 Instances
&lt;/h4&gt;

&lt;p&gt;Now let’s create a second function to start the instances in the morning.&lt;br&gt;
Create another Lambda function&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: StartEC2Instances&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.x&lt;/li&gt;
&lt;li&gt;Use the same IAM role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click Create function and replace the default code with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    response = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:AutoShutdown', 'Values': ['Yes']},
            {'Name': 'instance-state-name', 'Values': ['stopped']}
        ]
    )

    instances_to_start = []
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            instances_to_start.append(instance['InstanceId'])

    if instances_to_start:
        print(f"Starting instances: {instances_to_start}")
        ec2.start_instances(InstanceIds=instances_to_start)
    else:
        print("No instances to start.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 6: Schedule the Start Function
&lt;/h4&gt;

&lt;p&gt;Go to Amazon EventBridge → Create Schedule&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schedule name: StartEC2InTheMorning&lt;/li&gt;
&lt;li&gt;Occurrence: Recurring schedule&lt;/li&gt;
&lt;li&gt;Time zone: "select-your-timezone"&lt;/li&gt;
&lt;li&gt;Schedule type: Cron-based schedule&lt;/li&gt;
&lt;li&gt;Cron expression: &lt;code&gt;cron(0 7 ? * MON-FRI *)&lt;/code&gt;
This means 7:00 AM in the specified time zone, Monday to Friday&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This simple automation is proof that small changes can deliver big results. By letting AWS Lambda and EventBridge Scheduler handle your EC2 stop/start, you’re not just cutting costs; you’re building a culture of efficiency and smart resource usage. &lt;br&gt;
The result: predictable savings, reduced human error, and an AWS environment that runs only when it needs to.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>automation</category>
    </item>
    <item>
      <title>AWS CI/CD Services: CodeBuild, CodeDeploy, CodePipeline and What Replaces CodeCommit</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 29 Jun 2025 13:43:07 +0000</pubDate>
      <link>https://dev.to/esthernnolum/aws-cicd-services-codebuild-codedeploy-codepipeline-and-what-replaces-codecommit-43f8</link>
      <guid>https://dev.to/esthernnolum/aws-cicd-services-codebuild-codedeploy-codepipeline-and-what-replaces-codecommit-43f8</guid>
      <description>&lt;p&gt;As DevOps teams continue to modernize their workflows, AWS provides a set of tools to automate software delivery through Continuous Integration and Continuous Deployment (CI/CD). These services support the reliable development, testing, and deployment of applications while scaling with your infrastructure.&lt;/p&gt;

&lt;p&gt;This article describes the main AWS-native CI/CD tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AWS CodeBuild&lt;/strong&gt;: Build automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CodeDeploy&lt;/strong&gt;: Deployment automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CodePipeline&lt;/strong&gt;: Workflow orchestration&lt;/li&gt;
&lt;li&gt;[Deprecated] &lt;strong&gt;AWS CodeCommit&lt;/strong&gt;: Git hosting (with migration guidance)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay0cc45x8jfzyjyoz5ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay0cc45x8jfzyjyoz5ab.png" alt="Image description" width="228" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ll explore the key AWS services that power modern CI/CD pipelines, including how they work and how they’re billed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.0 CodeBuild: Build and Test in the Cloud&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With AWS CodeBuild, you can build and test code with automatic scaling. It compiles your source code, runs tests, and generates artifacts (e.g., Docker images or JAR files).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the complexity of managing build servers&lt;/li&gt;
&lt;li&gt;Build source code hosted on other Git providers (e.g., GitHub or GitLab)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeBuild follows a pay-as-you-go pricing model with no upfront commitments. You only pay for the compute resources used during your build process, and charges are based on how long the build runs. The cost varies depending on the compute type you choose. Refer to the &lt;a href="https://aws.amazon.com/codebuild/pricing/" rel="noopener noreferrer"&gt;AWS CodeBuild pricing&lt;/a&gt; page for more pricing details.&lt;/p&gt;
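&lt;p&gt;As a rough sketch of that billing model (the per-minute rate is a stand-in parameter, since the real rate depends on the compute type and region; the rounding to whole minutes is a simplification):&lt;/p&gt;

```python
import math

def codebuild_cost(build_minutes, rate_per_minute):
    """Pay-as-you-go: build duration times the per-minute rate of the chosen
    compute type. The rate is a parameter because it varies by compute type
    and region; look it up on the CodeBuild pricing page."""
    return math.ceil(build_minutes) * rate_per_minute

# e.g. 100 builds of 5 minutes at a hypothetical rate of 0.005 per minute
# cost 100 * codebuild_cost(5, 0.005) dollars in total
```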

&lt;p&gt;&lt;strong&gt;2.0 CodeDeploy: Safe, Automated Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeDeploy automates application deployments to different targets like EC2, ECS, or Lambda. It supports blue/green, canary, and rolling deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support various deployment strategies such as in-place, canary, and blue/green&lt;/li&gt;
&lt;li&gt;Eliminate manual steps by fully automating deployments&lt;/li&gt;
&lt;li&gt;Integrate alarms to trigger automatic rollbacks and halt deployments when issues are detected&lt;/li&gt;
&lt;li&gt;Deploy applications across multiple hosts seamlessly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pricing depends on the deployment target (e.g., EC2, ECS, or Lambda)&lt;/li&gt;
&lt;li&gt;Free for EC2 and Lambda&lt;/li&gt;
&lt;li&gt;You pay $0.02 per on-premises instance update using CodeDeploy. 
&lt;a href="https://aws.amazon.com/codedeploy/pricing/" rel="noopener noreferrer"&gt;View CodeDeploy Pricing&lt;/a&gt; for more pricing detail.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.0 CodePipeline: End-to-End CI/CD Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS CodePipeline orchestrates the full CI/CD flow — integrating your source, build, test, and deployment stages. AWS CodePipeline supports two types of pipelines: V1 and V2, which differ in features and billing.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;V1 pipelines are created by default unless you explicitly specify type V2.&lt;/li&gt;
&lt;li&gt;V2 pipelines offer additional capabilities, including support for triggers and variables.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger builds on PR merges from connected Git providers like GitHub.&lt;/li&gt;
&lt;li&gt;Use declarative JSON templates to create and update pipelines.&lt;/li&gt;
&lt;li&gt;Manage who can change and control your release workflow with IAM roles.&lt;/li&gt;
&lt;li&gt;Monitor pipeline activity via Amazon SNS notifications with event details and source links.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;br&gt;
V1 Pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$1.00/month per active pipeline&lt;/li&gt;
&lt;li&gt;Free for the first 30 days&lt;/li&gt;
&lt;li&gt;No charge if no code runs through the pipeline that month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;V2 Pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$0.002 per minute of action execution time (rounded up)&lt;/li&gt;
&lt;li&gt;Manual and custom actions are free
&lt;a href="https://aws.amazon.com/codepipeline/pricing/" rel="noopener noreferrer"&gt;View CodePipeline Pricing&lt;/a&gt; for more pricing detail.&lt;/li&gt;
&lt;/ul&gt;
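&lt;p&gt;To make the V1/V2 difference concrete, here is a small back-of-the-envelope calculator based on the list prices above (it ignores the V1 free first month and any free-tier allowances):&lt;/p&gt;

```python
import math

def v1_monthly_cost(active_pipelines):
    """V1: flat 1.00 USD per active pipeline per month."""
    return 1.00 * active_pipelines

def v2_monthly_cost(action_minutes):
    """V2: 0.002 USD per action-execution minute, rounded up to whole minutes."""
    return 0.002 * math.ceil(action_minutes)

# Example: one V1 pipeline costs 1.0 per month, while 400 minutes of
# V2 action time costs about 0.80
```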

&lt;p&gt;&lt;strong&gt;4.0 CodeCommit: Deprecated Git Hosting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As of July 2024, AWS CodeCommit is no longer available to new customers. Existing users can continue using the service, which will still receive security, availability, and performance updates — but no new features are planned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrating From CodeCommit:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS has provided an &lt;a href="https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/" rel="noopener noreferrer"&gt;official migration guide&lt;/a&gt; to help move your repositories to other git providers (GitHub, GitLab or Bitbucket)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; You can still integrate GitHub or GitLab with CodePipeline using Webhooks or GitHub connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While AWS has deprecated CodeCommit for new customers, its other CI/CD services (CodeBuild, CodeDeploy, and CodePipeline) remain powerful, flexible, and cost-effective. By combining them with modern Git providers like GitHub, you can build fully automated pipelines that are secure, scalable, and AWS-native where it matters most.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tip&lt;/em&gt;: Pricing can evolve over time. Always refer to the official AWS pricing documentation linked throughout this article as the source of truth for the most accurate and up-to-date cost information.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>aws</category>
      <category>codepipeline</category>
    </item>
    <item>
      <title>Deploying Dockerized Microservices to AWS ECS Fargate with CodePipeline</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 25 May 2025 16:20:05 +0000</pubDate>
      <link>https://dev.to/esthernnolum/automating-microservice-deployment-to-aws-fargate-using-codepipeline-473o</link>
      <guid>https://dev.to/esthernnolum/automating-microservice-deployment-to-aws-fargate-using-codepipeline-473o</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In modern DevOps workflows, teams often break applications into smaller parts called microservices and run them in containers. Deploying these containers in a way that’s easy to manage, cost-effective, and can grow with demand is really important. That's where Amazon ECS Fargate and AWS CodePipeline come in: they let you run and update these services automatically without managing servers. AWS Fargate is a serverless compute engine for containers, and when combined with CodePipeline, you get fully automated CI/CD. In this article, I’ll explain how it all works and show you step-by-step how to deploy your container-based services using these tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of Docker and AWS&lt;/li&gt;
&lt;li&gt;AWS CLI configured&lt;/li&gt;
&lt;li&gt;A GitHub repository with a simple Dockerized microservices application.&lt;/li&gt;
&lt;li&gt;An existing VPC and subnets (or allow Fargate to create one)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs19mpw6cscn3bxccbr18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs19mpw6cscn3bxccbr18.png" alt="GitHub → CodePipeline → CodeBuild → ECR → ECS Fargate" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub → CodePipeline → CodeBuild → ECR → ECS Fargate&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: hosts the source code and Dockerfiles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CodePipeline&lt;/strong&gt;: detects changes and triggers deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt;: CodeBuild (build Docker image and push to ECR)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS Fargate&lt;/strong&gt;: deploys containers using the latest image.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;STEP 1&lt;/strong&gt;: Dockerize your app - Your microservice should include a Dockerfile like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;STEP 2&lt;/strong&gt;: Push Your Code to GitHub - Push your microservice to a GitHub repo if you haven’t done so at this stage. This will be the source for your CodePipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 3&lt;/strong&gt;: Create an Amazon ECR Repository - Before deploying your Dockerized microservice to ECS Fargate, you need a place to store your container images. Amazon Elastic Container Registry (ECR) is AWS's fully managed Docker container registry, and it's where you'll push your built images for ECS to pull from. You can do this manually using the AWS Management Console, using the AWS CLI, or via IaC (Terraform) during infrastructure provisioning.&lt;/p&gt;
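&lt;p&gt;As a sketch, the repository can also be created with boto3. The account ID and repository name below are placeholders:&lt;/p&gt;

```python
def ecr_repo_uri(account_id, region, name):
    """Standard ECR repository URI layout (what the buildspec refers to as your-ecr-uri)."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{name}"

def create_repository(name, region="us-east-1"):
    """Create the repository via the API (requires AWS credentials)."""
    import boto3  # imported lazily so ecr_repo_uri stays usable offline
    ecr = boto3.client("ecr", region_name=region)
    return ecr.create_repository(repositoryName=name)["repository"]["repositoryUri"]

# Hypothetical: ecr_repo_uri("123456789012", "us-east-1", "my-service")
```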

&lt;p&gt;&lt;strong&gt;STEP 4&lt;/strong&gt;: Define Buildspec for CodeBuild - The buildspec.yml file is a configuration file that tells AWS CodeBuild exactly what to do when it runs. It defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Commands to run during each phase of the build process&lt;/li&gt;
&lt;li&gt;How to build your Docker image&lt;/li&gt;
&lt;li&gt;Where to push it (ECR, in our case)&lt;/li&gt;
&lt;li&gt;What (if anything) to output as artifacts
AWS CodeBuild reads this file at runtime to carry out your instructions step by step.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password | docker login --username AWS --password-stdin &amp;lt;your-ecr-uri&amp;gt;
  build:
    commands:
      - echo Building Docker image...
      - docker build -t my-service .
      - docker tag my-service:latest &amp;lt;your-ecr-uri&amp;gt;:latest
  post_build:
    commands:
      - echo Pushing image to ECR...
      - docker push &amp;lt;your-ecr-uri&amp;gt;:latest

artifacts:
  files: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your-ecr-uri&amp;gt;&lt;/code&gt; with your actual ECR repo URI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 5:&lt;/strong&gt; Create an ECS Fargate Cluster - To run your containerized microservices on AWS, you need an ECS (Elastic Container Service) cluster. When using Fargate, AWS handles the underlying server infrastructure, allowing you to focus only on your container and application code. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to ECS → Task Definitions → Create&lt;/li&gt;
&lt;li&gt;Use Fargate launch type&lt;/li&gt;
&lt;li&gt;Add container with image: &amp;lt;your-ecr-uri&amp;gt;:latest&lt;/li&gt;
&lt;li&gt;Set port mappings (e.g., 3000)&lt;/li&gt;
&lt;li&gt;Set memory/CPU (e.g., 512MB/256 CPU)&lt;/li&gt;
&lt;/ul&gt;
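&lt;p&gt;The task-definition values above can be sketched as a boto3 call. The image URI and execution role ARN are placeholders, and a real Fargate task needs an execution role that is allowed to pull from ECR:&lt;/p&gt;

```python
def fargate_task_definition(image_uri, execution_role_arn, family="my-service"):
    """Task definition matching the console values above (256 CPU / 512 MB, port 3000)."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",                 # required for Fargate tasks
        "cpu": "256",
        "memory": "512",
        "executionRoleArn": execution_role_arn,  # lets ECS pull the image from ECR
        "containerDefinitions": [{
            "name": family,
            "image": image_uri,
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
            "essential": True,
        }],
    }

def register(image_uri, execution_role_arn):
    """Register the definition with ECS (requires AWS credentials)."""
    import boto3  # imported lazily so the builder above stays usable offline
    return boto3.client("ecs").register_task_definition(
        **fargate_task_definition(image_uri, execution_role_arn)
    )
```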

&lt;p&gt;&lt;strong&gt;STEP 6&lt;/strong&gt;: Create CodePipeline - AWS CodePipeline is a fully managed CI/CD (Continuous Integration and Delivery) service. It automates building, testing, and deploying code every time you make a change.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to CodePipeline → Create Pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: GitHub, Choose Connect to GitHub, and follow the instructions to authenticate with GitHub.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt;: Add build stage page, for Build provider, choose CodeBuild. (create a project using the buildspec.yml in step 4)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: ECS → Fargate → Select your cluster and service created in step 5&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;ECS Fargate + CodePipeline offers a modern, scalable foundation for deploying containerized microservices. Whether you're starting from scratch or modernizing legacy apps, this setup provides a future-proof, low-maintenance path to continuous delivery in the cloud. This approach is particularly well-suited for microservice architectures, where deploying and updating individual services frequently and independently is a key requirement. With the setup described in this article, each microservice can be connected to its own pipeline, reducing deployment risk and making rollback strategies easier to implement.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>serverless</category>
    </item>
    <item>
      <title>A Beginner’s Guide to Terraform on AWS</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 06 Apr 2025 14:19:35 +0000</pubDate>
      <link>https://dev.to/esthernnolum/a-beginners-guide-to-terraform-on-aws-4leo</link>
      <guid>https://dev.to/esthernnolum/a-beginners-guide-to-terraform-on-aws-4leo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Terraform, developed by HashiCorp, is an Infrastructure as Code (IaC) tool that allows you to define and manage your infrastructure using easy-to-read, declarative configuration files. It automates the entire lifecycle of your infrastructure—from provisioning to updates and teardown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Terraform for AWS?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Repeatable and consistent environments&lt;/li&gt;
&lt;li&gt;Trackable via Version control for infrastructure&lt;/li&gt;
&lt;li&gt;Automated infrastructure setup&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Installed:&lt;/strong&gt; You need to install Terraform before you can start using it. Refer to the official &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Terraform documentation&lt;/a&gt; for detailed installation steps, whether you're using a package manager or installing manually based on your operating system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The AWS CLI installed&lt;/strong&gt;: Check the official &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI installation guide&lt;/a&gt; for step-by-step instructions tailored to your operating system.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A. Create AWS account and associated credentials
&lt;/h3&gt;

&lt;p&gt;If you don't already have one, create an &lt;a href="https://aws.amazon.com/free" rel="noopener noreferrer"&gt;AWS account&lt;/a&gt;. It serves as your entry point to Amazon Web Services, where you can manage cloud resources such as servers, storage, and databases for your applications. The &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html" rel="noopener noreferrer"&gt;associated credentials&lt;/a&gt; enable you to create and manage resources within your account.&lt;/p&gt;

&lt;h3&gt;
  
  
  B. Steps to Create AWS Credentials for Terraform
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Login to AWS Console&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access the AWS Management Console and search for the &lt;em&gt;IAM&lt;/em&gt; service.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94gps7vnpy4i1rg4o68z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94gps7vnpy4i1rg4o68z.png" alt="IAM service" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a New User&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on &lt;em&gt;Users&lt;/em&gt; in the IAM dashboard.&lt;/li&gt;
&lt;li&gt;Click the &lt;em&gt;Create user&lt;/em&gt; button and enter the required user details.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v3w9xv71mjea0ol1i5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v3w9xv71mjea0ol1i5.png" alt="New User" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure User Access&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uncheck the option &lt;em&gt;"Provide user access to the AWS Management Console"&lt;/em&gt; (this is not needed because Terraform is managed via code, not the UI).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Set Permissions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;em&gt;Next&lt;/em&gt; to set user permissions.&lt;/li&gt;
&lt;li&gt;For testing, assign the "AdministratorAccess" policy to the user.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v135jkwkr24ezz0nflr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v135jkwkr24ezz0nflr.png" alt="Set Permission" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Review and Create User&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;em&gt;Next&lt;/em&gt; to review your configuration.&lt;/li&gt;
&lt;li&gt;Click the &lt;em&gt;Create user&lt;/em&gt; button to create the Terraform user.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Access Keys Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the user is created, click on the username.&lt;/li&gt;
&lt;li&gt;Scroll down to the Access keys section.&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Create access key&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F714vh7q53b3lw32wvs51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F714vh7q53b3lw32wvs51.png" alt="Create Access key" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Select Use Case&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the pop-up, choose &lt;em&gt;Command Line Interface (CLI)&lt;/em&gt; as the use case.&lt;/li&gt;
&lt;li&gt;Check the confirmation box and click &lt;em&gt;Next&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Set Description and Create Key&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optionally, set a description tag for the access key.&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Create access key&lt;/em&gt; to generate the credentials.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Store Access Keys Securely&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the Access Key and Secret Key immediately.&lt;/li&gt;
&lt;li&gt;Store them securely — this is the only time you can view or download the secret access key. If you lose it, you will need to create a new one.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  C. Configure AWS CLI
&lt;/h3&gt;

&lt;p&gt;Open a terminal and run the command &lt;code&gt;aws configure&lt;/code&gt;. When prompted, enter the IAM user's Access Key ID and Secret Access Key that were generated in the previous step. Additionally, specify your default region (e.g., &lt;code&gt;us-east-1&lt;/code&gt;) and output format (e.g., &lt;code&gt;json&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/strong&gt;: Always protect your access key and secret key, and periodically rotate them to minimize the risk of compromise.&lt;/p&gt;
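&lt;p&gt;A quick way to confirm the credentials are wired up correctly before running Terraform is an STS call. This is a sketch (it needs network access and the credentials configured above):&lt;/p&gt;

```python
def account_from_arn(arn):
    """Extract the 12-digit account ID from an ARN (colon-separated field 5)."""
    return arn.split(":")[4]

def whoami():
    """Ask STS which account and user the configured credentials belong to."""
    import boto3  # imported lazily so account_from_arn stays usable offline
    identity = boto3.client("sts").get_caller_identity()
    return identity["Account"], identity["Arn"]

# Hypothetical: whoami() returns your account ID and the Terraform user's ARN
```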

&lt;h3&gt;
  
  
  D. Create a basic Terraform script
&lt;/h3&gt;

&lt;p&gt;Now that our AWS account is set up and the CLI is configured, we can start working with Terraform to define our infrastructure. The files used to describe infrastructure in Terraform are called "Terraform configurations." &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get started, create a dedicated directory for your project, as each configuration must reside in its own working directory. This directory will hold all the necessary configuration files.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform-aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Change into the directory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd terraform-aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a main.tf file and paste in the configuration below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
  required_version = "&amp;gt;= 1.4.0"
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "app_server" {
  ami           = "ami-05238ab1443fdf48f" #Substitute this with the AMI ID you wish to use.
  instance_type = "t2.micro"
  tags = {
    Name = "MyServerInstance"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform code sets up an AWS EC2 instance in the &lt;code&gt;us-east-1&lt;/code&gt; region, using the specified AMI (&lt;code&gt;ami-05238ab1443fdf48f&lt;/code&gt;) and a &lt;code&gt;t2.micro&lt;/code&gt; instance type, with a &lt;code&gt;Name&lt;/code&gt; tag of &lt;code&gt;MyServerInstance&lt;/code&gt;. It also requires Terraform version 1.4.0 or higher and a 5.x version of the AWS provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  E. Initialize, Plan, and Apply:
&lt;/h3&gt;

&lt;p&gt;When you create or check out a configuration, run the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform init&lt;/code&gt;: Initializes the working directory and downloads the required providers (e.g., AWS).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~&amp;gt; 5.0"...
- Installing hashicorp/aws v5.94.1...
- Installed hashicorp/aws v5.94.1 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform fmt&lt;/code&gt;: Automatically formats configurations for improved readability.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform validate&lt;/code&gt;: Checks that the configuration is syntactically correct and internally consistent.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_$ terraform validate
Success! The configuration is valid._

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt;: Generates an execution plan, describing the actions Terraform will take to modify the infrastructure. It shows you what will happen but doesn't make any changes. I've shortened some of the output for brevity.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.app_server will be created
  + resource "aws_instance" "app_server" {
      + ami                                  = "ami-05238ab1443fdf48f"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
##...

Plan: 1 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform apply&lt;/code&gt;: Applies the changes to your infrastructure based on the plan. It will ask for confirmation (e.g., typing "yes") before proceeding. I've shortened some of the output for brevity.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_$ terraform apply
##...

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Post-Execution: Once &lt;code&gt;terraform apply&lt;/code&gt; completes, check the EC2 console for the newly created instance.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform destroy&lt;/code&gt;: Destroys all the infrastructure managed by the Terraform configuration. It will prompt for confirmation before deleting the resources, ensuring you can safely tear down the setup when no longer needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using Terraform for Infrastructure as Code (IaC) on AWS simplifies and automates the provisioning and management of cloud resources. By defining your infrastructure in configuration files, you ensure consistency, repeatability, and version control. With Terraform's tools—like &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;terraform plan&lt;/code&gt;, &lt;code&gt;terraform apply&lt;/code&gt;, and &lt;code&gt;terraform destroy&lt;/code&gt;— you can easily create, modify, and delete resources, making it an essential tool for modern DevOps practices. Whether you’re just starting or scaling your infrastructure, Terraform helps streamline the process and enhances your ability to manage cloud environments efficiently.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Automating ECR Token Renewal for Private Registry Pulls in Kubernetes Using a CronJob.</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Mon, 09 Sep 2024 20:14:17 +0000</pubDate>
      <link>https://dev.to/esthernnolum/automating-ecr-docker-authentication-in-kubernetes-using-a-cronjob-5gc6</link>
      <guid>https://dev.to/esthernnolum/automating-ecr-docker-authentication-in-kubernetes-using-a-cronjob-5gc6</guid>
      <description>&lt;p&gt;In Kubernetes, the authentication token for Amazon ECR (Elastic Container Registry) expires every 12 hours. This is due to ECR's use of the '&lt;em&gt;GetAuthorizationToken&lt;/em&gt;' API, which generates a base64-encoded token with a default validity period of 12 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Token Expires in 12 Hours:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Security:&lt;/strong&gt; Short-lived tokens reduce the window of opportunity for unauthorized access if a token is compromised. Regularly rotating the token enhances security by ensuring that stale or leaked tokens are invalidated in a short time frame.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Design:&lt;/strong&gt; AWS designed the ECR token system to issue tokens with a 12-hour expiration as part of its security best practices. This time limit balances the need for frequent reauthentication with minimizing user disruption.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Handling Expiration in Kubernetes:
&lt;/h2&gt;

&lt;p&gt;To maintain continuous access to ECR from within a Kubernetes cluster, it is common to automate the refresh with a Kubernetes CronJob that periodically fetches a fresh authentication token and updates the corresponding secret, ensuring uninterrupted image pulls from ECR. This article walks through that setup step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create a Base Docker Image
&lt;/h2&gt;

&lt;p&gt;You will need a Dockerfile that includes essential tools such as &lt;em&gt;aws-cli, curl, wget, docker&lt;/em&gt;, and &lt;em&gt;kubectl&lt;/em&gt;. This image will serve as the base for the CronJob to refresh the ECR secret.&lt;br&gt;
&lt;em&gt;Dockerfile&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create Dockerfile with this content
FROM alpine:latest
RUN apk --no-cache add aws-cli wget curl docker docker-compose \
    &amp;amp;&amp;amp; wget https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kubectl \
    &amp;amp;&amp;amp; mv kubectl /usr/local/bin/kubectl \
    &amp;amp;&amp;amp; chmod +x /usr/local/bin/kubectl \
    &amp;amp;&amp;amp; apk del wget

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating the Dockerfile:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build the image.&lt;/li&gt;
&lt;li&gt;Push it to a public container repository (e.g., Docker Hub or Amazon ECR).&lt;/li&gt;
&lt;/ol&gt;
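&lt;p&gt;The build-and-push step looks roughly like this (the repository name and tag are illustrative; substitute your own registry):&lt;/p&gt;

```shell
# Build the image from the Dockerfile above, then push it (illustrative repo name).
docker build -t youruser/awsecr-kubectl:latest .
docker push youruser/awsecr-kubectl:latest   # requires a prior 'docker login' to your registry
```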

&lt;h2&gt;
  
  
  Step 2: Create an AWS Credentials Secret
&lt;/h2&gt;

&lt;p&gt;In this step, create a Kubernetes secret that contains AWS credentials with permission to access ECR. Ensure that the AWS credentials have the necessary ECR permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic aws-credentials \
  --from-literal=aws_access_key_id=&amp;lt;AWS_ACCESS_KEY_ID&amp;gt; \
  --from-literal=aws_secret_access_key=&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt; \
  --namespace dev #replace with your namespace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Create a Service Account
&lt;/h2&gt;

&lt;p&gt;A dedicated Service Account will be used by the CronJob to perform its tasks. This Service Account ensures proper security scoping for the refresh job.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cronjob-service-account.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: cronjob-sa
  namespace: dev #replace with your namespace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the file:&lt;br&gt;
&lt;code&gt;kubectl apply -f cronjob-service-account.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Create a Role with Secret Permissions
&lt;/h2&gt;

&lt;p&gt;Create a Role that allows the Service Account to manage secrets, as it will need to update the ECR secret with new credentials.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cronjob-role.yaml:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-role
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "create", "update", "delete"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the Role:&lt;br&gt;
&lt;code&gt;kubectl apply -f cronjob-role.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Bind the Role to the Service Account
&lt;/h2&gt;

&lt;p&gt;Bind the Role to the Service Account using a RoleBinding to ensure the Service Account has the required permissions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cronjob-rolebinding.yaml:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cronjob-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: cronjob-sa
  namespace: dev
roleRef:
  kind: Role
  name: cronjob-role
  apiGroup: rbac.authorization.k8s.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the RoleBinding:&lt;br&gt;
&lt;code&gt;kubectl apply -f cronjob-rolebinding.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: Create the Kubernetes CronJob
&lt;/h2&gt;

&lt;p&gt;Now, set up a CronJob that will periodically refresh the ECR Docker registry credentials. In this example, the job runs every 11 hours.&lt;/p&gt;
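&lt;p&gt;Why 11 hours? The schedule &lt;code&gt;0 */11 * * *&lt;/code&gt; fires at hours 0, 11, and 22, so the longest gap between refreshes is 11 hours, comfortably inside the token's 12-hour validity. A quick check of that arithmetic (plain Python, nothing AWS-specific):&lt;/p&gt;

```python
# Hours at which "0 */11 * * *" fires: every hour evenly divisible by 11.
fire_hours = [h for h in range(24) if h % 11 == 0]
print(fire_hours)  # [0, 11, 22]

# Gaps between consecutive refreshes, wrapping past midnight.
n = len(fire_hours)
gaps = [(fire_hours[(i + 1) % n] - fire_hours[i]) % 24 for i in range(n)]
max_gap = max(gaps)
print(max_gap)  # 11 -- one hour of headroom before the 12-hour expiry
```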
&lt;h2&gt;
  
  
  The CronJob performs the following actions:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve the ECR token:&lt;/strong&gt; Using the AWS CLI command &lt;code&gt;aws ecr get-login-password&lt;/code&gt;, the CronJob retrieves the authentication token for the ECR registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create/Update a Kubernetes secret:&lt;/strong&gt; The token is then stored in a Kubernetes secret. This secret is used by Kubernetes to authenticate with the ECR registry whenever it needs to pull Docker images.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;cronjob.yaml:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-creds-refresh
  namespace: dev
spec:
  schedule: "0 */11 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: ecr-creds-refresh
              image: awsecr-kubectl:latest # replace with the image pushed in Step 1
              command:
                - /bin/sh
                - '-c'
                - |
                  aws --version
                  echo "deleting imagepull secret..."
                  kubectl delete secret -n dev --ignore-not-found ${SECRET_NAME}
                  echo "recreating imagepull secret..."
                  kubectl create secret -n dev docker-registry ${SECRET_NAME} \
                    --docker-server=***.dkr.ecr.us-east-1.amazonaws.com \
                    --docker-username=AWS \
                    --docker-password="$(aws ecr get-login-password --region us-east-1)"
                  echo "secret recreated!!"
              env:
                - name: AWS_REGION
                  value: us-east-1
                - name: SECRET_NAME
                  value: aws-ecr # secret name
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: aws-credentials
                      key: aws_access_key_id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-credentials
                      key: aws_secret_access_key
              imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
          serviceAccountName: cronjob-sa

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the CronJob:&lt;br&gt;
&lt;code&gt;kubectl apply -f cronjob.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose of the Secret:&lt;/strong&gt;&lt;br&gt;
Kubernetes uses this secret when pulling Docker images from ECR. Without this step, your cluster could lose access to ECR when the token expires, leading to failed deployments or pods unable to start due to image pull errors.&lt;/p&gt;
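&lt;p&gt;For the refreshed credentials to be picked up, your workloads must reference the secret. A sketch of a pod spec fragment (the container name and image path are illustrative; the secret name matches the &lt;code&gt;aws-ecr&lt;/code&gt; value used in the CronJob above):&lt;/p&gt;

```yaml
# Pod template fragment (e.g. inside a Deployment) in the same namespace (dev):
spec:
  imagePullSecrets:
    - name: aws-ecr          # the secret the CronJob recreates
  containers:
    - name: app              # illustrative name
      image: ***.dkr.ecr.us-east-1.amazonaws.com/my-app:latest   # illustrative image path
```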

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By automating the update of ECR Docker registry credentials with a Kubernetes CronJob, you can eliminate manual intervention and ensure your cluster always has valid credentials for pulling Docker images from ECR. This approach leverages Kubernetes-native tools like &lt;code&gt;CronJob&lt;/code&gt;, &lt;code&gt;RBAC&lt;/code&gt;, and &lt;code&gt;Secrets&lt;/code&gt; to securely and efficiently manage credentials. The steps outlined, from creating a base Docker image to setting up the necessary roles and service accounts, provide a robust solution for maintaining a seamless container lifecycle in your Kubernetes environment. This automation not only improves security by regularly rotating credentials but also enhances operational efficiency, freeing your team to focus on more critical tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_AuthorizationData.html#:~:text=Authorization%20tokens%20are%20valid%20for%2012%20hours.&amp;amp;text=The%20registry%20URL%20to%20use,.region.amazonaws.com%20." rel="noopener noreferrer"&gt;Amazon ECR AuthorizationData&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cronjob</category>
      <category>docker</category>
      <category>aws</category>
    </item>
    <item>
      <title>Provisioning Kubernetes Clusters with Kubespray</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 12 May 2024 12:50:59 +0000</pubDate>
      <link>https://dev.to/esthernnolum/provisioning-kubernetes-clusters-with-kubespray-1a2i</link>
      <guid>https://dev.to/esthernnolum/provisioning-kubernetes-clusters-with-kubespray-1a2i</guid>
      <description>&lt;p&gt;Kubernetes, an open-source orchestration system, automates the deployment and management of containerized applications. For beginners, the journey into Kubernetes can often start with the daunting question: "Where do I begin?"&lt;/p&gt;

&lt;p&gt;In the early days, setting up and managing a Kubernetes cluster was a challenging and time-consuming task. However, with the evolution of Kubernetes, user-friendly solutions have emerged to simplify this process. Among these solutions, Kubespray shines as an invaluable tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubespray&lt;/strong&gt;, an open-source solution, facilitates the automated deployment of Kubernetes clusters across nodes. Engineered to be highly customizable, efficient, and lightweight, Kubespray caters to a wide range of requirements, making Kubernetes cluster deployment accessible to all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview of Kubespray&lt;/strong&gt;&lt;br&gt;
Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and generic Kubernetes cluster configuration management tasks. In this write-up, I'll demonstrate how to deploy a Kubernetes cluster on 3 nodes (1 master and 2 worker nodes) using Kubespray. &lt;br&gt;
While a basic understanding of Ansible and Kubernetes terminologies is assumed, the steps are simple enough for beginners to follow along.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before proceeding, ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provision Infrastructure:&lt;/strong&gt; Set up computing resources, such as 3 servers, for the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install Dependencies:&lt;/strong&gt; Install the following dependencies on your Ansible server (this can be one of your cluster nodes, though a dedicated server is recommended):&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/git-guides/install-git"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.python-guide.org/starting/install3/linux/"&gt;Python3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.python-guide.org/starting/install3/linux/"&gt;Pip3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#installing-ansible"&gt;Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the Cluster&lt;/strong&gt;&lt;br&gt;
Follow these steps to set up your Kubernetes cluster with Kubespray:&lt;br&gt;
&lt;strong&gt;Step 1: Set Up SSH Keys&lt;/strong&gt;&lt;br&gt;
Generate SSH keys on the Ansible node and copy the key to all your cluster nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen # Go with the defaults
ssh-copy-id &amp;lt;user&amp;gt;@&amp;lt;node-IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Download and Configure Kubespray&lt;/strong&gt;&lt;br&gt;
Download the Kubespray GitHub repository and checkout the latest version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:Kubernetes-sigs/Kubespray.git
cd Kubespray
git checkout release-2.xx #replace 'xx' with release number
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Install Python Dependencies&lt;/strong&gt;&lt;br&gt;
Install the required Python dependencies using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r ./requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Update Ansible Inventory&lt;/strong&gt;&lt;br&gt;
Update the Ansible inventory file with the IP addresses of your nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(&amp;lt;node1-IP&amp;gt; &amp;lt;node2-IP&amp;gt; &amp;lt;node3-IP&amp;gt;)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Further customize inventory/mycluster/hosts.yaml to specify your master, worker, and etcd nodes.&lt;/p&gt;
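&lt;p&gt;With 1 master and 2 workers, the customized hosts.yaml ends up looking roughly like this (hostnames and IPs are placeholders; the group names are Kubespray's):&lt;/p&gt;

```yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.1.10
      ip: 192.168.1.10
    node2:
      ansible_host: 192.168.1.11
      ip: 192.168.1.11
    node3:
      ansible_host: 192.168.1.12
      ip: 192.168.1.12
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node2:
        node3:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```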

&lt;p&gt;&lt;strong&gt;Step 5: Review and Customize Configuration&lt;/strong&gt;&lt;br&gt;
Review and customize parameters under inventory/mycluster/group_vars:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Allow Kubernetes Ports&lt;/strong&gt;&lt;br&gt;
If behind a firewall, ensure all necessary Kubernetes ports are allowed. These include the ports for the api-server, etcd, DNS, the kubelet, kube-controller-manager, kube-scheduler, Calico, etc.&lt;/p&gt;
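&lt;p&gt;On firewalld-based distributions, opening the core ports looks roughly like this (adapt to your firewall tooling; the list is not exhaustive):&lt;/p&gt;

```shell
sudo firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
sudo firewall-cmd --permanent --add-port=179/tcp         # Calico BGP
sudo firewall-cmd --reload
```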

&lt;p&gt;&lt;strong&gt;Step 7: Clean Up Old Kubernetes Cluster&lt;/strong&gt;&lt;br&gt;
Run the playbook to clean up any old Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory/mycluster/hosts.yaml --user=&amp;lt;your-user-with-sudo-access&amp;gt; --ask-become-pass --become reset.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8: Deploy Kubernetes with Kubespray&lt;/strong&gt;&lt;br&gt;
Run the playbook to deploy Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory/my-cluster/hosts.yml --user=&amp;lt;your-user-with-sudo-access&amp;gt; --ask-become-pass --become cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9: Access the Cluster&lt;/strong&gt;&lt;br&gt;
To access the cluster with kubectl, copy the admin kubeconfig into your ~/.kube directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir .kube
cd .kube/
sudo cp /etc/kubernetes/admin.conf config
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05hx5h6y7cwiempdcsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05hx5h6y7cwiempdcsx.png" alt="Image description" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The playbook will take some time to complete, but once finished, you'll have a highly available and self-managed Kubernetes cluster at your disposal. &lt;br&gt;
Kubespray also helps you manage the cluster over its lifetime: you can add or remove nodes (playbooks/scale.yml, playbooks/remove_node.yml), delete the entire cluster (playbooks/reset.yml), and perform other Kubernetes management tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Issue with Python Packages Installation: When Ansible is already installed via system packages on the control node, Python packages installed using sudo pip install -r requirements.txt may end up in a different directory tree (e.g., /usr/local/lib/python2.7/dist-packages on Ubuntu) compared to Ansible's directory (e.g., /usr/lib/python2.7/dist-packages/ansible on Ubuntu). Consequently, the ansible-playbook command may fail with the following error:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This usually means a task depends on a module from requirements.txt that was installed into the wrong directory tree.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Ensure Firewall Rules Allow Necessary Ports: Make sure that all necessary ports are allowed through the firewall so that components can communicate with each other.&lt;/li&gt;
&lt;li&gt;Failure to Run Playbook without --become: The playbook will fail if the --become flag is not used. Ensure that you include --become to grant the privileges the playbook needs to execute successfully.&lt;/li&gt;
&lt;li&gt;For further troubleshooting on any encountered issue, please refer to the &lt;a href="https://github.com/kubernetes-sigs/kubespray"&gt;official Kubespray repository&lt;/a&gt; for comprehensive troubleshooting steps.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>kubespray</category>
      <category>devops</category>
      <category>ansible</category>
    </item>
    <item>
      <title>Provisioning Kubernetes Clusters with Kubespray</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 12 May 2024 12:50:56 +0000</pubDate>
      <link>https://dev.to/esthernnolum/provisioning-kubernetes-clusters-with-kubespray-4a9o</link>
      <guid>https://dev.to/esthernnolum/provisioning-kubernetes-clusters-with-kubespray-4a9o</guid>
      <description>&lt;p&gt;Kubernetes, an open-source orchestration system, automates the deployment and management of containerized applications. For beginners, the journey into Kubernetes can often start with the daunting question: "Where do I begin?"&lt;/p&gt;

&lt;p&gt;In the early days, setting up and managing a Kubernetes cluster was a challenging and time-consuming task. However, with the evolution of Kubernetes, user-friendly solutions have emerged to simplify this process. Among these solutions, Kubespray shines as an invaluable tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubespray&lt;/strong&gt;, an open-source solution, facilitates the automated deployment of Kubernetes clusters across nodes. Engineered to be highly customizable, efficient, and lightweight, Kubespray caters to a wide range of requirements, making Kubernetes cluster deployment accessible to all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview of Kubespray&lt;/strong&gt;&lt;br&gt;
Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and generic Kubernetes cluster configuration management tasks. In this write-up, I'll demonstrate how to deploy a Kubernetes cluster on 3 nodes (1 master and 2 worker nodes) using Kubespray. &lt;br&gt;
While a basic understanding of Ansible and Kubernetes terminologies is assumed, the steps are simple enough for beginners to follow along.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before proceeding, ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provision Infrastructure:&lt;/strong&gt; Set up computing resources, such as 3 nodes, for your cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install Dependencies:&lt;/strong&gt; Install the following dependencies on your Ansible server:&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/git-guides/install-git" rel="noopener noreferrer"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.python-guide.org/starting/install3/linux/" rel="noopener noreferrer"&gt;Python3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.python-guide.org/starting/install3/linux/" rel="noopener noreferrer"&gt;Pip3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#installing-ansible" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the Cluster&lt;/strong&gt;&lt;br&gt;
Follow these steps to set up your Kubernetes cluster with Kubespray:&lt;br&gt;
&lt;strong&gt;Step 1: Set Up SSH Keys&lt;/strong&gt;&lt;br&gt;
Generate SSH keys on the Ansible node and copy the key to all your cluster nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen # Go with the defaults
ssh-copy-id &amp;lt;user&amp;gt;@&amp;lt;node-IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Download and Configure Kubespray&lt;/strong&gt;&lt;br&gt;
Download the Kubespray GitHub repository and checkout the latest version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:Kubernetes-sigs/Kubespray.git
cd Kubespray
git checkout release-2.xx #replace 'xx' with release number
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Install Python Dependencies&lt;/strong&gt;&lt;br&gt;
Install the required Python dependencies using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r ./requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Update Ansible Inventory&lt;/strong&gt;&lt;br&gt;
Update the Ansible inventory file with the IP addresses of your nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(&amp;lt;node1-IP&amp;gt; &amp;lt;node2-IP&amp;gt; &amp;lt;node3-IP&amp;gt;)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Further customize inventory/mycluster/hosts.yaml to specify your master, worker, and etcd nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Review and Customize Configuration&lt;/strong&gt;&lt;br&gt;
Review and customize parameters under inventory/mycluster/group_vars:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Allow Kubernetes Ports&lt;/strong&gt;&lt;br&gt;
If behind a firewall, ensure all necessary Kubernetes ports are allowed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Clean Up Old Kubernetes Cluster&lt;/strong&gt;&lt;br&gt;
Run the playbook to clean up the old Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory/mycluster/hosts.yaml --user=&amp;lt;your-user-with-sudo-access&amp;gt; --ask-become-pass --become reset.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8: Deploy Kubernetes with Kubespray&lt;/strong&gt;&lt;br&gt;
Run the playbook to deploy Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory/my-cluster/hosts.yml --user=&amp;lt;your-user-with-sudo-access&amp;gt; --ask-become-pass --become cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9: Access the Cluster&lt;/strong&gt;&lt;br&gt;
Access the cluster using kubectl commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir .kube
cd .kube/
sudo cp /etc/kubernetes/admin.conf config
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05hx5h6y7cwiempdcsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05hx5h6y7cwiempdcsx.png" alt="Image description" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The playbook will take some time to complete, but once finished, you'll have a highly available and self-managed Kubernetes cluster at your disposal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Issue with Python package installation: if Ansible was installed from system packages on the control node, Python packages installed with sudo pip install -r requirements.txt may end up in a different directory tree (e.g., /usr/local/lib/python2.7/dist-packages on Ubuntu) than the one Ansible imports from (e.g., /usr/lib/python2.7/dist-packages/ansible on Ubuntu). The ansible-playbook command may then fail with the following error:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This usually means a task depends on a module from requirements.txt that Ansible cannot see because it was installed into a different Python environment; installing Ansible and the requirements into the same environment resolves it.&lt;/p&gt;
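&lt;p&gt;A common fix, sketched below, is to install Ansible and the Kubespray requirements into one dedicated virtual environment, so both resolve modules from the same site-packages (the path ~/kubespray-venv is just an example):&lt;/p&gt;

```shell
# Create and activate a dedicated virtualenv for Kubespray
python3 -m venv ~/kubespray-venv
source ~/kubespray-venv/bin/activate

# Install Ansible and all required modules into the same environment
# (run from the root of your Kubespray checkout)
pip install -U -r requirements.txt

# Verify that ansible-playbook now comes from the virtualenv
which ansible-playbook
```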

&lt;ol start="2"&gt;
&lt;li&gt;Ensure Firewall Rules Allow Necessary Ports: Make sure that all necessary ports are allowed through the firewall to ensure proper communication between components.&lt;/li&gt;
&lt;li&gt;Failure to Run Playbook without --become: The playbook will fail to run if the --become flag is not used. Ensure that you include --become to grant necessary privileges for the playbook to execute successfully.&lt;/li&gt;
&lt;li&gt;For further troubleshooting on any encountered issue, please refer to the &lt;a href="https://github.com/kubernetes-sigs/kubespray" rel="noopener noreferrer"&gt;official Kubespray repository&lt;/a&gt; for comprehensive troubleshooting steps.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>kubespray</category>
      <category>devops</category>
      <category>ansible</category>
    </item>
    <item>
      <title>Trigger Jenkins builds with Github Webhook Using Smee Client</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 12 May 2024 10:54:09 +0000</pubDate>
      <link>https://dev.to/esthernnolum/trigger-jenkins-builds-with-github-webhook-using-smee-client-5hal</link>
      <guid>https://dev.to/esthernnolum/trigger-jenkins-builds-with-github-webhook-using-smee-client-5hal</guid>
<description>&lt;p&gt;When automating a complete CI/CD pipeline, you want Jenkins pipelines to be triggered by a GitHub webhook on specific events. Neither polling for updates nor exposing the Jenkins server to the internet is desirable in any environment: polling burns through API quotas, and many Jenkins servers are not addressable on the web or are locked down by default. The Smee client resolves both issues.&lt;/p&gt;

&lt;p&gt;Smee is an open-source project offered by GitHub that receives webhook payloads (from GitHub) and forwards them to listening clients (here, Jenkins).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcigtl97x376s1pgu6txg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcigtl97x376s1pgu6txg.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;USAGE&lt;/strong&gt;&lt;br&gt;
If your application needs to respond to webhooks, you need a way to make localhost reachable from the internet. &lt;a href="http://Smee.io"&gt;Smee.io&lt;/a&gt; is a small service that transmits webhook payloads to your locally running application using Server-Sent Events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SETUP&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Go to &lt;a href="https://smee.io"&gt;https://smee.io&lt;/a&gt;, you may create a new channel and acquire a special URL (which you should copy for later use) to send payloads to. The public website &lt;a href="http://smee.io"&gt;http://smee.io&lt;/a&gt; and the smee-client are the two parts of Smee that make it function. Through Server-Transmitted Events, a connection type that enables messages to be sent from a source to any clients listening, they communicate with one another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Install the Smee client on the machine running your Jenkins server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --global smee-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables the Smee client to accept and deliver webhooks.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; Now start the Smee client and point it at the Jenkins server. Here it runs on port 8080 (change both the port and the Smee URL as needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;smee --url https://smee.io/GSm1B40******SjYS --path /github-webhook/ --port 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells the client to connect to the Smee service and forward webhooks to /github-webhook/ (the trailing slash is crucial; don’t forget it). You will see that it is connected and forwarding webhooks as soon as it starts, and it will keep receiving them for as long as the command runs.&lt;/p&gt;

&lt;p&gt;You can also set Smee up as a systemd service by adding a smee.service file to the /etc/systemd/system directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=smee.io webhook delivery from GitHub
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=jenkins
ExecStart=smee -u https://smee.io/z*****BvuEt --path /github-webhook/ --port 8080
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re using the Generic Webhook Trigger plugin in Jenkins instead, the ExecStart line should be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ExecStart=smee -u https://smee.io/z*****BvuEt --path /generic-webhook-trigger/invoke --port 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
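&lt;p&gt;After creating the unit file, reload systemd and enable the service so Smee starts now and on every boot (standard systemctl usage):&lt;/p&gt;

```shell
sudo systemctl daemon-reload          # pick up the new unit file
sudo systemctl enable --now smee      # start immediately and on boot
systemctl status smee                 # confirm it is active and forwarding
```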



&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; In Jenkins, configure a pipeline that uses GitHub. Make sure to check either ‘GitHub hook trigger for GITScm polling’ or ‘Generic Webhook Trigger’, depending on your earlier choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrz49keuwsoib0jpchvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrz49keuwsoib0jpchvo.png" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;br&gt;
Next, specify a repository. This will prepare the server to accept webhooks from GitHub. (It’s also OK if you already have a pipeline configured that uses GitHub as the SCM source.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; The last step is to instruct GitHub to notify Smee of webhook events for that repository.&lt;br&gt;
Go to the GitHub repository’s settings tab, select “webhooks,” and then “add webhook.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxevltt7u3w6efoecgao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxevltt7u3w6efoecgao.png" alt="Image description" width="800" height="710"&gt;&lt;/a&gt;&lt;br&gt;
Next, configure the webhook: It should resemble the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the Smee URL that you copied in Step 1.&lt;/li&gt;
&lt;li&gt;Choose application/json as the content type&lt;/li&gt;
&lt;li&gt;Choose which events should trigger the webhook&lt;/li&gt;
&lt;li&gt;Press Add webhook (or Update)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These links from &lt;a href="https://www.jenkins.io/blog/2019/01/07/webhook-firewalls/"&gt;Jenkins&lt;/a&gt; and &lt;a href="https://github.com/probot/smee.io"&gt;GitHub&lt;/a&gt; will help you understand further how it works.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>github</category>
      <category>webhook</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding the Intersection of DevOps: Where Dev and Ops Converge</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 12 May 2024 10:47:44 +0000</pubDate>
      <link>https://dev.to/esthernnolum/understanding-the-intersection-of-devops-where-dev-and-ops-converge-c02</link>
      <guid>https://dev.to/esthernnolum/understanding-the-intersection-of-devops-where-dev-and-ops-converge-c02</guid>
      <description>&lt;p&gt;Within the field of software development and IT operations, the term “DevOps” has gained widespread usage in the current rapidly evolving technological world. However, what precisely is DevOps, and why is it important for contemporary software delivery?&lt;/p&gt;

&lt;p&gt;Transitioning my career trajectory toward DevOps prompted extensive research to grasp its essence and comprehensive scope, to better understand what I was getting into. Simply put, at its core, DevOps is an enhanced cooperative effort between the teams that create software (developers) and the teams that ensure it runs smoothly (IT operations). DevOps brings these teams together, facilitating better communication, utilizing tools that make their jobs easier, and operating in a manner that releases updates and new features to users swiftly and flawlessly. It all comes down to making the software development process faster, more dependable, and more seamless for all parties involved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbmy6yy25im43uo7ea2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbmy6yy25im43uo7ea2g.png" alt="Image description" width="704" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Convergence of Development and Operations&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Collaborative Approach:&lt;/strong&gt; DevOps promotes a collaborative approach, cultivating a culture in which teams from development and operations collaborate throughout the software development lifecycle. DevOps teams work closely together to remove bottlenecks and streamline operations rather than operating alone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Integration and Delivery:&lt;/strong&gt; The concepts of continuous integration (CI) and continuous delivery (CD) are central to DevOps. Code building, testing, and deployment are all automated via CI/CD pipelines, enabling regular and dependable software releases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Infrastructure:&lt;/strong&gt; DevOps emphasizes infrastructure as code (IaC), where infrastructure provisioning and management are automated using code-based configurations. This enables consistent, scalable, and reproducible environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Feedback:&lt;/strong&gt; Monitoring and feedback loops are integral to DevOps. Real-time monitoring of applications and systems aids in early issue detection, allowing swift responses and iterative enhancements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By encouraging a culture of cooperation, automation, and continuous improvement, DevOps essentially signifies a fundamental change in software development and operations. Organizations may achieve faster, more dependable, and customer-focused software delivery and maintain their competitiveness in the ever-changing business landscape of today by breaking down traditional barriers between development and operations.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>operations</category>
    </item>
    <item>
      <title>Essential Linux Commands for DevOps Beginners</title>
      <dc:creator>Esther Nnolum</dc:creator>
      <pubDate>Sun, 12 May 2024 10:43:43 +0000</pubDate>
      <link>https://dev.to/esthernnolum/essential-linux-commands-for-devops-beginners-3gp8</link>
      <guid>https://dev.to/esthernnolum/essential-linux-commands-for-devops-beginners-3gp8</guid>
<description>&lt;p&gt;Linux, a widely used operating system, serves as the backbone for many renowned platforms. Known for its flexibility and efficiency, Linux is extensively employed in the DevOps landscape. Most DevOps tools require a Linux environment, and it is common practice for tools to be developed and tested primarily on Linux systems before being adapted for Windows platforms.&lt;/p&gt;

&lt;p&gt;For DevOps beginners, proficiency in essential Linux commands is vital for tasks such as server management, troubleshooting, DevOps tool setup, task automation, etc. In the following sections, we will delve into fundamental Linux commands that any DevOps enthusiast should grasp.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigating the File System&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;cd&lt;/strong&gt;: Change directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mkdir&lt;/strong&gt;: Create a new directory in the current directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pwd&lt;/strong&gt;: Print the current working directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;touch&lt;/strong&gt;: create a new, empty file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ls&lt;/strong&gt;: List files and directories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cp&lt;/strong&gt;: Copy files or directories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mv&lt;/strong&gt;: Move or rename files or directories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rm&lt;/strong&gt;: Remove files or directories.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir linux-devops
cd /home/ennolum/linux-devops/
touch file1 file2
pwd
ls
cp file1 /home/ennolum/.
mv file2 file3
rm file1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxs9jqnkmuonpjsqgd40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxs9jqnkmuonpjsqgd40.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Managing Processes&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ps&lt;/strong&gt;: Display information about processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;top&lt;/strong&gt;: Monitor system processes in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kill&lt;/strong&gt;: Send signals to a Linux process (SIGKILL/-9 is the signal to forcefully terminate a process).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;nice&lt;/strong&gt;: Manipulate the priority of a process. Processes with higher priority receive more CPU time than those with lower priority.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ps aux
top
kill -9 &amp;lt;process_id&amp;gt;
nice -n &amp;lt;value&amp;gt; &amp;lt;process name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;User and Permissions:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;useradd&lt;/strong&gt;: Create a new user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;passwd&lt;/strong&gt;: Change user password.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;chmod&lt;/strong&gt;: Change file permissions. Note the following&lt;br&gt;
r (read): 4&lt;br&gt;
w (write): 2&lt;br&gt;
x (execute): 1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;chown&lt;/strong&gt;: change file owner&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;useradd &amp;lt;newuser&amp;gt;
passwd &amp;lt;newuser&amp;gt;
chmod 755 file1.sh # 7=4+2+1=rwx, 5=4+1=r-x
chown &amp;lt;newuser&amp;gt;:&amp;lt;newuser&amp;gt; file1.sh # change the owner (and group) of file1.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Text Processing:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;cat&lt;/strong&gt;: Display the content of a file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;grep&lt;/strong&gt;: Search for a pattern in files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sort&lt;/strong&gt;: Sort lines of text files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;uniq&lt;/strong&gt;: Remove duplicate lines from texts in stdin or files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sed&lt;/strong&gt;: Stream editor for text manipulation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;head &amp;amp; tail&lt;/strong&gt;: Display the first few lines and the last few lines of a file respectively
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat file1.txt
grep &amp;lt;pattern&amp;gt; file.log
sort file.txt # use -r to sort in reverse
sort file.txt | uniq
sed 's/old/new/' file.txt #replace &amp;lt;old&amp;gt; with &amp;lt;new&amp;gt; in file.txt
head file.txt ; tail file.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Network Commands:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ping&lt;/strong&gt;: Check network connectivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ifconfig&lt;/strong&gt; or ip: Display network configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;netstat&lt;/strong&gt;: Display network status information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dig&lt;/strong&gt;: Used to query the DNS name server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hostname&lt;/strong&gt;: Used to view and set the hostname of a system
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ping google.com
ifconfig
netstat -an
dig &amp;lt;domainName&amp;gt;
hostname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Package Management:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;apt&lt;/strong&gt; (Debian/Ubuntu) or &lt;strong&gt;yum&lt;/strong&gt; (RHEL/CentOS): Install, update, or remove packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install &amp;lt;package_name&amp;gt;
sudo yum update
sudo yum install &amp;lt;package-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;File Compression and Archiving:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;tar&lt;/strong&gt;: Create, view, or extract tar archives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gzip&lt;/strong&gt; and gunzip: Compress and decompress files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;zip&lt;/strong&gt;: Creates a zip archive from one or more files or directories
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -cvf archive.tar files/ # Combine tar and compression commands
gzip file.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SSH and Remote Access:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ssh&lt;/strong&gt;: Securely connect to a remote server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scp&lt;/strong&gt;: Copy files securely between local and remote systems or between 2 remote systems.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh &amp;lt;user@remote_host&amp;gt;
scp local_file.txt user@remote_host:/path/to/destination/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;System Information:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;uname&lt;/strong&gt;: Display system information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;df and du&lt;/strong&gt;: Show disk space usage (df reports per-filesystem usage; du reports per-directory usage).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;uptime&lt;/strong&gt;: Displays duration the system has been operational and the average system load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;free -m&lt;/strong&gt;: Provides details on the utilization of the system’s memory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uname -a
df -h
du -sh /path/to/directory
uptime
free -m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Shell Scripting Basics:&lt;/strong&gt; Shell scripting plays a vital role in automating processes on Linux systems. By creating a script — a file containing a series of commands — you can execute tasks without repeatedly typing commands. This approach enhances efficiency in performing routine tasks, enabling streamlined daily operations and the possibility of scheduling automated executions.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;echo&lt;/strong&gt;: Display a message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;chmod +x&lt;/strong&gt; script.sh and ./script.sh: Make a script executable and run it respectively.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Hello, DevOps"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
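&lt;p&gt;Putting these together, a tiny (hypothetical) script such as report.sh shows the pattern, combining several commands from this article into one reusable file:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# report.sh - a small example script combining commands from this article
set -euo pipefail

echo "Host:   $(hostname)"   # system hostname
echo "Kernel: $(uname -r)"   # kernel release
df -h / | tail -1            # root filesystem disk usage
```

&lt;p&gt;Save it, make it executable with chmod +x report.sh, and run it with ./report.sh.&lt;/p&gt;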



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Mastering these fundamental Linux commands will provide a solid foundation for DevOps beginners. As you become more comfortable with these commands, you’ll be better equipped to navigate, manage, and automate tasks within a Linux environment.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
