Deepak Poudel
Automation of Images and Orchestration in AWS EKS Part 1

In this blog, we will be discussing five AWS services: AWS EKS (Elastic Kubernetes Service), AWS ECR (Elastic Container Registry), AWS Cloud9 IDE, AWS CodeCommit, and finally AWS CodePipeline. We'll explore the features and benefits of each service, and how they can be combined to build and deploy containerized applications in the cloud. Specifically, we'll look at how EKS provides a managed Kubernetes environment, ECR provides a registry for storing and managing Docker images, Cloud9 provides a cloud-based IDE for development, CodeCommit provides a managed Git repository for version control, and CodePipeline ties these together to automate releases. By the end of this blog, you'll have a good understanding of how to use these services to build and deploy containerized applications on AWS.

Thanks to Visual Paradigm for the diagramming tool.
Architecture

Before getting started, you will need an AWS account, a domain name, and a basic understanding of AWS services, including ECR, EKS, CodeCommit, and CodePipeline.

Create a CodeCommit repository

CodeCommit is a fully managed source control service that makes it easy to host private Git repositories. You can use CodeCommit to store your website's source code and manage version control.

To create a CodeCommit repository, follow these steps:

Navigate to the CodeCommit console in the AWS Management Console.
Click "Create repository."
Give your repository a name and click "Create."

(Screenshot: CodeCommit repository "my-cc-repo")

After creating the repo, I pushed a project containing a Dockerfile to the remote. The buildspec.yml contains the steps to build the Docker image, tag it, push it to ECR, and then trigger an update of the EKS deployment.
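As a rough sketch, a buildspec.yml for this flow might look like the following. Note that this is an illustration, not the exact file from this project: `AWS_ACCOUNT_ID`, `REPOSITORY_URI`, the cluster name `my-cluster`, and the deployment/container name `my-eks` are assumed placeholder values, and it assumes kubectl is available in the build image.

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to the ECR registry (AWS_ACCOUNT_ID and AWS_DEFAULT_REGION assumed set as env vars)
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      # Build and tag the image with both "latest" and the commit ID CodeBuild resolved
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
      # Point kubectl at the cluster and roll the deployment to the new image
      - aws eks update-kubeconfig --region $AWS_DEFAULT_REGION --name my-cluster
      - kubectl set image deployment/my-eks my-eks=$REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
```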

Create an ECR Repo

ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. You can use ECR to store your website's Docker container images and manage image versions.

To push an image to ECR, you need a project with a valid Dockerfile:

FROM httpd:2.4
COPY ./index.html /usr/local/apache2/htdocs/
EXPOSE 80

Now build the image:

docker build -t ecr-address/imagename:tag .

Then test it locally:

docker run -d --name demo-sv -p 9000:80 ecr-address/imagename:tag

To create an ECR repository, follow these steps:

Navigate to the ECR console in the AWS Management Console.
Click "Create repository."
Give your repository a name and click "Create repository."

(Screenshot: ECR repository)

After the ECR repo is created, open the repository and view the push commands.
Generally, the steps are: log in, then build, tag, and push the image from the Dockerfile in the project.

aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag e9ae3c220b23 aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
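The full image reference used above is just the registry address, repository name, and tag joined together. This small sketch composes it from its parts; all values shown are placeholders, substitute your own account ID, region, and repository:

```shell
# Compose the full ECR image reference from its parts.
# All values below are placeholders.
ACCOUNT_ID=123456789012
REGION=us-west-2
REPO=my-repository
TAG=latest
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
echo "$IMAGE_URI"
# → 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-repository:latest
```

Keeping the URI in a variable like this makes the tag and push commands less error-prone than retyping the address each time.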

Create a Cloud9 environment

Cloud9 is an integrated development environment (IDE) that runs in the cloud. It provides a fully featured Linux environment with a web-based IDE and the ability to run commands in a terminal. You can use Cloud9 to write and test your code, as well as interact with your AWS resources. Feel free to skip Cloud9 and use your local shell; I personally prefer using SSH to set up Git.

To create a Cloud9 environment, follow these steps:

Navigate to the Cloud9 console in the AWS Management Console.
Click "Create environment."
Give your environment a name and select the settings you want.
Choose an instance type that suits your needs. For this tutorial, we recommend using the t2.micro instance type.
Click "Create environment."
(Screenshot: Cloud9 environment "my-cc-env")

For this demonstration I will be using my local Git Bash. Make sure kubectl is installed in either your Cloud9 environment or your local shell.
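A quick way to check that everything this walkthrough relies on is available in your shell:

```shell
# Check that the tools used in this walkthrough are on the PATH.
for tool in git docker kubectl aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported MISSING should be installed before continuing.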

Create an EKS Cluster

Cluster Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:UpdateAutoScalingGroup",
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateRoute",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteRoute",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteVolume",
                "ec2:DescribeInstances",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVolumes",
                "ec2:DescribeVolumesModifications",
                "ec2:DescribeVpcs",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeAvailabilityZones",
                "ec2:DetachVolume",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifyVolume",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAddresses",
                "ec2:DescribeInternetGateways",
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                "elasticloadbalancing:AttachLoadBalancerToSubnets",
                "elasticloadbalancing:ConfigureHealthCheck",
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:CreateLoadBalancerListeners",
                "elasticloadbalancing:CreateLoadBalancerPolicy",
                "elasticloadbalancing:CreateTargetGroup",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:DeleteLoadBalancer",
                "elasticloadbalancing:DeleteLoadBalancerListeners",
                "elasticloadbalancing:DeleteTargetGroup",
                "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                "elasticloadbalancing:DeregisterTargets",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeLoadBalancerPolicies",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:ModifyTargetGroupAttributes",
                "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
                }
            }
        }
    ]
}

NodeGroup Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SharedSecurityGroupRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:RevokeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:DeleteSecurityGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks": "*"
                }
            }
        },
        {
            "Sid": "EKSCreatedSecurityGroupRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:RevokeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:DeleteSecurityGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks:nodegroup-name": "*"
                }
            }
        },
        {
            "Sid": "LaunchTemplateRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateLaunchTemplateVersion"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks:nodegroup-name": "*"
                }
            }
        },
        {
            "Sid": "AutoscalingRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:PutLifecycleHook",
                "autoscaling:PutNotificationConfiguration",
                "autoscaling:EnableMetricsCollection"
            ],
            "Resource": "arn:aws:autoscaling:*:*:*:autoScalingGroupName/eks-*"
        },
        {
            "Sid": "AllowAutoscalingToCreateSLR",
            "Effect": "Allow",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "autoscaling.amazonaws.com"
                }
            },
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*"
        },
        {
            "Sid": "AllowASGCreationByEKS",
            "Effect": "Allow",
            "Action": [
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:CreateAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:TagKeys": [
                        "eks",
                        "eks:cluster-name",
                        "eks:nodegroup-name"
                    ]
                }
            }
        },
        {
            "Sid": "AllowPassRoleToAutoscaling",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "autoscaling.amazonaws.com"
                }
            }
        },
        {
            "Sid": "AllowPassRoleToEC2",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "iam:PassedToService": [
                        "ec2.amazonaws.com",
                        "ec2.amazonaws.com.cn"
                    ]
                }
            }
        },
        {
            "Sid": "PermissionsToManageResourcesForNodegroups",
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "ec2:CreateLaunchTemplate",
                "ec2:DescribeInstances",
                "iam:GetInstanceProfile",
                "ec2:DescribeLaunchTemplates",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:CreateSecurityGroup",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:RunInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:GetConsoleOutput",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSubnets"
            ],
            "Resource": "*"
        },
        {
            "Sid": "PermissionsToCreateAndManageInstanceProfiles",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:AddRoleToInstanceProfile"
            ],
            "Resource": "arn:aws:iam::*:instance-profile/eks-*"
        },
        {
            "Sid": "PermissionsToManageEKSAndKubernetesTags",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLike": {
                    "aws:TagKeys": [
                        "eks",
                        "eks:cluster-name",
                        "eks:nodegroup-name",
                        "kubernetes.io/cluster/*"
                    ]
                }
            }
        }
    ]
}

Use these policies in the IAM roles for the cluster and the node group. If you are on Cloud9, make sure it can access the resources.

Now configure the AWS CLI:

aws configure

WARNING! Don't use an IAM role to create the cluster: you will be able to create it, but you won't be able to issue kubectl commands against it later. Use the same IAM user to both create and manage the cluster.

EKS is a fully managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications. You can use EKS to deploy your website's containerized application and manage its infrastructure.

To create an EKS cluster, follow these steps:

Navigate to the EKS console in the AWS Management Console.
Click "Create cluster."
Choose a Kubernetes version and give your cluster a name.
Choose the VPC and subnet where you want to deploy your cluster.
Choose the security group for your cluster and click "Create."

(Screenshots: EKS cluster, node group, and node group details)

I have Minikube set up, and this is the status of my local machine:

(Screenshot: local nodes and pods)

Config

I have to create a new kubeconfig because kubectl is currently pointed at my local Minikube cluster.

(Screenshots: sts command output and the current kubeconfig)

I backed up my old config file and generated a new config inside .kube:

aws eks update-kubeconfig --region region-code --name my-cluster

(Screenshot: kubeconfig backup)

Now let's add a node group and assign the role to it. You can also configure auto scaling on the node group. I've used Spot instances.

(Screenshot: kubectl nodes and pods against the remote cluster)

Now you can see the two instances as nodes, and no pods in the default namespace.

Let's run the image from ECR.

Either deploy from a YAML manifest or let kubectl generate one for you. Here's the YAML that was generated for me previously:

apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2023-02-02T05:55:05Z"
    generation: 4
    labels:
      app: my-eks
    name: my-eks
    namespace: default
    resourceVersion: "41352"
    uid: 90024fca-9212-40c5-9546-d9b85dfdd98f
  spec:
    progressDeadlineSeconds: 600
    replicas: 2
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: my-eks
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: my-eks
      spec:
        containers:
        - image: ecr-address/imagename:tag
          imagePullPolicy: Always
          name: my-eks
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        priorityClassName: system-node-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 2
    conditions:
    - lastTransitionTime: "2023-02-02T05:55:05Z"
      lastUpdateTime: "2023-02-02T05:55:45Z"
      message: ReplicaSet "my-eks-c86b768" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    - lastTransitionTime: "2023-02-02T08:39:56Z"
      lastUpdateTime: "2023-02-02T08:39:56Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    observedGeneration: 4
    readyReplicas: 2
    replicas: 2
    updatedReplicas: 2
kind: List
metadata:
  resourceVersion: ""

Ready to create a deployment? Let's do it from the YAML:

kubectl create -f my_deployment_file.yaml

or

kubectl create deployment my-eks --image=ecr-address/image:tag

Now expose the deployment:

kubectl expose deployment my-eks --type=LoadBalancer --port=80 --target-port=80

With type=LoadBalancer we can leverage the cloud provider's load balancer, which is Elastic Load Balancing in the case of AWS.
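The expose command above is roughly equivalent to applying a Service manifest like this sketch (the name and selector assume the my-eks deployment created earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-eks
spec:
  type: LoadBalancer   # ask the cloud provider for an external load balancer
  selector:
    app: my-eks        # route traffic to pods with this label
  ports:
  - port: 80           # port the load balancer listens on
    targetPort: 80     # port the container serves on
    protocol: TCP
```

Applying this with kubectl apply -f gives the same result as kubectl expose, but keeps the service definition under version control in your repo.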

Right now it's running just one replica; let's scale the application to run four replicas.

kubectl scale --replicas=4 deployment my-eks

(Screenshot: external URL of the service)

The service exposes the deployment at an external, public-facing URL, and requests to it are load balanced across your pods.

kubectl get services

Finally, Create a Pipeline

Create a CodePipeline pipeline

CodePipeline is a fully managed continuous delivery service that makes it easy to automate your release process. You can use CodePipeline to create a pipeline that automatically builds, tests, and deploys your website.

(Screenshot: AWS CodePipeline)

Since this post has gotten long, I'm splitting the content into two blogs. The automation will be covered in the next one, which continues from this point. Stay tuned.
