AWS Elastic Kubernetes Service (EKS) is a powerful managed AWS service that lets you run upstream Kubernetes on AWS without having to manage the underlying Kubernetes control plane.
Kubernetes (K8s) was developed by Google, based on its experience of running containers at scale, and was open-sourced in 2014; it is now maintained under the Cloud Native Computing Foundation. It is an open source container orchestration platform which, like other container management platforms, handles scheduling, scaling, distributing load across containers, replacing failed containers, and more.
EKS is natively integrated with AWS services such as IAM, VPC and CloudWatch, and provides a secure and highly scalable way to run applications in a Kubernetes cluster.
However, when it comes to implementation there are several things to take care of: the VPC, subnets, route tables, VPC endpoints, EKS cluster creation, node groups, security groups for both the cluster and the node groups, cluster roles, node group roles, and the list goes on.
But what if I told you that a fully working cluster with nodes can be deployed with a single command?
eksctl create cluster
Yes, welcome to the world of eksctl, a simple CLI tool for creating and managing clusters on Amazon EKS. It was developed by Weaveworks in Go, and it uses CloudFormation templates in the back end for deployment.
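Because eksctl drives everything through CloudFormation, you can inspect the stacks it created for any of its clusters. A minimal sketch (the cluster name `my-cluster` is a placeholder):

```shell
# List the CloudFormation stacks that eksctl created for a cluster.
# "my-cluster" and the region are placeholders; substitute your own.
eksctl utils describe-stacks --region us-west-2 --cluster my-cluster
```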
The simple command above, eksctl create cluster, will create a cluster with the following:
- An auto-generated name, e.g. wonderfull-pizza-1527688624
- Two m5.large worker nodes
- The official AWS EKS AMI
- AWS Region us-west-2
- A dedicated VPC with public and private subnets and route tables
- A managed node group and security groups for the cluster
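These defaults can be overridden with flags. As a sketch (all names below are placeholders), a more customized invocation might look like this; `--dry-run` prints the generated ClusterConfig instead of creating anything:

```shell
# Create a cluster with an explicit name, region and node settings.
# Cluster and node group names are placeholders; remove --dry-run
# to actually provision the cluster.
eksctl create cluster \
  --name demo-cluster \
  --region eu-central-1 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2 --nodes-min 2 --nodes-max 4 \
  --dry-run
```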
The example above is just the tip of the iceberg: the Kubernetes cluster can be customized further using a declarative *cluster.yaml* file. But before diving deep, let's look at the prerequisites for using eksctl.
Install *kubectl*, the command line tool for working with Kubernetes clusters.
The following steps install kubectl for Kubernetes version 1.23.
curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl
Download the SHA-256 checksum file for the binary.
curl -o kubectl.sha256 https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl.sha256
Check the SHA-256 sum of the downloaded binary.
openssl sha1 -sha256 kubectl
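The digest printed by openssl has to match the value inside kubectl.sha256 by eye; alternatively, `sha256sum -c` automates the comparison. A self-contained sketch of how it behaves, using a placeholder file instead of the real binary (with the real download you would simply run `sha256sum -c kubectl.sha256`):

```shell
# sha256sum -c reads "<hash>  <file>" lines from the checksum file
# and prints "<file>: OK" for every file whose digest matches.
echo "kubectl placeholder" > kubectl    # stand-in for the real binary
sha256sum kubectl > kubectl.sha256     # stand-in for the AWS checksum file
sha256sum -c kubectl.sha256            # prints: kubectl: OK
```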
Apply execute permissions to the binary.
chmod +x ./kubectl
Copy the binary to a folder in your PATH.
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
Add the $HOME/bin path to your shell initialization file.
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
Verify kubectl version with the following command:
kubectl version --short --client
The output should look like this:
[root@ip-172-31-27-189 ~]# kubectl version --short --client
Client Version: v1.23.7-eks-4721010
Install **eksctl** on a Linux server.
Download and extract the latest release of eksctl with the following command.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
Move the extracted binary to /usr/local/bin.
sudo mv /tmp/eksctl /usr/local/bin
Test the installation.
eksctl version
The output should look like this:
[ec2-user@ip-172-31-27-189 ~]$ eksctl version
0.112.0
Now that we have kubectl and eksctl installed, let's review the cluster.yaml file, which when executed will create:
- An IAM role for the control plane.
- An EKS cluster in the desired VPC/subnets.
- A managed node group of EC2 instances.
- A launch template and Auto Scaling group with the following parameters:
  - Instance size = c5.large
  - Capacity type = spot
  - Min size = 2
  - Max size = 4
  - EBS volume size = 50 GB
  - EBS volume type = gp2
  - EBS volume encryption enabled
- The corresponding kubeconfig file and aws-auth ConfigMap.
This cluster.yaml file can be deployed using the following simple command:
eksctl create cluster -f cluster.yaml
# cluster.yaml
# An example of a ClusterConfig object using an existing VPC
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: kbc-be-kh-sit-eks-cluster-01
  region: eu-central-1

privateCluster:
  enabled: true
  skipEndpointCreation: true

vpc:
  id: "vpc-7435khjhdtrff472d"
  cidr: "172.16.24.0/21"
  extraCIDRs: ["172.16.34.0/23", "172.16.36.0/24"]
  subnets:
    private:
      eu-central-1a:
        id: "subnet-78werdfjyu7874gd7"
        cidr: "172.16.24.0/24"
      eu-central-1b:
        id: "subnet-bdhghghdhfdfhuduf"
        cidr: "172.16.25.0/24"

managedNodeGroups:
  - name: kbc-be-kh-sit-eks-managed-nodegrp-spot-01
    instanceTypes: ["c5.large", "c5.large"]
    spot: true
    privateNetworking: true
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
    volumeSize: 50
    volumeType: gp2
    volumeEncrypted: true
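eksctl writes a kubeconfig entry for the new cluster automatically. If you ever need to regenerate it, for example from another machine, something like the following should work (the cluster name and region below match the example config above):

```shell
# Regenerate the kubeconfig entry for the example cluster.
aws eks update-kubeconfig --region eu-central-1 --name kbc-be-kh-sit-eks-cluster-01

# Or let eksctl write it instead:
eksctl utils write-kubeconfig --region eu-central-1 --cluster kbc-be-kh-sit-eks-cluster-01
```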
Verify the cluster
[ec2-user@ip-172-31-27-189 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   12m
Verify Nodes
[ec2-user@ip-172-31-27-189 ~]$ kubectl get nodes
NAME                                             STATUS   ROLES    AGE    VERSION
ip-192-168-15-29.eu-central-1.compute.internal   Ready    <none>   6m4s   v1.22.12-eks-ba74326
ip-192-168-37-52.eu-central-1.compute.internal   Ready    <none>   6m     v1.22.12-eks-ba74326
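Day-2 operations go through the same CLI. For instance, the managed node group can be scaled within its min/max bounds; as a sketch, using the cluster and node group names from the example config:

```shell
# Scale the managed node group from the example cluster.yaml to 3 nodes.
# The value must stay within the group's minSize/maxSize (2-4 here).
eksctl scale nodegroup \
  --cluster kbc-be-kh-sit-eks-cluster-01 \
  --region eu-central-1 \
  --name kbc-be-kh-sit-eks-managed-nodegrp-spot-01 \
  --nodes 3
```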
Likewise, the whole setup can be terminated with the following simple command:
eksctl delete cluster -f cluster.yaml
Well, after experiencing this first-hand, I can say this is just a small teaser of an exceedingly capable tool with exciting possibilities.
Reference: Amazon Web Services documentation