In this article I am going to show you how to create a cluster in Amazon EKS and install kubectl using Terraform.
Please visit my previous article Create a cluster in Amazon EKS and install kubectl
Please visit my GitHub Repository for EKS articles on various topics, updated on a regular basis.
Let’s get started!
Objectives:
1. Sign in to AWS Management Console
2. Create your Amazon EKS cluster role
3. Create the organizational structure on Cloud9 environment
4. Under the EKS-files directory: create 4 files - variables.tf, terraform.tfvars, main.tf, outputs.tf
5. Initialize, Plan and Apply Terraform
6. Validate all resources created in AWS Console
7. Create an Environment in CloudShell
8. Install kubectl
9. Configure your AWS CloudShell to communicate with your cluster
Pre-requisites:
- AWS user account with admin access, not a root account.
- Cloud9 IDE with AWS CLI.
- Create an IAM role for EKS
Resources Used:
Terraform documentation
What is Amazon EKS?
Steps to implement this project:
1. Sign in to AWS Management Console
- Make sure you're in the N. Virginia (us-east-1) region
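A quick way to confirm the active region from the Cloud9 terminal (assuming the AWS CLI is already configured):

aws configure get region   # should print us-east-1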
2. Create your Amazon EKS cluster role
1. Open the IAM console and choose Roles, then Create role.
2. For Trusted entity type, select AWS service.
3. For Use case, choose EKS, then EKS - Cluster.
- Next
4. The AmazonEKSClusterPolicy permissions policy is attached for this use case.
- Next
5. For Role name, enter R-EKSRole.
6. Review the details.
- Create role
7. Open the role you just created.
- Note down the ARN of the role R-EKSRole; you will need it in main.tf.
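If you prefer the CLI over the console, here is a minimal sketch of the same role setup, run from the Cloud9 terminal. It assumes the role name R-EKSRole used in this article; the trust policy is the standard one that lets eks.amazonaws.com assume the role.

# Create the cluster role with the standard EKS trust policy
aws iam create-role \
  --role-name R-EKSRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the managed policy the cluster needs
aws iam attach-role-policy \
  --role-name R-EKSRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

# Print the ARN to note down
aws iam get-role --role-name R-EKSRole --query 'Role.Arn' --output text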
3. Let’s create the following organizational structure in the Cloud9 environment: an EKS-files directory that holds the four Terraform files, as shown below.
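A minimal way to create that structure from the Cloud9 terminal:

mkdir -p EKS-files
cd EKS-files
touch variables.tf terraform.tfvars main.tf outputs.tf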
4. Under the EKS-files directory, create 4 files - variables.tf, terraform.tfvars, main.tf, outputs.tf:
- 1. variables.tf - declares all the global variables, each with a short description.
variable "access_key" {
description = "Access key to AWS console"
}
variable "secret_key" {
description = "Secret key to AWS console"
}
variable "region" {
description = "AWS region"
}
- 2. terraform.tfvars - replace the values of access_key and secret_key with your AWS Access Key ID and Secret Access Key. Since this file holds credentials, keep it out of version control.
region = "us-east-1"
access_key = "<YOUR AWS CONSOLE ACCESS ID>"
secret_key = "<YOUR AWS CONSOLE SECRET KEY>"
- 3. main.tf - creates the EKS cluster. Replace <YOUR_IAM_ROLE_ARN> with the ARN of R-EKSRole you noted earlier, and replace the two subnet IDs with the IDs of your subnets in us-east-1a and us-east-1b (a lookup sketch follows the code).
provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

##### Creating an EKS Cluster #####
resource "aws_eks_cluster" "cluster" {
  name     = "rev"
  role_arn = "<YOUR_IAM_ROLE_ARN>"

  vpc_config {
    subnet_ids = ["<SUBNET-ID 1>", "<SUBNET-ID 2>"]
  }
}
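If you are not sure of your subnet IDs, here is a quick lookup sketch with the AWS CLI (narrow the filters to your VPC if you have more than one):

aws ec2 describe-subnets \
  --filters "Name=availability-zone,Values=us-east-1a,us-east-1b" \
  --query "Subnets[].{AZ:AvailabilityZone,ID:SubnetId}" \
  --output table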
- 4. outputs.tf - displays the EKS cluster endpoint as an output.
output "cluster" {
value = aws_eks_cluster.cluster.endpoint
}
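After terraform apply completes, you can re-print this endpoint at any time from the EKS-files directory:

terraform output cluster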
5. Initialize, Plan and Apply Terraform
- terraform init - checks all the plugin dependencies and downloads them if required; this sets up the working directory for creating a deployment plan.
- terraform plan - generates the action plan so you can review what will be created.
- terraform apply - creates all the resources declared in the main.tf configuration file.
Run them from the EKS-files directory, as shown below, and wait 4-5 minutes for all the resources to be created.
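The full sequence from the Cloud9 terminal:

cd EKS-files
terraform init    # download provider plugins
terraform plan    # review the planned actions
terraform apply   # type "yes" to confirm; takes about 4-5 minutes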
6. Validate all resources created in AWS Console
7. Create an Environment in CloudShell
- Open AWS CloudShell from the AWS Management Console (the terminal icon in the top navigation bar) and wait for the environment to start.
8. Install kubectl
1. Download the Amazon EKS vended kubectl binary for your cluster's Kubernetes version from Amazon S3.
2. Apply execute permissions to the binary.
3. Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, create $HOME/bin/kubectl and ensure that $HOME/bin comes first in your $PATH.
4. After you install kubectl, verify its version.
The commands below are numbered to match these steps:
### 1 ###
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
### 2 ###
chmod +x ./kubectl
### 3 ###
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
### 4 ###
kubectl version --short --client
9. Configure your AWS CloudShell to communicate with your cluster
aws eks update-kubeconfig --region us-east-1 --name <EKS Cluster Name>
kubectl get svc
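If CloudShell can reach the cluster, kubectl get svc lists the default kubernetes service; the output should look roughly like this (your ClusterIP and AGE will differ):

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   3m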
Cleanup
- Run terraform destroy from the EKS-files directory and confirm; this deletes the EKS cluster and the other resources Terraform created.
- Delete the R-EKSRole IAM role.
- Delete the Cloud9 environment.
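The same cleanup as a terminal sketch (the policy must be detached before the role can be deleted):

cd EKS-files
terraform destroy   # type "yes" to confirm

aws iam detach-role-policy \
  --role-name R-EKSRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam delete-role --role-name R-EKSRole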
What we have done so far
Using Terraform, we have successfully created and launched an Amazon EKS cluster, installed kubectl in AWS CloudShell, and configured CloudShell to communicate with the EKS cluster.