
Create EKS in less than 2 minutes using Terraform

Kubernetes (k8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications. A Kubernetes cluster can run on any public cloud (AWS, GCP, Azure) or on-premises. Public cloud providers offer Kubernetes as a managed service, such as Elastic Kubernetes Service (EKS) on AWS and Kubernetes Engine on GCP. With a managed service, the provider runs the control plane components on your behalf; you manage only the worker node groups and your application deployments. On-premises, you can install the Kubernetes control plane with configuration management tools like Ansible, Chef, or Puppet, or with tools like kubeadm. Monitoring and managing control plane components such as etcd, the controller manager, the scheduler, and the API server becomes a real pain when you have a large cluster of microservices to maintain.

Well, that’s the reason this article focuses on creating an Elastic Kubernetes Service (EKS) cluster on AWS using Terraform.

I am using Terraform v0.11.14 to create a Kubernetes 1.14 EKS cluster. The reason for using this slightly older EKS version is that in an upcoming blog post I plan to show how to upgrade the cluster to a newer version and migrate applications without any hassle. There are various Terraform modules for EKS out in the market, or you can write your own; this article focuses on creating an EKS cluster in a matter of minutes by running three simple Terraform commands. In this tutorial, we use the official Terraform AWS EKS module; here is the GitHub link: terraform-aws-modules/terraform-aws-eks. Before we fire up the module, we need to set values for the local Terraform variables.

provider "aws" {
  version = "2.6.0"
  region  = "${local.aws_region}"
}

locals {
  # pick the instance type of your choice
  worker_instance_type       = "m5.xlarge"
  # create ssh-key pair in aws in specific region
  ssh_key_pair               = "xxxxxxxxxxxxxxxxx"
  # create the vpc & get the vpc id or get the default vpc id
  vpc_id                     = "vpc-xxxxxxxxxxxxxxxx"
  # specify the region where the cluster should reside
  aws_region                 = "xxxxxxxxx"
  # Name the eks cluster of your choice
  cluster_name               = "xxxxxxxxxxxxxxx"
  # Name the worker group name of your choice
  instance_worker_group_name = "xxxxxxxxxxxxxx"
  # Get the username from aws IAM
  username_iam               = "xxxxxxxxxxxxxxxxx"
  # Get aws account id and username from aws IAM - fill it up below
  user_arn                   = "arn:aws:iam::<aws account id>:user/<username in aws IAM >"
}

# Cluster will be placed in these subnets:
variable "eks_subnet" {
  # list the CIDR blocks of the subnets for the cluster
  default = [
    "xxxxxxxxxxxxx",
    "xxxxxxxxxxxxx",
  ]
}

data "aws_subnet_ids" "eks_subnet" {
  vpc_id = "${local.vpc_id}"
  filter {
    name   = "cidr-block"
    values = "${var.eks_subnet}"
  }
}

module "eks_cluster" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "4.0.2"

  cluster_name    = "${local.cluster_name}"
  cluster_version = "1.14"

  vpc_id          = "${local.vpc_id}"
  subnets         = "${data.aws_subnet_ids.eks_subnet.ids}"

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  worker_additional_security_group_ids = []

  worker_ami_name_filter    = "v20190927"

  worker_group_count        = "1"
  worker_groups = [
    {
      name                  = "${local.instance_worker_group_name}"
      instance_type         = "${local.worker_instance_type}"
      asg_desired_capacity  = "2"
      asg_min_size          = "2"
      asg_max_size          = "3"
      key_name              = "${local.ssh_key_pair}"
      autoscaling_enabled   = true
      protect_from_scale_in = true
    },
  ]

  map_users_count = 1
  map_users = [
    # ADMINS
    {
      user_arn = "${local.user_arn}"
      username = "${local.username_iam}"
      group    = "system:masters"
    },
  ]
}
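If you want Terraform to print the cluster endpoint and a ready-made kubeconfig after apply, you can wire outputs to the module. This is a sketch assuming the 4.x series of the terraform-aws-modules EKS module, which exposes `cluster_endpoint` and `kubeconfig` outputs:

```hcl
# Optional: surface useful values after `terraform apply`
output "cluster_endpoint" {
  value = "${module.eks_cluster.cluster_endpoint}"
}

output "kubeconfig" {
  value = "${module.eks_cluster.kubeconfig}"
}
```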

Note: if you run this code in production, place the cluster in private subnets.

Prerequisites:
1. AWS CLI
2. Terraform v0.11.14
3. kubectl
4. VPC, IAM username & SSH key pair name

1. Copy the code to your local machine and save it in a file with a .tf extension
2. Install Terraform and check that the version is v0.11.14
3. Make sure the machine where you run this code is authenticated to AWS with an access key ID and secret access key
4. Initialize Terraform by running terraform init
5. Plan the changes with terraform plan
6. Apply the changes by running terraform apply once you have verified the output of terraform plan
7. Your EKS cluster will be ready in 15 minutes (max)
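Steps 4–6 above boil down to three commands. Saving the plan to a file (the optional -out flag, with a file name chosen here for illustration) guarantees that apply executes exactly what you reviewed:

```shell
# Download the AWS provider and the EKS module
terraform init

# Preview the resources Terraform will create; save the plan for apply
terraform plan -out=eks.tfplan

# Create the cluster from the reviewed plan (takes up to ~15 minutes)
terraform apply eks.tfplan
```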

After running the Terraform code, go to the AWS console and check the status of the EKS cluster. Once the cluster is ready, we can access it from the command line; for that, we need the AWS CLI and kubectl installed. The command below brings the kubeconfig to your local machine.

aws eks --region <region> update-kubeconfig --name <cluster name> --alias <alias cluster name>
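As a concrete sketch (the region, cluster name, and alias below are hypothetical placeholders), updating the kubeconfig and pointing kubectl at the new context might look like:

```shell
# Merge the new cluster's credentials into ~/.kube/config under an alias
aws eks --region us-east-1 update-kubeconfig --name my-eks-cluster --alias my-eks

# Use the alias to target the cluster explicitly
kubectl --context my-eks get nodes
```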


kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
ip-xx-xx-xx-xx.ec2.internal   Ready    <none>   1h    v1.14.7-eks-1861c5
ip-xx-xx-xx-xx.ec2.internal   Ready    <none>   1h    v1.14.7-eks-1861c5
kubectl get pods --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-wedf4            1/1     Running   0          1h
kube-system   aws-node-fghj3            1/1     Running   0          1h
kube-system   coredns-2345678d-cewmee   1/1     Running   0          1h
kube-system   coredns-2345678d-wlp6jt   1/1     Running   0          1h
kube-system   kube-proxy-wlp6j          1/1     Running   0          1h
kube-system   kube-proxy-qcbnv          1/1     Running   0          1h

I hope this blog helps. See you in the next one!
