munikeraragon for Citrux Digital • Originally published at citruxdigital.com

Designing the Ultimate AWS EKS Cluster: A Terraform Blueprint for Success

Imagine a world where deploying and managing Kubernetes clusters in the cloud isn't a complex task but a seamless and efficient experience. This is where AWS Elastic Kubernetes Service (EKS) comes into play, simplifying the lives of developers and cloud architects. By leveraging the powerful combination of EKS and Terraform, you not only automate infrastructure deployment but also ensure consistency and scalability with just a few commands.

This article will guide you step by step through deploying a Kubernetes cluster on AWS using Terraform. We'll use an example product and order API to show you the results!

We will cover:

  1. What is AWS EKS?
  2. A guide to deploying a cluster using Terraform

What is AWS EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that simplifies running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. Some key features that make AWS EKS great are:

  • Secure Networking and Authentication: Integrates with AWS networking and security services, including AWS Identity and Access Management (IAM).
  • Easy Cluster Scaling: Supports horizontal pod autoscaling and cluster autoscaling based on workload demand.
  • Managed Kubernetes Experience: Allows you to manage clusters using eksctl, AWS Management Console, AWS CLI, API, kubectl, and Terraform.
  • High Availability: Provides high availability for your control plane across multiple Availability Zones.
  • Integration with AWS Services: Seamlessly integrates with other AWS services for a comprehensive platform to deploy and manage containerized applications.

A guide to deploying a cluster using Terraform

Using Terraform with AWS EKS simplifies the provisioning, configuration, and management of Kubernetes clusters. These are the steps to deploy a cluster using Terraform:

AWS credentials
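
Before anything else, make sure Terraform can authenticate against your AWS account. A minimal sketch, assuming you use the AWS CLI with an IAM identity allowed to create VPC, EKS, and IAM resources (the placeholders are yours to fill in):

# Option 1: interactive configuration (stores credentials under ~/.aws)
aws configure

# Option 2: environment variables for the current shell session
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-west-2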

  • Clone the repository: you can clone the Citrux-Systems/aws-eks-terraform-demo repository; it contains the setup for the EKS cluster and an example app implementation.
  • Configure the setup: once you have cloned the repository, you can modify the region, cluster name, and namespace to deploy to in the variables.tf file. For our demo we used "us-west-2", "citrux-demo-eks-${random_string.suffix.result}", and "ecommerce" (the full file is shown in the variables.tf walkthrough below).

Terraform files:

variables.tf : In this file you’ll find the configuration of all the variables you need. Here you can customize the cluster name, region, EC2 instance type for the nodes (more information here), tags, and the Kubernetes namespace (change it to the namespace your manifest files expect when deploying the app).

variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

# Fetches the list of available Availability Zones (AZs) in the specified AWS region
data "aws_availability_zones" "available" {}

locals {
  name            = "citrux-demo-eks-${random_string.suffix.result}"
  region          = var.region
  cluster_version = "1.30"
  instance_types  = ["t3.medium"] # can be multiple, comma separated

  vpc_cidr = "10.0.0.0/16" # CIDR block for the VPC where the EKS cluster will be deployed
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint  = local.name
    GitHubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
  namespace = "ecommerce" # Kubernetes namespace where resources will be deployed
}
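
If you'd rather not edit the defaults in place, Terraform also reads a terraform.tfvars file at plan time. A tiny sketch (hypothetical; region is the only input variable this demo declares, the rest are locals):

# terraform.tfvars (hypothetical override)
region = "us-east-1"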

providers.tf : In this file you’ll find the provider configuration that allows Terraform to interact with AWS in the region you previously defined.

provider "aws" {
  region = local.region
}

terraform.tf : In this file you’ll find the Terraform configuration listing all the required providers and the versions to install for managing the resources.

terraform {

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.47.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.1"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.20"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.9"
    }
  }
}
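
Optionally, you can pin the Terraform CLI version in the same block so everyone runs a compatible binary; a minimal sketch (this is an addition on top of the demo, and the minimum version is an assumption):

terraform {
  # Assumed minimum CLI version; adjust to match your environment
  required_version = ">= 1.3.0"

  required_providers {
    # ... provider constraints as shown above
  }
}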

main.tf : In this file you’ll find the resource definitions and configuration. We are using the module pattern; in our case we have two modules: VPC and EKS. In main.tf you need to wire up the modules and pass values for the variables each module expects.

module "vpc" {
  source   = "./modules/vpc"
  name     = local.name
  vpc_cidr = local.vpc_cidr
  azs      = local.azs
  tags     = local.tags
}

module "eks" {
  source          = "./modules/eks"
  region          = var.region
  name            = local.name
  cluster_version = local.cluster_version
  instance_types  = local.instance_types
  vpc_id          = module.vpc.vpc_id
  private_subnets = module.vpc.private_subnets
  tags            = local.tags
  namespace       = local.namespace
}
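
Once apply finishes, it can be handy to surface a few values at the root level. A sketch of an optional outputs.tf, assuming the modules/eks wrapper re-exports cluster_name and cluster_endpoint from the underlying EKS module (check its outputs before copying this):

output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name # assumes the wrapper module exposes this output
}

output "cluster_endpoint" {
  description = "EKS API server endpoint"
  value       = module.eks.cluster_endpoint # assumes the wrapper module exposes this output
}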

VPC Module:

variable.tf : You’ll find the variable definitions for the VPC module.

variable "name" {
  description = "Name for the VPC"
  type        = string
}

# The CIDR block (IP address range) for the VPC
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

# The Availability Zones (AZs) in which the VPC resources will be distributed
variable "azs" {
  description = "Availability zones"
  type        = list(string)
}

# Tags (key-value pairs) applied to the resources created within the VPC
variable "tags" {
  description = "Tags to apply to resources"
  type        = map(string)
}

main.tf : You’ll find a Terraform configuration that creates a Virtual Private Cloud (VPC) using the community VPC module, with public and private subnets, a NAT Gateway, and the subnet tags required for running a Kubernetes cluster on Amazon EKS.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = var.name
  cidr = var.vpc_cidr

  azs             = var.azs # Availability Zones (AZs) to be used for the VPC
  public_subnets  = [for k, v in var.azs : cidrsubnet(var.vpc_cidr, 8, k)]
  private_subnets = [for k, v in var.azs : cidrsubnet(var.vpc_cidr, 8, k + 10)]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # Manage the default resources so we can name them
  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${var.name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${var.name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${var.name}-default" }

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.name}" = "shared"
    "kubernetes.io/role/elb"            = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.name}" = "shared"
    "kubernetes.io/role/internal-elb"   = 1
  }

  tags = var.tags
}

output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnets" {
  value = module.vpc.private_subnets
}
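
The public_subnets and private_subnets expressions above carve one /24 per availability zone out of the /16: cidrsubnet(prefix, newbits, netnum) extends the prefix by newbits bits and selects the netnum-th resulting subnet, with k being the index of each AZ. You can verify the layout with terraform console:

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 0)
"10.0.0.0/24"
> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
> cidrsubnet("10.0.0.0/16", 8, 10)
"10.0.10.0/24"

So the public subnets occupy 10.0.0.0/24 through 10.0.2.0/24, and the private subnets 10.0.10.0/24 through 10.0.12.0/24.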

EKS Module:

variables.tf : You’ll find the variable definitions for the EKS module.

variable "region" {
  description = "AWS region"
  type        = string
}

variable "name" {
  description = "Name for the EKS cluster"
  type        = string
}

variable "cluster_version" {
  description = "EKS cluster version"
  type        = string
}

# EC2 instance types used for the worker nodes in the EKS cluster
variable "instance_types" {
  description = "EC2 instance types"
  type        = list(string)
}

# Tags applied to the resources created within the EKS cluster
variable "tags" {
  description = "Tags to apply to resources"
  type        = map(string)
}

variable "vpc_id" {
  description = "VPC ID"
  type        = string
}

variable "private_subnets" {
  description = "Private subnet IDs"
  type        = list(string)
}

variable "namespace" {
  description = "Kubernetes namespace"
  type        = string
}

providers.tf : You’ll find the configuration for the Kubernetes provider, which allows Terraform to interact with the Kubernetes cluster, and the Helm provider, which allows Terraform to manage Helm releases (packages of pre-configured Kubernetes resources) in that cluster.

# Kubernetes provider
# You should **not** schedule deployments and services in this workspace.
# This keeps workspaces modular (one for provisioning EKS, another for scheduling
# Kubernetes resources) as per best practices.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the AWS CLI to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the AWS CLI to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}
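
The exec blocks above make both providers request a short-lived authentication token from AWS on every Terraform run instead of persisting credentials. You can run the same command by hand to verify your access to the cluster (substitute your values):

aws eks get-token --cluster-name <cluster_name> --region <your-region>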

main.tf : You’ll find the configuration that creates the EKS cluster with the cluster name, Kubernetes version, VPC and subnet settings, node groups, and add-ons. It also uses the eks_blueprints_addons module (more information here) to enable the creation of load balancers and allow browser access to the services we’ll deploy. Finally, it updates the local kubeconfig file so kubectl can be used, and creates a Kubernetes namespace for deploying resources.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.15.1"

  cluster_name                   = var.name
  cluster_version                = var.cluster_version
  cluster_endpoint_public_access = true # allows public access to the EKS cluster's API server endpoint

  vpc_id                   = var.vpc_id
  subnet_ids               = var.private_subnets
  control_plane_subnet_ids = var.private_subnets

  # EKS Addons
  cluster_addons = {
    aws-ebs-csi-driver = {
      most_recent = true
    }
    coredns    = {}
    kube-proxy = {}
    vpc-cni    = {}
  }

  eks_managed_node_group_defaults = {
    # Needed by the aws-ebs-csi-driver
    iam_role_additional_policies = {
      AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
    }
  }

  eks_managed_node_groups = {
    one = {
      node_group_name = "node-group-1"
      instance_types  = var.instance_types
      min_size        = 1
      max_size        = 3
      desired_size    = 2
      subnet_ids      = var.private_subnets
    }
    two = {
      node_group_name = "node-group-2"
      instance_types  = var.instance_types
      min_size        = 1
      max_size        = 2
      desired_size    = 1
      subnet_ids      = var.private_subnets
    }
  }

  tags = var.tags
}

# Installs and configures add-ons for the EKS cluster, such as the
# AWS Load Balancer Controller, Metrics Server, and AWS CloudWatch metrics
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "1.16.3"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # K8S Add-ons
  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller = {
    set = [
      {
        name  = "vpcId"
        value = var.vpc_id
      },
      {
        name  = "podDisruptionBudget.maxUnavailable"
        value = 1
      },
      {
        name  = "enableServiceMutatorWebhook"
        value = "false"
      }
    ]
  }
  enable_metrics_server         = true
  enable_aws_cloudwatch_metrics = false

  tags = var.tags
}

# Updates the local kubeconfig with the new cluster details
resource "null_resource" "kubeconfig" {
  depends_on = [module.eks_blueprints_addons]
  provisioner "local-exec" {
    command = "aws eks --region ${var.region} update-kubeconfig --name ${var.name}"
    environment = {
      AWS_CLUSTER_NAME = var.name
    }
  }
}

resource "null_resource" "create_namespace" {
  depends_on = [null_resource.kubeconfig]
  provisioner "local-exec" {
    command = "kubectl create namespace ${var.namespace}"
  }
}
  • Run Terraform: now that everything is set, you can create the resources using Terraform.

First, run this command to initialize Terraform:

terraform init -upgrade

Then run these commands to plan and apply the resources we need:

terraform plan -out terraform.plan
terraform apply terraform.plan

After that, it will take about 10-15 minutes to complete. Terraform will create the resources in AWS and you will get your cluster information:

[Image: cluster created]

Then, you can validate the status in the AWS Console:

Go to the search bar and type 'Elastic Kubernetes'.

[Image: Elastic Kubernetes Service in the AWS Console]

You will then see the created cluster and its status, which should be 'Active'.

[Image: cluster in Active status]

  • Connect with kubectl: run the following command to let kubectl manage your EKS cluster so you can deploy containers as needed. You will have to provide your EKS information; don’t forget to substitute your region and cluster name.

aws eks --region <your-region> update-kubeconfig --name <cluster_name>
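
After updating your kubeconfig, you can sanity-check that kubectl is pointed at the new cluster with a few standard commands:

kubectl config current-context   # should show the new EKS cluster's context
kubectl get nodes                # the managed node group instances should be Ready
kubectl get namespace ecommerce  # the namespace created by the null_resource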

  • Deploy containers with manifest files: now you can go to the raw-manifests folder and apply all the manifest files to deploy your application’s containers. Don’t forget to review every .yaml file and adjust it to your app: the namespace, load balancer name, service names, and paths (a sketch of an example ingress follows the commands below).

cd raw-manifests
kubectl apply -f ingress.yaml,products-service.yaml,products-deployment.yaml,orders-service.yaml,orders-deployment.yaml
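
For reference, here is a minimal sketch of what an ALB ingress for this demo could look like. The alb-ingress and service names match the kubectl output shown later, but the ports, paths, and annotations are assumptions; check raw-manifests/ingress.yaml in the repository for the real definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: ecommerce
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing # provision a public ALB
    alb.ingress.kubernetes.io/target-type: ip         # route traffic straight to pod IPs
spec:
  ingressClassName: alb # handled by the AWS Load Balancer Controller installed above
  rules:
    - http:
        paths:
          - path: /v1/orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80 # assumed service port
          - path: /v1/products
            pathType: Prefix
            backend:
              service:
                name: products-service
                port:
                  number: 80 # assumed service port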

Once you run the apply command, you will see output like this confirming the containers are deployed:

ingress.networking.k8s.io/alb-ingress created
service/products-service created
deployment.apps/products created
service/orders-service created
deployment.apps/orders created
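
You can then watch the workloads come up with standard kubectl commands:

kubectl get pods -n ecommerce          # all pods should eventually be Running
kubectl get deployments -n ecommerce   # shows desired vs. ready replicas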
  • Get the endpoint to make HTTP requests: with the following command you’ll obtain the address of your application deployment:

kubectl get ingress -n ecommerce

NAME          CLASS   HOSTS   ADDRESS                                                   PORTS   AGE
alb-ingress   alb     *       new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com   80      8m22s
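
With the ADDRESS value from the ingress, you can query the endpoints directly, for example with curl (the hostname below is from our demo run; yours will differ):

curl http://new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com/v1/orders
curl http://new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com/v1/products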

Now, you just have to append the path for what you want to see from your application. In our case, we have three endpoints with the paths 'v1/orders', 'v1/products', and 'v1/orders/products', and we’ll see the information served by each one:

http://new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com/v1/orders

[
  {
    "id": "1",
    "productId": "1a",
    "orderFor": "Herbert Kelvin Jr.",
    "deliveryAddress": "Asphalt Street",
    "deliveryDate": "02/11/2023",
    "deliveryType": "STANDARD"
  },
  {
    "id": "2",
    "productId": "1b",
    "orderFor": "John Zulu Nunez",
    "deliveryAddress": "Beta Road",
    "deliveryDate": "10/10/2023",
    "deliveryType": "FAST DELIVERY"
  },
  {
    "id": "3",
    "productId": "1c",
    "orderFor": "Lael Fanklin",
    "deliveryAddress": "Charlie Avenue",
    "deliveryDate": "02/10/2023",
    "deliveryType": "STANDARD"
  },
  {
    "id": "4",
    "productId": "1d",
    "orderFor": "Candice Chipilli",
    "deliveryAddress": "Delta Downing View",
    "deliveryDate": "22/09/2023",
    "deliveryType": "FAST DELIVERY"
  },
  {
    "id": "5",
    "productId": "1d",
    "orderFor": "Tedashii Tembo",
    "deliveryAddress": "Echo Complex",
    "deliveryDate": "12/12/2023",
    "deliveryType": "FAST DELIVERY"
  }
]

http://new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com/v1/products

[
  {
    "id": "1a",
    "name": "Hoodie"
  },
  {
    "id": "1b",
    "name": "Sticker"
  },
  {
    "id": "1c",
    "name": "Socks"
  },
  {
    "id": "1d",
    "name": "T-Shirt"
  },
  {
    "id": "1e",
    "name": "Beanie"
  }
]

http://new-ecommerce-alb-406866228.us-west-2.elb.amazonaws.com/v1/orders/products

[
  {
    "id": "1",
    "productId": "1a",
    "orderFor": "Herbert Kelvin Jr.",
    "deliveryAddress": "Asphalt Street",
    "deliveryDate": "02/11/2023",
    "deliveryType": "STANDARD",
    "product": {
      "id": "1a",
      "name": "Hoodie"
    }
  },
  {
    "id": "2",
    "productId": "1b",
    "orderFor": "John Zulu Nunez",
    "deliveryAddress": "Beta Road",
    "deliveryDate": "10/10/2023",
    "deliveryType": "FAST DELIVERY",
    "product": {
      "id": "1b",
      "name": "Sticker"
    }
  },
  {
    "id": "3",
    "productId": "1c",
    "orderFor": "Lael Fanklin",
    "deliveryAddress": "Charlie Avenue",
    "deliveryDate": "02/10/2023",
    "deliveryType": "STANDARD",
    "product": {
      "id": "1c",
      "name": "Socks"
    }
  },
  {
    "id": "4",
    "productId": "1d",
    "orderFor": "Candice Chipilli",
    "deliveryAddress": "Delta Downing View",
    "deliveryDate": "22/09/2023",
    "deliveryType": "FAST DELIVERY",
    "product": {
      "id": "1d",
      "name": "T-Shirt"
    }
  },
  {
    "id": "5",
    "productId": "1d",
    "orderFor": "Tedashii Tembo",
    "deliveryAddress": "Echo Complex",
    "deliveryDate": "12/12/2023",
    "deliveryType": "FAST DELIVERY",
    "product": {
      "id": "1d",
      "name": "T-Shirt"
    }
  }
]

Notice how the last endpoint includes product information from the products endpoint. This means the pods can communicate with each other over the cluster network inside the private subnets.

Conclusion

AWS Elastic Kubernetes Service is a great option for deploying Kubernetes clusters in the cloud, lifting the burden of managing the control plane while you keep control of the nodes. Nonetheless, the amount of Kubernetes knowledge required is still considerable, which makes EKS a better fit for migrating workloads already running on Kubernetes or for building large applications that truly need this degree of infrastructure control. In our case study, the application just retrieves information, a need commonly met with plain API calls; for cases like that, Lambda functions behind an API Gateway for orders and products would allow quicker development, demand less infrastructure knowledge, and reduce costs.

References

Overview

Cluster Architecture

Kubernetes Components

Kubernetes on AWS | Amazon Web Services

Provisioning AWS EKS Cluster with Terraform - Tutorial
