Christopher Vensand

Deploying Your First Kubernetes Cluster on AWS Using EKS

Welcome to the second post in my series, Building Internet Scale Services with Kubernetes and AWS. If you're new to Kubernetes or AWS, I recommend going back and reading the first post for foundational knowledge about these technologies.

In this post, I'll walk you through creating your first Kubernetes cluster in AWS using EKS (Elastic Kubernetes Service). We’ll use Terraform to provision the infrastructure, ensuring we can easily modify or recreate our setup whenever needed.


Why Use Terraform?

Terraform allows you to define your infrastructure in code and then provision it across different platforms. For example, you might want a Kubernetes cluster in AWS for your core applications, a database in GCP for specialized features, and an object store in Azure. Using Terraform, you can define all of these resources in a unified set of files, making multi-cloud architectures simpler to manage.

If you relied entirely on the AWS Console (the web interface) to create your infrastructure, you would have to click the same sequence of buttons every time you wanted to recreate your setup. With Terraform, you can simply run your code again. This “infrastructure as code” approach is a powerful way to maintain and scale your infrastructure.
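To make "infrastructure as code" concrete, here is a minimal, self-contained Terraform file (the bucket name is a hypothetical example; S3 bucket names must be globally unique). Running `terraform apply` against it creates the bucket; running it again is a no-op because the real infrastructure already matches the code:

```hcl
# main.tf — a tiny illustration of infrastructure as code.
# The bucket name below is made up for this example.
provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-1234"

  tags = {
    Environment = "dev"
  }
}
```

Deleting the `resource` block and applying again would remove the bucket, which is the same reproducibility we'll rely on for the EKS cluster below.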


Setting Up an AWS Account

Before provisioning any infrastructure, you’ll need an AWS account. If you don’t have one yet, head over to the AWS sign-up page and create your account.


Creating an IAM User

Once your AWS account is created, you'll need to create an IAM (Identity and Access Management) user whose credentials Terraform will use to provision your infrastructure.

  1. In the AWS Console, type IAM in the search bar at the top and select the IAM service.
  2. Click on Users under Access management on the left panel.
  3. Since you likely won't have any users yet, click Create user.
  4. Choose any username you like.
  5. Under Set permissions, click Attach policies directly and select AdministratorAccess.
    • Note: In a production environment, you should grant only the minimum permissions needed rather than full administrative privileges. However, for simplicity in this tutorial, we’ll use AdministratorAccess.
  6. Review your settings and click Create user.

Generating AWS Access Keys

Next, you’ll need to create an Access Key for this IAM user. Terraform will use these credentials to communicate with your AWS account.

  1. Click on your newly created user.
  2. Go to the Security credentials tab.
  3. Under Access keys, click Create access key.
  4. For Use case, choose Command Line Interface (CLI).
  5. Follow the prompts, then click Create access key.
  6. Important: Copy or download the Access Key and Secret Access Key. You will not be able to view the Secret Access Key again once you close this window. If lost, you’ll have to create a new key.

That’s all you need from the AWS Console for now!
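If you prefer the terminal, the console steps above (creating the user, attaching the policy, and generating a key) can also be scripted with the AWS CLI, assuming you already have some admin credentials configured. The username `terraform-admin` is just an example:

```shell
# Create the IAM user (example name)
aws iam create-user --user-name terraform-admin

# Attach the AdministratorAccess managed policy
aws iam attach-user-policy \
  --user-name terraform-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate an access key; save the AccessKeyId and SecretAccessKey
# from the JSON output, as the secret is only shown once
aws iam create-access-key --user-name terraform-admin
```

These commands require an AWS account with IAM permissions, so treat them as a reference rather than something to run blindly.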


Setting Up Your Terminal

Now that you’ve created your IAM user and obtained your AWS access keys, it’s time to install the command-line tools needed to provision your Kubernetes cluster. You’ll need to install and configure two essential CLIs: the AWS CLI and Terraform.

macOS

Install Homebrew if you don’t already have it, then run the following commands in your terminal:

brew install awscli terraform
aws --version
terraform --version

Other Platforms

If you’re using Windows or Linux, refer to the official AWS CLI and Terraform installation guides for details.

Configuring the AWS CLI

Once both CLIs are installed, configure the AWS CLI to use the credentials you obtained earlier by running the following command:

aws configure

When prompted, enter:

  • AWS Access Key ID: Your IAM user’s access key.
  • AWS Secret Access Key: Your IAM user’s secret key.
  • Default region name (e.g., us-east-1, us-west-2): Your preferred AWS region.
  • Default output format (e.g., json): The output format for CLI commands.

Your system is now ready to interface directly with AWS through the command line!
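Behind the scenes, `aws configure` writes these values to two plain-text INI files in your home directory, which both the AWS CLI and Terraform’s AWS provider read via the `default` profile (placeholder values shown):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = us-west-2
output = json
```

If something goes wrong later with authentication, these two files are the first place to look.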


Understanding the Terraform Code

With your environment set up, let’s clone the repository and review how the Terraform files work together to provision your Kubernetes cluster:

git clone https://github.com/chrisvensand/terraform-aws-eks
cd terraform-aws-eks

Inside this repository, you’ll find three main Terraform files—main.tf, provider.tf, and versions.tf—that define your AWS and EKS resources. Below is a breakdown of what each file does:

main.tf

This file contains the core configurations for your VPC (networking) and EKS cluster. It references publicly available Terraform modules to simplify the setup:

  1. VPC Module
   module "vpc" {
     source  = "terraform-aws-modules/vpc/aws"
     name = "my-eks-cluster-vpc"
     cidr = "10.0.0.0/16"

     azs             = slice(data.aws_availability_zones.available.names, 0, 2)
     public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
     private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]

     enable_nat_gateway    = true
     single_nat_gateway    = true
     enable_dns_hostnames  = true
     enable_dns_support    = true
   }
  • source: Uses the terraform-aws-modules/vpc/aws module, which wraps all the AWS VPC resources (VPC, subnets, NAT gateways, etc.) in an easy-to-use package.
  • cidr, public_subnets, private_subnets: Define the IP address ranges for your VPC. The public subnets are accessible from the internet, while the private subnets are reserved for internal traffic (where your EKS worker nodes will typically reside).
  • enable_nat_gateway and single_nat_gateway: Provision a NAT gateway for secure outbound internet traffic from private subnets.
  • enable_dns_hostnames and enable_dns_support: Enable DNS for resources in your VPC, allowing you to map IP addresses to DNS names.
  2. EKS Module
   module "eks" {
     source  = "terraform-aws-modules/eks/aws"
     version = "~> 20.31"

     cluster_name    = "my-eks-cluster"
     cluster_version = "1.31"

     enable_cluster_creator_admin_permissions = true

     cluster_compute_config = {
       enabled    = true
       node_pools = ["general-purpose"]
     }

     vpc_id     = module.vpc.vpc_id
     subnet_ids = module.vpc.private_subnets

     tags = {
       Environment = "dev"
       Terraform   = "true"
     }
   }
  • source: Leverages the terraform-aws-modules/eks/aws module. This module takes care of creating the EKS control plane, node groups, and associated IAM roles.
  • cluster_name and cluster_version: Specify the name and Kubernetes version for your cluster.
  • enable_cluster_creator_admin_permissions: Grants the user deploying the cluster (i.e., your AWS account credentials) full administrative access to the cluster.
  • cluster_compute_config: Enables EKS Auto Mode with the built-in general-purpose node pool, so AWS provisions and manages worker nodes on demand instead of you defining node groups and instance types yourself.
  • vpc_id and subnet_ids: Tie the EKS cluster into the VPC created in the vpc module, ensuring the cluster resides in private subnets.
  • tags: Attach key-value tags to your EKS resources for easy tracking and organization.
  3. Data Source: aws_availability_zones
   data "aws_availability_zones" "available" {}
  • Fetches the list of available AWS Availability Zones in your chosen region. The VPC module references this data to build out the subnets in two of those zones (ensuring high availability).

provider.tf

provider "aws" {
  region  = "us-west-2"
  profile = "default"
}
  • region: Specifies the default AWS region (in this case, us-west-2) where Terraform will provision your resources.
  • profile: Defines the AWS CLI profile to use. If you’ve already configured your ~/.aws/credentials, specifying default means Terraform will use that profile’s credentials.
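As an alternative to the `profile` argument, the AWS provider also picks up credentials and region from standard environment variables (`AWS_PROFILE`, `AWS_REGION`, and friends), which is handy in CI pipelines where there is no interactive `aws configure`. Example values shown:

```shell
# The AWS provider and CLI read these if set
export AWS_PROFILE=default
export AWS_REGION=us-west-2
echo "Using profile $AWS_PROFILE in $AWS_REGION"
# prints: Using profile default in us-west-2
```

Environment variables take precedence in most tooling, so if Terraform ever seems to ignore your `provider` block, check whether one of these is set in your shell.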

versions.tf

terraform {
  required_version = ">= 1.3.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
    }
  }
}
  • required_version: Ensures you’re running Terraform v1.3.0 or later.
  • required_providers: Declares which providers are needed (in this case, aws) and where Terraform should download them from (the HashiCorp registry).
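One optional hardening step, not present in the repo as shown: pinning the provider to a version range so a future major release of the AWS provider can’t break your configuration unexpectedly. A hypothetical constraint might look like:

```hcl
terraform {
  required_version = ">= 1.3.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Example constraint: allow any 5.x release, but not a future 6.0
      version = "~> 5.0"
    }
  }
}
```

Terraform records the exact version it resolved in `.terraform.lock.hcl`, so committing that file gives you reproducible provider versions across machines.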

Finally, Deploying Your EKS Cluster

All set! You’ve done a lot of preparation, but for a fully managed, highly scalable Kubernetes cluster, the actual deployment is pretty streamlined. Here’s how:

  1. Initialize Terraform
   terraform init

This downloads any required plugins or modules.

  2. Apply your infrastructure
   terraform apply

Terraform will display a plan, detailing every resource it intends to create. Review and confirm to kick off the deployment.

Note: This can take around 10 minutes, so be patient. Terraform will keep you updated on progress.

Once the deployment finishes, head to the EKS page in the AWS Console. You should see your brand-new cluster ready to roll! Give yourself a well-deserved pat on the back.
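To actually talk to the cluster with `kubectl` (assuming you have it installed), have the AWS CLI write a kubeconfig entry for it. The cluster name and region below match the Terraform code above:

```shell
# Add the new cluster to your local kubeconfig
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster

# Verify connectivity to the control plane
kubectl get nodes
```

Note that with EKS Auto Mode, `kubectl get nodes` may show no nodes until you schedule workloads, since nodes are provisioned on demand.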

Scalability Note: An EKS cluster can scale to 1,000 nodes (and even beyond, if you request a limit increase from AWS). This should be more than enough capacity for most use cases. If you truly need global presence, you can reuse these same Terraform configurations in additional AWS regions worldwide—just change the region in your provider settings.
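One cost note: the EKS control plane bills by the hour even when idle, so if this cluster is just for learning, tear everything down when you’re done. Because the whole setup is code, that’s a single command:

```shell
# Destroys every resource Terraform created (asks for confirmation first)
terraform destroy
```

You can always recreate the cluster later with another `terraform apply`.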


What’s Next?

In the next blog post, we’ll build a simplified version of “Netflix” on our new Kubernetes cluster, exploring how to run and manage a more complex, microservices-style application. With your EKS cluster in place, the sky’s the limit!

Feel free to drop any questions or comments below. I hope this tutorial empowers you to spin up scalable, resilient Kubernetes clusters on AWS with Terraform—happy building!


About Me

I previously worked at Riot Games as a software engineer on the infrastructure team. While there, I helped the company transition from on-premises infrastructure to AWS and optimized backend services for games like Valorant and League of Legends, running on Kubernetes clusters worldwide.
