Phu Hoang


Amazon EKS From The Ground Up - Part 1: Control Plane & Infrastructure

Introduction

For a DevOps engineer, the ability to provision and manage Kubernetes clusters efficiently is essential. Amazon EKS (Elastic Kubernetes Service) makes this significantly easier: you can create an EKS cluster with a single command or a few clicks in the AWS Console.

While you’re sipping your coffee, however, a lot is happening behind the scenes.

This article is the first part of a deep-dive series that breaks down those behind-the-scenes actions. We will examine the components that make up an Amazon EKS cluster, focusing specifically on the Infrastructure layer and IAM & Security, which form the foundation of everything that comes later.

A Brief Concept of EKS

Amazon EKS is a managed Kubernetes service that allows you to run Kubernetes without having to operate the Control Plane yourself.

In simple terms, EKS handles the two most difficult aspects of running Kubernetes:

  1. Setting up and managing the Control Plane

    The Control Plane is the “brain” of Kubernetes. It consists of the API server, the etcd database (which stores cluster state), the scheduler, and the controller manager. AWS ensures that this Control Plane is highly available, secure, and continuously patched.

  2. Deep integration with AWS services

    EKS tightly integrates Kubernetes with core AWS services such as VPC networking, IAM-based security, and Elastic Load Balancing.

Building the Foundation – Infrastructure First

A very common misconception among newcomers is the following:

EKS = Kubernetes, and AWS handles everything.

In reality, EKS is split into two distinct responsibility domains:

| Responsibility Domain | Components | Management Responsibility |
| --- | --- | --- |
| AWS-managed (Control Plane) | API Server, etcd, Scheduler, Controller Manager | AWS (not configurable) |
| Your AWS account (Data Plane & Networking) | VPC, Subnets, Security Groups, IAM Roles, EC2 Worker Nodes or Fargate Pods | You (must be designed and maintained) |

For this reason, the remainder of this article focuses on the infrastructure that you are responsible for, before the EKS Control Plane is even created.

Figure: overall architecture covered in Part 1.

Step 0 – Preparing the Tools

Before creating the EKS cluster, ensure you have the following ready:

  • An AWS account with access to the AWS Console
  • AWS CLI installed and configured
  • kubectl installed

Although this article is console-first, we will use the CLI to verify and explain what happens under the hood.
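
Before moving on, a quick sanity check never hurts. The following standard commands confirm each tool is installed and show which IAM identity the CLI will act as:

# Confirm the AWS CLI is installed and see which IAM identity it uses
aws --version
aws sts get-caller-identity

# Confirm kubectl is installed
kubectl version --client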

Step 1 – Preparing the VPC

The EKS Control Plane itself does not run inside your VPC. However, you still need a VPC for:

  • EC2 worker nodes
  • Pod IP address allocation
  • Security groups to control traffic
  • Internet or NAT access for outbound communication

In short: without a properly designed VPC, EKS cannot function.

1.1. Actions in the AWS Console

→ Open the Amazon VPC service

→ Click Create VPC

→ Choose VPC and more

Recommended Configuration (Practice / Production-Ready Baseline)

| Component | Value | Notes |
| --- | --- | --- |
| Name | eks-demo-vpc | |
| IPv4 CIDR | 10.0.0.0/16 | |
| Availability Zones | 2 | High availability |
| Public subnets | 2 | Used for NAT Gateways and public load balancers |
| Private subnets | 2 | Where worker nodes will run |
| NAT Gateways | 1 per AZ | Required for outbound access from private subnets |
| DNS hostnames | Enabled | Required for nodes to resolve the API server |
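
If you want to verify the result from the CLI as well, here is a quick sketch. The Name tag eks-demo-vpc comes from the table above; replace the <vpc-id> placeholder with the ID returned by the first command:

# Look up the new VPC by its Name tag
aws ec2 describe-vpcs \
  --filters Name=tag:Name,Values=eks-demo-vpc \
  --query "Vpcs[].VpcId" --output text

# List its subnets with their CIDRs and Availability Zones
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=<vpc-id> \
  --query "Subnets[].{Id:SubnetId,Cidr:CidrBlock,Az:AvailabilityZone}" \
  --output table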


Step 2 – Adding Tags to Subnets

Amazon EKS does not automatically infer which subnets it should use. Many EKS features—especially the AWS Load Balancer Controller—depend entirely on subnet tags.

2.1. Required Tags

  • Cluster discovery tag (required on all subnets):

    kubernetes.io/cluster/demo-eks-cluster = shared
    
  • Public load balancer subnets (public subnets only):

    kubernetes.io/role/elb = 1
    
  • Internal load balancer subnets (private subnets only):

    kubernetes.io/role/internal-elb = 1
    

Incorrect or missing tags are one of the most common causes of load balancer failures in EKS.

2.2. Actions in the AWS Console

→ Open VPC → Subnets

→ Locate subnets named eks-demo-subnet-*

→ Select eks-demo-subnet-public1

→ Open the Tags tab → Manage tags


→ Add tags:

  • kubernetes.io/cluster/demo-eks-cluster = shared
  • kubernetes.io/role/elb = 1

→ Save


Repeat for eks-demo-subnet-public2.


For each private subnet, add tags:

  • kubernetes.io/cluster/demo-eks-cluster = shared
  • kubernetes.io/role/internal-elb = 1


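If you prefer to script this instead of clicking through the console, the same tags can be applied with the AWS CLI. A minimal sketch for one private subnet (replace the <subnet-id> placeholder with the real ID):

# Apply the discovery and internal-ELB tags to a private subnet
aws ec2 create-tags \
  --resources <subnet-id> \
  --tags Key=kubernetes.io/cluster/demo-eks-cluster,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1

# Verify which subnets now carry the cluster discovery tag
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/cluster/demo-eks-cluster,Values=shared" \
  --query "Subnets[].SubnetId" --output text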

Step 3 – Creating the IAM Role for the EKS Control Plane

Although AWS manages the Control Plane, it still needs permissions to interact with resources in your AWS account, such as:

  • Creating and managing ENIs
  • Attaching security groups
  • Communicating with VPC resources

This is achieved through an IAM Role that the EKS service assumes.

3.1. Actions in the AWS Console

→ Open IAM → Roles

→ Click Create role


→ Trusted entity:

  • Type: AWS service
  • Service: EKS
  • Use case: EKS – Cluster


→ Continue to permissions

  • AmazonEKSClusterPolicy is attached automatically


→ Name the role: EKSClusterRole

→ Create the role


3.2. Trust Policy Explanation

The role’s trust policy contains the following statement:

{
    "Effect": "Allow",
    "Principal": {
        "Service": "eks.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
}

This means:

“Allow the EKS service to assume this role and call AWS APIs using the permissions defined in AmazonEKSClusterPolicy.”

Granting only this policy follows the Principle of Least Privilege and is sufficient for the Control Plane to operate correctly.
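
For reference, the console flow above is roughly equivalent to the following CLI calls, assuming the trust policy shown earlier is saved locally as eks-trust-policy.json (wrapped in the standard Version/Statement envelope):

# Create the role with the EKS trust policy
aws iam create-role \
  --role-name EKSClusterRole \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the one managed policy the Control Plane needs
aws iam attach-role-policy \
  --role-name EKSClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy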

Step 4 – Creating the EKS Cluster

With the infrastructure and IAM role in place, we can now create the Control Plane.

4.1. Actions in the AWS Console

→ Open Amazon EKS

→ Click Create cluster

→ Choose Custom configuration

→ Disable EKS Auto Mode

In Cluster Configuration step:

→ Name: demo-eks-cluster

→ IAM role: EKSClusterRole

→ Kubernetes version: 1.34 (latest stable at the time of writing)

→ Leave other settings as default


In Networking Configuration step:

→ VPC: eks-demo-vpc

→ Subnets: select all 4 subnets

→ Security group: default

→ Endpoint access: Public and Private

Public + Private access allows management from your local machine while ensuring worker nodes communicate internally within the VPC.
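
Endpoint access is not set in stone; it can be changed later on a running cluster. A sketch of the equivalent CLI update, using the same values chosen here:

# Adjust endpoint access on an existing cluster
aws eks update-cluster-config \
  --name demo-eks-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true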


In Logging Configuration step:

→ Enable the following logs (recommended):

  • API server
  • Audit
  • Authenticator

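These three log types map onto the cluster's logging configuration, which can also be toggled from the CLI on an existing cluster. A rough equivalent:

# Enable API server, audit, and authenticator logs
aws eks update-cluster-config \
  --name demo-eks-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'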

In Add-ons Configuration step:

→ Keep the AWS-recommended defaults:

  • kube-proxy
  • Amazon VPC CNI
  • Node monitoring agent
  • CoreDNS
  • Amazon EKS Pod Identity Agent
  • External DNS
  • Metrics Server

→ Click Create and wait for the cluster status to become Active.


Behind the scenes, this is equivalent to:

aws eks create-cluster \
  --name demo-eks-cluster \
  --role-arn arn:aws:iam::<account-id>:role/EKSClusterRole \
  --resources-vpc-config subnetIds=<subnet-public1>,<subnet-public2>,<subnet-private1>,<subnet-private2>
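
Cluster creation takes a while. Either way you start it, you can poll the status from the CLI until it reports ACTIVE:

# Check the provisioning status of the Control Plane
aws eks describe-cluster \
  --name demo-eks-cluster \
  --query "cluster.status" --output text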

Which do you prefer: the AWS Console or the AWS CLI? 😉

Step 5 – Connecting and Verifying the Cluster

Once the cluster is ACTIVE, connect to it from your local machine.

5.1 Fetching the kubeconfig

aws eks update-kubeconfig --name demo-eks-cluster --region us-east-1

This command:

  • Retrieves the API endpoint
  • Updates ~/.kube/config
  • Configures authentication using your current IAM identity
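
Notably, the kubeconfig entry does not store a long-lived credential. On every kubectl call it shells out to the AWS CLI for a short-lived token; you can run that exchange yourself to see it:

# Mint the short-lived token kubectl uses to authenticate
aws eks get-token --cluster-name demo-eks-cluster --region us-east-1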

5.2 Verifying the Cluster

kubectl cluster-info
kubectl get namespaces
kubectl get nodes

Expected Results

  • cluster-info: API server is reachable
  • get namespaces: default namespaces are listed
  • get nodes: no nodes returned

This is expected.

The Control Plane is ready, but no worker nodes exist yet, so Kubernetes has no compute capacity to schedule Pods. Core system Pods such as CoreDNS remain in the Pending state.
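
You can see those Pending Pods for yourself:

# CoreDNS stays Pending until worker nodes join (coming in Part 2)
kubectl get pods -n kube-system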


Conclusion

In this first part of the series, we built the foundation of an Amazon EKS cluster from the ground up:

  • Designed a production-ready VPC with properly tagged subnets
  • Created an IAM role for the EKS Control Plane
  • Provisioned the EKS Control Plane with secure endpoint access
  • Verified connectivity while intentionally stopping before worker nodes

At this point, we have built the brain of Kubernetes.

It is powered on, networked, and authorized—but it is still waiting for its “hands”.
