Creating a Secured Kubernetes Cluster on Amazon EKS

Kelechi Edeh

Amazon Elastic Kubernetes Service (EKS) makes it easier to deploy, manage, and scale containerized applications using Kubernetes on AWS. But for production environments, ensuring the security of your cluster is critical, especially when deploying it inside a private network.

In this article, I document how I deployed a fully secured Amazon EKS cluster inside private subnets using AWS best practices. My approach ensured that the Kubernetes control plane, worker nodes, and sensitive services were all protected from public exposure while still being fully manageable and scalable.

Security Considerations Before You Begin

Before diving into the technical steps, I aligned my infrastructure design with core security principles in AWS and Kubernetes. Here are the key things I considered while designing the cluster:

  1. VPC and Subnet Design: I ensured the EKS nodes were launched in private subnets, and only necessary resources like the bastion host resided in public subnets.
  2. IAM and Identity Management: I created distinct IAM roles for the EKS control plane, node group, and EC2 bastion host using the principle of least privilege.
  3. Node Security: Nodes were isolated, updated, and assigned minimal permissions.
  4. Bastion Host for Access: I deployed a bastion host as a secure jump box, with SSH restricted to my IP address.
  5. Kubernetes Network Policies: I applied policies to control pod-to-pod communication within the cluster.
  6. Secrets and Configuration Management: Kubernetes secrets were encrypted using AWS KMS.
  7. Logging and Monitoring: I enabled logging to Amazon CloudWatch and audit trails via CloudTrail and GuardDuty.
  8. Control Plane Access: I limited EKS API server access to specific IPs and used RBAC for access control.
  9. Use of IRSA (IAM Roles for Service Accounts): This ensured fine-grained pod-level permissions without over-privileging node IAM roles.
  10. Cluster Updates and Maintenance: I deployed the latest Kubernetes version and hardened AMIs, with plans to automate updates.
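To make item 5 concrete, here is a minimal NetworkPolicy sketch. The namespace and app labels (demo, frontend, backend) are illustrative assumptions, not names from my cluster:

```yaml
# Hypothetical example: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that policies like this only take effect if the cluster runs a CNI that enforces NetworkPolicy (for example, the Amazon VPC CNI with network policy support enabled).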

Steps I Took to Deploy the Cluster

Step 1: Created a Custom VPC with Public and Private Subnets

I began by creating a custom VPC with:

  • 2 public subnets: to host internet-facing resources like a bastion host.
  • 2 private subnets: dedicated to hosting the EKS worker nodes and the control plane.

Each pair of public/private subnets was spread across two Availability Zones to ensure high availability. I also ensured the private subnets had no direct route to the internet, and only communicated outward through NAT Gateways in the public subnets.
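The same layout can be sketched with the AWS CLI. The CIDR blocks, Availability Zone, and names below are illustrative assumptions, not the exact values I used:

```shell
# Create the custom VPC (CIDR is an assumed example)
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=secure-eks-vpc}]'

# One public and one private subnet per AZ (repeat for the second AZ)
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a   # public subnet
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.101.0/24 \
  --availability-zone us-east-1a   # private subnet

# A NAT gateway in a public subnet gives the private subnets
# outbound-only internet access (no inbound route)
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> \
  --allocation-id <eip-allocation-id>
```

You would then add route tables so the public subnets route 0.0.0.0/0 to an internet gateway and the private subnets route it to the NAT gateway.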


Step 2: Create IAM Role for the EKS Cluster Control Plane

You need an IAM role that allows EKS to create and manage resources on your behalf. To allow EKS to interact with other AWS services, I created an IAM role for the EKS control plane with the AmazonEKSClusterPolicy and AmazonEKSServicePolicy attached.

I specified this IAM role during cluster creation so the control plane would have the necessary permissions to manage resources securely.
To create this IAM role, follow these steps:

  • Navigate to the IAM Roles section in the AWS Console.
  • Click on Create role.
  • Under Trusted entity type, choose AWS service.
  • For the use case, select EKS - Cluster.
  • Attach the AmazonEKSClusterPolicy and AmazonEKSServicePolicy policies to the role.
  • Review and click Create role.
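Behind the scenes, choosing the EKS - Cluster use case gives the role a trust policy like the following, which lets the EKS service assume it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```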


Step 3: Create the EKS Cluster

I created the EKS cluster using the AWS Management Console:

  • Navigate to EKS > Clusters
  • Click Create Cluster
  • Under Name, enter secure-eks-cluster
  • Select the Kubernetes version
  • Under Cluster Service Role, select the IAM role you created earlier
  • Choose VPC and subnets created in Step 1
  • Under cluster endpoint access, enable private access to the Kubernetes API server and disable public access
  • Enable control plane logging (audit, API, authenticator, etc.)
  • For Amazon EKS add-ons, I selected the CoreDNS, Amazon VPC CNI, and kube-proxy add-ons
  • Click Create

Creating the EKS cluster may take several minutes. Wait for the cluster status to change to "Active."
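For repeatability, the same console steps can be sketched with the AWS CLI. The account ID, role name, and subnet IDs are placeholders you would substitute:

```shell
# Private-only endpoint access and control plane logging,
# matching the console settings above
aws eks create-cluster \
  --name secure-eks-cluster \
  --kubernetes-version 1.33 \
  --role-arn arn:aws:iam::<account-id>:role/<eks-cluster-role> \
  --resources-vpc-config subnetIds=<private-subnet-1>,<private-subnet-2>,endpointPublicAccess=false,endpointPrivateAccess=true \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'
```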


Step 4: Create IAM Role for Node Group

Creating a node group requires an IAM role that the nodes will use when calling AWS APIs.

  • Go to IAM > Roles > Create Role
  • Choose EC2
  • Attach:
    • AmazonEKSWorkerNodePolicy
    • AmazonEKS_CNI_Policy
    • AmazonEC2ContainerRegistryReadOnly
  • Name the role
  • Click create
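As with the cluster role, selecting the EC2 use case gives the node role a trust policy that lets EC2 instances assume it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```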

Step 5: Create Security Group for Node Group

Before creating your node group, it's important to set up a security group that defines how the nodes can communicate.

How to Create the Security Group

  • Navigate to EC2 > Security Groups
  • Click Create Security Group
  • Name it
  • Select your VPC
  • Under Inbound rules, add: Type: SSH, Source: My IP
  • This restricts SSH access to the nodes to your own IP address. You can further tighten the inbound rules based on your workloads.
  • Under Outbound rules, keep the default: All traffic allowed.
  • Click Create security group

You will associate this security group with the node group in the next step.
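For reference, the same security group can be sketched with the AWS CLI. The group name and IP placeholder are assumptions:

```shell
# Create the security group in the custom VPC
aws ec2 create-security-group \
  --group-name eks-node-sg \
  --description "Security group for EKS worker nodes" \
  --vpc-id <vpc-id>

# Allow SSH only from your own IP address
aws ec2 authorize-security-group-ingress \
  --group-id <sg-id> \
  --protocol tcp --port 22 \
  --cidr <your-ip>/32
```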

Step 6: Create Node Group in EKS Cluster

A node group is a group of EC2 instances that supply compute capacity to your Amazon EKS cluster. Multiple node groups can be added to a cluster.

  • In the EKS Console, go to your cluster > Compute tab
  • Click Add Node Group
  • Enter name of node group
  • Select the IAM role created in step 4
  • Choose instance type (e.g., t3.medium), desired size, min and max nodes
  • Select the private subnets
  • Configure remote access to node and select your valid key pair
  • Select the security group created in step 5
  • Review and Create
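The equivalent AWS CLI call looks roughly like this; the name, sizes, and placeholder IDs are assumptions to adapt:

```shell
# Nodes land in the private subnets only; remote access is limited
# to the security group created in Step 5
aws eks create-nodegroup \
  --cluster-name secure-eks-cluster \
  --nodegroup-name private-nodes \
  --node-role arn:aws:iam::<account-id>:role/<node-role> \
  --subnets <private-subnet-1> <private-subnet-2> \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=3,desiredSize=2 \
  --remote-access ec2SshKey=<key-pair-name>,sourceSecurityGroups=<sg-id>
```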


Step 7: Launch a Bastion Host

A bastion host (also called a jump box) is a special-purpose EC2 instance used to securely access resources in a private network such as your EKS worker nodes or Kubernetes control plane without exposing those resources to the public internet.

The EKS cluster above was created with private endpoint access only. This means it cannot be reached from your local machine or the public internet. A bastion host solves this by acting as a secure intermediary.

To access your private EKS cluster, create a bastion host in a public subnet.

  • Go to EC2 > Launch Instance
  • Select an AMI (I selected the Amazon Linux 2023 AMI)
  • Select an instance type (t2.micro - Free Tier)
  • Select key pair for SSH access
  • Choose your custom VPC and public subnet created in step 1
  • Create a security group allowing only your IP on port 22
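The launch can also be sketched with the AWS CLI; the AMI ID and other placeholders are assumptions:

```shell
# Bastion in a PUBLIC subnet, locked down to SSH from your IP
# via the security group referenced below
aws ec2 run-instances \
  --image-id <al2023-ami-id> \
  --instance-type t2.micro \
  --key-name <key-pair-name> \
  --subnet-id <public-subnet-id> \
  --security-group-ids <bastion-sg-id> \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=eks-bastion}]'
```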


Step 8: Configure AWS CLI in Bastion

Once the Bastion host instance is running, you can securely connect to your private EKS resources using the bastion host. By establishing an SSH session to the bastion, you'll gain command-line access within the VPC, allowing you to run kubectl and other AWS CLI tools without exposing your Kubernetes API or nodes to the internet.

SSH into the instance:

```shell
ssh -i my-key.pem ec2-user@<bastion-public-ip>
```

Next, we will configure the AWS CLI with the credentials and preferences it needs to interact with your AWS account. Running aws configure prompts you to enter four key pieces of information:

  • AWS Access Key ID: This is part of your credentials used to authenticate your identity with AWS services.
  • AWS Secret Access Key: A secret counterpart to your access key. It should be kept safe and never exposed in public code or documentation.
  • Default Region Name: Specifies the AWS region you want the CLI to interact with by default (e.g., us-east-1, us-west-2). This should match the region where your EKS cluster is deployed.
  • Default Output Format: Controls how the CLI formats the output. Common options are:
    • json (default)
    • table (human-readable)
    • text (compact, script-friendly)
```shell
aws configure
AWS Access Key ID [None]: ****************
AWS Secret Access Key [None]: ***********************
Default region name [None]: us-east-1
Default output format [None]: json
```

After this, the CLI stores the credentials in:

```
~/.aws/credentials
~/.aws/config
```

These files allow AWS CLI and tools like kubectl (via aws eks update-kubeconfig) to authenticate and communicate securely with your AWS resources.

Step 9: Configure kubectl in Bastion Host

Once the AWS CLI was configured, I proceeded to install kubectl. The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster.

To install kubectl on Linux for Kubernetes 1.33, I followed these steps.

```shell
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/amd64/kubectl
```

Download the SHA-256 checksum file published alongside the binary, then verify your download:

```shell
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/amd64/kubectl.sha256
sha256sum -c kubectl.sha256
```

Apply execute permissions to the binary.

```shell
chmod +x ./kubectl
```

Copy the binary to a folder in your PATH:

```shell
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
```

You can explore the full step-by-step process here

Step 10: Create a Kubeconfig File

A kubeconfig file is a configuration file used by the kubectl command-line tool to determine how to access and authenticate with a Kubernetes cluster.

I created a kubeconfig file automatically using the command below

```shell
aws eks update-kubeconfig --region region-code --name my-cluster
```

Replace region-code with the AWS Region that your cluster is in and replace my-cluster with the name of your cluster.

Test your configuration.

```shell
kubectl get pods -A
```

Output:

```
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-9dhsx             2/2     Running   0          90m
kube-system   aws-node-r2sxb             2/2     Running   0          90m
```

You can read more on creating a kubeconfig file here

Important: Allow HTTPS Traffic from Bastion to EKS
Action: Update the security group associated with your EKS cluster to allow inbound HTTPS traffic (TCP port 443) from the bastion host’s security group.

Why?
The EKS API server listens on port 443 for secure communication. Since my cluster is configured with private access only, my kubectl commands (executed from the bastion host) need permission to reach the API server. By allowing traffic on port 443 from the bastion host's security group, I enabled secure cluster management without exposing the API publicly.
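The rule can be added with a single CLI call; the group IDs are placeholders for the cluster and bastion security groups:

```shell
# Allow the bastion's security group to reach the EKS API server on 443
aws ec2 authorize-security-group-ingress \
  --group-id <eks-cluster-sg-id> \
  --protocol tcp --port 443 \
  --source-group <bastion-sg-id>
```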

Step 11: Configure IAM Roles for EC2 instances

Initially, I used long-term IAM user credentials inside Kubernetes pods to access AWS services like S3. However, this is not considered a best practice, especially in production environments, because it introduces risks such as credential leakage and over-privileged access.

To follow security best practices, I migrated to using IAM Roles. This allows specific pods to securely assume fine-grained IAM roles without storing access keys inside the container.
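For pods, this is done with IRSA: after associating an IAM OIDC provider with the cluster, you annotate a service account with the role to assume. A minimal sketch, with a hypothetical role and service account name:

```yaml
# Pods using this service account receive temporary credentials
# for the annotated role via IRSA -- no access keys in the container.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader            # hypothetical name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<pod-s3-role>
```

For the bastion host itself, an EC2 instance profile serves the same purpose: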

  • Create IAM role for bastion host instance
  • Go to EC2 > Instances > Bastion Host > Actions > Security
  • Click Modify IAM role
  • Select the role created for the bastion host instance
  • Click Update IAM role

Even after assigning an IAM role to your bastion host EC2 instance via the AWS Console, the AWS CLI may still use old IAM user credentials stored in ~/.aws/credentials and ~/.aws/config. This happens because the AWS CLI defaults to credentials in those files before checking for instance metadata.

To fix this issue, I renamed the existing AWS CLI credential files:

```shell
mv ~/.aws/credentials ~/.aws/credentials.bak
mv ~/.aws/config ~/.aws/config.bak
```

Next, I ran an AWS CLI command without the stored credentials to confirm the instance role was picked up:

```shell
aws sts get-caller-identity
```

The output now shows the ARN of the bastion host's IAM role (assumed via the instance profile) instead of the IAM user, confirming the role is in use.

Conclusion

Deploying a fully private and secure Amazon EKS cluster required careful planning, precision, and a solid grasp of AWS infrastructure and Kubernetes operations. From designing the VPC and locking down IAM roles, to configuring private API access and implementing IRSA, every decision was guided by security best practices.

Managing access through a bastion host, fine-tuning security groups, and resolving credential conflicts reinforced the importance of automation, least privilege, and clean access boundaries.

If you're interested in automating this process with Terraform, stay tuned for my next article where I’ll share how I built a fully automated version of this architecture!
