First, you're going to use the following CLIs as local tooling:
- eksctl: to create EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as to modify and delete those clusters
- aws: to interact with AWS services
- kubectl: to manage resources within your Kubernetes cluster
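Before going further, a quick sanity check that all three CLIs are installed and on your PATH (the exact version output will vary with your installation):

```shell
# Print the version of each tool; a "command not found" means it still needs installing
eksctl version
aws --version
kubectl version --client
```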
⚠️ Before creating the cluster, be sure you have the following IAM identities set up:
- An IAM user dedicated to running eksctl commands, e.g. eksctl-manager, with the needed policies attached: AmazonEKSClusterPolicy (needed for EKS cluster management), AmazonEKSWorkerNodePolicy (needed for node group operations), AmazonEC2FullAccess (required for VPC, subnets, security groups, and EC2 instances), AWSCloudFormationFullAccess (required by eksctl), etc.
⚠️ This user is overly broad for production, since it grants wide control over your account. For least privilege, you'd scope it down to only the actions eksctl needs.
- An IAM role (with AmazonEKSClusterPolicy) that AWS's EKS control plane assumes to manage AWS resources inside your account on your behalf.
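As a sketch, the eksctl-manager user and the policies listed above could also be set up from the AWS CLI instead of the console (run with an admin profile; the user name is the example from the text):

```shell
# Create the dedicated IAM user for running eksctl
aws iam create-user --user-name eksctl-manager

# Attach the AWS-managed policies mentioned above
for policy in AmazonEKSClusterPolicy AmazonEKSWorkerNodePolicy \
              AmazonEC2FullAccess AWSCloudFormationFullAccess; do
  aws iam attach-user-policy \
    --user-name eksctl-manager \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Generate the access key pair used in the next step
aws iam create-access-key --user-name eksctl-manager
```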
For the IAM user, create an access key (select Command Line Interface (CLI) as the use case) and copy the Access key and Secret access key.
On the local machine, run aws configure to store those credentials under a new named profile, e.g. eks-manager (this writes the keys to ~/.aws/credentials and the region to ~/.aws/config):
aws configure --profile eks-manager
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ...
Default region name [None]: us-east-1
(Equivalently, the region can be set non-interactively: aws configure set region us-east-1 --profile eks-manager)
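You can verify that the profile is wired up correctly before touching any clusters; the ARN in the output should point at the eksctl user you created:

```shell
# Ask STS who these credentials belong to
aws sts get-caller-identity --profile eks-manager
# The "Arn" field should end in :user/eksctl-manager
```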
You need a VPC before creating the EKS cluster, but one of the advantages of using eksctl is that it automatically creates a VPC (if you do not provide one).
Create the demo-eks cluster in us-east-1 running Kubernetes 1.33, with:
- a single managed node group,
- autoscaling enabled,
- OIDC configured: this enables IAM Roles for Service Accounts (IRSA), allowing individual pods to have their own IAM permissions rather than using the node's broad permissions,
- ALB ingress support: this attaches the necessary IAM policies to your worker nodes to enable them to interact with an Application Load Balancer (this flag alone does not install the controller),
- no SSH access to worker nodes.
eksctl create cluster \
--profile eks-manager \
--name demo-eks \
--region us-east-1 \
--version 1.33 \
--managed \
--nodegroup-name ng-general \
--node-type t3.medium \
--nodes 2 \
--nodes-min 2 \
--nodes-max 4 \
--with-oidc \
--alb-ingress-access \
--ssh-access=false
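Cluster creation takes a while (eksctl drives it through CloudFormation). Once it finishes, you can confirm the cluster and its managed node group exist:

```shell
# List the cluster and its managed node group
eksctl get cluster --profile eks-manager --region us-east-1
eksctl get nodegroup --cluster demo-eks --profile eks-manager --region us-east-1
```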
From the networking perspective, the following resources have been created:
- A VPC in the desired region, us-east-1
- Two subnets per availability zone: one public (for the NAT gateway, internet gateway, and load balancers) and one private (for the worker nodes)
- An internet gateway, allowing inbound and outbound traffic for resources in public subnets (the VPC edge, which provides the route for external traffic to enter the VPC)
- A NAT gateway, enabling egress traffic from private subnets (it translates the private IP of your worker node into its own public IP to reach the internet)
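These resources can be inspected with the AWS CLI. The filter below assumes the alpha.eksctl.io/cluster-name tag that eksctl applies to the resources it creates; adjust if your eksctl version tags differently:

```shell
# Show the subnets created for the cluster: one public and one private per AZ
aws ec2 describe-subnets --profile eks-manager --region us-east-1 \
  --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=demo-eks" \
  --query "Subnets[].{AZ:AvailabilityZone,CIDR:CidrBlock,PublicIPsOnLaunch:MapPublicIpOnLaunch}" \
  --output table
```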
Finally, add the cluster to your kubeconfig:
aws eks update-kubeconfig --region us-east-1 --name demo-eks --profile eks-manager
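With the kubeconfig updated, kubectl should now point at the new cluster:

```shell
# The current context should reference demo-eks
kubectl config current-context

# The two t3.medium worker nodes should report Ready
kubectl get nodes -o wide
```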



