Steps to provision an EKS cluster in AWS
- Create the five VPC endpoints listed below (replace region with your AWS Region). Note that for a Fargate-only cluster the fifth endpoint, "sts", is also required:
com.amazonaws.region.ecr.api
com.amazonaws.region.ecr.dkr
com.amazonaws.region.ec2
com.amazonaws.region.s3
com.amazonaws.region.sts
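The endpoints above can be created from the CLI as sketched below. All IDs here (VPC, subnets, security group, route table) are hypothetical placeholders; substitute your own. Note that ECR, EC2, and STS are Interface endpoints, while S3 is a Gateway endpoint attached to the private route tables:

```shell
# Placeholder values -- replace with the IDs from your own VPC
REGION=us-east-1
VPC_ID=vpc-0123456789abcdef0
SUBNETS="subnet-aaa subnet-bbb subnet-ccc"
SG_ID=sg-0123456789abcdef0
RTB_IDS="rtb-0123456789abcdef0"

# Interface endpoints: ECR API, ECR DKR, EC2, STS
for SVC in ecr.api ecr.dkr ec2 sts; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.${REGION}.${SVC}" \
    --subnet-ids $SUBNETS \
    --security-group-ids "$SG_ID" \
    --private-dns-enabled
done

# S3 is a Gateway endpoint, associated with the private route tables
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Gateway \
  --service-name "com.amazonaws.${REGION}.s3" \
  --route-table-ids $RTB_IDS
```

The security group attached to the Interface endpoints must allow inbound HTTPS (443) from the cluster's subnets, or image pulls and API calls will time out.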
- Create the Fargate-only EKS cluster in the three private-only subnets with the command below:
$ eksctl create cluster \
--name fargateprod \
--region us-east-1 \
--vpc-private-subnets=subnet-089c8482f000f3qwe,subnet-001022620f283121cb,subnet-0387722f71210b1f4a \
--fargate
Make sure to replace the three subnet IDs in the command above with the private subnets of your VPC; these can be found in the Outputs section of the CloudFormation stack created in step 1.
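If you created the VPC via CloudFormation, the subnet IDs can be read from the stack outputs as sketched below; the stack name here is a hypothetical placeholder:

```shell
# Replace the stack name with the one from step 1
aws cloudformation describe-stacks \
  --stack-name eks-vpc-private-only \
  --query "Stacks[0].Outputs" \
  --output table
```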
Once the cluster is created, enable private access to the API server endpoint for this cluster and make sure it has a Fargate profile covering the "default" and "kube-system" namespaces.
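Both checks can be done with eksctl, assuming the cluster name and region from the command above:

```shell
# Enable private access to the API server endpoint
eksctl utils update-cluster-endpoints --cluster fargateprod \
  --region us-east-1 --private-access=true --approve

# Confirm a Fargate profile covers the default and kube-system namespaces
eksctl get fargateprofile --cluster fargateprod --region us-east-1
```

The `--fargate` flag used at creation time sets up a default profile for these two namespaces automatically, so this is mainly a verification step.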
After following the steps above, the CoreDNS pods should reach the "Ready" state and the Fargate nodes will become visible.
CLI
# Create cluster
export CLUSTER=my-eks-ak
eksctl create cluster --name=$CLUSTER \
--vpc-private-subnets=subnet-b14e21f7,subnet-76f21611,subnet-a8f814e1 \
--region ap-southeast-1 --fargate
# check coredns
kubectl get pods -n kube-system
# enable private endpoint access
eksctl utils update-cluster-endpoints --cluster $CLUSTER --private-access=true --approve
# delete the pending coredns pods so they restart and get scheduled onto Fargate
kubectl delete pod <pending coredns pod> -n kube-system
# check coredns
kubectl get pods -n kube-system
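Instead of deleting the pending pods one by one, the whole CoreDNS deployment can be restarted, then checked along with the nodes:

```shell
# Restart all CoreDNS pods in one step
kubectl rollout restart deployment coredns -n kube-system

# Wait for CoreDNS to become Ready, then list the Fargate nodes
kubectl rollout status deployment coredns -n kube-system
kubectl get nodes
```

On a Fargate-only cluster, each running pod appears as its own `fargate-ip-*` node in the `kubectl get nodes` output.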
References
- https://github.com/tohwsw/aws-eks-workshop-fargate
- https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
- https://eksctl.io/usage/vpc-networking/