Deep dive into AWS EKS cluster networking and its best practices

Running Kubernetes on AWS requires an understanding of both AWS networking configuration and Kubernetes networking requirements.

An EKS cluster consists of two VPCs: one VPC managed by AWS that hosts the Kubernetes control plane and a second VPC managed by customers that hosts the Kubernetes worker nodes (EC2 instances) where containers run, as well as other AWS infrastructure (like load balancers) used by the cluster. All worker nodes need the ability to connect to the managed API server endpoint. This connection allows the worker node to register itself with the Kubernetes control plane and to receive requests to run application pods.

A quick overview of EKS Cluster Components.

Cluster Components
Control Plane: The control plane serves as an endpoint for the managed Kubernetes API server, which is used to communicate with the cluster. It operates on a specific group of EC2 instances in an Amazon-managed AWS account.
Data Plane: The EC2 instances in your own AWS account that host the Kubernetes worker nodes. They connect to the control plane through the cluster's API endpoint.
Note: With API endpoint access control, we can make the endpoint reachable from the public internet, limit which networks can reach it, or keep it fully private. You can use any of the following networking modes to control API endpoint access in EKS.
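
To check which mode a cluster currently uses, you can read its VPC configuration through the EKS API. Here is a minimal boto3 sketch; the cluster name my-cluster and the region are placeholders.

```python
# Inspect a cluster's API endpoint access configuration with boto3.
# Assumes AWS credentials are configured; "my-cluster" is a placeholder.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

resp = eks.describe_cluster(name="my-cluster")
vpc_config = resp["cluster"]["resourcesVpcConfig"]

print("API server URL:", resp["cluster"]["endpoint"])
print("Public access: ", vpc_config["endpointPublicAccess"])
print("Private access:", vpc_config["endpointPrivateAccess"])
print("Allowed CIDRs: ", vpc_config.get("publicAccessCidrs", []))
```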

Networking
The data plane (worker) nodes connect either to the public endpoint or through the EKS-managed elastic network interfaces (ENIs) placed in the subnets you provide when you create the cluster.
The route that worker nodes take to connect depends on whether you have enabled or disabled the private endpoint for your cluster. Even when the private endpoint is disabled, EKS still provisions these ENIs so that actions originating from the Kubernetes API server, such as kubectl exec and kubectl logs, can reach the nodes.

Only Public Endpoint
This is the default behavior for new Amazon EKS clusters. In this mode, Kubernetes API requests that originate from your cluster's VPC (such as worker node to control plane communication) leave the VPC, but not Amazon's network. In this case, the worker nodes must be deployed either in a public subnet or in a private subnet with a route to a NAT gateway. The cluster's API server is accessible from the internet, and by default the public endpoint is reachable from anywhere (0.0.0.0/0).

We can optionally limit the CIDR blocks that can access the public endpoint. If you do restrict access to specific CIDR blocks, it is recommended that you also enable the private endpoint; otherwise, the addresses your worker nodes egress from (for example, NAT gateway IPs) must be included in the allowed CIDR list, or the nodes cannot reach the control plane.
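
A sketch of that recommendation with boto3: restrict the public endpoint to a known CIDR and enable the private endpoint in the same update, so nodes keep reaching the control plane privately. The cluster name and CIDR below are placeholders.

```python
# Restrict the public endpoint to specific CIDRs and enable the private
# endpoint so worker nodes reach the control plane from within the VPC.
# "my-cluster" and 203.0.113.0/24 are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

update = eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPublicAccess": True,
        "endpointPrivateAccess": True,
        "publicAccessCidrs": ["203.0.113.0/24"],  # e.g. your office/VPN range
    },
)

# The update is asynchronous; poll it with describe_update if needed.
print("Update ID:", update["update"]["id"], "status:", update["update"]["status"])
```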

Only Private Endpoint
When only the private endpoint is enabled, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC.
This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. All traffic to your cluster's API server must come from within the VPC or a connected network (for example, over VPN or AWS Direct Connect). There is no public access to the API server from the internet, so any kubectl commands must be run from within the VPC or a connected network.
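
Switching an existing cluster to this mode is a single configuration update. A hedged boto3 sketch, with a placeholder cluster name:

```python
# Switch a cluster to private-only endpoint access. After this update,
# the API server is reachable only from the VPC or connected networks.
# "my-cluster" is a placeholder name.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

update = eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
print("Update ID:", update["update"]["id"])
```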

Public and Private Endpoints
When both the public and private endpoints are enabled, Kubernetes API requests from within the VPC reach the control plane through the EKS-managed cross-account ENIs (X-ENIs) in your VPC, while the cluster's API server also remains accessible from the internet.
Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC.
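
You can observe the effect of that private hosted zone by resolving the cluster's endpoint hostname: from inside the VPC it resolves to the X-ENIs' private IPs, while from anywhere else it resolves to public IPs. A small sketch, again with a placeholder cluster name:

```python
# Resolve the cluster endpoint hostname wherever this runs. Private
# (RFC 1918) addresses in the output mean the Route 53 private hosted
# zone answered the query. "my-cluster" is a placeholder name.
import socket
from urllib.parse import urlparse

import boto3

eks = boto3.client("eks", region_name="us-east-1")
endpoint = eks.describe_cluster(name="my-cluster")["cluster"]["endpoint"]

host = urlparse(endpoint).hostname
addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
print(host, "->", addresses)
```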
