EKS looks scary?
No worries — we’ve got the map! Let’s navigate nodes, pods, and clusters the easy way.
Every EKS cluster has a single API endpoint URL that tools such as kubectl, the main Kubernetes command-line client, use to talk to the cluster.
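If you want to see that endpoint for yourself, here's a quick sketch using the AWS CLI and kubectl. The cluster name `demo-cluster` and the region are placeholders, and it assumes your AWS credentials are already configured:

```bash
# Print the cluster's API endpoint URL (replace name/region with your own).
aws eks describe-cluster --name demo-cluster --region us-east-1 \
  --query "cluster.endpoint" --output text

# Point kubectl at the cluster (writes/updates ~/.kube/config),
# then confirm it can reach the API endpoint.
aws eks update-kubeconfig --name demo-cluster --region us-east-1
kubectl cluster-info
```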
1. Understanding the EKS Control Plane
When a new cluster is created, a new control plane is provisioned in an AWS-owned VPC in a separate account. Each control plane runs a minimum of two API servers, spread across two Availability Zones for resilience, which are exposed through a public Network Load Balancer (NLB). The etcd servers are spread across three Availability Zones and configured in an Auto Scaling group, again for resilience.
The cluster's administrators and users have no direct access to the cluster's servers; they can only reach the Kubernetes API through the load balancer. The API servers connect to the worker nodes, which run in a separate account/VPC owned by the customer, through Elastic Network Interfaces (ENIs) created in two Availability Zones of that VPC. The kubelet agent running on the worker nodes uses a Route 53 private hosted zone, attached to the worker-node VPC, to resolve the IP addresses associated with those ENIs. The following diagram illustrates this architecture:
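You can get a feel for that name resolution with a small sketch. It assumes you run it from a host inside the worker-node VPC and that private endpoint access is enabled; the cluster name is a placeholder:

```bash
# Grab the cluster's endpoint hostname (replace the name with your own).
ENDPOINT=$(aws eks describe-cluster --name demo-cluster \
  --query "cluster.endpoint" --output text)

# From inside the worker-node VPC this typically resolves to the private IPs
# of the ENIs that EKS placed in your subnets (via the Route 53 private
# hosted zone); from outside the VPC it resolves to public IPs instead.
nslookup "${ENDPOINT#https://}"
```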
2. The API Server Component
The API server acts as the communication gateway for both internal cluster components and external clients or services that need access to the cluster. It is the sole interface through which agents, users, and services interact with the cluster; the cluster components themselves can only communicate with each other through the API server. It also has exclusive access to the cluster database (etcd), which stores critical cluster metadata, secrets, and other information required for cluster management.
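Every kubectl command is just an HTTP request to this API server. A quick way to see that for yourself (a sketch, nothing cluster-specific assumed beyond a working kubeconfig):

```bash
# -v=6 raises kubectl's log verbosity so it prints the HTTP calls it makes;
# every request below goes to the cluster's API server endpoint.
kubectl get pods -n kube-system -v=6

# You can also hit the API server's health endpoint directly through kubectl.
kubectl get --raw /healthz
```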
3. The etcd Component
An EKS cluster is a distributed system in which several nodes or instances work together to run a workload. Distributed systems are susceptible to failures because of their complex interconnections. To recover from such failures, EKS relies on the etcd key-value database, which holds the critical metadata and other information required to bring the cluster back to its desired state.
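On EKS you never talk to etcd directly (AWS manages it for you), but every object you create is persisted there by the API server. A tiny sketch with a hypothetical ConfigMap name:

```bash
# The API server stores this ConfigMap in etcd, which is why it can be
# read back later and survives API server restarts.
kubectl create configmap demo-config --from-literal=greeting=hello
kubectl get configmap demo-config -o yaml

# Clean up.
kubectl delete configmap demo-config
```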
4. The Controller Manager Component
The controller manager component is the mother of all cluster controllers: it manages the individual object (or cluster) controllers, each of which maintains the desired state of its respective cluster objects. They include the node controller, which monitors the health of cluster nodes and manages their lifecycle; the replication controller, which maintains the desired number of pod replicas in the cluster; the endpoint controller, which manages the mapping of services to pods; and the service account and token controllers, which manage service accounts and their associated tokens.
Typically, controllers monitor events such as create, update, and delete on resources and trigger appropriate responses to restore objects to their desired state.
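You can watch this reconciliation happen with a throwaway deployment. A minimal sketch; the name `web` and the nginx image are placeholders:

```bash
# Create a deployment with a desired state of 2 replicas.
kubectl create deployment web --image=nginx --replicas=2

# Delete one pod; the ReplicaSet controller (run by the controller manager)
# notices the actual state drifted from the desired state and starts a
# replacement pod.
kubectl delete pod -l app=web
kubectl get pods -l app=web -w   # watch the replacements come up (Ctrl+C to stop)

# Clean up.
kubectl delete deployment web
```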
5. The Scheduler Component
The scheduler is responsible for assigning the most appropriate node to an incoming workload or pod. It relies on decision-making algorithms and a deep understanding of the cluster environment to perform this task. The scheduler takes into account various factors, including resource requirements, hardware and software constraints, policy rules, and affinity or anti-affinity specifications, to select a suitable node. Additional inputs in the decision-making process include data locality, potential inter-workload interference, and deadlines.
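Here's a small sketch of those inputs in practice: a pod that declares resource requests and a node selector (using the built-in `kubernetes.io/os` node label), then a check of which node the scheduler picked. The pod name and image are placeholders:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  nodeSelector:
    kubernetes.io/os: linux   # well-known label present on every node
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
EOF

# The NODE column shows which worker node the scheduler chose.
kubectl get pod sched-demo -o wide

# Clean up.
kubectl delete pod sched-demo
```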
Let's take a look at this picture once:
Do you know how end-user communication works?
End users can interact with pods or applications running on the data plane through a load balancer that is deployed not in the control plane but in the data plane, as shown in the image below. This interaction does not require a NAT or internet gateway, regardless of the cluster configuration option.
In this setup, the load balancer acts as the frontend, handling traffic from end users, while the worker nodes serve as the backend. The load balancer routes traffic from the end users to the appropriate application pods running on the worker nodes, ensuring efficient communication without exposing the internal network.
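A minimal sketch of that pattern with kubectl (the deployment name and image are placeholders; on EKS a Service of type LoadBalancer provisions an AWS load balancer, with the exact type depending on your cluster's load balancer controller setup):

```bash
# Run an app on the worker nodes, then expose it through an AWS load balancer.
kubectl create deployment web --image=nginx --replicas=2
kubectl expose deployment web --type=LoadBalancer --port=80

# Once AWS finishes provisioning, EXTERNAL-IP shows the load balancer's DNS
# name; end users hit that name and traffic is routed to the pods behind it.
kubectl get service web
```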
Heyyyyy....!
Congrats! You just turned EKS from “What the Kube?” to “I got this.”
Keep building, keep exploring, and remember: every pod counts!
Until next time — happy kubectl-ing!
Let's connect on LinkedIn: https://www.linkedin.com/in/munisaiteja-narravula/