AWS is excited to announce that you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.26.
You can create new 1.26 clusters or upgrade your existing clusters to 1.26 using the Amazon EKS console, the eksctl
command line interface, or an infrastructure-as-code tool.
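If you manage clusters with eksctl, upgrading the control plane is a single command. This is a minimal sketch: the cluster name is a placeholder, and managed node groups and add-ons are upgraded in separate steps.

```bash
# Upgrade the control plane of an existing cluster to Kubernetes 1.26.
# Replace "my-cluster" with your cluster name; --approve applies the change
# instead of performing a dry run.
eksctl upgrade cluster --name my-cluster --version 1.26 --approve
```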
Release of Kubernetes v1.26!
This release includes a total of 37 enhancements: eleven graduating to Stable, ten graduating to Beta, and sixteen entering Alpha. Twelve features are also being deprecated or removed, three of which are detailed in this announcement.
Prerequisites to upgrade
Before upgrading to Kubernetes v1.26 in Amazon EKS, there are some important tasks you need to complete. The following section outlines two key changes that you must address before upgrading.
Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin. You must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will crash because they rely on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
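One way to check the VPC CNI version you are running, and to update the managed add-on, is sketched below. The cluster name and the target add-on version are placeholders; check the add-on documentation for the exact version string to use in your Region.

```bash
# Check the VPC CNI version currently running in the cluster.
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3

# If you manage the VPC CNI as an EKS add-on, inspect and update it with the
# AWS CLI. Replace the cluster name and add-on version as needed.
aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni \
  --query addon.addonVersion --output text
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
  --addon-version v1.12.6-eksbuild.2
```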
Containerd runtime. You must upgrade to containerd version 1.6.0 or higher. Kubernetes v1.26 has dropped support for CRI v1alpha2, which prevents the kubelet from registering nodes when the container runtime does not support CRI v1. This affects containerd minor versions 1.5 and below, which are not supported in Kubernetes 1.26; other container runtimes that only support v1alpha2 must also be updated. If you're using an Amazon EKS optimized AMI, check which version of containerd it uses. The version of containerd included in custom AMIs may vary, and older versions may not be compatible with Kubernetes v1.26. To avoid compatibility issues, upgrade to the latest Amazon EKS optimized AMI, which includes containerd version 1.6.19.
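A quick way to verify the runtime version on each node before upgrading the control plane is to read it from the node status; this only assumes kubectl access to the cluster.

```bash
# List each node together with its container runtime and version.
# Any node reporting containerd 1.5.x or older needs a newer AMI before
# the cluster moves to Kubernetes 1.26.
kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```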
CPU Manager
The CPU Manager gives workloads more control over their CPU placement and performance requirements. This is important on multi-socket servers, where workloads can be scheduled, or moved, between CPUs based on availability. Some workloads require tight control over their CPU scheduling to avoid cache misses, or to benefit from higher memory bandwidth by running on the same NUMA node as their memory.
High-performance computing (HPC) workloads such as image rendering can run faster if they are not rescheduled to different CPUs during execution.
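As an illustration, a Pod only receives exclusive CPUs when the node's kubelet runs the static CPU Manager policy and the Pod is in the Guaranteed QoS class with whole-CPU requests. The sketch below assumes a node group whose kubelet is configured with cpuManagerPolicy: static; the image and resource sizes are placeholders.

```bash
# Create a Guaranteed QoS Pod with whole-CPU requests so the static
# CPU Manager policy can pin it to dedicated cores.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned-renderer
spec:
  containers:
  - name: renderer
    image: public.ecr.aws/docker/library/busybox:latest   # placeholder image
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "2"          # integer CPU count is required for exclusive cores
        memory: "1Gi"
      limits:
        cpu: "2"          # limits must equal requests for Guaranteed QoS
        memory: "1Gi"
EOF
```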
Change in container image registry
In the previous release, Kubernetes introduced a new container image registry that spreads the load across multiple cloud providers and Regions, reducing reliance on a single entity and providing a faster download experience for a large number of users.
This release of Kubernetes is the first to be published exclusively in the new registry.k8s.io container image registry. No container image tags for v1.26 will be published in the (now legacy) k8s.gcr.io image registry; only tags from releases before v1.26 will continue to be updated there. Refer to registry.k8s.io: faster, cheaper and Generally Available for more information on the motivation, advantages, and implications of this significant change.
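Before upgrading, it can be worth confirming that nothing in the cluster still pulls from the legacy registry. The one-liner below lists every image referenced by running Pods and filters for k8s.gcr.io; it only assumes kubectl access.

```bash
# List all container images used by Pods in every namespace and keep
# only the ones still pointing at the legacy k8s.gcr.io registry.
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort -u | grep k8s.gcr.io
```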