Kubernetes v1.36, just released on April 22, 2026, brings a wide set of improvements that feel less like experimentation and more like completion. With around 70 enhancements across stable, beta, and alpha stages, this release is not about flashy new ideas. It is about finishing what has been in progress for years and making core features reliable enough for everyday production use.
The release is named Haru, a Japanese word associated with spring and clear skies. It is a fitting name. Many long-running efforts have finally reached maturity, especially in areas like security, resource management, and cluster operations.
The security story is the headline
Three separate security features reached GA (General Availability) in this release, and taken together they represent a meaningful shift in how Kubernetes handles isolation and privilege. None of them are new ideas, but all three are now stable and production-ready.
User Namespaces is the biggest one. The concept has been in progress since KEP #127 and it has been a long road. What it does is remap a container's internal root user to an unprivileged UID on the actual host. A process inside the container thinks it is root. The host disagrees. If that process breaks out of the container, it lands on the node with no privileges at all. This does not eliminate container escapes, but it takes the worst-case outcome from "attacker has root on your node" to "attacker has nothing." For multi-tenant clusters, that gap is everything.
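Opting a pod in is a single field in its spec. A minimal sketch (the image name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  # Run this pod in a user namespace: UID 0 inside the container
  # maps to an unprivileged UID range on the host.
  hostUsers: false
  containers:
  - name: app
    image: registry.example.com/app:1.0  # illustrative image
```

Inside the container, id still reports root; on the node, the process runs as a high, unprivileged UID.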
Fine-grained kubelet API authorization is the other one that should have existed sooner. Previously, granting monitoring tools access to the kubelet's API meant handing over the nodes/proxy permission, which is much broader than any metrics scraper actually needs. This feature enables precise, least-privilege access control over individual kubelet API endpoints. Monitoring and observability tools can now get exactly the access they need and nothing more.
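In RBAC terms, this means granting specific kubelet subresources instead of the blanket nodes/proxy. A hedged sketch for a metrics scraper; check the release notes for the exact set of subresources your endpoints map to:

```yaml
# Grant read access to the kubelet's stats and metrics endpoints
# only, rather than the much broader nodes/proxy permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
- apiGroups: [""]
  resources: ["nodes/metrics", "nodes/stats"]
  verbs: ["get"]
```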
Constrained impersonation is in beta, not GA, but it deserves mention here because it closes a real security gap. Kubernetes impersonation has historically been all-or-nothing: if you have permission to impersonate an identity, you can do anything that identity can do. The constrained model splits this into two separate checks. You need permission to impersonate, and you separately need permission for the specific action you want to perform as that identity. Misconfigured RBAC no longer translates into accidental privilege escalation.
Admission webhooks just got a serious competitor
If you have run admission webhooks in production you know the operational tax. There is the webhook service to maintain, the TLS certificates to rotate, the latency added to every single API request, and the debugging experience when something goes wrong mid-deploy. It is not unmanageable but it is never free.
Mutating Admission Policies reached stable in v1.36. You write mutation logic in CEL directly in the API server. No external service, no network hop, no certificates to rotate. For the majority of real-world use cases, like enforcing labels, defaulting fields, and injecting sidecars, this covers the ground. The feature is stable, versioned, and backed by the full API machinery guarantees.
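A sketch of a simple label-defaulting policy, assuming the shape the API had in beta carries into GA (the label value and match rules are illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1  # assuming v1 at GA
kind: MutatingAdmissionPolicy
metadata:
  name: default-team-label
spec:
  failurePolicy: Fail
  reinvocationPolicy: Never
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["deployments"]
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      # CEL expression evaluated in the API server; merges the
      # label into the incoming object via server-side apply.
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"team": "unassigned"}
          }
        }
```

As with validating policies, the policy only takes effect once it is bound to a scope via a MutatingAdmissionPolicyBinding.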
This does not mean webhooks are going away. Complex mutations that require external state or multiple API calls still need a webhook. But if your webhooks are doing simple declarative work, the migration path is now clear and the destination is considerably less painful to operate.
PSI metrics give you a better view of what your nodes are actually doing
Utilization metrics lie to you, slightly. A node at 70% CPU utilization can be perfectly healthy or completely saturated depending on whether processes are actually getting scheduled. Pressure Stall Information tells you what utilization cannot: how much time workloads are blocked waiting for CPU, memory, or I/O.
The difference in practice is significant. With PSI you can distinguish between a node that is busy and a node that is stalling. Vertical autoscalers become more accurate. Resource requests become easier to tune. Noisy neighbor problems become visible before they cause actual incidents instead of after. This feature is now stable, and it requires cgroupv2 which should already be your baseline at this point.
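The underlying kernel interface lives under /proc/pressure, which is what the kubelet reads on cgroupv2 nodes. Illustrative output for memory pressure (values made up):

```
$ cat /proc/pressure/memory
some avg10=0.12 avg60=0.08 avg300=0.02 total=4157183
full avg10=0.00 avg60=0.01 avg300=0.00 total=1098422
```

The some line is the share of time at least one task was stalled waiting on memory; full is the share of time all non-idle tasks were stalled at once, which is the number that signals real saturation.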
OCI artifacts as volumes is quietly interesting for ML teams
This one did not get top billing in the release notes but it has real implications for how ML inference workloads get built.
You can now mount an OCI artifact directly into a pod as a volume. The kubelet pulls from any OCI-compliant registry and mounts the content. No PVC, no storage backend, no init container pulling files down at startup. If you already package your model weights or static assets as OCI images for versioning purposes, you can now deliver them to pods using the exact same registry infrastructure you use for container images.
Content-addressable storage, immutable versioning, and familiar tooling, all for free, because you were already using a registry. Teams that are not yet doing this should at least be aware it is now an option.
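As a sketch, an inference pod pulling model weights straight from a registry might look like this (the image and artifact references are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: server
    image: registry.example.com/inference-server:2.1  # illustrative
    volumeMounts:
    - name: model-weights
      mountPath: /models
      readOnly: true
  volumes:
  - name: model-weights
    # OCI artifact mounted directly as a read-only volume;
    # the kubelet pulls it like any other image.
    image:
      reference: registry.example.com/models/llm-7b:v3  # illustrative
      pullPolicy: IfNotPresent
```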
Gang scheduling is finally coming to core Kubernetes
This has been an open gap for a long time. For distributed training jobs, simulation workloads, and anything else where a group of pods must all run together or not at all, vanilla Kubernetes has never had a good answer. You either used external schedulers like Volcano or you engineered around the limitation.
The new PodGroup API, landing in alpha, treats a group of pods as a single scheduling unit. Either all pods in the group are bound to nodes, or none of them are. No more partial allocations where a job half-starts and then sits waiting for resources that may never arrive.
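The alpha API is still settling, so treat this as illustrative of the idea rather than the final shape; the group, version, and field names below are assumptions:

```yaml
# Illustrative only: alpha API, exact schema may differ.
apiVersion: scheduling.k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: training-job
spec:
  # All eight worker pods must be schedulable before any is bound;
  # otherwise none of them start.
  minCount: 8
```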
It is alpha, so do not build production workflows on it today. But the design is sound and the direction is clear. Native gang scheduling in core Kubernetes is coming.
Things worth knowing before you upgrade
Node log query is GA. You can now read kubelet and kube-proxy logs via kubectl without SSH access to the node. The NodeLogQuery feature gate is on by default. Small ergonomic improvement but during an incident where node access requires a separate approval flow, it matters.
The gitRepo volume plugin is permanently removed. It was deprecated in v1.11 and has been gone in spirit for years, but until v1.36 it could still technically run. Now it cannot and there is no flag to bring it back. The plugin ran git as root on the node which is as bad as it sounds. Replace it with an init container running git-sync. If you have workloads still using gitRepo, they will fail after this upgrade. Check before you upgrade.
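A hedged replacement sketch using a git-sync init container writing into a shared emptyDir (the repository URL and image tag are illustrative; pin to a current git-sync release):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-content
spec:
  initContainers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.2.1  # pin a current tag
    args:
    - --repo=https://github.com/example/config.git   # illustrative repo
    - --ref=main
    - --root=/git
    - --one-time          # clone once, then exit so the pod can start
    volumeMounts:
    - name: repo
      mountPath: /git
  containers:
  - name: app
    image: registry.example.com/app:1.0  # illustrative
    volumeMounts:
    - name: repo
      mountPath: /config
      readOnly: true
  volumes:
  - name: repo
    emptyDir: {}
```

Unlike gitRepo, nothing here runs git as root on the node; the clone happens inside an ordinary container.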
Service.spec.externalIPs is deprecated. This field has been a known MITM attack vector since CVE-2020-8554. Full removal is planned for v1.43, so you have time, but start looking at LoadBalancer services or the Gateway API as replacements.
Mixed version proxy is in beta. HA clusters doing rolling control plane upgrades have always had a window where different apiservers are serving different API versions, leading to sporadic 404s. This feature routes each request to whichever apiserver can actually serve it. Much safer upgrade windows for anyone running multiple apiserver replicas.
Mutable resources for suspended jobs is beta and enabled by default. You can suspend a Job, update its CPU, memory, or GPU requests, and resume it without destroying and recreating pods. If you are building queue-based batch scheduling systems this is the feature that makes dynamic resource adjustment actually clean to implement.
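In practice: create the Job suspended, adjust the template's resources while it waits in the queue, then flip suspend off. A sketch (image and values illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train
spec:
  suspend: true     # while suspended, the template's resources may be updated
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/train:1.0  # illustrative
        resources:
          requests:
            cpu: "4"      # adjustable while suspend=true
            memory: 8Gi
```

Resuming is then a patch along the lines of kubectl patch job train --type=merge -p '{"spec":{"suspend":false}}'.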
HPA scale-to-zero for external metrics is in alpha. The autoscaler can now bring a deployment down to zero replicas based on Object or External metrics like queue depth, and scale it back up when work arrives. Still behind a feature gate, but the infrastructure cost savings for idle queue-driven workloads are obvious.
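A sketch of what this could look like, assuming the HPAScaleToZero feature gate is enabled (the metric name is illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 0            # only valid with the alpha feature gate enabled
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: queue_depth   # illustrative external metric
      target:
        type: AverageValue
        averageValue: "10"
```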
Full changelog
✅ Stable (GA) in v1.36
- User Namespaces in pods
- Mutating Admission Policies
- External signing of ServiceAccount tokens
- Fine-grained kubelet API authorization
- Volume Group Snapshots
- Mutable CSINode Allocatable (volume attach limits)
- OCI Artifact volume source
- Node log query
- PSI metrics on cgroupv2
- SELinux volume labeling speedup
- DRA admin access for ResourceClaims and ResourceClaimTemplates
- DRA prioritized alternatives in device requests
- DRA extended PodResources for Dynamic Resource Allocation
- Declarative validation with validation-gen
- Removal of gogoprotobuf dependency for Kubernetes API types
- Portworx in-tree to CSI driver migration
- ProcMount option
- CSI driver opt-in for service account tokens via secrets field
🔵 Promoted to Beta in v1.36
- Constrained impersonation
- Strict IP/CIDR validation
- Mutable container resources for suspended Jobs
- Mixed version proxy
- DRA partitionable devices and consumable capacity
- DRA device taints and tolerations
- DRA ResourceClaim device status
- Memory QoS with cgroupv2
- .kuberc user preferences for kubectl
- Staleness mitigation for controllers
- ComponentStatusz /statusz endpoint
- ComponentFlagz /flagz endpoint
- Resource health status via allocatedResourcesStatus
🧪 New in Alpha in v1.36
- Workload Aware Scheduling with PodGroup API
- HPA scale-to-zero for Object and External metrics
- Native sparse histogram support for Kubernetes metrics
- Manifest-based admission control configuration
- CRI list streaming for the kubelet
- DRA native resources for CPU management
- DRA downward API for resource attributes
- DRA ResourceClaim support for workload controllers
⚠️ Deprecations and Removals
- Removed: gitRepo volume plugin, permanently and without a fallback option
- Deprecated: Service.spec.externalIPs, planned removal in v1.43
Conclusion
v1.36 is a maturation release more than a revolution. The headline features are things the community has been waiting on for a while (user namespaces, mutating admission policies, PSI metrics) and watching them land at GA feels less like news and more like relief. The alpha features, particularly gang scheduling and HPA scale-to-zero, sketch out clearly where the next few releases are going for AI and batch workloads.
The practical upgrade advice is simple: audit for gitRepo usage before anything else. That is the one thing in this release that will break silently if you are not looking for it. Everything else is additive or behind feature gates.
The Kubernetes project has been shipping at a remarkable cadence for over a decade. v1.36 is another steady step in the same direction.
References
- Kubernetes v1.36 Official Release Blog
- Full v1.36 Release Notes on GitHub
- KEP #127: User Namespaces
- KEP #3962: Mutating Admission Policies
- KEP #4639: OCI Artifact Volume Source
- KEP #4205: PSI Metrics on cgroupv2
- KEP #4671: Workload Aware Scheduling
- KEP #5040: Remove gitRepo Volume Driver
- KEP #5707: Deprecate Service externalIPs
- Kubernetes Deprecation and Removal Policy