
Torque for MechCloud Academy


What is New in Kubernetes 1.36: A Complete Guide to the Haru Release

The cloud native ecosystem is in a state of constant evolution. In late April 2026, the community proudly introduced Kubernetes 1.36, officially codenamed Haru. In the Japanese language, the word Haru carries several beautiful meanings including spring, clear skies, and far off. This codename perfectly encapsulates the thematic spirit of this release. It brings long awaited architectural features into the clear light of stable status, introduces fresh innovations for the spring of a new technological era, and provides a visionary glimpse into the far off future of distributed operating systems.

As artificial intelligence workloads and complex heterogeneous environments dominate the infrastructure landscape, Kubernetes is rapidly transitioning from a simple container orchestration platform into a highly sophisticated distributed operating system tailored specifically for the AI era. In this comprehensive guide, we will explore everything platform engineers, developers, and system administrators need to know about Kubernetes 1.36. We will cover the massive advancements in Dynamic Resource Allocation, the vital security features that have finally reached general availability, the intelligent scheduling mechanisms for machine learning workloads, and the necessary code deprecations that clean up legacy technical debt.

Whether you are managing massive multi tenant clusters or deploying highly specialized data science pipelines, Kubernetes 1.36 offers an incredible array of powerful new tools. Let us dive deep into the specific enhancements and architectural shifts that make this release one of the most exciting updates in recent history.

The Evolution of Dynamic Resource Allocation

One of the primary focal points of Kubernetes 1.36 is the massive enhancement of the Dynamic Resource Allocation framework. Historically, assigning specialized hardware such as GPUs, TPUs, and FPGAs to containers required clunky device plugins that lacked flexibility. With the exponential rise of AI training and machine learning inference workloads, platform engineering teams needed a robust, native, and granular way to handle expensive hardware accelerators. This release delivers several major advancements to bridge that gap.

First and foremost is the introduction of partitionable devices. In older versions of the platform, dedicating a highly expensive graphics processing unit to a single pod often resulted in massive resource underutilization. With this newly introduced capability, a single hardware accelerator can be programmatically split into multiple logical units. These smaller logical units can be safely and independently shared across various workloads. This ensures that platform administrators can maximize efficiency and squeeze every ounce of performance out of their specialized hardware budgets.
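
To make this concrete, here is a hedged sketch of requesting a slice of a partitionable accelerator through the Dynamic Resource Allocation API. The device class name gpu.example.com is an illustrative placeholder, not the name used by any particular driver:

```yaml
# Sketch only: a ResourceClaim asking a DRA driver for one logical
# partition of a shared accelerator. With partitionable devices, the
# driver advertises each logical slice as its own device in a
# ResourceSlice, so a claim simply requests one of them.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: quarter-gpu
spec:
  devices:
    requests:
    - name: gpu-slice
      exactly:
        deviceClassName: gpu.example.com
        allocationMode: ExactCount
        count: 1
```

A pod then references the claim through its spec.resourceClaims list, exactly as it would for a whole physical device.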

Next, Kubernetes 1.36 introduces device attributes in the Downward API. Previously, if a workload needed to know which physical device it was using, it had to query the API server manually or rely on customized external controllers. Now, the Dynamic Resource Allocation driver can populate device metadata directly into a standard JSON file mounted inside the container. Applications can instantly discover their assigned PCIe bus addresses, unique hardware identifiers, and driver attributes as environment variables or local files.
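
A sketch of the consuming side follows. The metadata file path in the comment is a hypothetical example, since the release notes do not fix a location, and the claim name is illustrative:

```yaml
# Sketch: a pod consuming an existing DRA ResourceClaim. The driver
# surfaces device metadata (PCIe address, device ID, attributes) to the
# container as a file; /var/run/dra/devices.json is a hypothetical path.
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: quarter-gpu   # any pre-created claim
  containers:
  - name: worker
    image: ghcr.io/example/inference:latest
    resources:
      claims:
      - name: gpu
    # the application reads its device metadata from the mounted file
    # instead of querying the API server
```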

Furthermore, the release introduces native device taints and tolerations. Much like traditional node taints, administrators can now apply taints directly to individual hardware devices. If a specific accelerator is overheating, requires firmware maintenance, or is reserved for a high priority data science team, an administrator can taint the device, and only pods whose claims carry a matching toleration will be permitted to access it. This level of granularity allows infrastructure teams to take individual devices out of service for maintenance without draining an entire node of its general compute workloads.
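
The upstream design for device taints uses a dedicated rule object. A hedged sketch, with illustrative driver, pool, and device names (the API version and field names may differ in your cluster):

```yaml
# Sketch based on the alpha DRA device-taints design: mark one device
# unschedulable for firmware maintenance. Claims that should still land
# on it declare a matching toleration in their device request.
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceTaintRule
metadata:
  name: gpu-0-firmware-maintenance
spec:
  deviceSelector:
    driver: gpu.example.com
    pool: node-a
    device: gpu-0
  taint:
    key: example.com/maintenance
    value: firmware-update
    effect: NoSchedule
```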

Finally, we see the implementation of resource availability visibility. Previously, determining cluster wide hardware capacity required elevated administrative privileges and complex cross namespace queries. Now, users can issue a single request to the control plane, which compiles a status summary of all available resources. This provides immediate insight into real time cluster capacity, so automated deployment pipelines can make informed decisions before attempting to schedule resource heavy batch tasks.

Monumental Upgrades in Security and Isolation

Security remains paramount in strictly regulated multi tenant environments. Kubernetes 1.36 brings several heavily anticipated security enhancements to stable status. The most notable is the graduation of User Namespaces in Pods to general availability. Container isolation has always been a notoriously difficult problem. By mapping the root user inside a container to an unprivileged user on the host node, this feature ensures that even if a malicious actor escapes the container environment, they hold no administrative privileges on the underlying host. Cluster operators can now confidently deploy these hardened isolation techniques to protect sensitive production environments.
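
Opting a pod into user namespaces uses the existing hostUsers field, and that single line is all a manifest needs once the feature is generally available:

```yaml
# With hostUsers: false, UID 0 inside the container is mapped to an
# unprivileged UID range on the host, so a container escape does not
# yield host root.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: app
    image: nginx:1.27
```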

Another major win for security and operational simplicity is the stabilization of Mutating Admission Policies. In the past, platform teams had to deploy, secure, and monitor external webhooks to mutate incoming API requests. This meant maintaining additional infrastructure, added network latency, and often created dangerous single points of failure during cluster bootstrapping. Now, cluster administrators can define mutation rules natively in YAML using the Common Expression Language. By removing the need for external webhooks, control planes become more resilient, faster, and easier to maintain.
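
As a sketch of the in-tree approach (the exact API version string may differ by release), a policy that stamps a label onto every newly created pod with a CEL apply configuration might look like:

```yaml
# Sketch: mutate incoming pod CREATE requests without a webhook. A
# separate MutatingAdmissionPolicyBinding attaches the policy to a
# scope, mirroring the validating policy API.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: add-team-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"team": "platform"}
          }
        }
```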

Additionally, the release addresses a critical startup vulnerability with the introduction of manifest based admission control configuration. Historically, security policies were stored dynamically as standard API objects. If the API server crashed and restarted, there was occasionally a brief window in which incoming requests could be processed before the security rules fully loaded into memory. By defining admission control policies in static boot manifests, Kubernetes ensures that your security posture is enforced from the very first moment of operation.
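
The API server has long accepted a static admission configuration file via the --admission-control-config-file flag, and the new mechanism builds on that shape. A hedged sketch, with an illustrative file path:

```yaml
# AdmissionConfiguration is loaded at API server boot, before any
# requests are served. The referenced policy file path is illustrative;
# it would contain the policy manifests themselves.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionPolicy
  path: /etc/kubernetes/admission/policies.yaml
```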

We also see the highly anticipated faster SELinux labeling for volumes reaching general availability. Instead of sequentially and recursively relabeling every file in a large persistent volume, the kubelet now uses a mount option to apply the correct security context at the filesystem level in a single step. This eliminates long pod startup delays on SELinux enforcing operating systems, a significant performance win for security conscious organizations.

Smarter Workload Management and Advanced Scheduling

The orchestration of highly complex, mathematically intensive, and distributed workloads requires incredibly intelligent scheduling. Kubernetes 1.36 introduces features specifically designed from the ground up to handle high performance computing and distributed AI tasks.

The Topology Aware Scheduling algorithm represents a major alpha addition to the scheduling ecosystem. When dealing with tightly coupled computational workloads, such as distributed deep neural network training that requires immense bandwidth between individual nodes, random pod placement is no longer sufficient. The new scheduler treats a group of related pods as a single logical unit and ensures they are placed within the most favorable network topology, such as the same physical server rack or a dedicated high speed interconnect.
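
The new grouping API is alpha and its schema is not spelled out here. The closest long-standing primitive is required pod affinity on a topology key, which already co-locates a set of related pods in one topology domain; the labels and image below are illustrative:

```yaml
# Each worker pod in the group carries the job=trainer label and
# requires co-location with its peers in the same zone.
apiVersion: v1
kind: Pod
metadata:
  name: trainer-0
  labels:
    job: trainer
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            job: trainer
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: worker
    image: ghcr.io/example/trainer:latest
```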

Building upon this physical awareness is the newly proposed Workload Aware Preemption mechanism. Traditional preemption operates strictly on a per pod basis: if a high priority system job needed resources, the scheduler might evict a single pod from a running AI training job. Because distributed training tasks are deeply interdependent, losing even one pod can invalidate the entire job and waste enormous amounts of compute time. The new workload aware logic ensures that preemption happens at the job level instead. The scheduler will either preempt an entire lower priority workload or do nothing at all, preserving the computational integrity of active batch processes.

For background processing and queue management, the Mutable Pod Resources for Suspended Jobs capability is now enabled by default as a beta feature. Queue controllers can dynamically adjust the CPU, memory, and accelerator limits of an actively suspended job right before it resumes. This allows batch processors to adapt to the real time conditions of the cluster: they can scale down resource requests during peak traffic hours and scale them up when computational capacity is abundant.
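
In practice, a job is created suspended, resized by a queue controller, and then released. A sketch, with illustrative image and resource values:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-transform
spec:
  suspend: true            # created suspended; a queue controller resumes it
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo processing"]
        resources:
          requests:
            cpu: "2"       # while suspend is true, a controller may patch
            memory: 4Gi    # these values before flipping suspend to false
```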

Furthermore, the Horizontal Pod Autoscaler finally supports scaling down to zero replicas based on external metrics. If a microservice only processes messages from an external cloud queue, and that queue is empty, the autoscaler can safely terminate all running pods. This scale to zero functionality is essential for minimizing costs in event driven serverless architectures.
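
A sketch of an HPA that parks a queue worker at zero replicas when an external queue-depth metric reads empty. The metric name is illustrative, and scale-to-zero has historically sat behind the HPAScaleToZero feature gate:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 0           # all pods terminate when the queue is empty
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready
      target:
        type: AverageValue
        averageValue: "30"
```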

Storage Visibility and Infrastructure Telemetry

Storage capacity management receives a highly requested quality of life upgrade with the introduction of Persistent Volume Claim last used time tracking. In large enterprise clusters, identifying abandoned storage is an operational nightmare: volumes silently accumulate over time, racking up cloud bills despite being detached from any running application. Kubernetes 1.36 introduces an explicit unused condition in the persistent volume claim status. This visibility allows financial operations teams to identify and garbage collect orphaned persistent volumes, significantly reducing ongoing storage expenditures.
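
The exact schema is not fixed by the notes summarized here, so the condition below is a hypothetical illustration of what the status might surface:

```yaml
# Hypothetical illustration only: an "Unused" condition on a bound but
# unattached claim, which cost tooling could filter on. The condition
# type and timestamp are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ml-scratch
status:
  phase: Bound
  conditions:
  - type: Unused
    status: "True"
    lastTransitionTime: "2026-04-30T09:00:00Z"
```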

On the node level, Container Storage Interface drivers can now dynamically update the maximum number of volumes a given node can support. Previously, updating this strict volume limit on a heavily loaded node required a full component restart. The kubelet can now adjust these limits on the fly based on active driver feedback, preventing the scheduler from assigning critical workloads to nodes that have quietly hit their underlying storage limitations.

In the realm of telemetry, Pressure Stall Information integration has reached general availability. The kubelet natively ingests and exposes detailed metrics on CPU, memory, and input output pressure through the standard Summary API. Platform engineers no longer need to rely on external node exporters to detect underlying hardware bottlenecks: the core system provides real time insight into resource starvation long before it causes a catastrophic cascading failure.

Networking Upgrades and Modern Observability

While legacy networking models have served the community incredibly well for years, Kubernetes 1.36 signals a major architectural push toward significantly more modern networking paradigms. The strategic deprecation of older networking fields actively pushes users towards the highly modernized Gateway API, which consistently offers highly declarative, role oriented routing rules. Unlike legacy ingress controllers, the Gateway API cleanly separates the organizational responsibilities of underlying infrastructure providers, internal cluster operators, and standard application developers.
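
The role separation is visible in the API itself: an infrastructure operator publishes a Gateway, and application teams attach routes to it. An illustrative HTTPRoute (names and hostname are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: shared-gateway        # Gateway owned by the cluster operator
  hostnames: ["app.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: app-svc             # the application team's own Service
      port: 8080
```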

Furthermore, the networking special interest group has implemented dynamic source IP resolution for NodePort services at the namespace level, allowing security administrators to enforce localized egress and ingress network policies. Additionally, improved IPv6 egress policy handling now returns standard destination unreachable signals when unauthorized traffic is denied, substantially improving the diagnosability of complex dual stack networking issues.

For clusters running advanced configurations, platform teams will benefit from improved resource management through In Place Vertical Scaling for running pods, which continues to mature in this release. Previously, static CPU management policies that granted specific pods exclusive access to isolated CPU cores struggled to reconcile changes in resource requests without restarting the underlying container. This enhancement allows critical applications to increase their computational power on the fly. For databases or real time streaming applications experiencing sudden traffic spikes, this ensures performance remains optimal without the downtime of a forced pod restart.
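
Per-resource restart behavior during an in-place resize is controlled by the container's resizePolicy field. The image and values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres:16
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU can grow without a restart
    - resourceName: memory
      restartPolicy: RestartContainer  # memory changes restart the container
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
```

Recent kubectl versions apply such a change through the pod's resize subresource rather than recreating the pod.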

Essential Cleanups: Deprecations and Removals

A deeply healthy open source ecosystem requires routinely pruning legacy code. Kubernetes 1.36 firmly follows through on several long standing deprecations to strictly enforce modern operational security practices.

The most widely discussed removal is the complete elimination of the gitRepo volume driver. In the early days of container orchestration, users needed a simple way to deploy applications directly from source control, and this legacy plugin allowed pods to clone Git repositories during startup. However, it operated with high privileges and posed a significant risk of remote code execution on the underlying host node. Upgrading to this release will break manifests that still rely on this outdated volume type. Engineering teams must transition to standard init containers to clone remote repositories safely.
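
The replacement pattern is straightforward: a short-lived init container clones into an emptyDir volume that the main container then mounts. The repository URL and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-from-git
spec:
  volumes:
  - name: src
    emptyDir: {}                 # shared scratch volume for the checkout
  initContainers:
  - name: clone
    image: alpine/git:latest
    args: ["clone", "--depth=1", "https://github.com/example/app.git", "/src"]
    volumeMounts:
    - name: src
      mountPath: /src
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "ls /src && sleep 3600"]
    volumeMounts:
    - name: src
      mountPath: /src
```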

Another critical deprecation involves the heavily scrutinized externalIPs field in the Service specification. For years, this problematic field allowed non privileged users to hijack internal traffic by claiming arbitrary IP addresses, opening the door to man in the middle attacks. Kubernetes 1.36 introduces a feature gate to block the proxy from processing these rules, and over the next few release cycles the functionality will be removed from the codebase entirely. Platform teams are strongly encouraged to migrate their external traffic routing to the modern, securely designed Gateway API.

Lastly, the kubectl command line interface introduces cleaner local configuration management. A new configuration file separates sensitive cluster credentials from user display preferences. This architectural shift prevents accidental credential leaks and standardizes the way developers interact with multiple remote clusters simultaneously.

Final Thoughts and Operational Conclusion

Kubernetes 1.36 represents a truly transformative software release that directly addresses the complex operational needs of the modern cloud native ecosystem. By heavily investing architectural effort into Dynamic Resource Allocation, systematically stabilizing highly critical security features like User Namespaces, and fundamentally optimizing core scheduling algorithms for mathematically intensive artificial intelligence workloads, the open source project definitively continues to prove its incredible resilience and widespread adaptability.

As you meticulously prepare your organizational clusters for this massive infrastructure upgrade, ensure that you carefully review your existing admission controllers, actively update your legacy storage manifests, and methodically migrate your operational configurations away from explicitly deprecated networking fields. Fully embracing the powerful innovations brought forth by the Haru release will confidently ensure your underlying infrastructure remains inherently secure, highly cost efficient, and structurally prepared for the next brilliant generation of intelligent cloud applications.
