Kubernetes v1.35 Raises the Cost of Bad Certificate Hygiene

Kubernetes v1.35, codenamed Timbernetes, was released in December 2025 with a strong focus on stability, performance, and security hardening.

At first glance, it doesn't look like a "TLS release." There's no headline feature about certificate expiry or PKI changes. But taken together, the changes in v1.35 continue a multi-release shift that makes poor certificate hygiene more dangerous and more disruptive than before.

SSL/TLS failures were already painful. In v1.35, they are easier to trigger, harder to paper over, and more likely to surface as production incidents rather than as obvious config errors.


Certificates are no longer just an ingress concern

In early Kubernetes setups, certificates mostly mattered at the edges:

  • ingress controllers
  • public HTTPS endpoints
  • a handful of webhooks

Modern clusters are different.

Certificates now underpin:

  • service-to-service authentication
  • workload identity
  • control-plane authentication via OIDC/JWT
  • internal APIs and admission webhooks

By v1.35, Kubernetes has leaned even further into this model. Certificates are increasingly part of the normal control flow, not optional security add-ons.


Pod Certificates reach beta (but require intent)

Kubernetes v1.35 moves Pod Certificates to beta via the PodCertificateRequest API and projected podCertificate volumes.

This is an important milestone. It means Kubernetes-native certificate issuance for pods is now considered mature enough for broader, real-world use, even though the API is still beta.

However, two things matter operationally:

  • Pod Certificates are opt-in and feature-gate dependent
  • Adoption is still cluster and distro-specific

For many clusters today, mTLS is still handled by cert-manager or service meshes. But v1.35 makes a different path viable: workloads can receive certificates directly from Kubernetes, with the kubelet involved in issuance and rotation.
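
As a rough sketch of what this looks like, here is a pod requesting a certificate through a projected podCertificate volume. The image and signer name are placeholders, and the field names follow the beta API from KEP-4317, so verify them against the documentation for your exact release:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: example.com/app:latest        # placeholder image
    volumeMounts:
    - name: pod-tls
      mountPath: /var/run/pod-tls
      readOnly: true
  volumes:
  - name: pod-tls
    projected:
      sources:
      - podCertificate:
          signerName: example.com/internal-ca   # placeholder signer
          keyType: ED25519
          # The kubelet writes the private key and certificate chain
          # here, and refreshes the bundle as rotation happens.
          credentialBundlePath: credentialbundle.pem
```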

For teams that enable this, certificate lifecycle issues move closer to the application layer. When something goes wrong, failures don't look like "TLS errors" anymore. They look like:

  • retries
  • timeouts
  • intermittent 5xxs
  • partial service degradation

That shift makes certificate visibility more important, not less.


In-place Pod resource updates mean pods live longer

In-place Pod CPU and memory updates graduate to GA in v1.35. This allows operators to adjust resource requests and limits without restarting pods.

This is a big win for:

  • stateful workloads
  • long-running batch jobs
  • latency-sensitive services
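
Mechanically, each container can declare a resizePolicy that controls whether changing a given resource restarts the container; the resize itself is applied through the pod's resize subresource. A minimal sketch, with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: 512Mi
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU can change in place
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart this container
```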

The GA of in-place resize also undermines a subtle assumption many teams relied on: that pods restart often. Increasingly, they don't.

Longer-lived pods increase the likelihood that certificates expire during a pod’s lifetime. Restart-based "fixes" become less common, and rotation failures become more visible.

If certificates tied to workloads aren't monitored or rotated correctly, failures now surface mid-flight rather than being masked by redeploys.


Structured authentication config tightens control-plane trust

Kubernetes has been moving away from flag-based JWT/OIDC configuration since v1.30, introducing structured authentication configuration for the API server.

By v1.35, this pattern is no longer new, and adoption continues to grow. More clusters now:

  • rely on structured JWT/OIDC config
  • update authentication settings dynamically
  • integrate external identity providers more deeply

This improves security and flexibility, but it also means:

  • more certificate-backed trust chains
  • stricter validation paths
  • fewer "it still works" shortcuts

Expired or misconfigured issuer certificates can break:

  • kubectl access
  • controllers
  • CI/CD pipelines
  • automated cluster operations

These failures are often discovered during upgrades or audits, not during routine testing.
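
For context, a structured authentication config looks roughly like the sketch below (the issuer details are placeholders, and clusters predating this feature's GA use a beta apiVersion). Note the inline CA bundle: it is exactly the kind of certificate that can quietly expire and take kubectl access with it:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com   # placeholder identity provider
    audiences:
    - my-cluster
    # CA bundle used to validate the issuer's TLS certificate; if this
    # trust chain is expired or wrong, token validation can't reach
    # the issuer and JWT/OIDC logins fail.
    certificateAuthority: |
      -----BEGIN CERTIFICATE-----
      ...placeholder...
      -----END CERTIFICATE-----
  claimMappings:
    username:
      claim: sub
      prefix: "oidc:"
```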


Legacy escape hatches continue to disappear

Kubernetes v1.35 continues the project's long-running effort to remove or phase out legacy components.

Notably:

  • cgroup v1 is in maintenance mode; upstream increasingly assumes cgroup v2
  • validation and defaults continue to tighten across subsystems
  • older runtime and networking paths are increasingly discouraged

The practical effect is that clusters running on outdated assumptions lose tolerance for:

  • old OpenSSL behavior
  • weak or manual certificate practices
  • undocumented trust paths

Certificates that once "sort of worked" are more likely to fail fast.


Expired certificates rarely fail in obvious ways

One of the most dangerous aspects of certificate issues in Kubernetes is how they manifest:

  • pods marked Ready but unable to serve traffic
  • intermittent request failures
  • admission webhooks silently blocking deployments
  • controllers stuck retrying
  • services flapping without clear errors

These symptoms are frequently misdiagnosed as:

  • DNS issues
  • network problems
  • resource pressure
  • application bugs

By the time certificate expiration is identified as the root cause, user impact has usually already occurred.


Websites still matter, and failures cascade

Even with all the internal complexity, most clusters still expose:

  • websites
  • APIs
  • OAuth callbacks
  • webhooks

An expired public certificate can:

  • break CI/CD integrations
  • block authentication flows
  • fail load balancer health checks
  • trigger hard failures in browsers

In Kubernetes environments, external and internal certificate failures often cascade across layers.


What certificate hygiene looks like in a v1.35 world

Clusters running Kubernetes v1.35 should treat certificates as first-class operational assets:

  • maintain a clear inventory of certificates
  • monitor expiration proactively
  • understand rotation behavior rather than assuming it works
  • avoid one-off, manual issuance paths

Most SSL-related outages are not caused by cryptography. They're caused by a lack of visibility.
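
As one concrete illustration for teams already using cert-manager (all names below are placeholders; this is a sketch, not a drop-in manifest), rotation behavior can be declared explicitly instead of assumed:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls
  namespace: default
spec:
  secretName: app-tls       # Secret that receives the issued key pair
  duration: 2160h           # 90-day certificate lifetime
  renewBefore: 720h         # begin renewal 30 days before expiry
  dnsNames:
  - app.example.com         # placeholder hostname
  issuerRef:
    name: internal-ca       # placeholder ClusterIssuer
    kind: ClusterIssuer
```

Pairing declarations like this with alerting on the expiration metrics cert-manager exports covers both the "monitor proactively" and "understand rotation" items above.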


Where tools like CertPing fit

Some teams manage this with internal scripts and alerts. Others stitch together monitoring, PKI tooling, and spreadsheets.

Platforms like CertPing exist to centralize:

  • SSL/TLS expiration monitoring
  • website and endpoint checks
  • early warning before certificates expire
  • visibility across domains and environments

The goal isn’t to replace Kubernetes-native features, but to make sure the certificates those features depend on don’t quietly become single points of failure.


Final thought

Kubernetes v1.35 improves reliability, security, and flexibility.
It also raises the operational cost of ignoring certificates.

SSL expiration is no longer an edge case. It's an inevitability unless actively managed.

Upgrading your cluster without a certificate strategy doesn't make you safer.
It just makes the failures sharper.

