Rishabh Jain
Enhance Your Deployments with Pod Topology Spread Constraints: K8s 1.30

Pod Topology Spread Constraints in Kubernetes help us spread Pods evenly across different parts of a cluster, such as nodes or zones. This keeps our applications resilient and available: it prevents too many Pods from clustering in one spot, which could otherwise become a single point of failure.

Key Parameters:-

  1. Topology Key:- A node label key that defines the domains across which your Pods are spread. Commonly used topology keys:
    ⇒ kubernetes.io/hostname → This key spreads Pods across different nodes within the cluster.
    ⇒ topology.kubernetes.io/zone → This key spreads Pods across different availability zones.
    ⇒ topology.kubernetes.io/region → This key spreads Pods across different regions.

  2. MaxSkew:- Maximum allowed difference in the number of Pods between the most and least populated groups defined by your Topology Key.

  3. WhenUnsatisfiable:- This is what Kubernetes does when it can't meet your specified Pod spread criteria.
    ⇒ DoNotSchedule → Prevents scheduling if the constraint is violated.
    ⇒ ScheduleAnyway → Schedules the Pod anyway, but the scheduler gives higher precedence to placements that reduce the skew.

  4. LabelSelector:- Standard Kubernetes label selector to filter which Pods are considered for the constraint.

  5. minDomains:- Ensures that Pods are spread across at least ‘n’ different domains (for example, zones). This is an optional field which can only be used when whenUnsatisfiable is set to DoNotSchedule.

Basic YAML configuration

apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: new-app
  template:
    metadata:
      labels:
        app: new-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: "topology.kubernetes.io/zone"
          whenUnsatisfiable: "DoNotSchedule"
          labelSelector:
            matchLabels:
              app: new-app
          minDomains: 3
      containers:
        - name: new-app-container
          image: new-app-image

Note -

Suppose we have three zones, and we are deploying Pods with a maxSkew of 1. The distribution might look like this:
• Zone A: 2 Pods
• Zone B: 2 Pods
• Zone C: 1 Pod
Here, the maximum difference in Pod counts between any two zones is 1, which satisfies maxSkew: 1. The next Pod must therefore be scheduled in Zone C: placing it in Zone A or Zone B would raise the skew to 2, and with DoNotSchedule the scheduler refuses any placement that would exceed the limit.
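
If refusing placement is too strict for a workload, the same constraint can be relaxed to best-effort spreading. Below is a minimal sketch, reusing the app: new-app labels from the Deployment above; note that minDomains is only honored together with DoNotSchedule, so it is omitted here:

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: "topology.kubernetes.io/zone"
          whenUnsatisfiable: "ScheduleAnyway" # best-effort: the Pod is still scheduled, but low-skew placements are preferred
          labelSelector:
            matchLabels:
              app: new-app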

Pod Topology Spread Constraints can be used with various k8s objects:-

  1. Deployments
  2. StatefulSets
  3. DaemonSets
  4. ReplicaSets
  5. Jobs and CronJobs

Some common use cases:-

  1. Deploying a web application with its replicas spread evenly across multiple availability zones to ensure high availability and fault tolerance.
  2. Deploying a stateful application, such as a database, with Pods spread across different nodes to prevent data loss in case of node failure (see the sketch after this list).
  3. Deploying a batch processing application with workloads distributed across multiple zones to optimize resource utilization and ensure processing continuity.
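
As an illustration of use case 2, here is a minimal StatefulSet sketch that spreads replicas across nodes via kubernetes.io/hostname. The names (new-db, new-db-headless) and image are hypothetical placeholders:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: new-db
spec:
  serviceName: new-db-headless # hypothetical headless Service required by StatefulSets
  replicas: 3
  selector:
    matchLabels:
      app: new-db
  template:
    metadata:
      labels:
        app: new-db
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: "kubernetes.io/hostname" # spread across nodes instead of zones
          whenUnsatisfiable: "DoNotSchedule"
          labelSelector:
            matchLabels:
              app: new-db
      containers:
        - name: new-db-container
          image: new-db-image # placeholder image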

Other mechanisms to achieve balanced Pod distribution and resilience:-

  1. Pod AntiAffinity (sketched after this list)
  2. Node Affinity
  3. Cluster Autoscaler with --balance-similar-node-groups
  4. Manual Distribution
  5. Custom Schedulers
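
For comparison, here is roughly what option 1 (Pod AntiAffinity) looks like when aiming for the same zone-level spread. This is a sketch only; unlike topologySpreadConstraints, required anti-affinity is all-or-nothing: once each zone holds one matching Pod, no further replicas can be scheduled there, because there is no maxSkew equivalent:

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "topology.kubernetes.io/zone"
              labelSelector:
                matchLabels:
                  app: new-app # assumes the same Pod labels as the Deployment above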

Significant advantages of Pod Topology Spread Constraints over other mechanisms

  1. Enhanced Granularity and Control: Pod Topology Spread Constraints allow precise control over Pod distribution across various domains (e.g., zones, nodes), ensuring a balanced deployment with minimal skew between domains.
  2. Automation and Simplicity: Unlike Pod AntiAffinity and Node Affinity, which can be complex and require manual management, Pod Topology Spread Constraints automatically balance Pods based on predefined rules, reducing manual effort and errors.
  3. Proactive Balancing: This feature ensures Pods are evenly distributed at the time of scheduling, unlike the Cluster Autoscaler which reacts to imbalances after they occur, providing more immediate and consistent balance.
  4. Versatility Across Domains: While Node Affinity focuses on nodes, Pod Topology Spread Constraints work across multiple topology domains, making them more versatile for different deployment scenarios.
  5. Standardized and Built-In: As a native Kubernetes feature, Pod Topology Spread Constraints offer a standardized approach, eliminating the need for custom schedulers and ensuring compatibility with Kubernetes updates and community support.
