Jensen Jose

Understanding Taints and Tolerations in Kubernetes

Welcome back to my blog series on Kubernetes! Today we'll dive into a crucial yet often confusing topic: Taints and Tolerations. Understanding this concept is vital for anyone working with Kubernetes, as it helps you manage workloads more effectively. By the end of this post, you'll have a clear understanding of how taints and tolerations work, and you'll be able to apply these concepts confidently in your own projects.

What Are Taints and Tolerations?

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. Taints are applied to nodes, and they repel pods that do not have the corresponding toleration. This mechanism is essential for managing workloads that have specific requirements, such as running AI workloads on nodes with GPUs.

How Taints Work

A taint is a key-value pair, combined with an effect, that you apply to a node. For instance, you might have a node dedicated to AI workloads that require GPUs. You can taint this node with a key-value pair such as GPU=true. This taint prevents pods that do not tolerate it from being scheduled on the node.
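If you want to check which taints a node already has (assuming you have kubectl access to the cluster, and replacing <node-name> with your node's name), a quick way is to describe the node and filter for the Taints field:

kubectl describe node <node-name> | grep Taints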

How Tolerations Work

To allow a pod to be scheduled on a node with a taint, you need to add a toleration to the pod. The toleration must match the taint's key, value, and effect. For example, if your node has the taint GPU=true, your pod must have a toleration for GPU=true to be scheduled on that node.
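Kubernetes also supports the Exists operator, which matches a taint by its key alone, regardless of the value. As a rough sketch (reusing the GPU key from the example above), such a toleration would look like this:

tolerations:
- key: "GPU"
  operator: "Exists"
  effect: "NoSchedule"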

Taints and Tolerations in Action

Let's break down a practical example:

  1. Tainting a Node:
kubectl taint nodes <node-name> GPU=true:NoSchedule

This command applies the taint GPU=true with the NoSchedule effect, ensuring that only pods that tolerate GPU=true can be scheduled on the node.
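If you later need to remove this taint again, you can run the same command with a trailing minus sign (assuming the taint was added exactly as shown above):

kubectl taint nodes <node-name> GPU=true:NoSchedule-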

  2. Adding a Toleration to a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  tolerations:
  - key: "GPU"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

This YAML file defines a pod with a toleration that matches the node taint.

When you create this pod, Kubernetes will check the taint on the node and the toleration on the pod. If they match, the pod will be scheduled on the tainted node.
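If they do not match, the pod will typically sit in the Pending state (assuming no other suitable node is available), and you can see the scheduler's reasoning in the pod's events:

kubectl describe pod ai-pod

Look at the Events section at the bottom of the output for messages about untolerated taints.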

Effects of Taints

There are three main effects that you can specify with taints:

  1. NoSchedule: Pods that do not tolerate the taint will not be scheduled on the node.
  2. PreferNoSchedule: Kubernetes will try to avoid scheduling pods that do not tolerate the taint on the node, but it is not guaranteed.
  3. NoExecute: Pods that do not tolerate the taint will be evicted from the node if they are already running.
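For NoExecute taints, a toleration can additionally specify tolerationSeconds, which controls how long an already-running pod that tolerates the taint may stay on the node after the taint is applied. A minimal sketch (the maintenance key here is just an illustrative name) could look like this:

tolerations:
- key: "maintenance"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300

With this toleration, the pod is evicted roughly five minutes after the taint is added instead of immediately.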

Node Selectors

While taints and tolerations control which pods can be scheduled on which nodes, node selectors are another way to control pod placement. Node selectors work by adding labels to nodes and specifying those labels in pod specifications.

apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  nodeSelector:
    GPU: "true"

This configuration ensures that the pod is only scheduled on nodes with the label GPU=true.
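For this selector to match anything, the node has to carry that label first; note that tainting a node does not label it. Assuming a node named worker1 (the same name used in the example below), you could add the label like this:

kubectl label nodes worker1 GPU=true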

Example: Scheduling Pods with Taints and Tolerations

Let's see how this works in practice. First, we'll taint a node:

kubectl taint nodes worker1 GPU=true:NoSchedule

Next, we'll create a pod with a matching toleration:

apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  tolerations:
  - key: "GPU"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

Apply this pod configuration:

kubectl apply -f ai-pod.yaml

The pod will be scheduled on the tainted node because it has the appropriate toleration.
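To confirm where the pod actually landed (assuming the pod and node names from this example), you can check its node assignment; the NODE column should show worker1:

kubectl get pod ai-pod -o wide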

Conclusion

Taints and tolerations are powerful tools in Kubernetes that help you manage where pods are scheduled. By using taints, you can prevent certain workloads from running on specific nodes, while tolerations allow pods to be scheduled on nodes with matching taints. Node selectors provide additional control over pod placement by matching labels specified in the pod spec against labels on nodes.

I hope this post has clarified the concept of taints and tolerations for you. In the next blog post, we'll explore node affinity and anti-affinity, which provide even more control over pod scheduling.

Happy coding, and stay tuned for the next post in this series!

For further reference, check out the detailed YouTube video here:

Top comments (1)

André Vieira:

I'd suggest pointing out some practical use cases for the K8s features you describe.

Given the three taint effects, you did so for NoSchedule, exemplifying that a node might be reserved for AI workloads requiring GPUs.

Regarding the other two:

  • PreferNoSchedule, for instance, can be used on nodes meant for database workloads, but other pods would be allowed when there's no other node available.
  • NoExecute can be applied to nodes that need to be drained for maintenance or are encountering issues.