DEV Community


Add NVIDIA GPU support to k3s with containerd

Michael Weibel ・4 min read

After some failed attempts at adding GPU support to k3s, this article describes how to boot up a worker node with NVIDIA GPU support.
k3s, for those who are new to it, is a very small Kubernetes distribution.

There are a few reasons why adding GPU support is not that easy. The main one is that k3s uses containerd as its container runtime. Most tutorials, and also the official NVIDIA k8s device plugin, assume Docker as the container runtime. While you can easily switch k3s to Docker, we didn't want to change the runtime itself.
Kubernetes itself has a guide for adding GPU support which outlines the basic steps.

The following recipe has been tested on GCP n1-standard-1 instances with an NVIDIA Tesla T4 GPU attached.
It assumes a running master node. Each worker with an attached GPU needs a few additional steps which are outlined below.

Create device plugin DaemonSet

The device plugin is responsible for advertising the GPU resource on a node (via the kubelet).
This only needs to be done once per cluster. Every node with the matching label then automatically gets a pod from this DaemonSet assigned.

$ kubectl apply -f
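Once the device plugin is running, workloads can request the resource it advertises. A minimal sketch of such a pod spec (the pod name and CUDA image are illustrative, not from this setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                     # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:10.2-base   # any CUDA-enabled image works
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1          # resource advertised by the device plugin
```

The `nvidia.com/gpu` limit is what ties the pod to the device plugin; the scheduler will only place it on nodes where that resource has been advertised.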

The following steps are necessary on each node which needs GPU support. Placing them in a startup script is a good option.

Install drivers

# required kernel module
modprobe ipmi_devintf

# add necessary repositories
add-apt-repository -y ppa:graphics-drivers
curl -s -L | \
  apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L$distribution/nvidia-container-runtime.list | \
  tee /etc/apt/sources.list.d/nvidia-container-runtime.list
apt-get update

# install graphics driver
apt-get install -y nvidia-driver-440 nvidia-container-runtime nvidia-modprobe
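As an aside, the `$distribution` variable above is built by sourcing `/etc/os-release` in a subshell and concatenating `ID` and `VERSION_ID`. A self-contained sketch of that step (it writes an example os-release file to a temp path so it runs anywhere, on any distro):

```shell
# Sketch: how $distribution is derived. Sourcing the file in a
# subshell keeps ID/VERSION_ID out of the current environment.
printf 'ID=ubuntu\nVERSION_ID="18.04"\n' > /tmp/os-release-example
distribution=$(. /tmp/os-release-example; echo $ID$VERSION_ID)
echo "$distribution"   # ubuntu18.04
```

On a real Ubuntu 18.04 node the live `/etc/os-release` yields the same `ubuntu18.04` value, which selects the matching nvidia-container-runtime apt repository.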

Ensure the NVIDIA driver is loaded and the device files are ready


/sbin/modprobe nvidia

if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  NVDEVS=`lspci | grep -i NVIDIA`
  N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
  NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`

  # Create one device file per controller (major number 195).
  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done

  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi

/sbin/modprobe nvidia-uvm

if [ "$?" -eq 0 ]; then
  # Find out the major device number used by the nvidia-uvm driver
  D=`grep nvidia-uvm /proc/devices | awk '{print $1}'`

  mknod -m 666 /dev/nvidia-uvm c $D 0
else
  exit 1
fi
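The second half of the script looks up the dynamically assigned major number for `nvidia-uvm` in `/proc/devices`. That parsing step can be sketched in isolation (using a canned sample, since the live file only lists `nvidia-uvm` once the module is loaded; the numbers below are made up):

```shell
# Sketch: extract the major device number the same way the script does,
# but from a canned /proc/devices sample instead of the live file.
sample='Character devices:
  1 mem
195 nvidia-frontend
511 nvidia-uvm'
D=$(echo "$sample" | grep nvidia-uvm | awk '{print $1}')
echo "$D"   # 511
```

The extracted major number is then passed to `mknod` so the `/dev/nvidia-uvm` device file matches whatever number the kernel assigned on this boot.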

Install k3s

curl -sfL | \
    INSTALL_K3S_SKIP_START=true \
    K3S_URL=https://IP_OF_MASTER_ADDRESS:6443 \
    K3S_TOKEN=CONTENT_OF_/var/lib/rancher/k3s/server/node-token_ON_MASTER \
    sh -s - \
    --node-label "$(curl -fs "" -H "Metadata-Flavor: Google")"

INSTALL_K3S_SKIP_START prevents k3s from starting right away, since we first need to change the containerd config (see below).
node-label should be set to a key of your choosing; the value only matters if you want to schedule pods based on the GPU available. The example here reads a metadata attribute of the instance within GCE. Feel free to change that to something else, or simply to true.
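If you do want to restrict GPU workloads to labelled nodes, a nodeSelector in the pod spec matches the label. A sketch, assuming the hypothetical label key `gpu` with value `true` (use whatever you set in --node-label):

```yaml
spec:
  nodeSelector:
    gpu: "true"   # hypothetical key/value set via --node-label above
```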

Configure containerd

containerd needs to be configured to use a different container runtime. Since k3s regenerates config.toml on every start, this is achieved not by adjusting config.toml directly but by creating a config.toml.tmpl template file next to it.

mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/

# why "EOF":
# ($ signs would need to be escaped -> use "EOF" instead of EOF)
cat <<"EOF" > /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
[plugins.opt]
  path = "{{ .NodeConfig.Containerd.Opt }}"

[plugins.cri]
  stream_server_address = ""
  stream_server_port = "10010"

{{- if .IsRunningInUserNS }}
  disable_cgroup = true
  disable_apparmor = true
  restrict_oom_score_adj = true
{{end}}

{{- if .NodeConfig.AgentConfig.PauseImage }}
  sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
{{end}}

{{- if not .NodeConfig.NoFlannel }}
[plugins.cri.cni]
  bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
  conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
{{end}}

[plugins.cri.containerd.default_runtime]
  # ---- changed from 'io.containerd.runc.v2' for GPU support
  runtime_type = "io.containerd.runtime.v1.linux"

# ---- added for GPU support
[plugins.linux]
  runtime = "nvidia-container-runtime"

{{ if .PrivateRegistryConfig }}
{{ if .PrivateRegistryConfig.Mirrors }}
[plugins.cri.registry.mirrors]{{end}}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins.cri.registry.mirrors."{{$k}}"]
  endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{end}}

{{range $k, $v := .PrivateRegistryConfig.Configs }}
{{ if $v.Auth }}
[plugins.cri.registry.configs."{{$k}}".auth]
  {{ if $v.Auth.Username }}username = "{{ $v.Auth.Username }}"{{end}}
  {{ if $v.Auth.Password }}password = "{{ $v.Auth.Password }}"{{end}}
  {{ if $v.Auth.Auth }}auth = "{{ $v.Auth.Auth }}"{{end}}
  {{ if $v.Auth.IdentityToken }}identitytoken = "{{ $v.Auth.IdentityToken }}"{{end}}
{{end}}
{{ if $v.TLS }}
[plugins.cri.registry.configs."{{$k}}".tls]
  {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
  {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
  {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
{{end}}
{{end}}
{{end}}
EOF

Start k3s agent

Start k3s: systemctl start k3s-agent

That's it! :)
