James Skelton for DigitalOcean

Posted with Vinayak Baranwal • Originally published at digitalocean.com

A Complete Guide to Real-Time GPU Usage Monitoring

The fastest way to monitor GPU utilization in real time on Linux is to run nvidia-smi --loop=1, which refreshes GPU stats every second including core utilization, VRAM usage, temperature, and power draw.

Monitoring GPU utilization in real time starts with nvidia-smi, then expands to per-process views, container metrics, and alerts for long-running jobs. This guide shows command-level workflows you can run on Ubuntu, GPU Droplets, Docker hosts, and Kubernetes clusters.

If you are building or operating deep learning systems, pair this guide with How To Set Up a Deep Learning Environment on Ubuntu and DigitalOcean GPU Droplets.

Key Takeaways

  • Use nvidia-smi --loop=1 for the fastest host-level real-time GPU check on Linux.
  • Use nvidia-smi pmon -s um to identify which PID is using GPU cores and GPU memory bandwidth.
  • For terminal dashboards, use nvtop for interactive drill-down and gpustat for lightweight snapshots.
  • In containers and Kubernetes, expose metrics through NVIDIA runtime support and DCGM Exporter.
  • Persistent alerting belongs in monitoring platforms such as Datadog Agent or Zabbix templates.
  • GPU memory utilization and GPU core utilization are separate signals; high memory with low cores is common in input-stalled jobs.
  • On Windows, Unified GPU Usage Monitoring aggregates engine activity and surfaces it in Task Manager and WMI.

What GPU Utilization Metrics Actually Mean

GPU utilization metrics tell you whether your job is compute-bound, memory-bound, input-bound, or idle between batches. Start by tracking core utilization, memory usage, memory controller load, temperature, and power draw together instead of looking at one metric in isolation.

GPU Core Utilization vs. Memory Utilization

GPU core utilization is the percentage of time kernels are actively executing on SMs during the sampling window. GPU memory utilization in nvidia-smi usually refers to memory controller activity, while memory usage is allocated VRAM in MiB.

Low core utilization with high allocated VRAM often means the model is resident but waiting on data or synchronization. High core utilization with low memory controller activity is more common in compute-heavy kernels.

SM Utilization, Memory Bandwidth, and Power Draw

SM utilization tells you whether CUDA cores are busy, memory bandwidth indicates how hard memory channels are being driven, and power draw shows electrical load relative to the card limit. These three together explain why two workloads with similar utilization percentages can perform differently.

Use power.draw, power.limit, and utilization metrics in the same sample window when tuning batch size and dataloader workers. If power is capped while utilization is high, clock throttling can be the next bottleneck to investigate.

Why These Metrics Matter for Deep Learning Workloads

These metrics matter because training throughput is gated by the slowest stage in the pipeline. If GPU cores are idle while CPU or storage is saturated, adding another GPU will not fix throughput.

<$>[note]
For a practical environment baseline before tuning, follow How To Set Up a Deep Learning Environment on Ubuntu.
<$>

GPU Bottlenecks and Out of Memory Errors

Most GPU incidents in ML pipelines come from input bottlenecks or VRAM pressure. Diagnose both at the same time by sampling GPU, CPU, and process-level memory while a real training job is running.

CPU Preprocessing Bottlenecks

If CPU preprocessing is the bottleneck, GPU utilization drops between mini-batches even when VRAM remains allocated. This pattern appears when image decode, augmentation, or tokenization is slower than kernel execution.

Check host pressure while your training loop runs:

```bash
top
```

```bash
vmstat 1
```

```
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 824320  74384 901212    0    0     6    10  420  980 18  4 76  2  0
```

In vmstat, watch r, wa, bi, and us plus sy together:

  • r is the number of runnable processes; if it stays above your CPU core count, the CPU is saturated.
  • wa is CPU time waiting on I/O; sustained values above 10 to 15 during training often mean dataloader workers are blocked on disk reads.
  • bi is blocks received from storage; high bi with high wa points to storage bottlenecks instead of compute.
  • us + sy is total active CPU time; if it is high while GPU-Util is low, preprocessing is outrunning the GPU.

If wa is high, increase dataloader workers or switch to faster storage. If us + sy is high with low GPU-Util, move transforms to GPU with a library such as Kornia.
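This decision logic can be sketched as a small classifier over one sampling window. The thresholds below are illustrative starting points, not authoritative values; tune them for your hosts:

```python
def classify_bottleneck(gpu_util, cpu_busy, cpu_wait, runnable, cores):
    """Rough heuristic mapping one window of sampled counters to a likely bottleneck.

    gpu_util: GPU-Util percent from nvidia-smi
    cpu_busy: us + sy percent from vmstat
    cpu_wait: wa percent from vmstat
    runnable: r column from vmstat
    cores:    CPU core count on the host
    """
    if gpu_util > 80:
        return "gpu-bound"          # GPU is busy; the input pipeline is keeping up
    if cpu_wait > 15:
        return "storage-bound"      # dataloader workers blocked on disk reads
    if cpu_busy > 85 or runnable > cores:
        return "cpu-preprocessing"  # transforms are outrunning the GPU
    return "idle-or-sync"           # low everywhere: stalls or synchronization

# Low GPU util with a saturated CPU points at preprocessing:
print(classify_bottleneck(gpu_util=20, cpu_busy=92, cpu_wait=3, runnable=10, cores=8))
# → cpu-preprocessing
```

Feed it averages over 30 to 60 seconds rather than single samples, since both vmstat and nvidia-smi readings are noisy batch to batch.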

What Causes OOM Errors and How to Resolve Them

OOM errors happen when requested allocations exceed available VRAM, often due to large batch sizes, long sequence lengths, or concurrent GPU processes. Resolve OOM by lowering memory pressure first, then increasing workload cautiously.

Common fixes:

  • Reduce batch size or sequence length.
  • Use gradient accumulation to keep effective batch size.
  • Enable mixed precision where supported.
  • Terminate stale GPU processes before restart.
  • Move expensive transforms to more efficient pipeline stages.
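The gradient accumulation fix can be sketched framework-agnostically. Gradients here are plain numbers standing in for autograd results; in a real framework such as PyTorch, the same loop structure uses loss.backward(), optimizer.step(), and optimizer.zero_grad():

```python
def train_with_accumulation(micro_batches, accum_steps, apply_update):
    """Accumulate gradients over `accum_steps` micro-batches before one
    optimizer update, keeping effective batch size while lowering peak VRAM.

    micro_batches: iterable of per-micro-batch gradients (numeric stand-ins)
    apply_update:  callback receiving the averaged gradient (stand-in for step)
    Returns the number of optimizer updates performed.
    """
    accum, updates = 0.0, 0
    for i, grad in enumerate(micro_batches, start=1):
        accum += grad                          # framework: loss.backward()
        if i % accum_steps == 0:
            apply_update(accum / accum_steps)  # framework: optimizer.step()
            accum = 0.0                        # framework: optimizer.zero_grad()
            updates += 1
    return updates

applied = []
n = train_with_accumulation([1.0, 3.0, 2.0, 6.0], accum_steps=2,
                            apply_update=applied.append)
print(n, applied)  # → 2 [2.0, 4.0]
```

Two micro-batches per step here emulate a batch twice as large while only ever holding one micro-batch of activations in VRAM.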

If a stale process is still holding VRAM after a failed run, list active compute processes, verify ownership, terminate the stale PID, then confirm memory was released.

```bash
nvidia-smi --query-compute-apps=pid,used_memory,process_name --format=csv,noheader
```

```
18211, 17664 MiB, python
18304, 512 MiB, python
```

```bash
ps -p <PID> -o pid,user,etime,cmd
```

```bash
kill -9 <PID>
```

<$>[warning]
Do not kill unknown PIDs on shared hosts. Verify process ownership and job context first.
<$>

```bash
nvidia-smi  # Confirm VRAM is now released
```

Monitoring GPU Utilization with nvidia-smi

nvidia-smi is the fastest built-in tool for real-time GPU telemetry on Linux servers. It is available with NVIDIA drivers and documents fields used by most higher-level integrations.

Basic nvidia-smi Output and What Each Field Shows

Run nvidia-smi with no flags for a full snapshot of GPU and process state. Focus first on GPU-Util, Memory-Usage, Temp, and Pwr:Usage/Cap.

```bash
nvidia-smi
```

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.xx       Driver Version: 550.xx       CUDA Version: 12.x     |
| GPU  Name        Temp   Pwr:Usage/Cap   Memory-Usage       GPU-Util  Comp.M |
| 0    H100        53C    215W / 350W     18240MiB/81920MiB  78%      Default |
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
| GPU   PID   Type   Process name                                 GPU Memory  |
| 0   18211     C    python train.py                                17664MiB  |
+-----------------------------------------------------------------------------+
```

If GPU-Util shows 0% while a job appears to be running, check three common causes. The job may still be in a CPU-bound preprocessing stage and has not submitted work to the GPU yet. The process may have errored and stayed alive but idle. The job may also be running on a different GPU index, so list all devices with nvidia-smi --list-gpus and check each one.

Running nvidia-smi in Continuous Loop Mode

Use loop mode when you need live updates without writing scripts. --loop=1 refreshes once per second.

```bash
nvidia-smi --loop=1
```

```
Wed Mar 26 12:00:01 2026
... snapshot ...
Wed Mar 26 12:00:02 2026
... snapshot ...
```

Logging nvidia-smi Output to a File

Write sampled output to a file for post-run inspection. Each loop-mode snapshot is prefixed with its own timestamp, so the log can be correlated with training events later.

```bash
nvidia-smi --loop=5 > gpu.log
# gpu.log now contains one snapshot every 5 seconds
```

Querying Specific Metrics with nvidia-smi --query-gpu

Use --query-gpu with --format=csv when you need parseable output for scripts. This is the preferred pattern for cron jobs and custom exporters.

```bash
nvidia-smi --query-gpu=timestamp,index,name,utilization.gpu,utilization.memory,memory.used,memory.total,temperature.gpu,power.draw --format=csv,noheader,nounits
```

```
2026/03/26 12:10:02.123, 0, NVIDIA H100 80GB HBM3, 82, 54, 18420, 81920, 55, 228.31
```
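A minimal sketch of parsing one of these CSV samples in a cron job or custom exporter. The field names and the derived mem_pct are this sketch's own conventions; the field order matches the --query-gpu flags above:

```python
import csv
import io

# One sample line as produced by the --query-gpu command above with
# --format=csv,noheader,nounits: timestamp, index, name, util.gpu,
# util.mem, mem.used (MiB), mem.total (MiB), temp (C), power.draw (W)
SAMPLE = "2026/03/26 12:10:02.123, 0, NVIDIA H100 80GB HBM3, 82, 54, 18420, 81920, 55, 228.31"

FIELDS = ["timestamp", "index", "name", "util_gpu", "util_mem",
          "mem_used_mib", "mem_total_mib", "temp_c", "power_w"]

def parse_sample(line):
    """Parse a nounits CSV sample into a dict and derive VRAM usage percent."""
    row = next(csv.reader(io.StringIO(line), skipinitialspace=True))
    rec = dict(zip(FIELDS, row))
    rec["mem_pct"] = round(100 * int(rec["mem_used_mib"]) / int(rec["mem_total_mib"]), 1)
    return rec

rec = parse_sample(SAMPLE)
print(rec["name"], rec["util_gpu"], rec["mem_pct"])
# → NVIDIA H100 80GB HBM3 82 22.5
```

skipinitialspace handles the space nvidia-smi emits after each comma; without it the name and numeric fields keep leading blanks.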

Per-Process GPU Monitoring

Per-process monitoring answers which application is consuming GPU time right now. Use nvidia-smi pmon to inspect utilization by PID instead of by device only.

Using nvidia-smi pmon for Process-Level Metrics

Run pmon in loop mode to monitor active compute processes. -s um displays utilization and memory throughput related activity by process.

```bash
nvidia-smi pmon -s um -d 1
```

```
# gpu   pid  type    sm   mem   enc   dec   command
    0 18211     C    76    41     0     0   python
    0 18304     C    12     8     0     0   python
```

  • gpu is the GPU index the process is running on.
  • pid is the process ID.
  • type is the workload class, where C is compute, G is graphics, and M is mixed.
  • sm is the percentage of time spent executing kernels on streaming multiprocessors.
  • mem is the percentage of time the memory interface was active for that process.
  • enc and dec are encoder and decoder utilization percentages.
  • command is the truncated process name.

Correlating Process IDs to Application Names

Map PIDs to full command lines to identify notebook kernels, training scripts, and inference workers. This is required when multiple Python jobs are running under one user.

```bash
ps -p 18211 -o pid,user,etime,cmd
```

```
  PID USER     ELAPSED CMD
18211 mlops    01:22:11 python train.py --model llama --batch-size 8
```
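To find which PID to inspect first, the pmon output can be scanned for the heaviest SM consumer. A sketch over canned pmon-style text (column positions assume the -s um layout shown above):

```python
# Sample `nvidia-smi pmon -s um` lines: gpu, pid, type, sm, mem, enc, dec, command
PMON = """\
# gpu   pid  type    sm   mem   enc   dec   command
    0 18211     C    76    41     0     0   python
    0 18304     C    12     8     0     0   python
"""

def top_sm_pid(pmon_text):
    """Return (pid, sm_percent) of the process with the highest SM utilization."""
    best = None
    for line in pmon_text.splitlines():
        parts = line.split()
        # Skip the header row and any '-' placeholder entries pmon emits
        if len(parts) < 8 or not parts[1].isdigit() or not parts[3].isdigit():
            continue
        pid, sm = int(parts[1]), int(parts[3])
        if best is None or sm > best[1]:
            best = (pid, sm)
    return best

print(top_sm_pid(PMON))  # → (18211, 76)
# Then: ps -p <that PID> -o pid,user,etime,cmd to see the full command line
```

In a live workflow you would capture one pmon sampling interval and pipe it into this instead of a hardcoded string.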

Interactive GPU Monitoring with nvtop and gpustat

Use nvtop when you want interactive process control and gpustat when you want compact snapshots in scripts. Both tools complement nvidia-smi rather than replace it.

Installing and Running nvtop

Install nvtop from Ubuntu repositories, then start it in the terminal. It provides live bars and per-process views similar to htop.

```bash
sudo apt update && sudo apt install -y nvtop
nvtop
```

```
GPU0  78%  MEM 18240/81920 MiB  TEMP 54C  PWR 221W
PID 18211 python train.py   GPU 72%   MEM 17664MiB
```

Installing and Running gpustat

Install gpustat with pip, then use watch mode for one-second updates. This is useful in SSH sessions where minimal output matters.

```bash
python3 -m pip install --user gpustat
gpustat --watch 1
```

```
hostname  Thu Mar 26 12:25:44 2026
[0] NVIDIA H100 | 54C, 79 % | 18420 / 81920 MB | python/18211(17664M)
```

When to Use nvtop vs. gpustat vs. nvidia-smi

Use nvidia-smi for canonical driver-level data and scripted queries. Use gpustat for low-noise terminal snapshots, and use nvtop for interactive process monitoring during active debugging.

GPU Monitoring with Glances

Use Glances when you need one terminal dashboard for GPU, CPU, memory, disk, and network at once. Install with the GPU extra so NVIDIA metrics are available.

```bash
python3 -m pip install 'glances[gpu]'
glances
```

```
GPU NVIDIA H100: util 77% | mem 18240/81920MiB | temp 54C | power 220W
CPU: 21.4%  MEM: 62.1%  LOAD: 2.13 1.87 1.66
```

In the Glances GPU line, util maps to GPU core activity, and mem shows allocated versus total VRAM. temp and power indicate thermal and electrical load during the sample window. Use these values together to identify whether workload pressure is compute, memory, or thermal related. Glances is a better choice than nvidia-smi when you want CPU, memory, disk, and GPU in one non-scrolling view during interactive debugging on a single node.

<$>[note]
If glances shows no GPU section, verify that NVIDIA drivers are installed on the host and the Python environment running Glances can access NVML.
<$>

GPU Monitoring Inside Docker Containers and Kubernetes

Containerized GPU monitoring requires host runtime support first, then workload-level metric collection. Start with NVIDIA Container Toolkit for Docker and DCGM Exporter for Kubernetes clusters.

Exposing GPU Metrics in Docker with the NVIDIA Container Toolkit

Install the NVIDIA Container Toolkit on the host, then run containers with --gpus all. Inside the container, nvidia-smi should show host GPU telemetry.

Use this after setting up Docker by following How To Install and Use Docker on Ubuntu.

```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update && sudo apt install -y nvidia-container-toolkit

sudo nvidia-ctk runtime configure --runtime=docker

sudo systemctl restart docker
```

<$>[note]
The NVIDIA runtime is only active after the Docker daemon restarts. Already-running containers are not affected, but any new container launched after the restart will have GPU access. For full installation details, see the NVIDIA Container Toolkit guide.
<$>

```bash
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.xx       Driver Version: 550.xx       CUDA Version: 12.x     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
+-----------------------------------------------------------------------------+
```

Monitoring GPU Utilization in Kubernetes with DCGM Exporter

Deploy DCGM Exporter as a DaemonSet on GPU nodes to expose Prometheus metrics. This creates scrape targets with per-GPU and per-pod metric labels.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dcgm-exporter
  namespace: gpu-monitoring
spec:
  selector:
    matchLabels:
      app: dcgm-exporter
  template:
    metadata:
      labels:
        app: dcgm-exporter
    spec:
      nodeSelector:
        nvidia.com/gpu.present: "true"
      containers:
        - name: dcgm-exporter
          image: nvcr.io/nvidia/k8s/dcgm-exporter:3.3.8-3.6.0-ubuntu22.04
          ports:
            - containerPort: 9400
```

Sample metrics from the exporter endpoint on port 9400:

```
# HELP DCGM_FI_DEV_GPU_UTIL GPU utilization (in %).
# TYPE DCGM_FI_DEV_GPU_UTIL gauge
DCGM_FI_DEV_GPU_UTIL{gpu="0",UUID="GPU-..."} 78
```

Viewing GPU Metrics in a DigitalOcean Managed Kubernetes Cluster

To collect GPU metrics in a DOKS cluster, configure Prometheus to scrape the DCGM Exporter DaemonSet, then visualize the data in Grafana or forward it to a hosted monitoring backend. Separate GPU dashboards by node pool and workload labels to avoid mixed tenancy confusion.

Before deployment, review An Introduction to Kubernetes if your team is new to cluster primitives.

```yaml
scrape_configs:
  - job_name: dcgm-exporter
    static_configs:
      - targets: ['<node-ip>:9400']
```

In a DOKS cluster, use DaemonSet pod IPs or a Kubernetes Service DNS name instead of static node IP targets. For Grafana dashboard import details, see NVIDIA DCGM Exporter documentation.
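Once Prometheus is scraping DCGM_FI_DEV_GPU_UTIL, a sustained-saturation alert can be expressed as a rule. This is a sketch with illustrative threshold, window, and label values; adapt them to your SLOs:

```yaml
groups:
  - name: gpu-alerts
    rules:
      - alert: GPUSustainedSaturation
        # 10-minute average utilization above 95%, persisting for 5 minutes
        expr: avg_over_time(DCGM_FI_DEV_GPU_UTIL[10m]) > 95
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "GPU {{ $labels.gpu }} on {{ $labels.instance }} above 95% for 10m"
```

Route the alert through Alertmanager; the gpu and instance labels come from the DCGM Exporter metric shown above.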

Setting Up Persistent GPU Monitoring with Datadog

Use Datadog when you need long-term retention, tag-based slicing, and alert routing to on-call systems. Install the Agent on each GPU node and enable the NVIDIA integration.

Installing the Datadog Agent with NVIDIA GPU Support

Install Agent 7 on the GPU host, then enable the nvidia_gpu integration. Keep host drivers and NVML available to the Agent process.

```bash
DD_API_KEY="<YOUR_DATADOG_API_KEY>" DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
```

<$>[note]
The NVML integration is not bundled with Agent 7 by default. Install it separately, then configure nvml.d/conf.yaml.
<$>

```bash
sudo datadog-agent integration install -t datadog-nvml==1.0.9
```

<$>[note]
Verify the latest available version of the NVML integration before installing.
<$>

Configuring the GPU Integration and Tag Strategy

Define tags at the host and integration level so you can group by cluster, environment, and workload type. This keeps alert routing and dashboard filters usable at scale.

```yaml
init_config:

instances:
  - min_collection_interval: 15
    tags:
      - env:prod
      - role:training
      - gpu_vendor:nvidia
```

Save this as /etc/datadog-agent/conf.d/nvml.d/conf.yaml, then restart:

```bash
sudo systemctl restart datadog-agent
```

Building a Real-Time GPU Dashboard and Setting Alerts

Create timeseries panels for nvidia.gpu.utilization, nvidia.gpu.memory.used, and nvidia.gpu.temperature, then alert on sustained saturation. A practical first alert is GPU utilization above 95% for 10 minutes on production training nodes.

Use How To Monitor Your Infrastructure with Datadog for dashboard and monitor fundamentals.

Example monitor query:

```
avg(last_10m):avg:nvidia.gpu.utilization{env:prod,role:training} by {host,gpu_index} > 95
```

Setting Up GPU Monitoring with Zabbix

To monitor GPU hosts with Zabbix, install the Zabbix agent on each GPU host, import the NVIDIA GPU template, and configure trigger thresholds for utilization and temperature. Zabbix is the right choice when you need self-hosted monitoring with custom alerting and existing enterprise integrations.

Enabling the NVIDIA GPU Template in Zabbix

Import or attach an NVIDIA GPU template in Zabbix, then bind it to hosts that have NVIDIA drivers installed. Template items should poll utilization, memory, temperature, and power.

In the Zabbix UI, go to Data collection -> Templates -> Import and import the Nvidia by Zabbix agent 2 template. Some versions ship an active-mode variant, Nvidia by Zabbix agent 2 active. The official template source is https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/app/nvidia_agent2.

Configuring Triggers for Utilization Thresholds

Create triggers for sustained high utilization, high temperature, and unexpected drops to zero utilization during scheduled training windows. Use trigger expressions with time windows to avoid noise from short spikes.

Example trigger expressions using Zabbix agent 2 template item keys, one for sustained utilization and one for temperature:

```
avg(/GPU Host/nvidia.smi[{#GPUINDEX},utilization.gpu],10m)>95
last(/GPU Host/nvidia.smi[{#GPUINDEX},temperature.gpu])>85
```

{#GPUINDEX} is a low-level discovery macro populated automatically by the template. You do not need to set it manually.

Enabling Unified GPU Usage Monitoring on Windows

Unified GPU Usage Monitoring aggregates activity from multiple GPU engines into a single usage view that operators can read quickly. Enable it through NVIDIA Control Panel first, then verify registry policy where required by your driver profile.

What Unified GPU Usage Monitoring Is

Unified monitoring combines graphics, compute, copy, and video engine activity into one normalized utilization metric. This improves cross-process visibility when mixed workloads run on the same adapter.

How to Enable It via NVIDIA Control Panel and Registry

In NVIDIA Control Panel, enable the GPU activity monitoring feature and apply settings system-wide. If your environment uses managed policy, set the registry value used by your NVIDIA driver branch to turn on unified usage reporting.

Windows Registry example for GPU performance counter visibility:

```
Key:   HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak
Value: RmProfilingAdminOnly (DWORD)
```

Set the value to 0 to allow non-admin access to GPU performance counters, or 1 for admin-only access (reference: https://developer.nvidia.com/ERR_NVGPUCTRPERM). Inspect the current values with:

```
reg query "HKLM\SOFTWARE\NVIDIA Corporation\Global" /s
```

<$>[warning]
Registry value names for unified usage reporting vary by driver branch and policy tooling. Validate the exact key and value against your NVIDIA enterprise driver documentation before changing production systems.
<$>

Reading Unified GPU Data via Task Manager and WMI

After enabling unified monitoring, Task Manager can display GPU engine and aggregate usage per process. For scripted collection in Windows-based monitoring workflows, query the GPU Engine performance counters via PowerShell's Get-Counter (or the equivalent WMI performance classes).

```powershell
powershell -Command "Get-Counter '\GPU Engine(*)\Utilization Percentage' | Select-Object -ExpandProperty CounterSamples | Select-Object InstanceName,CookedValue"
```

```
InstanceName                                        CookedValue
pid_1204_luid_0x00000000_0x0000_engtype_3D          27.31
pid_1820_luid_0x00000000_0x0000_engtype_Compute_0   74.02
```
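Each InstanceName encodes the owning PID, the adapter LUID, and the engine type. A sketch of aggregating these samples per process (the regex reflects the instance-name shape shown above; verify it against your own counter output):

```python
import re

# (InstanceName, CookedValue) pairs as returned by the Get-Counter query above
SAMPLES = [
    ("pid_1204_luid_0x00000000_0x0000_engtype_3D", 27.31),
    ("pid_1820_luid_0x00000000_0x0000_engtype_Compute_0", 74.02),
]

# pid_<PID>_luid_<...>_engtype_<ENGINE>
PATTERN = re.compile(r"pid_(?P<pid>\d+)_luid_\S*?engtype_(?P<engine>.+)")

def by_pid(samples):
    """Sum utilization across all engines for each PID (per-process aggregate)."""
    totals = {}
    for name, value in samples:
        m = PATTERN.match(name)
        if m:
            pid = int(m.group("pid"))
            totals[pid] = totals.get(pid, 0.0) + value
    return totals

print(by_pid(SAMPLES))  # → {1204: 27.31, 1820: 74.02}
```

With multiple engine instances per PID (3D, Copy, VideoDecode), the summed value approximates the aggregate per-process figure Task Manager shows.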

Comparing GPU Monitoring Tools

Use this table to pick a tool based on data depth, operational overhead, and alerting needs. Start with CLI tools for diagnostics, then add Datadog, Zabbix, or DCGM pipelines for persistent monitoring.

Feature and Trade-off Comparison Table

| Tool | Platform | Refresh Rate | Per-Process View | Alerting | Cost |
|---|---|---|---|---|---|
| nvidia-smi | Linux, Windows | 1s+ (--loop) | Yes (process list, pmon) | No native alerts | Free |
| nvtop | Linux | Near real time, interactive | Yes | No native alerts | Free |
| gpustat | Linux | 1s+ (--watch) | Yes (summary) | No native alerts | Free |
| Glances | Linux, macOS, Windows | 1s+ | Partial | No native alerts | Free |
| atop | Linux | Configurable interval | Indirect for GPU | No native alerts | Free |
| Datadog Agent | Linux, Windows | 15s typical agent interval | Yes (tag and host context) | Yes | Paid |
| Zabbix | Linux, Windows | Configurable polling | Yes (template dependent) | Yes | Free (self-hosted) |
| DCGM Exporter | Linux, Kubernetes | Scrape interval based | Yes (label dependent) | Via Prometheus/Grafana Alertmanager | Free |

Choosing the Right Tool for Your Use Case

For single-node debugging, start with nvidia-smi and nvtop. For fleet-level visibility across GPU Droplets and Kubernetes nodes, use DCGM Exporter plus your monitoring backend or deploy Datadog or Zabbix for retention and alerting.
If you need a historical record of GPU activity alongside CPU, memory, and disk in a single log, atop captures all of these at configurable intervals and is worth adding to long-running training hosts alongside nvidia-smi.

Conclusion

Real-time GPU utilization monitoring is essential for optimizing deep learning performance, troubleshooting bottlenecks, and achieving efficient resource usage, whether you run on single nodes, inside containers, or across clustered environments. The right monitoring tool depends on your specific use case: quick one-off checks, interactive debugging, continuous fleet-wide visibility, or long-term metric retention and alerting.

Start with simple tools like nvidia-smi for instant visibility, then progress to dashboarding, custom alerting, and enterprise-grade solutions as your needs grow. With the strategies and tools outlined in this guide, you can proactively monitor, troubleshoot, and maximize the performance of your GPU workloads, ensuring smoother operation across development, training, and deployment pipelines.
