Vu Nguyen

I Couldn’t Debug My AI/ML GPU Incident - So I Built gpuxray

Several weeks ago, I ran into a problem with the ML jobs running on my GPU server: alerts fired at midnight, and one of the jobs had failed because of excessive GPU memory usage.

The next morning, I performed a root-cause analysis to understand what had happened overnight. However, I couldn't identify the issue because I only had access to aggregate GPU usage metrics at the current moment. I used nvidia-smi and nvtop to inspect the current state, but no trace of the previous night's issue was left. I realized I needed a solution to prevent similar problems from happening in the future.

I tried using the DCGM exporter to expose GPU metrics, but it couldn't provide PID-level metrics. I also tested it in a Kubernetes environment to get pod-level metrics, but that didn't work either because our GPUs only support time-slicing mode.

Therefore, I developed an open-source tool called gpuxray to monitor GPUs at the process level. gpuxray has helped our team significantly in observing and investigating bottlenecks in AI/ML processes running on Linux servers. It exposes metrics in Prometheus format, which we use to build Grafana dashboards that visualize resource usage at the process level.
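
To make the Prometheus side concrete, here is a minimal sketch in Go using prometheus/client_golang. The metric name gpu_process_memory_used_bytes, its labels, and the port are illustrative assumptions for this post, not gpuxray's actual metric schema.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical per-process gauge; gpuxray's real metric names and labels may differ.
var gpuProcMem = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "gpu_process_memory_used_bytes",
		Help: "GPU memory currently allocated, per process.",
	},
	[]string{"pid", "comm", "gpu"},
)

func main() {
	prometheus.MustRegister(gpuProcMem)

	// Example update; a real exporter would feed this from its event pipeline.
	gpuProcMem.WithLabelValues("12345", "python3", "0").Set(2 * 1024 * 1024 * 1024)

	// Expose metrics for Prometheus to scrape and Grafana to visualize.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```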

dashboard

We deployed gpuxray in a Kubernetes cluster as a DaemonSet on all GPU nodes that need to be monitored.

> kubectl -n kube-operators get daemonset/gpuxray 
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
gpuxray   2         2         2       2            2           node.k8s.cluster/gpu=exists   20d

With the setup described here, we can easily enable per-process GPU observability.

gpuxray achieves high performance while consuming minimal resources because it is built on eBPF to trace GPU memory-related events. This approach is powerful because eBPF lets us observe what is happening inside the kernel for specific use cases - in this case, we create probes that are attached to CUDA API calls.
The project is built on a solid codebase, making it easy to extend in the future. If you have ideas, feel free to start a discussion or open a pull request.
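
To illustrate what attaching a probe to the CUDA API can look like, here is a minimal, hypothetical sketch using the cilium/ebpf Go library. The object file name, program name, library path, and probed symbol are assumptions for the example; gpuxray's real probe points and code layout may differ.

```go
package main

import (
	"log"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load a compiled eBPF object; "gpuxray.o" and the program name
	// "trace_cu_mem_alloc" are placeholders for this example.
	spec, err := ebpf.LoadCollectionSpec("gpuxray.o")
	if err != nil {
		log.Fatal(err)
	}
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatal(err)
	}
	defer coll.Close()

	// Open the CUDA driver library so user-space probes can be placed on it.
	ex, err := link.OpenExecutable("/usr/lib/x86_64-linux-gnu/libcuda.so.1")
	if err != nil {
		log.Fatal(err)
	}

	// Attach a uprobe to a CUDA allocation entry point; any process calling
	// this symbol through the shared library will trigger the eBPF program.
	up, err := ex.Uprobe("cuMemAlloc_v2", coll.Programs["trace_cu_mem_alloc"], nil)
	if err != nil {
		log.Fatal(err)
	}
	defer up.Close()

	select {} // keep the probe attached while events are collected
}
```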

Design and Architecture

Now, I will describe the architecture of gpuxray to help you understand how it works.

Architecture

Basically, the userspace code handles the main logic and is written in Go. The eBPF program is attached to CUDA API calls; when these APIs are invoked, events are captured. The eBPF program performs lightweight processing at the kernel level, updates eBPF maps, and sends events to the ring buffer. The userspace code then consumes events from the ring buffer, processes them, and produces the final metrics output.
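
As a rough sketch of what the userspace consumer could look like (not gpuxray's actual code), the following Go snippet reads events from a ring buffer map with cilium/ebpf's ringbuf reader. The event struct layout here is an assumption and would have to match whatever the eBPF program actually writes.

```go
package exporter

import (
	"bytes"
	"encoding/binary"
	"errors"
	"log"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/ringbuf"
)

// Hypothetical event layout; it must match the struct the eBPF program
// writes into the ring buffer.
type allocEvent struct {
	Pid    uint32
	Size   uint64
	IsFree uint8
}

// consume reads allocation events from the ring buffer map and would, in a
// real exporter, aggregate them into per-PID metrics.
func consume(events *ebpf.Map) {
	rd, err := ringbuf.NewReader(events)
	if err != nil {
		log.Fatal(err)
	}
	defer rd.Close()

	for {
		rec, err := rd.Read()
		if err != nil {
			if errors.Is(err, ringbuf.ErrClosed) {
				return
			}
			log.Printf("ringbuf read: %v", err)
			continue
		}

		var ev allocEvent
		if err := binary.Read(bytes.NewReader(rec.RawSample), binary.LittleEndian, &ev); err != nil {
			log.Printf("decode event: %v", err)
			continue
		}

		log.Printf("pid=%d size=%d free=%d", ev.Pid, ev.Size, ev.IsFree)
	}
}
```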

Performance and Resource Usage

With the mon option, gpuxray takes practically no resources on the GPU server.

perf and resource usage by mon

When tracing memory leaks with the memtrace option for a specific PID, I used a Python script to generate more than 2,000 malloc/free calls per second on the GPU and observed resource usage: gpuxray consumed only about ~8% of a single CPU core (on a server with 32 CPU cores and 125 GB of RAM).

memtrace perf and resource usage

This is impressive because ~2,000 malloc/free operations per second is far heavier than a typical real-world workload. As a result, we don't need to worry about performance or resource overhead when using gpuxray.

Feel free to explore the project, try it out, and contribute your ideas:
https://github.com/vuvietnguyenit/gpuxray
