Overview
As AI workloads continue to grow, proper GPU support is essential. AMD’s ROCm platform provides an open-source way to fully leverage AMD GPUs. However, when I tried to install ROCm from the official AMD docs, my system and display crashed completely and I had to hard-reset the PC. In this guide, I’ll walk you through how I set up ROCm on Ubuntu 24.04 with an RX 6700 XT.
Installation Steps
Step 1 — Update your system and install essentials
Before installing ROCm, make sure your system is up to date and has the necessary packages:
sudo apt update
sudo apt upgrade
sudo apt install synaptic python3 python3-venv python3-pip git
This ensures a clean baseline for ROCm installation.
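Since the PyTorch step later assumes a Python virtual environment, it can help to create one now. A minimal sketch, where the directory name ~/rocm-venv is just an example:
python3 -m venv ~/rocm-venv    # create the environment
source ~/rocm-venv/bin/activate    # activate it in the current shell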
Step 2 — Install ROCm and HIP libraries
Set the GFX version override and install the required ROCm packages. The RX 6700 XT reports itself as gfx1031, which ROCm does not officially support, so the override tells ROCm to treat it as gfx1030:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
sudo apt install libamd-comgr2 libhsa-runtime64-1 librccl1 librocalution0 librocblas0 librocfft0 librocm-smi64-1 \
librocsolver0 librocsparse0 rocm-device-libs-17 rocm-smi rocminfo hipcc libhiprand1 libhiprtc-builtins5 radeontop
sudo usermod -aG render,video $USER
sudo reboot
Note: After reboot, confirm ROCm is installed by running:
rocminfo
The first line of the output should confirm that the ROCk module is loaded. The radeontop package is optional and only needed for GPU monitoring.
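Keep in mind that export HSA_OVERRIDE_GFX_VERSION=10.3.0 only applies to the shell you typed it in and does not survive the reboot; it has to be set in whatever environment later runs PyTorch. One way to make it persistent, assuming your login shell reads ~/.profile, is:
echo 'export HSA_OVERRIDE_GFX_VERSION=10.3.0' >> ~/.profile    # picked up on the next login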
Installing PyTorch
Once ROCm is set up, you can install PyTorch with ROCm support.
Package Installation
Activate your Python virtual environment and install the PyTorch package by following the official instructions (choose the ROCm compute platform in the selector):
PyTorch Get Started Guide
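For reference, the ROCm build of PyTorch is installed from a dedicated wheel index rather than the default PyPI one. The exact ROCm version in the URL changes over time, so treat the rocm6.0 tag below as a placeholder and copy the command the Get Started page actually shows:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0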
Test GPU Availability
After installation, verify that PyTorch can detect your GPU:
python3
import torch
print(torch.cuda.is_available())
If it returns True, PyTorch is correctly configured to use your AMD GPU. ROCm builds of PyTorch expose the GPU through the torch.cuda API, so the CUDA-named call is expected here.
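To double-check which device PyTorch actually sees, you can also print the device count and name from the same environment; both calls are part of the standard torch.cuda API:
python3 -c 'import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0))'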
Conclusion
Setting up ROCm on Ubuntu 24.04 with an RX 6700 XT allows you to leverage your GPU for AI workloads efficiently. This guide walks you through the essentials, from installing ROCm to verifying PyTorch GPU support.