Tested on: ASUS ROG Flow Z13 GZ302EA with AMD Ryzen AI Max 390
Introduction
This guide provides a comprehensive walkthrough for installing and testing AMD Ryzen AI Neural Processing Unit (NPU) support on Fedora 43. It is tailored for the ROG Flow Z13 laptop with Ryzen AI Max 390 processor featuring the XDNA 2 NPU (50 TOPS).
You will learn how to install the kernel driver and user-space runtimes, verify hardware detection, and leverage frameworks like GAIA for AI workloads.
Prerequisites
- Fedora 43 Workstation Edition (Linux kernel 6.17.5 tested)
- AMD Ryzen AI Max 390 with integrated Radeon 8050S graphics
- BIOS updated with NPU enabled (Advanced settings → IPU/NPU → Enabled)
- Secure Boot disabled in BIOS for driver compatibility
- Basic command-line familiarity and sudo privileges
Step 1: Verify Kernel and Firmware
- Check your kernel version:
uname -r
# Should be 6.14.x or newer (Fedora 43 uses 6.17.5)
- Validate AMD NPU firmware is installed:
ls -la /usr/lib/firmware/amdnpu/
Should contain directories like 1502_00, 17f0_10, etc.
- Check kernel messages for AMD NPU driver:
sudo dmesg | grep -i amdxdna
Should show messages from the amdxdna driver indicating that the device was found and initialized.
- Verify device node creation:
ls -la /dev/accel/
# Should show device like accel0 owned by render group
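If you prefer to script these checks, the short Python sketch below (the file name and messages are my own) verifies the device node and firmware directory from this step; the paths match the listings above.

#!/usr/bin/env python3
"""check_npu_step1.py -- sketch of the Step 1 checks (device node + firmware)."""
import glob
import os
import sys

def main() -> int:
    ok = True

    # Device node created by the amdxdna driver (see the /dev/accel listing above)
    nodes = glob.glob("/dev/accel/accel*")
    if nodes:
        print("OK: accel device node(s):", ", ".join(nodes))
    else:
        print("FAIL: no /dev/accel/accel* node -- is the amdxdna module loaded?")
        ok = False

    # NPU firmware directory shipped by linux-firmware
    fw_dir = "/usr/lib/firmware/amdnpu"
    if os.path.isdir(fw_dir) and os.listdir(fw_dir):
        print("OK: firmware present in", fw_dir, "->", ", ".join(sorted(os.listdir(fw_dir))))
    else:
        print("FAIL:", fw_dir, "is missing or empty")
        ok = False

    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())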
Step 2: Install User Space XRT and XDNA Drivers
AMD’s Xilinx Runtime (XRT) and XDNA user-space drivers enable interaction with the NPU:
- Enable the experimental Fedora Copr repository and install:
sudo dnf copr enable xanderlent/amd-npu-driver
sudo dnf install xrt xdna-driver tcsh
- Source the correct setup script (note: on Fedora, it's in /usr/xrt):
source /usr/xrt/setup.sh
- Fix library symlink issue if needed:
sudo ln -sf ../lib64/libxrt_core.so.2.19.0 /usr/xrt/lib/libxrt_core.so.2
export LD_LIBRARY_PATH=/usr/xrt/lib:/usr/xrt/lib64:$LD_LIBRARY_PATH
To make the export permanent, append it to ~/.bashrc.
- Test NPU detection:
sudo -E /usr/xrt/bin/xrt-smi examine
Output should list your RyzenAI-npu5 device, confirming detection.
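For scripted verification, a small sketch like the one below wraps the same xrt-smi examine call and looks for an NPU entry in its output; the binary path and the sudo -E requirement are taken from the command above.

#!/usr/bin/env python3
"""check_xrt.py -- sketch: run `xrt-smi examine` and look for an NPU device."""
import os
import subprocess
import sys

XRT_SMI = "/usr/xrt/bin/xrt-smi"  # path installed by the Copr packages above

def main() -> int:
    if not os.path.exists(XRT_SMI):
        print(f"FAIL: {XRT_SMI} not found -- are xrt and xdna-driver installed?")
        return 1

    # sudo -E keeps LD_LIBRARY_PATH and the sourced XRT environment intact
    result = subprocess.run(
        ["sudo", "-E", XRT_SMI, "examine"],
        capture_output=True, text=True, timeout=60,
    )
    output = result.stdout + result.stderr
    if "npu" in output.lower():
        print("OK: xrt-smi reports an NPU device")
        return 0

    print("FAIL: no NPU entry in xrt-smi output:")
    print(output)
    return 1

if __name__ == "__main__":
    sys.exit(main())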
Step 3: Setup Python Environment for ONNX Runtime and GAIA
- Use Python 3.11 or 3.12 for compatibility:
sudo dnf install python3.11
python3.11 -m venv ~/npu-env
source ~/npu-env/bin/activate
pip install --upgrade pip
- Install ONNX Runtime and dependencies for CPU inference:
pip install numpy onnxruntime onnx opencv-python
- Install the GAIA LLM inference tool:
pip install gaia-ai
gaia -v # Should return 0.12.1 or newer
Step 4: Testing Workloads and NPU Usage
ONNX Runtime
- AMD currently does not provide a stable Linux ONNX Runtime with Vitis AI (NPU) execution provider.
- Use CPU backend for initial tests and model development.
- For NPU acceleration on Linux, building ONNX Runtime from source with Vitis AI support is required but complex.
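To make the CPU-fallback point concrete, here is a minimal sketch (my own example, not an AMD sample) that builds a trivial ONNX model, prefers the Vitis AI execution provider if a Linux build ever exposes it, and otherwise runs on the CPU provider:

#!/usr/bin/env python3
"""onnx_cpu_fallback.py -- sketch: prefer an NPU provider if present, else run on CPU."""
import numpy as np
import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

# Build a trivial model: Y = X + X
node = helper.make_node("Add", ["X", "X"], ["Y"])
graph = helper.make_graph(
    [node], "add_demo",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
model.ir_version = 8  # stay conservative so older onnxruntime builds accept the model
onnx.checker.check_model(model)

# Pick the Vitis AI execution provider only if the installed wheel exposes it
available = ort.get_available_providers()
print("Available providers:", available)
providers = (["VitisAIExecutionProvider"]
             if "VitisAIExecutionProvider" in available
             else ["CPUExecutionProvider"])

sess = ort.InferenceSession(model.SerializeToString(), providers=providers)
x = np.arange(4, dtype=np.float32).reshape(1, 4)
(y,) = sess.run(None, {"X": x})
print("Providers in use:", sess.get_providers())
print("Y =", y)  # expect [[0. 2. 4. 6.]]

On current Linux wheels the available-provider list will normally not include VitisAIExecutionProvider, so the session falls back to the CPU provider, matching the note in the Troubleshooting section below.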
GAIA Toolkit
- GAIA CLI is installed and can run LLM queries on CPU.
- Interactive chat requires Lemonade server backend, which is not bundled on Linux and must be installed separately.
- You can test GAIA and your NPU with provided Python test scripts that check for hardware and software readiness.
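The test_gaia_npu.py script invoked in the command summary below is not reproduced in this guide; the sketch that follows is an illustrative stand-in showing the kind of hardware and software readiness checks such a script can perform (the file name and checks are my own).

#!/usr/bin/env python3
"""test_gaia_npu.py -- illustrative readiness checks for the NPU and GAIA stack."""
import glob
import shutil
import subprocess
import sys

def check(label: str, ok: bool, detail: str = "") -> bool:
    print(f"[{'OK' if ok else 'FAIL'}] {label}" + (f": {detail}" if detail else ""))
    return ok

def main() -> int:
    results = []

    # Hardware: accel device node created by the amdxdna driver (Step 1)
    nodes = glob.glob("/dev/accel/accel*")
    results.append(check("NPU device node", bool(nodes), ", ".join(nodes)))

    # Software: GAIA CLI available and reporting a version
    gaia = shutil.which("gaia")
    if gaia:
        ver = subprocess.run([gaia, "-v"], capture_output=True, text=True)
        results.append(check("gaia CLI", ver.returncode == 0,
                             (ver.stdout + ver.stderr).strip()))
    else:
        results.append(check("gaia CLI", False, "not found on PATH"))

    # Software: ONNX Runtime importable and its execution providers (CPU expected on Linux)
    try:
        import onnxruntime as ort
        results.append(check("onnxruntime", True, ", ".join(ort.get_available_providers())))
    except ImportError as exc:
        results.append(check("onnxruntime", False, str(exc)))

    return 0 if all(results) else 1

if __name__ == "__main__":
    sys.exit(main())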
Real-time AI Workloads (Speech-to-Text, Webcam Effects)
- Linux support for these workloads on AMD NPU is still under development.
- Use CPU or GPU-based general frameworks (e.g., NoiseTorch for noise cancellation) for now.
- Custom development with ONNX Runtime or other inference engines is required for NPU acceleration in these domains.
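As a starting point for such custom development, the sketch below grabs one webcam frame with OpenCV (installed in Step 3) and prepares it as an NCHW float tensor for a CPU ONNX Runtime session; the model path is a placeholder you would replace with your own model.

#!/usr/bin/env python3
"""webcam_onnx_sketch.py -- prepare a webcam frame for a CPU ONNX Runtime session."""
import os

import cv2
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"  # placeholder: supply your own vision model

# Grab a single frame from the default camera
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise SystemExit("Could not read a frame from the webcam")

# Typical preprocessing: resize, BGR -> RGB, scale to [0, 1], NCHW layout
frame = cv2.resize(frame, (224, 224))
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
tensor = np.transpose(rgb, (2, 0, 1))[np.newaxis, ...]  # shape (1, 3, 224, 224)
print("Input tensor shape:", tensor.shape)

if os.path.exists(MODEL_PATH):
    # CPU inference for now; swap in an NPU provider once one ships for Linux
    sess = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    outputs = sess.run(None, {input_name: tensor})
    print("Model produced", len(outputs), "output tensor(s)")
else:
    print("No", MODEL_PATH, "found; drop in a model to run inference")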
Troubleshooting
- If xrt-smi is not found: confirm that source /usr/xrt/setup.sh was executed.
- If library errors occur, check the symlinks and LD_LIBRARY_PATH.
- Use sudo -E to preserve environment variables when running XRT commands with sudo.
- If the NPU is not detected, ensure BIOS settings, kernel modules, and firmware are properly installed.
- ONNX Runtime providers for AMD NPU might not appear yet. CPU fallback is normal on Linux currently.
Useful Commands Summary
# Check kernel & firmware
uname -r
sudo dmesg | grep -i amdxdna
ls -la /usr/lib/firmware/amdnpu/
# Enable Copr and install drivers
sudo dnf copr enable xanderlent/amd-npu-driver
sudo dnf install xrt xdna-driver tcsh
# Source environment
source /usr/xrt/setup.sh
# Fix libraries and paths if needed
sudo ln -sf ../lib64/libxrt_core.so.2.19.0 /usr/xrt/lib/libxrt_core.so.2
export LD_LIBRARY_PATH=/usr/xrt/lib:/usr/xrt/lib64:$LD_LIBRARY_PATH
# Test NPU detection
sudo -E /usr/xrt/bin/xrt-smi examine
# Setup Python 3.11 environment
python3.11 -m venv ~/npu-env
source ~/npu-env/bin/activate
pip install numpy onnxruntime onnx opencv-python gaia-ai
# Run GAIA version test
gaia -v
# Run GAIA test script for NPU & LLM
python test_gaia_npu.py
Final Notes
- The AMD Ryzen AI NPU on Fedora 43 is now functional and detected.
- The software ecosystem for Linux NPU acceleration is evolving; Windows currently offers fuller out-of-the-box AMD Ryzen AI support.
- Use CPU-based inference for development and prepare quantized models for future Linux NPU utilization.
- GAIA provides a CLI for LLMs but requires a separate Lemonade server backend for full chat functionality on Linux.