DEV Community

Abraham Audu

Setting Up NVIDIA Drivers and CUDA for ML/DL on Ubuntu 22.04

Let's face it: Windows is getting really stressful to work with as a developer in 2026. If you've tried to use your GPU with TensorFlow for deep learning projects, you've probably discovered that you're either stuck with CPU-only compute, dated CUDA versions for older TensorFlow releases, or the hell that is setting up WSL2.

Why not just switch to a native Linux environment like Ubuntu? Yeah, that's easily the safe and sane choice in 2026. Besides, most of your production workloads will run on Linux servers anyway, so you might as well make the jump on your local setup too.

I've struggled in times past to configure all the moving parts. This is because the information is scattered all over the place, and people really do say random things on the internet. It can easily take hours to figure out.

And before you run off to ChatGeePeeDee to show you all the steps, I need you to understand that LLMs hallucinate, and Linux will happily let you run ANY command, including "removing the French language pack". So be careful what you run on your system from an LLM.

This is an attempt to make it easy, especially as more people dump Windows for Linux to escape the MicroSlop ecosystem for the sake of their sanity.

This setup was done on an NVIDIA 3060 Laptop edition GPU (use this as context to sanity-check the steps as you go along).

Let's just jump right in!

NVIDIA Drivers (v595) and CUDA 12.1 Setup for Ubuntu 22.04 x86_64

Flush old installation

sudo nvidia-uninstall   # only present if a previous .run installer was used
sudo apt purge -y '^nvidia-*' '^libnvidia-*'
sudo rm -rf /var/lib/dkms/nvidia
sudo apt -y autoremove
sudo update-initramfs -c -k "$(uname -r)"
sudo update-grub2
read -p "Press any key to reboot... " -n1 -s
sudo reboot
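After the reboot, it's worth a quick check that the old driver packages are really gone before installing the new branch. Here's a minimal sketch (count_nvidia_pkgs is my own helper name, not a system command) that counts installed NVIDIA packages from dpkg -l output:

```shell
# count_nvidia_pkgs: count installed ("ii" status) NVIDIA-related
# packages from `dpkg -l` output read on stdin; prints 0 when clean
count_nvidia_pkgs() {
  awk '$1 == "ii" && $2 ~ /nvidia/ { n++ } END { print n + 0 }'
}

# Usage after the purge:
#   dpkg -l | count_nvidia_pkgs
```

If it prints anything other than 0, re-run the purge step before continuing.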

Update Packages

sudo apt update

Install Kernel Headers and Build Tools

sudo apt install -y linux-headers-$(uname -r) build-essential dkms

Install the NVIDIA CUDA Keyring

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update

Pin Driver Branch (595 in this case, update as necessary)

sudo apt install -y nvidia-driver-pinning-595

Install the Driver

sudo apt install -y nvidia-open

Reboot System

sudo reboot

Verify Installation

nvidia-smi

You now have the NVIDIA drivers set up. Let's proceed to the CUDA and cuDNN installation.

TL;DR: CUDA 12.1 with cuDNN 8.9.x is currently the most stable combination for running TensorFlow and PyTorch side by side, so it's generally best to go with this pairing for now.

Flush Old CUDA Installation

sudo apt remove --purge -y 'cuda*' 'libcudnn*'
sudo rm -rf /usr/local/cuda*

Install CUDA with NVIDIA Network Repo

# Download the CUDA repo keyring (already installed above, so this is safe to re-run)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update

# Install CUDA 12.1 toolkit
sudo apt install -y cuda-toolkit-12-1

Install cuDNN

1. Download cuDNN 8.9 for CUDA 12.x from the NVIDIA Developer cuDNN archive:

   https://developer.nvidia.com/rdp/cudnn-archive

   Choose the Linux x86_64 tar package for CUDA 12.x.

2. Extract and copy the files:

tar -xvf download-path/cudnn-file-name.tar.xz
cd extracted-folder-path
sudo cp -P include/cudnn*.h /usr/local/cuda/include/
sudo cp -P lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
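With the files copied, you can confirm which cuDNN version actually landed in /usr/local/cuda. Since cuDNN 8, the version macros live in cudnn_version.h rather than cudnn.h; here's a small sketch that parses them:

```shell
# cudnn_version: print the cuDNN version encoded in a cudnn_version.h
# header, as CUDNN_MAJOR.CUDNN_MINOR.CUDNN_PATCHLEVEL
cudnn_version() {
  awk '/^#define CUDNN_MAJOR/      { ma = $3 }
       /^#define CUDNN_MINOR/      { mi = $3 }
       /^#define CUDNN_PATCHLEVEL/ { p  = $3 }
       END { print ma "." mi "." p }' "$1"
}

# Usage:
#   cudnn_version /usr/local/cuda/include/cudnn_version.h
# should print an 8.9.x version if the copy above worked
```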

Set Environment Variables

1. Open your shell config:

nano ~/.bashrc

2. Add (or update) these environment variables for CUDA at the end of the file:

export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

3. Apply the changes:

source ~/.bashrc
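Once you've sourced ~/.bashrc, you can sanity-check that the three variables line up with each other. A minimal sketch (check_cuda_env is a hypothetical helper, not a standard tool):

```shell
# check_cuda_env: complain if the CUDA environment variables are
# missing or inconsistent with each other
check_cuda_env() {
  [ -n "$CUDA_HOME" ] || { echo "CUDA_HOME is not set"; return 1; }
  case ":$PATH:" in
    *":$CUDA_HOME/bin:"*) ;;
    *) echo "CUDA_HOME/bin is missing from PATH"; return 1 ;;
  esac
  case ":$LD_LIBRARY_PATH:" in
    *":$CUDA_HOME/lib64:"*) ;;
    *) echo "CUDA_HOME/lib64 is missing from LD_LIBRARY_PATH"; return 1 ;;
  esac
  echo "CUDA environment looks consistent"
}
```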

Reboot

sudo reboot

Verify Installations

nvcc --version
echo $CUDA_HOME
echo $PATH
echo $LD_LIBRARY_PATH

N.B: nvidia-smi shows the highest CUDA version supported by the installed NVIDIA driver, while nvcc --version shows the CUDA toolkit version that is actually installed. If they differ, that's totally fine.
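If you want to compare the two numbers programmatically, here's a small sketch (smi_cuda_version is my own helper name) that pulls the driver's supported CUDA version out of the nvidia-smi banner:

```shell
# smi_cuda_version: extract the "CUDA Version" field from nvidia-smi's
# banner (read on stdin). This is the driver's maximum supported CUDA
# version, not necessarily the toolkit you installed.
smi_cuda_version() {
  sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p'
}

# Usage:
#   nvidia-smi | smi_cuda_version   # e.g. 12.4 (driver's max)
#   nvcc --version                  # may say 12.1, which is still fine
```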

Verify Visibility to Python Frameworks

1. Create a Python virtual environment and install TensorFlow into it (pip install tensorflow).
2. Run this in a .py file:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

Expected output:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Conclusion

If you made it this far and followed all the steps, you should now be able to run TensorFlow, PyTorch, and other GPU-based workloads on your NVIDIA GPU in your Ubuntu environment.

Follow for more tech content around machine learning and data science.
