<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abraham Audu</title>
    <description>The latest articles on DEV Community by Abraham Audu (@the_abrahamaudu).</description>
    <link>https://dev.to/the_abrahamaudu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864524%2F264b8aed-4c07-4d09-89ea-b10656ed43fd.png</url>
      <title>DEV Community: Abraham Audu</title>
      <link>https://dev.to/the_abrahamaudu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/the_abrahamaudu"/>
    <language>en</language>
    <item>
      <title>Setting Up NVIDIA Drivers and CUDA for ML/DL on Ubuntu 22.04</title>
      <dc:creator>Abraham Audu</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:57:28 +0000</pubDate>
      <link>https://dev.to/the_abrahamaudu/setting-up-nvidia-drivers-and-cuda-for-mldl-on-ubuntu-2204-15c4</link>
      <guid>https://dev.to/the_abrahamaudu/setting-up-nvidia-drivers-and-cuda-for-mldl-on-ubuntu-2204-15c4</guid>
      <description>&lt;p&gt;Let's face it, Windows is getting really stressful to work with as a developer in 2026. If you've tried to work with your GPU on TensorFlow for deep learning projects, you probably have discovered that you're either stuck with CPU-based compute, dated CUDA versions for older TensorFlow  versions, or the hell that is setting up WSL2.&lt;/p&gt;

&lt;p&gt;Why not just switch to a native Linux environment like Ubuntu? Yeah, that's easily the safe and sane choice in 2026. Moreover, most of your production workloads will run on Linux servers anyway, so you might as well make the jump on your local setup too.&lt;/p&gt;

&lt;p&gt;I've struggled in times past to configure all the moving parts, because the information is scattered all over the internet and people really do be saying random things out there. It can easily take hours to figure it out.&lt;/p&gt;

&lt;p&gt;And before you run off to ChatGeePeeDee for all the steps, I need you to understand that LLMs hallucinate, and Linux will happily let you run ANY command, including "removing the French language pack". So be careful what you run on your system from an LLM.&lt;/p&gt;

&lt;p&gt;This is an attempt to make it easy, especially as more people dump Windows for Linux and escape the MicroSlop ecosystem for the sake of their sanity.&lt;/p&gt;

&lt;p&gt;This setup was done on an NVIDIA RTX 3060 Laptop GPU (use that as context to sanity-check the steps as you go along).&lt;/p&gt;

&lt;p&gt;Let's just jump right in!&lt;/p&gt;

&lt;h3&gt;
  
  
  NVIDIA Drivers (v595) and CUDA 12.1 Setup for Ubuntu 22.04 x86_64
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Flush old installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nvidia-uninstall
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt purge &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="s1"&gt;'^nvidia-*'&lt;/span&gt; &lt;span class="s1"&gt;'^libnvidia-*'&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /var/lib/dkms/nvidia
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nt"&gt;-y&lt;/span&gt; autoremove
&lt;span class="nb"&gt;sudo &lt;/span&gt;update-initramfs &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;update-grub2
&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Press any key to reboot... "&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Update Packages
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install Kernel Headers and Build Tools
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; linux-headers-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; build-essential dkms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install NVIDIA CUDA Keyring
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; cuda-keyring_1.1-1_all.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Pin Driver Branch (595 in this case, update as necessary)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nvidia-driver-pinning-595
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install the Driver
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nvidia-open
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Reboot System
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Verify Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvidia-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
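&lt;p&gt;Beyond the full &lt;code&gt;nvidia-smi&lt;/code&gt; table, a narrower query confirms the driver branch at a glance. The query flags below are standard &lt;code&gt;nvidia-smi&lt;/code&gt; options; the fallback message is just there so the snippet degrades gracefully on machines without a visible NVIDIA GPU:&lt;/p&gt;

```shell
# Query just the GPU name and driver version; fall back gracefully if
# nvidia-smi is missing or no NVIDIA GPU is visible.
gpu_info=$(nvidia-smi --query-gpu=name,driver_version --format=csv 2>/dev/null \
  || echo "nvidia-smi not available")
echo "$gpu_info"
```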



&lt;h3&gt;
  
  
  You now have the NVIDIA drivers set up; let's proceed to CUDA and cuDNN installation
&lt;/h3&gt;

&lt;p&gt;TL;DR: CUDA 12.1 with cuDNN 8.9.x is currently the most stable combination for running TensorFlow and PyTorch side by side, so it's generally best to go with this pairing for now.&lt;/p&gt;
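&lt;p&gt;A quick way to check which CUDA version you actually ended up with on the framework side: each framework reports the CUDA version it was built against. The commands below assume the frameworks are installed in the active environment, and print a notice instead of failing if one is missing:&lt;/p&gt;

```shell
# Report the CUDA version each framework was compiled against; a missing
# framework prints a notice rather than an error.
torch_cuda=$(python3 -c "import torch; print(torch.version.cuda)" 2>/dev/null \
  || echo "torch not installed")
tf_cuda=$(python3 -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cuda_version'])" 2>/dev/null \
  || echo "tensorflow not installed")
echo "$torch_cuda"
echo "$tf_cuda"
```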

&lt;h4&gt;
  
  
  Flush Old CUDA Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt remove &lt;span class="nt"&gt;--purge&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="s1"&gt;'cuda*'&lt;/span&gt; &lt;span class="s1"&gt;'libcudnn*'&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /usr/local/cuda&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install CUDA with NVIDIA Network Repo
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download CUDA 12.1 package keyring&lt;/span&gt;
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; cuda-keyring_1.1-1_all.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="c"&gt;# Install CUDA 12.1 toolkit&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; cuda-toolkit-12-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install cuDNN
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download cuDNN 8.9 for CUDA 12.1 from NVIDIA Developer:&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://developer.nvidia.com/rdp/cudnn-archive" rel="noopener noreferrer"&gt;https://developer.nvidia.com/rdp/cudnn-archive&lt;/a&gt;]&lt;br&gt;
Choose Linux x86_64 / tar package for CUDA 12.1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extract and copy files&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvf&lt;/span&gt; download-path/cudnn-file-name.tar.xz
&lt;span class="nb"&gt;cd &lt;/span&gt;extracted-folder-path
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-P&lt;/span&gt; include/cudnn&lt;span class="k"&gt;*&lt;/span&gt;.h /usr/local/cuda/include/
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-P&lt;/span&gt; lib/libcudnn&lt;span class="k"&gt;*&lt;/span&gt; /usr/local/cuda/lib64/
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /usr/local/cuda/include/cudnn&lt;span class="k"&gt;*&lt;/span&gt;.h /usr/local/cuda/lib64/libcudnn&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
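&lt;p&gt;Optionally confirm the copy worked: cuDNN 8.x keeps its version macros in &lt;code&gt;cudnn_version.h&lt;/code&gt;, so the major version should now be visible under the CUDA prefix used above (the fallback message covers systems where the headers haven't landed yet):&lt;/p&gt;

```shell
# Look for the cuDNN major-version macro under the default CUDA prefix;
# print a notice if the headers are not there.
cudnn_check=$(grep -m1 CUDNN_MAJOR /usr/local/cuda/include/cudnn_version.h 2>/dev/null \
  || echo "cudnn headers not found")
echo "$cudnn_check"
```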



&lt;h4&gt;
  
  
  Set Environment Variables
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open your shell config for editing:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Replace or paste these environment variables for CUDA:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CUDA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/cuda
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CUDA_HOME&lt;/span&gt;/bin:&lt;span class="nv"&gt;$PATH&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CUDA_HOME&lt;/span&gt;/lib64:&lt;span class="nv"&gt;$LD_LIBRARY_PATH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Apply the changes:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
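&lt;p&gt;The editing steps above can also be sketched non-interactively. The snippet below writes to a temp file so you can inspect the result first; point &lt;code&gt;"$rc"&lt;/code&gt; at &lt;code&gt;~/.bashrc&lt;/code&gt; once you're happy with the contents:&lt;/p&gt;

```shell
# Append the same three exports to a temp file (swap "$rc" for ~/.bashrc
# to apply for real), then count them as a sanity check.
rc=$(mktemp)
printf '%s\n' \
  'export CUDA_HOME=/usr/local/cuda' \
  'export PATH=$CUDA_HOME/bin:$PATH' \
  'export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH' >> "$rc"
grep -c '^export' "$rc"   # 3
```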



&lt;h4&gt;
  
  
  Reboot
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Verify Installations
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvcc &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$CUDA_HOME&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$PATH&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$LD_LIBRARY_PATH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;N.B.: &lt;code&gt;nvidia-smi&lt;/code&gt; shows the highest CUDA version the installed NVIDIA driver supports, while &lt;code&gt;nvcc --version&lt;/code&gt; shows the CUDA toolkit version actually installed, so it's totally fine if the two differ.&lt;/p&gt;
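&lt;p&gt;If you want to compare the two programmatically, the toolkit version is easy to pull out of the &lt;code&gt;nvcc&lt;/code&gt; banner. A sample banner string is used below so the parsing is visible even without &lt;code&gt;nvcc&lt;/code&gt; on the PATH:&lt;/p&gt;

```shell
# Extract "12.1" from a typical nvcc banner line; on a real system, swap the
# sample string for the output of: nvcc --version | grep release
nvcc_banner="Cuda compilation tools, release 12.1, V12.1.105"
toolkit_version=$(printf '%s\n' "$nvcc_banner" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$toolkit_version"   # 12.1
```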

&lt;h4&gt;
  
  
  Verify Visibility to Python Frameworks
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create a Python virtual environment &lt;/li&gt;
&lt;li&gt;Paste this in a .py file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_physical_devices&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GPU&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Output: &lt;span class="o"&gt;[&lt;/span&gt;PhysicalDevice&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/physical_device:GPU:0'&lt;/span&gt;, &lt;span class="nv"&gt;device_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'GPU'&lt;/span&gt;&lt;span class="o"&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
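&lt;p&gt;And the same visibility check on the PyTorch side (this assumes &lt;code&gt;torch&lt;/code&gt; is installed in the active environment, and prints a notice otherwise):&lt;/p&gt;

```shell
# True means PyTorch can see the GPU through the driver/CUDA stack.
torch_gpu=$(python3 -c "import torch; print(torch.cuda.is_available())" 2>/dev/null \
  || echo "torch not installed")
echo "$torch_gpu"
```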



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;If you made it this far and followed all the steps, you should now be able to run TensorFlow and PyTorch (and indeed other GPU-based workloads) on your NVIDIA GPU in your Ubuntu environment.&lt;/p&gt;

&lt;p&gt;Follow for more tech content around machine learning and data science.&lt;/p&gt;

</description>
      <category>nvidia</category>
      <category>cuda</category>
      <category>ubuntu</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
