A few weeks back, I tried running Ollama on my main Windows 11 rig. It should have been effortless, but it quickly turned into a nightmare of system freezes and cryptic errors. The issues vanished only after I completely wiped Ollama, leaving the root cause a mystery.
The Workstation Rig (Initial Attempt):
- Processor: Intel Core i7-14700K (20 Cores, 28 Threads, 3400 MHz)
- Memory: 32GB RAM
- Storage: 512GB NVMe + 1TB SSD
- Graphics: NVIDIA GeForce RTX 3060
- OS: Windows 11
Instead of wrestling with my Windows workstation, I’ve decided to pivot. I’m repurposing my old laptop, an MSI GE65 Raider, to serve as a dedicated Linux-based AI node. It’s time to get closer to the metal and build a stable environment where I can experiment without crashing my main workflow.
The Hardware: MSI GE65 Raider
- CPU: Intel Core i7-9750H
- GPU: NVIDIA GeForce RTX 2070 (Essential for those CUDA cores)
- Memory: 16GB RAM
- Storage: 2x 512GB NVMe + 1TB SSD + 1TB HDD (Plenty of room for LLM weights)
The OS Choice: Why Pop!_OS?
For a local AI rig, there are a few concrete engineering reasons why Pop!_OS beats standard Ubuntu or Windows:
Native NVIDIA Integration: Unlike other distros where you "install" drivers, Pop!_OS treats NVIDIA as a first-class citizen. The dedicated NVIDIA ISO ships with the proprietary driver preinstalled, a vertically integrated stack that avoids the "black screen" and stuttering issues common with laptop GPU switching.
Rust-Powered COSMIC Desktop: It’s 2026, and the new COSMIC DE (written in Rust) is a game-changer. It’s memory-safe, incredibly lightweight, and highly efficient with system resources—exactly what you want when you're pushing a GPU to its limits.
System76 Scheduler & Power Management: It includes a custom scheduler that prioritizes the active process. When a model is running, the OS ensures the LLM gets the CPU/GPU cycles it needs without interference from background bloat.
Tensor Management (Tensorman): Pop!_OS includes specialized tools like tensorman to manage toolchains in containers, making it one of the most "plug-and-play" environments for CUDA-based development.
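As a taste of what that looks like in practice, here's a hedged sketch of the tensorman workflow from System76's docs (the image tag and script name are examples, and the `--gpu` flag assumes `nvidia-docker2` is set up):

```bash
# Guarded so it only runs where tensorman is actually installed
if command -v tensorman >/dev/null 2>&1; then
    tensorman pull latest                       # fetch the default TensorFlow image
    tensorman run --gpu python -- ./script.py   # run a script inside the container
else
    echo "tensorman not installed"
fi
```

The nice part is that the CUDA/cuDNN toolchain lives inside the container, so the host OS stays clean.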
The Installation Process
To keep things efficient, I used Ventoy to create a multi-boot drive (honestly the easiest way to handle ISOs these days). I targeted one of the 512GB NVMe drives for the OS install to ensure lightning-fast swap and boot times.
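Before dropping an ISO onto the Ventoy drive, it's worth a quick integrity check. A minimal sketch (the helper name is mine, and the filename/checksum are placeholders you'd replace with the values from the Pop!_OS download page):

```bash
# Compare an ISO's SHA256 against the checksum published on the download page
verify_iso() {
    # $1 = path to ISO, $2 = expected SHA256
    if [ "$(sha256sum "$1" | cut -d' ' -f1)" = "$2" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH"
    fi
}
# Example call (placeholder values):
# verify_iso pop-os_22.04_amd64_nvidia.iso "<sha256 from the download page>"
```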
Once the desktop loaded, I went straight to the terminal to prep the environment.
```bash
# Standard system refresh
sudo apt update && sudo apt full-upgrade -y

# Grabbing essential media codecs and Microsoft fonts
sudo apt install ubuntu-restricted-extras -y
```
👀 Preview for Day 2: The Ollama Deployment
The next day, we move from "Fresh OS" to "AI Server." I’ll be walking through the Essential OS Conditions for a stable Ollama install:
NVIDIA Kernel Verification: Ensuring the OS actually "sees" the RTX 2070 via nvidia-smi.
CUDA Toolkit Prep: Why you need it even if the driver is pre-installed.
The One-Liner: Deploying Ollama and verifying the systemd service.
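To give a taste of that first verification step, here's a minimal pre-flight sketch (the query flags are standard `nvidia-smi` options; the function name is mine):

```bash
# Pre-flight check: does the kernel driver actually see the GPU?
gpu_check() {
    if nvidia-smi --query-gpu=name --format=csv,noheader >/dev/null 2>&1; then
        echo "GPU visible"
    else
        echo "GPU missing"
    fi
}
gpu_check
```

If this prints "GPU missing", there's no point installing Ollama yet: it would silently fall back to CPU inference.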
The big question: Can a laptop from a few years ago outperform a 2026 Windows workstation in raw AI stability?