SEO Title:
Earn From Your Spare GPU: Step‑by‑Step Guide to Setting Up Golem on a Linux Workstation
Intro – The Budget Problem (Hook)
You’ve got a spare GPU humming in the back of your office or home lab, and you’re constantly hearing about “cloud AI services” that bill by the hour. What if you could flip that idle card into a small income stream without paying a cent to Amazon Web Services or Google Cloud?
I built an entire AI‑ready workstation for under $800 last year: a mid‑range CPU, 16 GB of RAM, a fast NVMe SSD, and a single NVIDIA RTX 3060. The GPU alone was around $300, with the rest of the budget covering the board, case, and power supply – perfect for a hobbyist or a small lab that can’t justify a dedicated data center.
In this video we’ll walk through every step, from selecting the right parts to installing Ubuntu, the NVIDIA drivers, CUDA, LM Studio, and finally Golem. By the end you’ll be running your own GPU‑powered node and working toward earning back your hardware costs – how quickly depends on network demand and your pricing.
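Before buying anything, it’s worth sanity‑checking the payback math. The sketch below uses hypothetical placeholder numbers (build cost, net hourly rate, rented hours per day) – real Golem earnings vary with network demand, so plug in your own figures:

```shell
#!/usr/bin/env bash
# Rough break-even estimate: months until hourly earnings cover the build cost.
# All inputs are hypothetical examples -- real rates depend on network demand.
break_even_months() {
  local cost_usd=$1        # total hardware cost, e.g. 800
  local rate_usd_hr=$2     # net earnings per hour, e.g. 0.10
  local hours_per_day=$3   # hours the node is actually rented, e.g. 12
  awk -v c="$cost_usd" -v r="$rate_usd_hr" -v h="$hours_per_day" \
    'BEGIN { printf "%.1f", c / (r * h * 30) }'
}

# Example: $800 build, $0.10/hr net, rented 12 h/day on average
break_even_months 800 0.10 12   # prints 22.2 (months)
```

Doubling the average rented hours halves the break‑even time, which is why keeping the node online around the clock matters more than squeezing the last cent out of the hourly price.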
1️⃣ Pick Your Hardware (5–7 Minutes)
| Component | Recommendation | Why it Works |
|---|---|---|
| CPU | AMD Ryzen 5 5600X or Intel i5‑13400F | 6 cores, great single‑thread performance for training small models. |
| Motherboard | B550 (AMD) / B660 (Intel) with PCIe 4.0 support | Enough lanes for GPU and future expansion. |
| RAM | 16 GB DDR4/DDR5 (3200 MHz or faster) | Sufficient for most AI experiments; upgradeable later. |
| Storage | NVMe SSD, 500 GB | Fast read/write for datasets and model checkpoints. |
| GPU | NVIDIA RTX 3060 | CUDA‑capable, 12 GB VRAM, affordable. |
| Power Supply | 650W 80+ Gold | Reliable power with headroom for GPU + future upgrades. |
| Case & Cooling | Mid‑tower with good airflow | Keeps temperatures low during long training runs. |
Affiliate Placeholder:
• [AFF: Amazon RTX 3060 (~$300)] – Great price‑to‑performance ratio.
• [AFF: NVMe SSD] – Fast storage for your datasets.
Tip: If you’re building a homelab, consider a case with a built‑in fan controller so you can tweak airflow without opening the box every time.
2️⃣ Install Ubuntu (10–15 Minutes)
- Download the ISO
  - Go to the official Ubuntu website and grab the latest LTS release (22.04 or newer).
  - Flash it to a USB stick with Rufus or balenaEtcher.
- Boot from USB
  - Reboot and press F12 or Esc to enter the boot menu, then select your USB drive.
- Installation Wizard
  - Select “Install Ubuntu”.
  - When asked about installation type, choose Erase disk and install Ubuntu (or set up custom partitions if you already have Windows).
  - Set your time zone, keyboard layout, username, and password.
- Post‑install Updates
  sudo apt update && sudo apt upgrade -y
- Optional: Pick Up an Ubuntu Reference Book
  - A handy guide for beginners can be found on Amazon or in the official Ubuntu documentation.
  - [AFF: Ubuntu book] – Great for troubleshooting.
3️⃣ NVIDIA Drivers & CUDA Toolkit (15–20 Minutes)
- Add the Graphics Drivers PPA
  sudo add-apt-repository ppa:graphics-drivers/ppa
  sudo apt update
- Install the Recommended Driver
  sudo ubuntu-drivers autoinstall
  - This picks the recommended driver for your card automatically. To pin a specific version instead, install its package directly (e.g., sudo apt install nvidia-driver-535).
- Verify the Installation
  nvidia-smi
  - You should see your GPU listed along with its driver version.
- Install the CUDA Toolkit (Optional but Recommended)
  - Download the installer from NVIDIA’s site (choose the one that matches your Ubuntu version).
  - Run it following the on‑screen instructions – accept the defaults unless you have a custom setup.
- Set Environment Variables
  echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
  echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
  source ~/.bashrc
- Test CUDA
  - Run nvcc --version, or compile one of NVIDIA’s bundled sample programs.
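The steps above can be verified in one pass with a small script. It only checks that the two command names are reachable on your PATH, so it is safe to run on any machine:

```shell
#!/usr/bin/env bash
# Sanity-check that the NVIDIA driver and CUDA toolkit are reachable.

# have_cmd NAME -> exit 0 if NAME is an executable on PATH
have_cmd() {
  command -v "$1" >/dev/null 2>&1
}

check_gpu_stack() {
  if have_cmd nvidia-smi; then
    echo "driver: $(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)"
  else
    echo "driver: nvidia-smi not found -- reinstall the driver" >&2
  fi
  if have_cmd nvcc; then
    nvcc --version | tail -n1
  else
    echo "cuda: nvcc not found -- check the PATH export in ~/.bashrc" >&2
  fi
}

check_gpu_stack
```

If the nvcc check fails but nvidia-smi works, the driver is fine and only the CUDA environment variables from the previous step are missing.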
4️⃣ Install LM Studio (10–15 Minutes)
LM Studio is a free desktop app for downloading and running large language models locally (it isn’t open source, but it costs nothing to use).
- Download the Latest Release
  - Grab the Linux AppImage from the official site, lmstudio.ai, then make it executable and launch it (the exact filename varies by version):
  chmod +x LM-Studio-*.AppImage
  ./LM-Studio-*.AppImage
  - On first run, it will download the necessary runtime components.
- Load a Test Model
  - In LM Studio, browse the built‑in model catalog and download a small model that fits in the RTX 3060’s 12 GB of VRAM.
  - Run a chat prompt and confirm the GPU is being used (nvidia-smi will show activity and VRAM usage).
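LM Studio can also expose an OpenAI‑compatible HTTP server (by default on localhost:1234) so other tools can query your local model. A minimal sketch for building the request body follows – the model name is a placeholder for whatever model you actually loaded:

```shell
#!/usr/bin/env bash
# Build a chat-completion request body for LM Studio's local server.
# The model identifier is a placeholder -- use the name of a model you loaded.
chat_payload() {
  local model=$1 prompt=$2
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' \
    "$model" "$prompt"
}

# Usage (only works while LM Studio's local server is running):
#   curl -s http://localhost:1234/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$(chat_payload my-local-model 'Say hello')"
```

Keeping the payload construction in a function makes it easy to script batch prompts against the local model without paying per‑token cloud fees.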
5️⃣ Set Up Golem (20–30 Minutes)
Golem is a decentralized marketplace where you rent out spare compute to other users. Provider tooling changes between releases, so treat the commands below as a sketch and confirm the current flow in the official Golem provider docs.
- Install the Provider Software
  - The official installer sets up the yagna daemon and the golemsp provider wrapper:
  curl -sSf https://join.golem.network/as-provider | bash -
- Set Up Your Wallet
  - Golem pays providers in GLM, an ERC‑20 token (the successor to the original GNT token).
  - A wallet is generated automatically the first time the provider runs. Back up its key material as described in the official docs; you’ll need it to recover your funds.
- Start the Provider
  golemsp run
  - The node now advertises its resources to the Golem network. GPU provisioning is newer than CPU provisioning, so check the provider docs to confirm your card and driver setup are supported.
- Configure Node Settings
  - Pricing and resource limits are managed through the golemsp settings subcommand (run golemsp --help to see the options available in your version).
- Monitor Earnings
  golemsp status
  - This shows whether your node is running and what it has earned. Payouts accrue to your GLM wallet; you can later convert GLM to fiat on exchanges that list it.
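If you want the provider to come back automatically after reboots, one option is a systemd user unit. This is a sketch, not official Golem tooling: it assumes the installer placed golemsp in ~/.local/bin, which may differ on your system.

```ini
# ~/.config/systemd/user/golemsp.service
[Unit]
Description=Golem provider (golemsp)
After=network-online.target

[Service]
# %h expands to your home directory; adjust the path if golemsp lives elsewhere
ExecStart=%h/.local/bin/golemsp run
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now golemsp, and check on it later with systemctl --user status golemsp.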
6️⃣ Optimize & Maintain (5–10 Minutes)
| Tip | Action |
|---|---|
| Keep Drivers Updated | Run sudo apt update && sudo apt upgrade regularly. |
| Monitor Temperature | Install nvtop, or run nvidia-smi --query-gpu=temperature.gpu --format=csv. |
| Backup Configs | Store your Golem wallet backup and node config in a password manager. |
| Scale Up | When you need more compute, add another GPU and check the Golem provider docs for multi‑GPU support. |
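The temperature tip in the table can be automated with a small helper. The 80 °C threshold below is an arbitrary example, not an NVIDIA specification, so tune it for your card and case:

```shell
#!/usr/bin/env bash
# Classify a GPU temperature reading. The default 80 C limit is an example
# threshold, not a vendor spec -- adjust for your hardware.
temp_status() {
  local temp_c=$1 limit_c=${2:-80}
  if [ "$temp_c" -ge "$limit_c" ]; then
    echo "HOT"
  else
    echo "OK"
  fi
}

# Feed it the live reading (requires the NVIDIA driver from step 3):
#   temp_status "$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader)"
```

Drop the call into a cron job that logs or emails when it prints HOT, and you’ll catch a clogged fan before it throttles a paying task.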
Outro – Call to Action
That’s it! You’ve built an AI‑ready Linux workstation for under $800, installed all the necessary software, and turned your spare RTX 3060 into a provider node on the Golem network.
What’s next?
- Try running a small training job in LM Studio while your node is live – you’ll see how the GPU usage balances between local tasks and rented compute.
- Experiment with different models or datasets to maximize earnings per hour.
👇 Drop a comment below with your own build, any questions, or tips that helped you get started. Don’t forget to hit Subscribe for more beginner‑friendly guides on Linux, AI, and homelabs. Until next time—happy computing!