I Expanded My GPU Rental Fleet to 6 Cards — Here's What Happened to My Earnings
A few weeks ago I wrote about renting out my single RTX 3060 on Vast.ai for passive income. The experiment worked better than I expected, so I did what any reasonable person would do: I went and dug out the five other GPUs sitting in my storage room.
This is the honest follow-up. What actually happened when I went from 1 GPU to 6.
The Backstory
I had a bunch of GPUs from an older setup — two RTX 3070s, one RTX 3080, and two more RTX 3060s. They were collecting dust. The PC they came from got upgraded, the cards went into cardboard boxes, the boxes went under a shelf.
Total VRAM across all six: around 62GB. Combined retail value when new: probably $3,000+. Income generated while sitting in boxes: $0/month.
The math wasn't complicated.
What the Expansion Actually Took
Here's what I underestimated: it's not just "plug cards in, profit."
The hardware side
You can't just stack 6 GPUs into a regular PC case. I had to think about:
- PCIe slots and bandwidth. A standard ATX board has maybe 2-3 real x16 slots. For 6 cards, you're looking at risers, which means a mining-style open frame or a server chassis.
- Power. Each card pulls 150-250W under load, and this mix skews toward the upper end. Six cards = 1,200-1,500W in GPU power alone (rough sizing math after this list). Plus CPU, drives, RAM. My existing 850W PSU was not going to cut it.
- Cooling. Cards in a tight case thermal-throttle each other. Open frame was the answer.
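For the PSU sizing I did the math roughly like this. A back-of-envelope sketch, assuming each card is power-limited into the 150-250W range mentioned above; the per-card wattages and system overhead are my estimates, not measured draws:

```python
# Back-of-envelope PSU sizing for the 6-card rig.
# Per-card wattages assume power-limited cards; all figures are estimates.
gpu_draw_w = {"rtx_3060": 170, "rtx_3070": 220, "rtx_3080": 250}
fleet = {"rtx_3060": 3, "rtx_3070": 2, "rtx_3080": 1}

gpu_total = sum(gpu_draw_w[card] * count for card, count in fleet.items())
system_overhead = 250   # CPU, drives, RAM, fans, risers (rough estimate)
headroom = 1.2          # aim to run PSUs at ~80% of rated capacity

required_w = (gpu_total + system_overhead) * headroom
print(f"GPU draw: {gpu_total} W, recommended combined PSU capacity: {required_w:.0f} W")
# GPU draw: 1200 W, recommended combined PSU capacity: 1740 W
```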
I ended up using an open-air mining frame I found used for cheap, two PSUs linked so they power on together (a sketchy-but-common approach in the mining world), and PCIe risers.
Setup time: about a full weekend.
The software side
Getting all six cards recognized wasn't plug-and-play either. I run Windows on the main PC (easier driver support for NVIDIA), and Vast.ai has a Windows daemon that mostly works — except when it doesn't.
A few issues I hit:
- Two risers were flaky and caused cards to drop off
- One 3070 had a driver conflict until I did a clean DDU reinstall
- Vast.ai's host dashboard showed 5 GPUs after setup; took me an hour to figure out the sixth wasn't being detected
Total debugging time before everything was stable: another weekend.
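If I were doing it again I'd script the detection check instead of squinting at the dashboard. A minimal sketch, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH; the expected count of 6 is obviously specific to my rig:

```python
# Quick check that every GPU in the rig is visible to the driver.
# Assumes nvidia-smi is installed and on the PATH; EXPECTED_GPUS is my rig's count.
import subprocess
import sys

EXPECTED_GPUS = 6

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
gpus = [line for line in result.stdout.strip().splitlines() if line]
for line in gpus:
    print(line)

if len(gpus) != EXPECTED_GPUS:
    sys.exit(f"Only {len(gpus)} of {EXPECTED_GPUS} GPUs detected, check risers/drivers")
print("All GPUs detected.")
```

Running that after every riser reseat would have found the missing card in seconds instead of an hour.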
The Earnings Comparison
| Setup | Cards | VRAM | Weekly Earnings |
|---|---|---|---|
| Before (1 card) | RTX 3060 | 12GB | ~$12-18 |
| After (6 cards) | 3060 × 3, 3070 × 2, 3080 × 1 | 62GB | ~$65-95 |
Not exactly linear scaling: straight-line math from the single 3060 would predict roughly $72-108/week (6 × $12-18), and the per-card average actually dropped. Here's why:
Demand is unpredictable. Sometimes 4 of my 6 cards are rented simultaneously. Sometimes 1. The RTX 3080 gets picked up more often than the 3060s: it's the fastest card in the fleet, and its 10GB of VRAM is still enough for the LLM inference jobs that want to load a mid-sized model.
Not all hours are equal. Utilization spikes during US business hours and tails off when the US goes to sleep. I'm in Turkey, so "overnight for me" overlaps with peak US working hours, which actually helps.
Pricing matters more than I thought. I dropped my per-card price slightly and saw utilization go up noticeably. A few cents per hour makes a real difference when renters are comparing a dozen similar options.
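To make that concrete, here's a toy comparison. Every number in it is hypothetical (not my actual listing prices or utilization); it's just the shape of the trade-off:

```python
# Toy illustration: a small price cut that lifts utilization can raise revenue.
# All numbers here are hypothetical, not my actual listings.
def weekly_revenue(price_per_hr: float, utilization: float) -> float:
    """Revenue for one card over a 168-hour week at a given utilization rate."""
    return price_per_hr * 168 * utilization

before = weekly_revenue(price_per_hr=0.12, utilization=0.40)   # pricier, rented less
after  = weekly_revenue(price_per_hr=0.10, utilization=0.55)   # cheaper, rented more
print(f"before: ${before:.2f}/week, after: ${after:.2f}/week")
# before: $8.06/week, after: $9.24/week -- the utilization bump outweighs the cut
```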
Current Monthly Run Rate
Across all six cards, I'm averaging around $280-340/month before electricity.
Power costs are real. Six GPUs under load is serious wattage. My electricity bill went up — I haven't calculated the exact delta yet because my bill is shared (I'm not the only one using power in my building), but I'd estimate $40-60/month in additional costs.
Net: roughly $220-280/month in real passive income.
Is that life-changing? No. Is it meaningful for money that was doing nothing? Absolutely.
What I'd Do Differently
1. Start with a proper open-frame rig, not a cobbled-together case.
The mining frame was cheap but took time to source. If I were doing this again I'd budget for it from day one.
2. Get a proper high-wattage PSU setup.
Running two PSUs linked together works but it's inelegant. A server PSU with the right adapter is cleaner and safer.
3. Test each card individually before combining them.
I wasted time troubleshooting "which card is the problem" when I could've confirmed each one worked before building the full rig (a minimal per-card smoke test is sketched after this list).
4. Set minimum job duration.
Short jobs (under an hour) rack up overhead — container spin-up time, handshaking — without much earnings. I set a minimum of 2 hours and earnings-per-hour improved.
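Here's the kind of per-card smoke test I mean for point 3. A minimal sketch, assuming PyTorch with CUDA support is installed; the matrix size and iteration count are arbitrary:

```python
# Minimal per-card smoke test: put a short matmul load on each GPU in turn.
# Assumes PyTorch with CUDA support; sizes and iteration count are arbitrary.
import torch

for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    try:
        a = torch.randn(4096, 4096, device=f"cuda:{i}")
        b = torch.randn(4096, 4096, device=f"cuda:{i}")
        for _ in range(50):              # brief sustained load
            c = a @ b
        torch.cuda.synchronize(i)
        peak_gb = torch.cuda.max_memory_allocated(i) / 1e9
        print(f"GPU {i} ({name}): OK, peak memory {peak_gb:.1f} GB")
    except RuntimeError as err:
        print(f"GPU {i} ({name}): FAILED ({err})")
```

Run it on each card in isolation first, then again with everything assembled; a card that passes alone but fails in the full rig points at the riser or the PSU rather than the GPU.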
The Unexpected Part
I expected this to be a boring passive income setup. It mostly is. But I've learned a surprising amount about how the AI inference market actually works by watching what gets rented and when.
Most renters are running:
- Fine-tuning jobs (need sustained GPU hours)
- LLM inference (need VRAM more than raw compute)
- Image generation (FLUX, Stable Diffusion variants)
- Dev environments (people testing stuff without committing to a cloud contract)
Watching the demand patterns is actually interesting data about what the AI dev community is building right now. The 3080 almost always goes first — 10GB VRAM hits a sweet spot for smaller Llama and Mistral models.
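The usual back-of-envelope check makes that "sweet spot" concrete. This only counts model weights (parameters × bytes per parameter) and ignores KV cache and runtime overhead; the model sizes and quantization levels are just examples:

```python
# Weights-only VRAM estimate: parameters * bytes per parameter.
# Ignores KV cache and runtime overhead; sizes and quant levels are examples.
def weight_vram_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for model, params in [("7B", 7), ("13B", 13)]:
    for bits in (16, 8, 4):
        print(f"{model} @ {bits}-bit: ~{weight_vram_gb(params, bits):.1f} GB of weights")
# A 7B model needs ~14 GB at 16-bit (too big for 10GB) but ~7 GB at 8-bit,
# and a 13B model squeezes in at 4-bit (~6.5 GB).
```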
Is It Worth It?
Depends on your situation.
Yes, if: You already have the GPUs and they're sitting idle. The marginal cost of setting this up is mostly your time, and the monthly return is real.
Maybe, if: You'd have to buy the GPUs. At current used-market prices, payback period is 6-12 months depending on utilization. That's not terrible but it's not obvious.
No, if: You're renting out your daily-driver GPU. The rental platform can grab your card at inconvenient times. Keep at least one card reserved for your own use.
What's Next
I'm looking at listing the Ubuntu server I already have running as a CPU-only Vast.ai host for smaller workloads. Less money per unit but zero additional hardware cost.
Also thinking about whether it makes sense to eventually get into the dedicated hosting side rather than the rental marketplace — more stable income, more setup required. Still researching.
For now: 6 cards, ~$250/month net, and two weekends' worth of setup. I'll take it.
Questions about the rig setup or Vast.ai specifics? Drop them in the comments.
→ Check out my automation work on Fiverr
→ Follow along on Telegram