I had an RTX 3060 sitting on a shelf.
Not broken. Not old. Just... not doing anything. My Windows PC runs models when I need them, but most of the time it's idle. The fans spin, the power draw ticks along, and that 12GB of VRAM just sits there.
A week ago I connected it to Vast.ai — a GPU marketplace where people rent compute time. No code required. You install a daemon, set a price, and wait for someone to rent your machine.
Here's what actually happened.
## Why I Didn't Just Mine Crypto
First thing people ask: "Why not just mine?"
Short answer: it's 2026, the margins are brutal, and I didn't want to deal with it. GPU compute rental is different — you're renting raw processing power, and the demand right now is AI inference and training. People building LLMs, running diffusion models, doing batch jobs.
The upside: no mining pool setup, no daily coin price anxiety, no special software. Your machine runs Docker containers, gets paid per second of use, you get a payout.
## The Setup (Genuinely About 90 Minutes)
- Created a Vast.ai account
- Installed the host daemon on Windows (it's a one-click installer)
- Set my RTX 3060 12GB at $0.15/hour
- Went to bed
That's it. No configuration rabbit holes, no drivers to hunt down. The daemon manages everything — spinning up containers, cleaning up after renters, reporting uptime.
I set the minimum rental duration to 1 hour so I wouldn't get hit with a dozen 5-minute jobs.
## The First Week Numbers
| Day | Hours Rented | Earnings |
|---|---|---|
| Day 1 | 3.2h | $0.48 |
| Day 2 | 11.5h | $1.73 |
| Day 3 | 0h | $0.00 |
| Day 4 | 16.8h | $2.52 |
| Day 5 | 9.1h | $1.37 |
| Day 6 | 22.0h | $3.30 |
| Day 7 | 14.4h | $2.16 |
Week 1 total: ~$11.56
Annualized naively? About $600/year. Which would be great except day 3 was $0 and utilization is inconsistent.
A more realistic steady-state: $50–130/month depending on demand.
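The projection math is simple enough to sketch in a few lines. The utilization scenarios below are assumptions, not measurements:

```python
# Quick projection from week 1 earnings, with assumed utilization scenarios.
week1_earnings = 11.56   # USD, from the table above
print(f"naive annualized: ${week1_earnings * 52:.0f}/year")

rate = 0.15              # USD/hour, my listing price
hours_per_month = 24 * 30
for util in (0.25, 0.50, 0.75):  # assumed steady-state utilization
    monthly = rate * hours_per_month * util
    print(f"{util:.0%} utilization: ${monthly:.0f}/month")
```

At my flat $0.15/h listing, even 75% utilization only reaches ~$81/month; the top of the $130 range assumes raising the price when demand allows.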
## What People Actually Rent It For
Vast.ai shows you the jobs (anonymized). Mine has been used for:
- Running vLLM inference servers (Mistral, Qwen, LLaMA variants)
- Stable Diffusion batch jobs
- Some kind of PyTorch training run that lasted 8 hours
The 3060's 12GB of VRAM makes it a sweet spot for inference: it fits most 7B–13B models at 4-bit quantization without breaking a sweat. It's not the fastest card, but it's cheap to rent, which keeps demand coming.
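A back-of-envelope check on why those models fit. The 20% overhead factor for KV cache and activations is my assumption, not a measured number:

```python
# Rough VRAM estimate for 4-bit quantized LLM inference.
# 4 bits = 0.5 bytes per parameter for weights; the overhead
# multiplier for KV cache and activations is an assumed ~20%.
def vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits / 8
    return weight_bytes * overhead / 1e9

for size in (7, 13):
    print(f"{size}B at 4-bit: ~{vram_gb(size):.1f} GB")  # 4.2 GB and 7.8 GB
```

Both land comfortably under the 12GB ceiling, which is why this card sees steady inference demand.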
## The Honest Downsides
You can't use your GPU while it's rented. Sounds obvious, but the practical implication: if you need your machine for local inference and someone's rented it, tough luck. I started routing heavy tasks to my Mac Mini during rental periods.
Electricity. My RTX 3060 at load pulls about 150W. At Turkish electricity rates, that's roughly $8–15/month in power at typical utilization. So the net is lower than the gross numbers above.
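Here's how the power math nets out. The whole-system draw figures and the per-kWh price below are assumptions for illustration; plug in your own numbers:

```python
# Monthly power cost estimate -- whole-system draw, not just the GPU.
# All inputs are assumptions: adjust for your hardware and local rates.
load_kw, idle_kw = 0.25, 0.06   # system draw under rental load vs. idle
price_per_kwh = 0.10            # USD/kWh, assumed local rate
rate = 0.15                     # USD/hour listing price
hours = 24 * 30

for util in (0.3, 0.6):
    kwh = load_kw * hours * util + idle_kw * hours * (1 - util)
    power_cost = kwh * price_per_kwh
    gross = rate * hours * util
    print(f"{util:.0%} util: gross ${gross:.0f}, "
          f"power ${power_cost:.0f}, net ${gross - power_cost:.0f}")
```

Even in the worst case here, power stays a minority of gross, but it's a real dent at low utilization.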
It's genuinely passive but not predictable. Day 3 was $0. Day 6 was near-full utilization. There's no way to forecast demand.
Payouts have a minimum. Vast.ai pays out once you hit a threshold. Nothing to worry about, just something to know going in.
## What I'm Going to Try Next
The obvious play is adding more GPUs. I have a few more in storage: an RTX 3080 and a pair of RTX 3070s. If I rack those up, the math gets interesting:
| GPU | Rate | Monthly (50% util) |
|---|---|---|
| RTX 3060 (current) | $0.15/h | ~$54 |
| RTX 3080 10GB | $0.20/h | ~$72 |
| 2x RTX 3070 8GB | $0.16/h | ~$115 |
That's ~$240/month without doing anything after setup. At 70% utilization: ~$340.
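The table above reduces to rate × hours × utilization per listing. Rates are my planned prices; utilization is a guess:

```python
# Reproducing the scaling table: count, USD/hour per card.
fleet = {
    "RTX 3060 12GB": (1, 0.15),
    "RTX 3080 10GB": (1, 0.20),
    "RTX 3070 8GB":  (2, 0.16),
}
hours = 24 * 30

def monthly_total(util: float) -> float:
    return sum(n * rate * hours * util for n, rate in fleet.values())

print(f"50% util: ${monthly_total(0.5):.0f}/month")  # ~$241
print(f"70% util: ${monthly_total(0.7):.0f}/month")  # ~$338
```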
The real work is physical — pulling GPUs from storage, getting them into a rig, managing thermals. But the software side is almost zero maintenance.
## Should You Try This?
If you have a spare GPU collecting dust: yes, probably. The setup is low friction, the risk is near-zero (worst case, you uninstall the daemon and move on), and even modest earnings beat $0.
If you're thinking about buying a GPU specifically for this: do the math carefully. At current rates, an RTX 3060 costs ~$300–350 used. Payback period at $50/month is 6–7 months, which is fine — but don't expect to fund your retirement from a single card.
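The payback math, with a few assumed net-earnings scenarios since demand is unpredictable:

```python
# Payback period if you buy a card specifically for renting out.
card_cost = 325  # USD, midpoint of the $300-350 used-3060 range

for net_monthly in (30, 50, 80):  # assumed net USD/month scenarios
    months = card_cost / net_monthly
    print(f"${net_monthly}/mo net -> payback in ~{months:.1f} months")
```

At $50/month net that's 6.5 months, consistent with the 6–7 month figure above; halve the earnings and it roughly doubles.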
The real value for me isn't the income (yet). It's that I now have a system running, I understand the demand patterns, and I know the path to scale looks viable.
## The Setup Summary
- Platform: Vast.ai (there's also RunPod if you want alternatives)
- Time to set up: ~90 minutes
- Technical skill required: Know how to install software on Windows
- Ongoing maintenance: Almost none
- Realistic earnings: $50–130/month per mid-tier GPU
Happy to answer questions if you try this and run into something weird. The daemon is pretty solid but there's always an edge case or two.
I write about running AI locally, automation side projects, and occasionally making money from hardware that would otherwise just collect dust. If any of this is useful, feel free to follow.