The Problem
Most personal computers sit idle the vast majority of the time. Meanwhile, ML training and gaming workloads cost a fortune on cloud GPU instances. I wanted to bridge that gap: turn idle hardware into useful compute.
What I Built
ComputePool is a hub-and-spoke distributed compute grid. Your Zo Computer acts as the control plane. Idle laptops and PCs become worker nodes that poll for jobs, execute workloads, and earn credits.
Architecture
Node Agent (Python) ← polls → Hub API ← dispatches → Worker Pool
                                 ↓
                           Credit Ledger
                                 ↓
                          Cashout System
Node Agent (node-agent/node_agent.py):
- Polls hub every 30s for available jobs
- Reports GPU tier (RTX 4090 = 3x credit multiplier)
- Streams results back on completion
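The agent loop above can be sketched roughly as follows. This is a minimal illustration, not the actual node_agent.py: the hub URL, endpoint paths, and the run_job stub are all placeholder assumptions.

```python
import json
import time
import urllib.request

HUB_URL = "https://hub.example.com"  # hypothetical; the real hub URL is deployment-specific
POLL_INTERVAL = 30                   # the agent polls every 30s

def claim_job(node_id: str, gpu_tier: str):
    """Ask the hub for one job; returns the job dict, or None if the queue is empty
    or the hub is unreachable. Endpoint path is illustrative."""
    body = json.dumps({"node_id": node_id, "gpu_tier": gpu_tier}).encode()
    req = urllib.request.Request(
        f"{HUB_URL}/jobs/claim", data=body,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp) if resp.status == 200 else None
    except OSError:
        return None  # hub unreachable; try again next cycle

def run_job(job: dict) -> dict:
    """Stub: the real agent would execute the workload in a Docker container."""
    return {"job_id": job["id"], "status": "done"}

def poll_loop(node_id: str, gpu_tier: str) -> None:
    while True:
        job = claim_job(node_id, gpu_tier)
        if job:
            result = run_job(job)
            # Stream the result back (elided): POST to a per-job result endpoint.
            _ = result
        time.sleep(POLL_INTERVAL)
```

Reporting the GPU tier with every claim lets the hub apply the credit multiplier server-side, so the agent never computes its own earnings.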
Hub (hub/hub.py):
- FastAPI backend on Railway
- Job queue with priority based on GPU tier
- Credit ledger per node
- Regional multipliers (Indian region: 0.7x)
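The tier-priority queue could look something like this in-memory sketch, assuming jobs declare a minimum GPU tier and high-tier jobs dispatch first. The real hub presumably backs this with PostgreSQL; all names here are mine.

```python
import heapq
from itertools import count

# Rank order matches the tier table in this post.
TIER_RANK = {"GTX 1080": 1, "RTX 3080": 2, "RTX 4090": 3}

class JobQueue:
    """In-memory sketch of a tier-aware job queue."""

    def __init__(self):
        self._heap = []        # (-required_rank, seq, required_rank, job)
        self._seq = count()    # seq keeps FIFO order within a tier

    def submit(self, job, required_tier="GTX 1080"):
        rank = TIER_RANK[required_tier]
        # Negative rank so jobs needing the best GPUs pop first.
        heapq.heappush(self._heap, (-rank, next(self._seq), rank, job))

    def claim(self, node_tier):
        """Return the highest-priority job this node's GPU can run, or None."""
        node_rank = TIER_RANK[node_tier]
        skipped, job = [], None
        while self._heap:
            item = heapq.heappop(self._heap)
            if item[2] <= node_rank:
                job = item[3]
                break
            skipped.append(item)  # needs a better GPU than this node has
        for item in skipped:      # put back jobs we couldn't serve
            heapq.heappush(self._heap, item)
        return job
```

A GTX 1080 node skips past 4090-only jobs instead of blocking them, so low-tier hardware still gets work without starving the high-tier queue.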
Dashboard (frontend/):
- Next.js 14 on Vercel
- Real-time job status, credit balance, node management
- Live at man44.zo.space/pool
Credit Economy
- Workers earn credits per job completed
- GPU tiers: RTX 4090 (3x), RTX 3080 (2x), GTX 1080 (1x)
- Indian region: 0.7x base rate
- 20% platform fee on all earnings
- Minimum cashout: ₹500
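Assuming the multipliers compose multiplicatively and the 20% fee is taken last (the post doesn't spell out the order), a single job's payout works out like this:

```python
# Multiplier tables copied from the post; composition order is my assumption.
GPU_MULTIPLIER = {"RTX 4090": 3.0, "RTX 3080": 2.0, "GTX 1080": 1.0}
REGION_MULTIPLIER = {"IN": 0.7}   # Indian region; other regions assumed 1.0
PLATFORM_FEE = 0.20
MIN_CASHOUT_INR = 500

def job_payout(base_credits: float, gpu: str, region: str = "") -> float:
    """Net credits for one job: tier and region multipliers, then the platform fee."""
    gross = base_credits * GPU_MULTIPLIER[gpu] * REGION_MULTIPLIER.get(region, 1.0)
    return gross * (1 - PLATFORM_FEE)

def can_cash_out(balance_inr: float) -> bool:
    return balance_inr >= MIN_CASHOUT_INR

# Worked example: a 100-credit job on an RTX 3080 in India
# 100 * 2.0 * 0.7 = 140 gross -> 140 * 0.8 = 112 net credits
```

Under these assumptions an Indian RTX 3080 operator nets 112 credits where a 1.0x-region GTX 1080 operator nets 80, which is worth checking against the "balanced for casual operators" question below.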
Key Design Decisions
- Pull-based job distribution: nodes poll, the hub never pushes. This eliminates NAT traversal and firewall issues for nodes on home networks.
- GPU-tiered pricing: higher-end GPUs earn more credits, which incentivizes quality hardware.
- Regional multipliers: base rates adjust for purchasing power parity across markets.
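One subtlety of the pull model is that two nodes polling at once must never claim the same job. The usual fix is an atomic compare-and-swap on the job's status. Here is a runnable sketch using stdlib SQLite; on a PostgreSQL backend the equivalent idiom would be SELECT ... FOR UPDATE SKIP LOCKED. The schema is hypothetical.

```python
import sqlite3

def make_db():
    """Toy jobs table standing in for the hub's real schema."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, node TEXT)")
    db.executemany("INSERT INTO jobs (status) VALUES (?)", [("queued",), ("queued",)])
    db.commit()
    return db

def claim_job(db, node_id):
    """Atomically claim one queued job for this node; return its id, or None."""
    while True:
        row = db.execute(
            "SELECT id FROM jobs WHERE status = 'queued' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # queue empty
        # Compare-and-swap: only succeeds if the job is still queued.
        cur = db.execute(
            "UPDATE jobs SET status = 'claimed', node = ? "
            "WHERE id = ? AND status = 'queued'",
            (node_id, row[0]))
        db.commit()
        if cur.rowcount == 1:
            return row[0]
        # rowcount 0 means another node raced us to this job; retry.
```

The status guard in the UPDATE is what makes this safe: a losing racer sees rowcount 0 and simply retries, so no job is ever executed twice.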
Stack
- Backend: FastAPI + PostgreSQL
- Frontend: Next.js 14 + Tailwind CSS
- Node Agent: Python 3.10+ with Docker
- Deployment: Railway (backend) + Vercel (frontend)
GitHub
https://github.com/amsach/compute-pool
Feedback Wanted
- Is the credit economy balanced for casual node operators?
- Would you run a node agent on your machine? Why or why not?
- Any security concerns with the pull-based model?