# The "which cloud is cheapest" question is the wrong question
I've been building whichvm.com on the side — it pulls live pricing from AWS, Azure, and GCP APIs into a single table. After months of staring at this data while picking instances for actual work, here's what I keep coming back to. No vendor fluff, just numbers.
Everyone asks which cloud is cheapest. The real answer is it depends on the instance family, not the cloud. On April 15 I ran a snapshot across 3,000+ instance types, and the cheapest provider flips by category.
## General purpose: AWS and GCP are basically a tie
The classic 2 vCPU / 8 GB comparison. m5.large in us-east-1 is $0.0960/hr. n2-standard-2 in us-central1 is $0.0971/hr. That's a tenth of a cent apart. If you're picking between these two on price, you're optimizing noise — pick whichever ecosystem your team already speaks.
Where it gets interesting is ARM. m7g.large sits at $0.0816/hr in us-east-1 vs $0.0960 for m5.large. About $0.0144/hr cheaper. That sounds like nothing until you do the monthly math: $0.0144 × 730 hours ≈ **$10.50/mo** per instance. A fleet of 20 always-on services saves you ~$210/month by moving to Graviton3. Most modern web workloads (Node, Python, Go, Java on a JIT) just run. No code changes needed.
And that's before you look at c7g and r7g — same 20%-ish discount, same story.
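The per-instance and fleet math above is trivial but worth scripting when you're evaluating a migration. A minimal sketch using the on-demand rates quoted above (730 hours/month is the usual cloud-billing convention):

```python
HOURS_PER_MONTH = 730  # standard monthly-hours convention in cloud pricing

def monthly_savings(x86_rate: float, arm_rate: float, fleet_size: int = 1) -> float:
    """Monthly on-demand savings from moving fleet_size always-on instances to ARM."""
    return (x86_rate - arm_rate) * HOURS_PER_MONTH * fleet_size

# m5.large vs m7g.large in us-east-1, rates from the snapshot above
print(round(monthly_savings(0.0960, 0.0816), 2))                 # per instance
print(round(monthly_savings(0.0960, 0.0816, fleet_size=20), 2))  # 20-instance fleet
```

Running it reproduces the ~$10.50/mo per instance and ~$210/mo fleet numbers.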
## Memory-optimized: closer than I expected
r6i.large (AWS, $0.1260/hr) vs n2-highmem-2 (GCP, $0.1310/hr). About 4% apart for the same 2 vCPU / 16 GiB shape. Coin flip on price — the decision comes down to which ecosystem you want to live in.
Where AWS actually wins: the x2g and x2i families. The cost-per-GB-of-RAM numbers on those are the best I've found across any of the three clouds. If you're running something memory-bound past 256 GB — in-memory caches, SAP, giant Python processes — GCP and Azure don't have an equivalent at the same $/GB. That's the clearest single "AWS is cheaper" case in my data.
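For memory-bound picks, the comparison that matters is $/GiB-of-RAM per hour, not $/hr. A one-line normalization, worked through on the two 16 GiB shapes quoted above:

```python
def dollars_per_gib_hr(hourly_rate: float, ram_gib: float) -> float:
    """Normalize an hourly rate by RAM so memory-bound shapes compare fairly."""
    return hourly_rate / ram_gib

# The 2 vCPU / 16 GiB shapes quoted above
print(dollars_per_gib_hr(0.1260, 16))  # r6i.large
print(dollars_per_gib_hr(0.1310, 16))  # n2-highmem-2
```

Sort any candidate list by this metric instead of raw price and the x2 families rise to the top for big-memory workloads.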
## GPU: T4 is a wash, A100 is the wild west
For T4 inference, g4dn.xlarge vs Azure NC4as_T4_v3 — Azure is usually a few percent cheaper on-demand. Both are fine. If you want pre-built images and drivers, AWS's DLAMI ecosystem is deeper and saves a day of setup pain.
One thing worth flagging: the same GPU SKU can swing ~15% between AWS regions. Check before you commit a training fleet to a region.
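Quantifying that swing before committing is a two-minute script. A sketch with **illustrative placeholder prices** (not real quotes — pull live numbers for the regions you're considering):

```python
def pct_swing(prices_by_region: dict[str, float]) -> float:
    """Spread between cheapest and priciest region, as a percent of the cheapest."""
    lo, hi = min(prices_by_region.values()), max(prices_by_region.values())
    return (hi - lo) / lo * 100

# Illustrative numbers only, chosen to show a ~15% spread
g4dn_xlarge = {"us-east-1": 0.526, "eu-west-2": 0.605}
print(round(pct_swing(g4dn_xlarge), 1))
```

A 15% spread on a training fleet running around the clock is real money, so this check belongs in the picking flow, not after the fact.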
A100 pricing I'm not going to quote here. Capacity is too volatile right now and numbers I'd give this week might be wrong by next week. Pull it fresh from whichvm.com/compare the day you're actually buying.
## What I actually do with this
Originally I built WhichVM because every time I needed to pick an instance for a new service, I'd burn half an hour clicking around three different pricing calculators. Now my flow is:
- Set a vCPU + RAM floor in the filter.
- Sort by price.
- Look at the top three rows.
- Pick Graviton if the workload is ARM-compatible.
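The steps above are easy to sketch in code. A minimal version, assuming rows shaped roughly like a pricing-table export — the field names here are my assumption, not whichvm.com's actual schema:

```python
def shortlist(rows, min_vcpu, min_ram_gib, arm_ok=True, top_n=3):
    """Filter to a vCPU/RAM floor, sort by hourly price, keep the top rows."""
    eligible = [
        r for r in rows
        if r["vcpu"] >= min_vcpu
        and r["ram_gib"] >= min_ram_gib
        and (arm_ok or r["arch"] != "arm64")  # drop ARM if the workload can't run it
    ]
    return sorted(eligible, key=lambda r: r["price_hr"])[:top_n]

# Shapes and rates quoted earlier in the post
rows = [
    {"name": "m5.large", "vcpu": 2, "ram_gib": 8, "price_hr": 0.0960, "arch": "x86_64"},
    {"name": "n2-standard-2", "vcpu": 2, "ram_gib": 8, "price_hr": 0.0971, "arch": "x86_64"},
    {"name": "m7g.large", "vcpu": 2, "ram_gib": 8, "price_hr": 0.0816, "arch": "arm64"},
]
print([r["name"] for r in shortlist(rows, min_vcpu=2, min_ram_gib=8)])
```

With `arm_ok=False` the Graviton row drops out and the x86 tie from the general-purpose section is what's left.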
For stateless web services and background workers, that covers maybe 90% of picks. I want to be honest though — it's not a universal flow. It breaks down in a few places:
- Databases and anything stateful. IOPS, EBS throughput, NUMA layout all matter more than $/hr. Cheapest-by-vCPU will steer you wrong.
- GPU workloads. The question is which VRAM tier your model fits in, not which card is cheapest per hour. A100 vs H100 isn't a price decision.
- Legacy stacks on ARM. Most modern runtimes are fine. But if you've got native C/C++ deps, older Python ML wheels, or Node modules with prebuilt x86 binaries, Graviton can bite you. Test before you commit a fleet.
- Burstable vs steady. Spiky workloads (dev environments, low-traffic APIs) often land cheaper on t-series or e2 than on anything a price-sorted list surfaces, because the burstable pricing model isn't what gets shown.
- Existing commitments. If you're already sitting on Savings Plans, CUDs, or RIs, the "cheapest on-demand" answer is actively misleading. Your marginal cost is zero inside the commitment.
- Compliance and data residency. Obvious one, but worth saying — region gets locked before price enters the conversation.
- Networking-heavy stuff. Check bandwidth and PPS, not just vCPU and RAM. Some cheap instances will throttle you.
- Spot. Different game entirely. Interruption rate matters more than list price, and the rankings look nothing like on-demand.
So: 90% for the stateless web-service case, more like 60% across all real infra decisions. If you're in one of the above buckets, the tool is still useful as a starting point — you're just filtering on more than price.
All pricing updates daily from the official provider APIs. No signup, no paywall — whichvm.com.