<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jayanth</title>
    <description>The latest articles on DEV Community by Jayanth (@jayanth_369).</description>
    <link>https://dev.to/jayanth_369</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881822%2F6e398d9f-507d-42c6-81d2-481d3425a03c.jpg</url>
      <title>DEV Community: Jayanth</title>
      <link>https://dev.to/jayanth_369</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jayanth_369"/>
    <language>en</language>
    <item>
      <title>I compared 3,000+ cloud VMs so you don't have to — here's what actually surprised me</title>
      <dc:creator>Jayanth</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:04:27 +0000</pubDate>
      <link>https://dev.to/jayanth_369/i-compared-3000-cloud-vms-so-you-dont-have-to-heres-what-actually-surprised-me-m14</link>
      <guid>https://dev.to/jayanth_369/i-compared-3000-cloud-vms-so-you-dont-have-to-heres-what-actually-surprised-me-m14</guid>
      <description>&lt;p&gt;&lt;strong&gt;The "which cloud is cheapest" question is the wrong question&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I've been building &lt;a href="https://whichvm.com" rel="noopener noreferrer"&gt;whichvm.com&lt;/a&gt; on the side — it pulls live pricing from AWS, Azure, and GCP APIs into a single table. After months of staring at this data while picking instances for actual work, here's what I keep coming back to. No vendor fluff, just numbers.&lt;/p&gt;

&lt;p&gt;Everyone asks which cloud is cheapest. The real answer is that it depends on the instance family, not the cloud. On April 15 I ran a snapshot across 3,000+ instance types, and the cheapest provider flips by category.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;General purpose: AWS and GCP are basically a tie&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The classic 2 vCPU / 8 GB comparison. &lt;strong&gt;m5.large&lt;/strong&gt; in us-east-1 is &lt;strong&gt;$0.0960/hr&lt;/strong&gt;. &lt;strong&gt;n2-standard-2&lt;/strong&gt; in us-central1 is &lt;strong&gt;$0.0971/hr&lt;/strong&gt;. That's a tenth of a cent apart. If you're picking between these two on price, you're optimizing noise — pick whichever ecosystem your team already speaks.&lt;/p&gt;

&lt;p&gt;Where it gets interesting is ARM. &lt;strong&gt;m7g.large sits at $0.0816/hr&lt;/strong&gt; in us-east-1 vs $0.0960 for &lt;strong&gt;m5.large&lt;/strong&gt;, about $0.0144/hr cheaper. That sounds like nothing until you do the monthly math: $0.0144 × 730 hours ≈ &lt;strong&gt;$10.50/mo per instance&lt;/strong&gt;. A fleet of 20 always-on services saves you ~$210/month by moving to Graviton3. Most modern web workloads (Node, Python, Go, Java on a JIT) just run. No code changes needed.&lt;/p&gt;
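&lt;p&gt;That back-of-envelope math generalizes to any pair of instance types. A quick sketch, with the us-east-1 on-demand rates quoted above hard-coded in, so treat the numbers as illustrative rather than current:&lt;/p&gt;

```python
# Monthly savings from moving an always-on fleet between two instance types.
# Prices are the snapshot rates quoted in the post; check current rates
# before relying on these numbers.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_savings(price_from, price_to, fleet_size=1):
    """Dollars saved per month for `fleet_size` always-on instances."""
    return (price_from - price_to) * HOURS_PER_MONTH * fleet_size

m5_large = 0.0960   # $/hr, x86
m7g_large = 0.0816  # $/hr, Graviton3

per_instance = monthly_savings(m5_large, m7g_large)     # ~$10.51/mo
fleet_of_20 = monthly_savings(m5_large, m7g_large, 20)  # ~$210/mo
print(f"per instance: ${per_instance:.2f}/mo, fleet of 20: ${fleet_of_20:.2f}/mo")
```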

&lt;p&gt;And that's before you look at c7g and r7g: same ~15% discount over their x86 counterparts, same story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory-optimized: closer than I expected&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;r6i.large&lt;/strong&gt; (AWS, $0.1260/hr) vs &lt;strong&gt;n2-highmem-2&lt;/strong&gt; (GCP, $0.1310/hr). About 4% apart for the same 2 vCPU / 16 GiB shape. Coin flip on price — the decision comes down to which ecosystem you want to live in.&lt;/p&gt;

&lt;p&gt;Where AWS actually wins: the &lt;strong&gt;x2g&lt;/strong&gt; and &lt;strong&gt;x2i&lt;/strong&gt; families. The cost-per-GB-of-RAM numbers on those are the best I've found across any of the three clouds. If you're running something memory-bound past 256 GB — in-memory caches, SAP, giant Python processes — GCP and Azure don't have an equivalent at the same $/GB. That's the clearest single "AWS is cheaper" case in my data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GPU: T4 is a wash, A100 is the wild west&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For T4 inference, &lt;strong&gt;g4dn.xlarge&lt;/strong&gt; vs &lt;strong&gt;Azure NC4as_T4_v3&lt;/strong&gt; — Azure is usually a few percent cheaper on-demand. Both are fine. If you want pre-built images and drivers, AWS's DLAMI ecosystem is deeper and saves a day of setup pain.&lt;/p&gt;

&lt;p&gt;One thing worth flagging: the same GPU SKU can swing ~15% between AWS regions. Check before you commit a training fleet to a region.&lt;/p&gt;

&lt;p&gt;A100 pricing I'm not going to quote here. Capacity is too volatile right now and numbers I'd give this week might be wrong by next week. Pull it fresh from &lt;a href="https://whichvm.com/compare" rel="noopener noreferrer"&gt;whichvm.com/compare&lt;/a&gt; the day you're actually buying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I actually do with this&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Originally I built WhichVM because every time I needed to pick an instance for a new service, I'd burn half an hour clicking around three different pricing calculators. Now my flow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set a vCPU + RAM floor in the filter.&lt;/li&gt;
&lt;li&gt;Sort by price.&lt;/li&gt;
&lt;li&gt;Look at the top three rows.&lt;/li&gt;
&lt;li&gt;Pick Graviton if the workload is ARM-compatible.&lt;/li&gt;
&lt;/ol&gt;
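
&lt;p&gt;The flow above is simple enough to sketch in a few lines. The records and field names here are made up for illustration, not WhichVM's actual schema:&lt;/p&gt;

```python
# Hypothetical instance records; in practice these come from the pricing table.
instances = [
    {"name": "m5.large",      "vcpu": 2, "ram_gb": 8, "price_hr": 0.0960, "arch": "x86_64"},
    {"name": "n2-standard-2", "vcpu": 2, "ram_gb": 8, "price_hr": 0.0971, "arch": "x86_64"},
    {"name": "m7g.large",     "vcpu": 2, "ram_gb": 8, "price_hr": 0.0816, "arch": "arm64"},
    {"name": "t3.small",      "vcpu": 2, "ram_gb": 2, "price_hr": 0.0208, "arch": "x86_64"},
]

def shortlist(instances, min_vcpu, min_ram_gb, arm_ok=True, top_n=3):
    # Step 1: set a vCPU + RAM floor (and drop ARM if the workload can't run on it).
    candidates = [
        i for i in instances
        if i["vcpu"] >= min_vcpu
        and i["ram_gb"] >= min_ram_gb
        and (arm_ok or i["arch"] != "arm64")
    ]
    # Steps 2-3: sort by price, look at the top rows.
    return sorted(candidates, key=lambda i: i["price_hr"])[:top_n]

for row in shortlist(instances, min_vcpu=2, min_ram_gb=8):
    print(row["name"], row["price_hr"])
```

&lt;p&gt;With the sample data, the ARM option lands on top of the shortlist, which is exactly step 4: take the Graviton row if the workload is ARM-compatible.&lt;/p&gt;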

&lt;p&gt;For stateless web services and background workers, that covers maybe 90% of picks. I want to be honest though — it's not a universal flow. It breaks down in a few places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Databases and anything stateful. IOPS, EBS throughput, and NUMA layout all matter more than $/hr. Cheapest-by-vCPU will steer you wrong.&lt;/li&gt;
&lt;li&gt;GPU workloads. The question is which VRAM tier your model fits in, not which card is cheapest per hour. A100 vs H100 isn't a price decision.&lt;/li&gt;
&lt;li&gt;Legacy stacks on ARM. Most modern runtimes are fine. But if you've got native C/C++ deps, older Python ML wheels, or Node modules with prebuilt x86 binaries, Graviton can bite you. Test before you commit a fleet.&lt;/li&gt;
&lt;li&gt;Burstable vs steady. Spiky workloads (dev environments, low-traffic APIs) often land cheaper on t-series or e2 than on anything a price-sorted list surfaces, because the burstable pricing model isn't what gets shown.&lt;/li&gt;
&lt;li&gt;Existing commitments. If you're already sitting on Savings Plans, CUDs, or RIs, the "cheapest on-demand" answer is actively misleading. Your marginal cost is zero inside the commitment.&lt;/li&gt;
&lt;li&gt;Compliance and data residency. Obvious one, but worth saying — region gets locked before price enters the conversation.&lt;/li&gt;
&lt;li&gt;Networking-heavy stuff. Check bandwidth and PPS, not just vCPU and RAM. Some cheap instances will throttle you.&lt;/li&gt;
&lt;li&gt;Spot. Different game entirely. Interruption rate matters more than list price, and the rankings look nothing like on-demand.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So: 90% for the stateless web-service case, more like 60% across all real infra decisions. If you're in one of the above buckets, the tool is still useful as a starting point — you're just filtering on more than price.&lt;/p&gt;

&lt;p&gt;All pricing updates daily from the official provider APIs. No signup, no paywall — &lt;a href="https://whichvm.com" rel="noopener noreferrer"&gt;whichvm.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>azure</category>
      <category>gcp</category>
      <category>cloudcomputing</category>
    </item>
  </channel>
</rss>
