
TildAlice

Posted on • Originally published at tildalice.io

Cloud GPU Cost Showdown: ViT Training on AWS vs GCP vs Azure

I Spent $347 Training the Same ViT Model Three Times

Same model (ViT-B/16). Same dataset (ImageNet-1k). Same batch size and optimizer. Three different cloud providers. The final cost difference? 2.8x between the cheapest and most expensive option.

This isn't a theoretical comparison. I trained the exact same Vision Transformer on AWS, GCP, and Azure to see where your money actually goes. The results were surprising — not just in total cost, but in where the hidden charges showed up.

[Image: three NVIDIA GeForce RTX graphics cards. Photo by Andrey Matveev on Pexels]

The Setup: ViT-B/16 on ImageNet-1k

Vision Transformer Base with 16x16 patches. 86M parameters. Training from scratch on ImageNet-1k (1.28M images, 1000 classes) for 90 epochs using the standard recipe from Dosovitskiy et al. (2021).
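The 86M figure follows directly from the standard ViT-B/16 hyperparameters. As a quick sanity check, here's a back-of-the-envelope parameter count in plain Python (assuming the usual config from the paper: hidden size 768, 12 layers, MLP dim 3072, 16x16 patches, 224x224 input, 1000 classes):

```python
# Back-of-the-envelope parameter count for ViT-B/16.
# Assumed hyperparameters (standard ViT-Base config):
hidden, layers, mlp = 768, 12, 3072
patch, image, classes = 16, 224, 1000
num_patches = (image // patch) ** 2  # 14 x 14 = 196 patches

patch_embed = (3 * patch * patch) * hidden + hidden  # linear patch projection + bias
pos_embed = (num_patches + 1) * hidden               # +1 for the [CLS] token
cls_token = hidden

per_block = (
    3 * (hidden * hidden + hidden)   # Q, K, V projections
    + hidden * hidden + hidden       # attention output projection
    + hidden * mlp + mlp             # MLP up-projection
    + mlp * hidden + hidden          # MLP down-projection
    + 2 * 2 * hidden                 # two LayerNorms (scale + shift each)
)

final_norm = 2 * hidden
head = hidden * classes + classes    # classification head

total = patch_embed + pos_embed + cls_token + layers * per_block + final_norm + head
print(f"{total / 1e6:.1f}M parameters")  # ~86.6M, matching the quoted 86M
```

The transformer blocks dominate: roughly 7.1M parameters each, or about 85M of the total, which is why parameter count barely moves when you swap the classification head.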


Continue reading the full article on TildAlice
