DEV Community

Vincent HETRU

Cheapest CUDA-Compatible Cloud GPU Options in 2023

So you need to train some big deep learning models, but your RTX card isn't quite up to the task?

Maybe you're trying to fine-tune FlanT5 XXL, a model far too large to train comfortably on most consumer cards.

You're on a budget? No problem. Your best option is probably Lambdalabs, which offers cloud instances with an A100 GPU and 40 GB of VRAM for $1.10 per hour.

Is there a cheaper option? Yes. You could use Google Colab, which offers a 16 GB T4 GPU for free, with the drawback that you have to periodically respond to human-verification prompts to keep the session alive.

If you prefer to access the GPU through SSH, this option may not be practical for you.

Another option you may find interesting is spot instances. They're often much cheaper than on-demand instances, but come with a catch: they can be hibernated at any time without warning, which can be disruptive. Your data won't be lost, though, so you can resume training from where you left off.
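Resuming "from where you left off" only works if your training loop checkpoints regularly. Below is a minimal, hedged sketch of that pattern using only the standard library: the filename, the step counter, and the fake "training step" are all illustrative placeholders. In a real fine-tuning job you would save the model and optimizer `state_dict`s with `torch.save` instead of pickling a plain dict.

```python
import os
import pickle

CKPT = "checkpoint.pkl"  # hypothetical path; use torch.save/torch.load for real models

def train(total_steps):
    # If the spot instance was hibernated mid-run, pick up from the last checkpoint.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "loss": None}

    for step in range(state["step"], total_steps):
        state["loss"] = 1.0 / (step + 1)  # stand-in for a real training step
        state["step"] = step + 1
        # Checkpoint periodically so a preemption loses at most a few steps.
        if state["step"] % 100 == 0:
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)
    return state

state = train(250)
```

The checkpoint interval is a trade-off: checkpointing every step wastes time on I/O, while checkpointing rarely means losing more work when the instance is reclaimed.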

I recently tried spot instances at Datacrunch, and I was able to run an instance for 8 hours without any interruptions. Note that the spot instance option may not be listed on their pricing page as it's still in beta, and its stability may be limited.

For an A100 with 40GB VRAM, you'll pay $0.45/hour, and storage will cost an additional $0.15/hour for 550 GB. This brings the total to $0.60/hour.
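As a quick sanity check on that arithmetic, here is the cost calculation in a few lines of Python, using the Datacrunch rates quoted above (which may have changed since this was written):

```python
# Rates quoted above for a Datacrunch A100 40GB spot instance (USD/hour).
gpu_rate = 0.45      # A100 with 40 GB VRAM
storage_rate = 0.15  # 550 GB of storage
total_rate = gpu_rate + storage_rate

print(f"${total_rate:.2f}/hour")             # $0.60/hour
print(f"8-hour run: ${total_rate * 8:.2f}")  # 8-hour run: $4.80
```

So an 8-hour fine-tuning run like the one described above comes out to under $5.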

Datacrunch is not the only provider of spot instances. Other options include Jarvislabs and Runpod, among others.

In conclusion, if you're on a budget but need serious GPU power for deep learning training, the cloud has you covered. Whether it's Lambdalabs, Google Colab, or the daring option of spot instances, you'll find something that fits your needs and budget. Just remember: with spot instances, your work can be interrupted at any time. But hey, that's the price you pay for a good deal. So get your FlanT5 models ready for fine-tuning!
