
Severin Ibarluzea

AWS Lambda, but with GPUs?

Many advanced machine learning/deep learning tasks require a GPU. You can rent GPUs on services like AWS, but even the cheapest ones cost over $600/mo. Elastic GPUs help, but offer only a limited amount of memory. I don't know about you, but when I'm doing research I want quick results, and the GPU sits idle the rest of the time.

Some providers seem to be eliminating the need to think about a GPU at all with cloud machine learning services (e.g. GCP's Cloud ML Engine). This is nice for production, but for training it seems like a bit of a hassle: I don't want to have to package my "trainer application".

What I'd really like is for the context from my local machine (variables and data) to be moved onto a machine with a powerful GPU, run quickly, and the output put back into my local environment.
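Roughly, I'm imagining something like this (a minimal sketch; `run_on_remote_gpu` and the `gpu-faas.example.com` endpoint are hypothetical, since no such service exists, and a real version would need something like cloudpickle to serialize the function by value):

```python
import pickle
import requests

# Hypothetical endpoint -- this is the shape of the API I'm imagining,
# not a real provider.
GPU_FAAS_URL = "https://gpu-faas.example.com/run"

def run_on_remote_gpu(fn, *args, **kwargs):
    """Serialize a function plus its arguments, ship them to a remote
    GPU worker, block until it finishes, and return the result.

    Note: plain pickle serializes functions by reference, so a real
    implementation would need something like cloudpickle to ship the
    function body itself.
    """
    payload = pickle.dumps({"fn": fn, "args": args, "kwargs": kwargs})
    resp = requests.post(GPU_FAAS_URL, data=payload, timeout=3600)
    resp.raise_for_status()
    return pickle.loads(resp.content)

# Usage: local variables go over the wire, the result comes back.
# new_weights = run_on_remote_gpu(train_step, weights, batch)
```

No packaging, no provisioning: my local context goes up, the GPU-heavy call runs, and the result lands back in my session.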

So what do you think? Are we going to start seeing GPU functions as a service for research?

Top comments (2)

Andrew Lucker • Edited

So you aren't using TensorFlow? What dependencies, or even what language/OS, are you talking about? Some environments drag more dependencies along with them than others; for example, anything Apple is often hardware-bound compared to lightweight environments like domain-specific languages.

Your example of AWS Lambda supports only a few languages for this reason. Also, each of those languages is stripped of many built-ins, and adding libraries is hard.

Ben Halpern

I agree with this theory, but definitely lack the expertise to be super critical.