Many advanced machine learning/deep learning tasks require a GPU. You can rent these GPUs on services like AWS, but even the cheapest ones cost over $600/mo. Elastic GPUs help, but offer only a limited amount of memory. I don't know about you, but most of the time I'm doing research I want quick results, with a ton of idle time in between, so paying for a GPU around the clock feels wasteful.
Some providers seem to be removing the need to think about GPUs at all with managed cloud machine learning services (e.g. GCP's Cloud ML Engine). This is nice for production, but for training it seems like a bit of a hassle: I don't want to have to package my "trainer application" just to run an experiment.
What I'd really like to see is the context from my local machine (variables and data) shipped to a machine with a powerful GPU, run there quickly, and the output placed back into my local environment.
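A minimal sketch of what that could look like, assuming a hypothetical `remote_gpu` decorator (nothing here is a real service; the network hop to the GPU worker is simulated with an in-process pickle round trip, and a real implementation would also need to ship the function's code, e.g. via cloudpickle or a container image):

```python
import pickle


def remote_gpu(fn):
    """Hypothetical 'GPU function as a service' decorator: serialize the
    local context (arguments), run the function on a remote GPU worker,
    and deserialize the result back into the local environment.
    The remote hop is only simulated here with pickle round trips."""
    def wrapper(*args, **kwargs):
        # 1. Package the local context the remote worker needs.
        payload = pickle.dumps((args, kwargs))
        # 2. "Remote" side: unpack the context and run the computation
        #    (on a real service, this is where the GPU would be used).
        remote_args, remote_kwargs = pickle.loads(payload)
        result = fn(*remote_args, **remote_kwargs)
        # 3. Ship the output back into the caller's environment.
        return pickle.loads(pickle.dumps(result))
    return wrapper


@remote_gpu
def train_step(weights, lr=0.5):
    # Stand-in for a GPU-heavy computation.
    return [w - lr * w for w in weights]


print(train_step([1.0, 2.0]))  # [0.5, 1.0]
```

The appeal of this shape is that the caller never packages anything by hand: decorating a function is the entire deployment step, and the results land back in the local session as ordinary Python objects.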
So what do you think? Are we going to start seeing GPU functions as a service for research?