
Julien Simon

Originally published at julsimon.Medium

Optimize the prediction latency of Transformers with a single Docker command!

Transformer models are great. Still, they're large models, and prediction latency can be a problem. That's exactly what Hugging Face Infinity solves with a single Docker command.

In this video, I start from a pre-trained model hosted on the Hugging Face Hub. Using an AWS CPU instance based on the Intel Ice Lake architecture (c6i.xlarge), I optimize the model with the Infinity Multiverse Docker container.
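Infinity is a commercial container, so I won't reproduce its image name or options here. As a minimal sketch, this is how you might pull the original model from the Hub with huggingface_hub before handing it to the optimization container; the local directory is just an example path:

```python
# Minimal sketch: download the original model from the Hugging Face Hub
# so it can be mounted into the optimization container.
# The cache directory below is an arbitrary example location.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="juliensimon/autonlp-imdb-demo-hf-16622767",
    cache_dir="./models",
)
print(f"Model files downloaded to {local_dir}")
```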

Then, I push the optimized model back to the Hugging Face Hub and deploy it behind a prediction API running in an Infinity container on my AWS instance.
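Pushing the artifacts back to the Hub can be done with the huggingface_hub library. A minimal sketch, assuming the optimized files live in a local folder named ./optimized-model and that you've already run `huggingface-cli login`:

```python
# Minimal sketch: upload the optimized model files to the Hub repo
# linked in this post. The local folder name is an assumption.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./optimized-model",          # wherever the optimized files were written
    repo_id="juliensimon/imdb-demo-infinity",
    repo_type="model",
)
```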

Finally, I predict with the optimized model and get a 5x speedup compared to the original model.
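To reproduce a latency comparison like this one, you can time requests against the container's HTTP endpoint. A minimal sketch, assuming the container listens on localhost:8080 and accepts a simple JSON payload (both the port and the schema are assumptions, so adjust them to your deployment):

```python
# Minimal sketch: time a single prediction request against the container.
# Port, path, and payload schema are assumptions for illustration only.
import time
import requests

payload = {"inputs": "I absolutely loved this movie, a real masterpiece!"}

start = time.perf_counter()
response = requests.post("http://localhost:8080/predict", json=payload)
latency_ms = (time.perf_counter() - start) * 1000

print(response.json())
print(f"Latency: {latency_ms:.1f} ms")
```

Running the same loop against the original and the optimized deployments gives you the before/after numbers behind the 5x figure.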

Original model: https://huggingface.co/juliensimon/autonlp-imdb-demo-hf-16622767

Code: https://huggingface.co/juliensimon/imdb-demo-infinity/tree/main/code

New to Transformers? Check out the Hugging Face course at https://huggingface.co/course
