
Using Docker to run Large Language Models (LLMs) locally? Yes, you heard that right. Docker is now much more than just running a container image. W...
How is it better than using LM Studio?
Performance-wise, they both wrap llama.cpp (so we should expect similar performance), but LM Studio is more mature, with support for multiple engines and broader OS support so far. The Model Runner will get there, but it will take some time.
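For anyone who wants to poke at it, here's a minimal sketch of driving the Model Runner from the CLI (assuming Docker Desktop with the Model Runner feature enabled; `ai/smollm2` is just an example model from Docker Hub's `ai/` namespace):

```sh
# Pull a model from Docker Hub's ai/ namespace (stored as an OCI artifact)
docker model pull ai/smollm2

# List the models available locally
docker model list

# One-shot prompt; inference runs natively on the host via llama.cpp
docker model run ai/smollm2 "Summarize what Docker Model Runner does in one sentence."
```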
Integration with the rest of the Docker tooling (Compose, but also the Docker Engine for pipelines) is coming soon, and that will give the Model Runner an advantage for developers by fitting directly into their existing development lifecycle.
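Purely as an illustration of where that could land, here's a hypothetical Compose sketch; the top-level `models` element and the service-level `models` list are assumptions about a syntax that hasn't shipped yet, not something to copy today:

```sh
# Hypothetical compose.yaml wiring an app service to a locally served model
cat > compose.yaml <<'EOF'
services:
  app:
    image: my-app          # placeholder application image
    models:
      - llm                # app would get the model's endpoint injected
models:
  llm:
    model: ai/smollm2      # model artifact from Docker Hub's ai/ namespace
EOF

docker compose up
```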
🚀
Let's go!
I would say this is true for a lightly managed machine, or a box you'll use just to expose LLMs, where installing Ollama or LM Studio and keeping them up to date isn't practical.
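For that kind of box, the setup could be as small as the sketch below; the `--tcp` flag and the `engines/v1` path follow the Model Runner's documented OpenAI-compatible API, but double-check them against your Docker Desktop version:

```sh
# Expose the Model Runner's OpenAI-compatible API on a host TCP port
docker desktop enable model-runner --tcp 12434

# Any OpenAI-compatible client on the network can now hit the box
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```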
Back in the day, Docker did the same thing with Kubernetes. You could just run K8s right there in Docker Desktop - it worked. Very handy for beginners.
I guess it's the same in this case too. 😄
100%. Thank you.
I'm not sure Ollama will give better perf since both Ollama and Docker Model Runner are running natively on the hardware.
Performance is very similar (of course). One of our Captains compared both in a blog post; you can check it out: connect.hyland.com/t5/alfresco-blo....
Thanks for the comment!