Most artificial intelligence (AI) tools are either closed-source or require a server to function. However, with the help of Ollama, it is possible ...
Tom, this is a stellar walkthrough on setting up a local AI writing assistant with Ollama. It's refreshing to see AI tools being developed with privacy and local execution in mind.
Thank you, it still feels so weird that Meta is the one to help with privacy 😄
Meta is pretty good on the developer side. Their maintenance of React and GraphQL has been great.
True, but I did not see an LLM coming as a dev tool. Especially if you see what OpenAI gets in revenue from a chatbot 😯
Setting up local stuff has gotten so much easier lately. As a Java developer, I tested the setup of my own custom web app UI with Ollama, and it takes like 20 lines of UI code and 2 commands to get Mistral running locally in a Docker container. Now I can build local AI systems for anything! But my question is: is there a way to use GPUs in local containers to speed up LLMs?
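Yes — if you have an NVIDIA card, Docker's `--gpus` flag can expose it to the Ollama container, provided the NVIDIA Container Toolkit is installed on the host. A minimal sketch (the container name `ollama` and the volume name are just examples; AMD GPUs need the ROCm variant of the image instead):

```shell
# Run the official Ollama image with all NVIDIA GPUs exposed to the container.
# Requires the NVIDIA Container Toolkit on the host.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Start Mistral inside the running container; it should now offload
# layers to the GPU automatically when one is visible.
docker exec -it ollama ollama run mistral
```

If the model still runs on CPU, checking `docker exec ollama nvidia-smi` is a quick way to verify the GPU is actually visible inside the container.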