DEV Community


Locally hosted AI writing assistant

Tom Nijhof on February 10, 2024

Most artificial intelligence (AI) tools are either closed-source or require a server to function. However, with the help of Ollama, it is possible ...
ben ajaero

Tom, this is a stellar walkthrough on setting up a local AI writing assistant with Ollama. It's refreshing to see AI tools being developed with privacy and local execution in mind.

Tom Nijhof

Thank you, it still feels so weird that Meta is the one to help with privacy 😄

ben ajaero

Meta is pretty good on the developer side. Their maintenance of React and GraphQL has been great.

Tom Nijhof

True, but I did not see an LLM as a dev tool. Especially if you see what OpenAI gets in revenue from a chatbot 😯

Sami Ekblad

Setting up local stuff has got so much easier lately. As a Java developer, I tested the setup of my own custom web app UI with Ollama, and it takes about 20 lines of UI code and 2 commands to get Mistral running locally in a Docker container. Now I can build local AI systems for anything! But my question is: is there a way to use GPUs in local containers to speed up LLMs?
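
For reference, the two-command setup described above could look something like this, based on the official `ollama/ollama` Docker image. The `--gpus=all` flag is Docker's way of exposing NVIDIA GPUs to a container; it assumes the NVIDIA Container Toolkit is installed on the host, so this is a sketch rather than a universal recipe:

```shell
# Start the Ollama server in a container, persisting downloaded models
# in a named volume. --gpus=all exposes the host's NVIDIA GPUs to the
# container (requires the NVIDIA Container Toolkit on the host).
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run the Mistral model inside that container (interactive chat).
docker exec -it ollama ollama run mistral
```

Without a supported GPU, dropping `--gpus=all` falls back to CPU-only inference, which still works but is noticeably slower for larger models.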