Alexander Razvodovskii
Local AI Tools: Getting Started with Ollama (Tool 1)

Introduction

When discussing accessible local AI tools, it’s hard not to begin with one of the most popular options available today — Ollama. This tool has gained widespread recognition thanks to its simplicity, reliability, and availability across all major operating systems.

Easy Installation for Everyone

Ollama stands out because, unlike some other local AI solutions, it offers a fully functional desktop version. You can simply visit the official Ollama website, download the installer for your operating system, and launch it like any other application. Desktop versions are available for Windows, Linux, and macOS, making Ollama accessible to practically any user.

Ollama Desktop

Once installed, Ollama provides access to a collection of pre-trained AI models. A complete list of these models can be found on the official website, and many of them can be downloaded directly through the installer. You are free to choose which models run locally on your machine, and Ollama also supports the option to use certain models remotely if needed.

Ollama with installed model
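
Once Ollama is installed, the same model management is also available from the terminal through the ollama CLI (a minimal sketch; the model name is just an example from the library):

# show the models already downloaded to this machine
ollama list

# download a model from the Ollama library
ollama pull llama3.2

# start an interactive chat with a local model
ollama run llama3.2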

I won’t go into the installation process here — it’s straightforward and follows the standard steps for your operating system. The only additional requirement is to complete a simple registration so the desktop app can connect with the Ollama service.

Please note that Ollama also offers a paid subscription: if you plan to rely on its cloud-hosted models, you may need to upgrade.

Beyond Desktop: Running Ollama in a Container

Although the desktop version is the easiest entry point, Ollama is not limited to that. The tool can also be deployed in a Docker container, which is especially convenient for developers and users who want more control over their environment or plan to integrate Ollama into automated workflows.

The documentation provides clear instructions for installing and running the Docker version using a single command. Once deployed, Ollama runs as a local server accessible from your machine.

# start the Ollama server in a detached container, persisting models in the "ollama" volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

You can then run a model directly inside the container:

# open an interactive session with the llama3.2 model
docker exec -it ollama ollama run llama3.2

To check that the server is up, open http://localhost:11434/ in your browser. You should see the message:

Ollama is running

However, unlike some other tools we’ll explore later in this series, Ollama does not include a web interface when running inside a container. Interaction with the system is performed exclusively through the API, which is thoroughly documented on the website. The API allows you to:

  • List available models (see the example below)
  • Submit a prompt
  • Receive responses programmatically
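
For example, the first item on that list, listing the locally installed models, is a plain GET request to the /api/tags endpoint (a minimal sketch, assuming the server from above is running on the default port 11434):

async function listModels() {
  // /api/tags returns the models currently available on the local server
  const response = await fetch("http://localhost:11434/api/tags");
  const data = await response.json();

  // print the name of every installed model
  for (const model of data.models) {
    console.log(model.name);
  }
}

listModels();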

Note that, by default, the model's response is streamed back as a sequence of JSON objects rather than returned as a single reply.

async function askModel() {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      prompt: "Who are you?"
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");

  // read the streamed response chunk by chunk; each chunk carries
  // one or more JSON objects with a partial "response" field
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}

askModel();

The output looks like this:
{
    "model": "llama3.2",
    "created_at": "2025-12-09T10:20:54.622934272Z",
    "response": "I",
    "done": false
}
{
    "model": "llama3.2",
    "created_at": "2025-12-09T10:20:54.761152451Z",
    "response": "'m",
    "done": false
}
...

Communication happens through standard POST requests to the local server, and the API format will feel intuitive to anyone with basic experience integrating services.
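
If you prefer to receive the whole answer in one piece instead of handling the stream yourself, the generate endpoint also accepts a stream flag (a minimal sketch, same endpoint and model as above):

async function askModelOnce() {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      prompt: "Who are you?",
      // ask the server for one complete reply instead of a stream
      stream: false
    })
  });

  // with streaming disabled the body is a single JSON object
  const data = await response.json();
  console.log(data.response);
}

askModelOnce();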

Installing Ollama Through a Dockerfile

To bake Ollama into a container image of our own, we begin by creating a Dockerfile. This file defines how the image is built, starting from the official ollama/ollama base image.

Additionally, for convenience, we can have the container pull a specific AI model automatically on startup. This saves a manual step later and ensures that the required model becomes available shortly after the container starts.

Here’s an example structure:

  • Dockerfile
  • entrypoint.sh (the bash script that pulls the model)

Dockerfile

# base image with Ollama preinstalled
FROM ollama/ollama:latest
# copy and enable the startup script that pulls the model
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# default Ollama API port
EXPOSE 11434
ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh

#!/bin/bash
# start the Ollama server in the background
ollama serve &
# give the server a moment to start accepting requests
sleep 3
# preload the model used in the examples above
ollama pull llama3.2
# keep the container running as long as the server process is alive
wait

This allows us to include Ollama as a component of any project — web service, automation pipeline, or local development setup — by simply running:

docker build -t my-ollama .
docker run -p 11434:11434 my-ollama

With this setup, Ollama becomes a plug-and-play module that can be easily distributed, reused, and integrated into more complex systems.
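
For instance, the same image can be dropped into a larger stack with a short docker-compose.yml (a hypothetical sketch; the service and volume names are illustrative):

services:
  ollama:
    # the image built from the Dockerfile above
    build: .
    ports:
      - "11434:11434"
    volumes:
      # persist downloaded models between container restarts
      - ollama-data:/root/.ollama

volumes:
  ollama-data: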

Integrations

The documentation also includes examples for connecting Ollama to modern development and workflow tools such as n8n, Visual Studio Code, JetBrains IDEs, Codex, and others — making it easier to fit Ollama into broader automation or development pipelines.

Strengths and Limitations

In my view, the main advantage of Ollama is its simplicity. It’s easy to install, widely available, and accessible even to users who are not deeply technical. For those who want to experiment with AI models on their own machine without complicated setup, it’s an excellent starting point.

That said, as we will explore in the next articles, Ollama is not always the most efficient or flexible solution — especially for developers who need deeper customization, advanced configuration, or who plan to integrate AI services heavily into complex workflows. In such scenarios, alternative tools may provide more performance or better integration capabilities.
