Jasdeep Singh Bhalla

Running Claude Code with Docker

A beginner-friendly setup for private, on-device coding agents

Modern AI coding tools are powerful—but many developers don’t want their source code leaving their machine. The good news is that Claude Code can run entirely locally, powered by Docker.

This guide shows how to connect Claude Code to Docker Model Runner so you can use a local language model for agentic coding, with minimal setup and full control over your data.


What This Setup Does

Instead of sending prompts to a hosted API, Claude Code talks to a local model running inside Docker.

High-level flow:

```
Claude Code (terminal)
        ↓
Docker Model Runner
        ↓
Local LLM container
```

Your repository stays local. No API keys required.


Prerequisites

Make sure you have:

  • Docker Desktop installed and running
  • Claude Code installed on your system
  • At least 16 GB of RAM (recommended)

Step 1: Install Claude Code

macOS / Linux

```shell
curl -fsSL https://claude.ai/install.sh | bash
```

Windows (PowerShell)

```powershell
irm https://claude.ai/install.ps1 | iex
```

Verify installation:

```shell
claude --version
```

Step 2: Enable Docker Model Runner

Docker Model Runner lets Docker pull, run, and serve large language models locally.

If you’re using Docker Desktop, enable TCP access:

```shell
docker desktop enable model-runner --tcp 12434
```

Once enabled, the local API will be available at:

```
http://localhost:12434
```

You only need to do this once.
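Before moving on, it is worth confirming the endpoint actually responds. The sketch below assumes Model Runner's OpenAI-compatible API lives under `/engines/v1` (the path may vary by Docker Desktop version):

```shell
# Health check for the local Model Runner endpoint.
# The /engines/v1/models path is an assumption based on the
# OpenAI-compatible API Model Runner exposes; adjust if needed.
if curl -sf http://localhost:12434/engines/v1/models >/dev/null; then
  echo "Model Runner is reachable"
else
  echo "Model Runner is not reachable on port 12434"
fi
```

If the check fails, make sure Docker Desktop is running and TCP access was enabled as shown above.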


Step 3: Pull a Local Model

Download a model using Docker’s model CLI. For example:

```shell
docker model pull ai/gpt-oss
```

List available models:

```shell
docker model ls
```
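You can also exercise the model directly, without Claude Code in the loop. This is a sketch assuming the OpenAI-compatible chat endpoint that Model Runner serves; the exact path and model name may differ on your setup:

```shell
# Send a one-off chat completion to the local model.
# The endpoint path and model name are assumptions; adjust as needed.
PAYLOAD='{"model": "ai/gpt-oss", "messages": [{"role": "user", "content": "Say hello in one word."}]}'
curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed (is Model Runner running?)"
```

A JSON response here means the model is loaded and serving, so anything Claude Code sends later will be handled the same way.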

Step 4: Increase the Context Window

For real-world repositories, a larger context window helps significantly.

Create a new model variant with a bigger context (example: 32K tokens):

```shell
docker model package \
  --from ai/gpt-oss \
  --context-size 32000 \
  gpt-oss:32k
```

This creates a new tagged model without modifying the original.
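To get a feel for what 32K tokens buys you, a common rule of thumb (an approximation, not a guarantee) is about four characters of English text or code per token:

```shell
# Back-of-the-envelope context budget at ~4 chars/token (rule of thumb).
TOKENS=32000
CHARS_PER_TOKEN=4
echo "~$((TOKENS * CHARS_PER_TOKEN)) characters fit in the window"  # ~128000
```

That is roughly a few thousand lines of source plus the conversation itself, which is why a larger window matters for real repositories.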


Step 5: Connect Claude Code to Docker

Claude Code supports custom API endpoints via an environment variable.

Run Claude Code like this:

```shell
ANTHROPIC_BASE_URL=http://localhost:12434 claude --model gpt-oss:32k "Summarize this repository."
```

Claude Code now sends all requests to your local Docker model.


Step 6: Make the Setup Persistent

To avoid setting the environment variable every time, add it to your shell profile.

```shell
# ~/.bashrc or ~/.zshrc
export ANTHROPIC_BASE_URL=http://localhost:12434
```

Now you can simply run:

```shell
claude --model gpt-oss:32k "Explain the main service logic."
```
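If you switch between models often, a small shell function can bundle the endpoint and model choice in one place. The `claude-local` name and the `CLAUDE_LOCAL_MODEL` variable below are my own conventions, not part of Claude Code:

```shell
# Add to ~/.bashrc or ~/.zshrc: run Claude Code against the local model.
# claude-local and CLAUDE_LOCAL_MODEL are hypothetical names, not built-ins.
claude-local() {
  ANTHROPIC_BASE_URL="http://localhost:12434" \
    claude --model "${CLAUDE_LOCAL_MODEL:-gpt-oss:32k}" "$@"
}
```

Then `claude-local "Explain the main service logic."` works from any directory, and `CLAUDE_LOCAL_MODEL=some-other-tag` swaps models without editing the function.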

Optional: Inspect Claude Code Requests

You can inspect the raw requests sent by Claude Code:

```shell
docker model requests --model gpt-oss:32k | jq .
```

This is useful for debugging, learning prompt structure, and understanding how agentic coding tools work.


Why Run Claude Code Locally?

  • Full control over your source code
  • No API keys or usage-based billing
  • Offline-friendly development
  • Reproducible setup with Docker
  • Custom models and context sizes

Next Steps

  • Try different local coding models
  • Tune context size for large repositories
  • Pair with Docker sandboxes for safe execution
  • Use this setup as a foundation for custom coding agents

Happy hacking 🚀
