Easy private AI assistant with Goose and Docker Model Runner

Oleg Šelajev

Using Goose and Docker Model Runner

Goose is an innovative CLI assistant designed to automate development tasks using AI models. Docker Model Runner simplifies deploying AI models locally with Docker. Combining these technologies, you get a powerful local environment with advanced AI assistance, ideal for coding and automation.

Install Goose CLI on macOS

Install Goose with the official curl-to-bash one-liner:

curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash

Enable Docker Model Runner

First, ensure you have Docker Desktop installed, then enable Docker Model Runner: go to Settings -> Beta features and check the Docker Model Runner checkboxes.

By default, as a security precaution, the Model Runner API is not exposed to your host machine. To simplify the setup, we also enable TCP support. The default port for that is 12434, so the base URL for the connection is: http://localhost:12434

Docker Desktop configuration to enable Docker Model Runner
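Once TCP support is enabled, you can check that the endpoint is reachable from the host. This sketch assumes the default port 12434 and that the /models route follows the standard OpenAI API shape:

```shell
# List the models Docker Model Runner currently serves over TCP.
# Assumes host-side TCP support is enabled on the default port 12434.
curl -s --max-time 5 http://localhost:12434/engines/llama.cpp/v1/models \
  || echo "Docker Model Runner is not reachable on localhost:12434"
```

If the service is up, you get a JSON list of available models; otherwise the fallback message tells you TCP support still needs to be enabled.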

Now we can pull models from Docker Hub (hub.docker.com/u/ai) and run them:

docker model pull ai/qwen3:30B-A3B-Q4_K_M
docker model run ai/qwen3:30B-A3B-Q4_K_M

This command starts an interactive chat with the model.

Configure Goose for Docker Model Runner

Edit your Goose config at ~/.config/goose/config.yaml:

GOOSE_MODEL: ai/qwen3:30B-A3B-Q4_K_M
GOOSE_PROVIDER: openai
extensions:
  developer:
    display_name: null
    enabled: true
    name: developer
    timeout: null
    type: builtin
GOOSE_MODE: auto
GOOSE_CLI_MIN_PRIORITY: 0.8
OPENAI_API_KEY: irrelevant
OPENAI_BASE_PATH: /engines/llama.cpp/v1/chat/completions
OPENAI_HOST: http://localhost:12434

The OPENAI_API_KEY value is irrelevant: Docker Model Runner does not require authentication, because the model runs locally and privately on your machine.

We provide the base path for the OpenAI-compatible API, and with GOOSE_MODEL: ai/qwen3:30B-A3B-Q4_K_M we select the model we pulled earlier.
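To see what this configuration amounts to, here is a minimal stdlib-only sketch that sends one chat completion to the same endpoint Goose will use. The endpoint is simply OPENAI_HOST plus OPENAI_BASE_PATH; the prompt text is just an example:

```python
import json
from urllib.request import Request, urlopen

# OPENAI_HOST + OPENAI_BASE_PATH from the Goose config above.
ENDPOINT = "http://localhost:12434/engines/llama.cpp/v1/chat/completions"

payload = {
    "model": "ai/qwen3:30B-A3B-Q4_K_M",  # the model pulled earlier
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
}

request = Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urlopen(request, timeout=60) as response:
        reply = json.load(response)
        print(reply["choices"][0]["message"]["content"])
except OSError as exc:
    print(f"Docker Model Runner is not reachable: {exc}")
```

If this prints a model reply, Goose will be able to talk to the same endpoint with the config above.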

Testing It Out

Try the Goose CLI by running goose in the terminal. You can see that it automatically connects to the correct model, and when you ask it for something, you’ll see the GPU spike as well.

Goose CLI powered by Docker Model Runner

Note that we also configured Goose with the Developer extension enabled. It lets Goose run various commands on your behalf, which makes it a much more powerful assistant with access to your machine, rather than just a chat application.

You can additionally give Goose custom hints to tweak its behaviour using the .goosehints file.
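For example, a hypothetical .goosehints file (the contents here are purely illustrative) might look like this:

```
# .goosehints, placed in the project root (illustrative example)
This repository is a Python project; prefer Python for new scripts.
Keep answers concise, and show commands before running them.
Run the test suite before declaring a task finished.
```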

And what’s even better, you can script Goose to run tasks on your behalf with a simple one-liner:

goose run -t "your instructions here" or goose run -i instructions.md

where instructions.md is the file with what to do.

On macOS you have access to crontab for scheduling recurring jobs, so you can automate Goose with Docker Model Runner to run repeatedly and act on your behalf. For example,
crontab -e will open the editor for the commands you want to schedule, and a line like:

5 8 * * 1-5 goose run -i fetch_and_summarize_news.md

will make Goose run at 8:05 am every weekday and follow the instructions in the fetch_and_summarize_news.md file, for example to skim the internet and prioritize news based on your interests.
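As a sketch, the fetch_and_summarize_news.md instructions file could contain something like the following (the sources and output path are purely illustrative):

```
Fetch the front pages of my favorite tech news sites.
Prioritize stories about AI tooling and containers.
Summarize the top five stories in a few bullet points each,
and write the result to ~/news/today.md.
```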

Conclusion

All in all, integrating Goose with Docker Model Runner creates a simple but powerful setup for bringing local AI into your workflows.
You can have it run custom instructions for you, or easily script it to perform repetitive actions intelligently.
It is all powered by a local model running in Docker Model Runner, so you don't compromise on privacy either.
