Fabien Rogeret

Using GPT-Engineer with Ollama and Docker on macOS

#ai

Step 1: Set Up Ollama to Accept Docker Requests

First, set Ollama to listen on all interfaces so containers can reach it, then load a model. On macOS, restart the Ollama app after setting the variable so it takes effect:



launchctl setenv OLLAMA_HOST "0.0.0.0"
ollama run codellama


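To confirm the setting took effect, you can query Ollama's API from a throwaway container. This sketch assumes Docker Desktop, where host.docker.internal resolves back to your Mac; /api/tags simply lists the models Ollama has pulled:

docker run --rm curlimages/curl \
  http://host.docker.internal:11434/api/tags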

Step 2: Run GPT-Engineer with the Local Model

Start GPT-Engineer using Docker, pointing to the local Ollama model:



docker run -it --rm \
  -e OPENAI_API_BASE="http://<local_ip>:11434/v1/" \
  -e OPENAI_API_KEY="NOTHING_HERE" \
  -e MODEL_NAME="codellama" \
  -v "$PWD/your-project":/project gpt-engineer



Replace <local_ip> with your machine's IP address.
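On macOS you can usually find that address with ipconfig (en0 is the primary interface on most Macs; adjust if yours differs). With Docker Desktop, http://host.docker.internal:11434/v1/ also works in place of the IP:

ipconfig getifaddr en0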

Use a Different Model

To switch models (e.g., gemma2):



ollama run gemma2
docker run ... -e MODEL_NAME="gemma2"


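Assembled from the Step 2 command, the full invocation for gemma2 looks like this:

docker run -it --rm \
  -e OPENAI_API_BASE="http://<local_ip>:11434/v1/" \
  -e OPENAI_API_KEY="NOTHING_HERE" \
  -e MODEL_NAME="gemma2" \
  -v "$PWD/your-project":/project gpt-engineer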

Customize Prompts for Local Models

Local models often need tuned prompts. The --use-custom-preprompts flag copies GPT-Engineer's default preprompts into your project so you can edit them:



docker run -it --rm \
  -e OPENAI_API_BASE="http://<local_ip>:11434/v1/" \
  -e OPENAI_API_KEY="NOTHING_HERE" \
  -e MODEL_NAME="codellama" \
  -v "$PWD/your-project":/project gpt-engineer \
  --use-custom-preprompts -i



This copies the default prompts into a preprompts/ directory inside your project; edit them there and GPT-Engineer will use your versions on subsequent runs.
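You can list the extracted files and edit them in place. The exact file set depends on your GPT-Engineer version, and the file name below is illustrative:

ls your-project/preprompts
open -t your-project/preprompts/roadmap   # open one prompt file in your default text editor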

With -i (improve mode), GPT-Engineer reads your instructions from a file named prompt in the project directory and applies them to the existing code.
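For example, to seed the prompt file before an improve run (the instruction text here is just a placeholder):

echo "Add input validation to the CLI arguments" > your-project/prompt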
