Gowsiya Syednoor Shek

From Save to Serve: Boost LLM Dev Speed with Docker Compose Watch

Next-Gen Docker for AI Developers: Series Intro

Welcome to my 6-part series exploring how Docker's latest features streamline AI and LLM-based application development.

Each part will show how to:

  • Accelerate development
  • Reduce manual steps
  • Improve containerized workflows for modern AI use cases

Part 1: Why Docker Compose Watch Is a Game Changer

As an AI developer working with LLM apps (LangChain, Flask APIs, etc.), I often deal with repetitive edit-rebuild-restart cycles. This gets especially frustrating while:

  • Tuning prompts
  • Testing temperature settings
  • Tweaking model configs
  • Updating dependencies

Even small changes can become time-consuming in a Dockerized workflow.


What Docker Compose Watch Promises

Docker Compose Watch solves this by:

  • Watching files or directories
  • Triggering live sync into the container (sync)
  • Or rebuilding when key files change (rebuild)
  • Supporting sync+restart for fast config reloading
  • Letting you stay in your flow — no manual restarts or rebuilds

It sounded perfect for speeding up my LangChain + Flask API dev loop.
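
To make those actions concrete, here is a minimal sketch of what a develop.watch block can look like. The service name and paths here are purely illustrative, not taken from a real project:

services:
  app:
    build: .
    develop:
      watch:
        # Sync edited source files straight into the running container
        - action: sync
          path: ./src
          target: /app/src
        # Rebuild the image when the dependency manifest changes
        - action: rebuild
          path: requirements.txt
        # Sync config files, then restart the container so they are re-read
        - action: sync+restart
          path: ./config
          target: /app/config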


My Setup

I containerized a Flask API using LangChain to answer LLM queries.

Instead of manually rebuilding every time I changed a prompt or package, I enabled:

  • sync for Python code
  • rebuild for requirements.txt

Everything was orchestrated with docker-compose.override.yml using the new develop.watch feature.

🔗 GitHub Repo:

https://github.com/gowsiyabs/docker-llm-compose-watch
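
For reference, an override file for a setup like this looks roughly as follows. This is only a sketch: the service name llm-api and the paths are my assumptions here, so check the repo above for the exact files:

# docker-compose.override.yml (sketch; see the repo for the actual files)
services:
  llm-api:
    develop:
      watch:
        # Prompt tweaks and Python code edits sync into the container without a rebuild
        - action: sync
          path: ./app
          target: /app
        # Dependency changes trigger a full image rebuild
        - action: rebuild
          path: requirements.txt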


My Real-World Findings

Here’s what I observed after enabling Compose Watch for my LangChain + Flask setup:

Command used
docker compose -f docker-compose.yml -f docker-compose.override.yml up --build
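
Note: depending on your Compose version, the watch configuration may also need to be activated explicitly — in recent releases either by running docker compose watch or by adding the --watch flag to docker compose up. Either form starts the services and then watches the paths declared under develop.watch.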

Python code changes synced instantly

Once I saved the file, changes reflected in the running container without needing to restart. This was a big win for fast iteration.

Rebuild didn't work at first for requirements.txt

Initially, even after modifying requirements.txt, no rebuild was triggered. The console just kept showing the logs from the first run.

Fix: Enable watch support explicitly in your terminal

On Windows, I had to explicitly enable Docker’s watch support when using Git Bash or PyCharm's terminal. Once enabled, rebuilds started working as expected.

📌 Tip: Use a terminal that supports Docker watch triggers or set up your IDE to allow file system notifications to propagate correctly.

Lesson: Even though docker compose watch is powerful, it may require terminal or IDE-level configuration depending on your OS and environment.


Why This Matters for AI Developers

Compose Watch is ideal if you are:

  • Experimenting rapidly with LLM APIs
  • Testing prompts, chains, or model parameters
  • Tired of typing docker compose down && docker compose up every few minutes
  • Wanting a fast inner loop without sacrificing containerization

Summary

Docker Compose Watch takes us closer to hot-reload for containers, especially for interpreted languages like Python.

While rebuild actions still feel a bit rough around the edges, sync-based workflows are already a productivity booster for LLM, GenAI, and Flask-based AI apps.


🔜 Next in the Series

In Part 2, I will explore Docker Model Runner — a new way to run LLMs locally with Docker, without relying on cloud APIs or billing. It's available here now!

Follow me here on Dev.to to stay updated!

Top comments (1)

Salma Aga Shaik

I’m just starting to learn Docker, and this post helped me a lot. I didn’t know about Docker Compose Watch before, but you explained it so clearly and step by step. It was easy to follow and understand. Really appreciate you sharing this. Loved it