Working on multiple Odoo projects, I often ran into the same problems — different library versions, different Python interpreters, and different PostgreSQL setups. If you only work with one Odoo version (say, Odoo 14), that’s not a big deal. But once you have to maintain Odoo 14, 15, 17, and 18 — each requiring its own Python and PostgreSQL — the environment setup quickly becomes a mess.
On top of that, onboarding new developers was painful. Explaining how to set up each project, which Python to use, which dependencies to install, and how to connect to the right database — all of that took time and patience.
For development I use VS Code, because it lets me work with different technologies in one consistent interface. After reading about Dev Containers, I realized they could solve exactly these issues — reproducible environments, isolated dependencies, and simple onboarding.
So I built a two-repository system:
- one for a reusable base Docker image,
- and another for a developer project template using VS Code Dev Containers.
Together, they give me a single command to spin up a full Odoo environment — Odoo + Postgres + mounted code + debugger — ready to work.
In this article, I’ll explain the reasoning behind this setup, the structure of both repositories, and how you can use the same approach for your own projects.
Architecture overview
My setup is split into two repositories:
- Base image repository (github.com/BorovlevAS/dev_odoo_base_docker) — builds a Docker image per Odoo version (for example, 17.0) with all system dependencies: Python, wkhtmltopdf, Node LTS, the PostgreSQL client, and a non-root user.
- Dev Container template repository (github.com/BorovlevAS/odoo_devcontainer_template) — a project skeleton where everything lives under /workspace: Odoo sources, extra addons, configs, scripts, and the .devcontainer folder.
The environment runs inside Visual Studio Code using the official Dev Containers extension — it automatically builds the defined container, mounts the workspace, and connects your VS Code session into it.
One command, and the developer is ready to work.
Principle: build the heavy parts once → reuse them across many projects.
Why I didn’t use the official Odoo image
The official odoo/docker image is great for running containers in production, but it’s not ideal for day-to-day development. It hides too much under the hood — system libraries, Python environment, and even where Odoo itself is installed. When you need to debug or patch something, you quickly hit invisible walls.
So I built my own base image optimized for development. Here’s what it gives me:
- Full control over environment — I choose the OS (Ubuntu/Debian), Python and Node versions, PostgreSQL client, wkhtmltopdf, and all required system libraries. No “magic” from upstream.
- Unified dev experience — developer tools are pre-installed: bash-completion, git, sudo, pre-commit, nodejs, and linters. No more “just install this manually”.
- Direct Odoo source debugging — the full Odoo source is mounted inside the container, so I can browse, patch, or set breakpoints directly in VS Code.
- Multi-version setup — versions 14/15/16/17/18 share the same structure and principles; only the base tag changes.
- Speed & caching — each image contains a pre-built virtualenv and cached Python wheels, so Dev Containers start up fast without re-installing gigabytes of dependencies.
- Proper permissions — runs as a non-root user with correct UID/GID for VS Code volumes. No permission surprises.
- Predictable folder structure — /workspace, /opt/odoo/venv, and /usr/local/lib/node_modules/ always follow the same layout, which makes it easy to pre-configure VS Code and extensions.
Base image: decisions & ingredients (Odoo 18 as an example)
The base image represents the infrastructure layer of the development environment.
Its responsibility is limited and explicit:
- provide a stable OS base
- install system-level dependencies
- prepare a Python runtime with an isolated virtual environment
- establish a predictable filesystem layout for mounted projects.
It does not contain application code or business logic. Projects consume this image; they do not shape it.
OS base
For the current setup, the image is built on top of ubuntu:noble.
A full Ubuntu base simplifies:
- installing and maintaining system libraries
- working with .deb packages
- keeping the environment close to common Linux setups used by development teams.
Image size is not the primary concern here — stability and predictability are.
System-level dependencies
The image installs a fixed set of system packages that rarely change between projects:
- compiler toolchain and build essentials
- common runtime libraries
- fonts and rendering-related packages
- client tools for external services (e.g. databases)
Installing these dependencies once in the base image avoids repeating the same setup across multiple projects and repositories.
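To make this concrete, here is a minimal Dockerfile sketch of that system layer. The package list is illustrative and trimmed, not the exact set used in dev_odoo_base_docker:

```dockerfile
# Illustrative sketch of the system layer; the package list is an example,
# not the exact one from the base image repository.
FROM ubuntu:noble

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        git sudo bash-completion \
        python3 python3-venv python3-dev \
        postgresql-client \
        fonts-liberation \
    && rm -rf /var/lib/apt/lists/*

# wkhtmltopdf is usually installed separately from the upstream .deb release
# (the patched-Qt build that Odoo expects), not from the Ubuntu archive.
```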
Python runtime and virtual environment
Python is treated as a runtime platform, not as a project-specific concern.
The image creates a dedicated virtual environment located at:
/opt/odoo/venv
The virtual environment is added to PATH, so all Python tools inside the container use it by default.
At this layer, the image installs:
- base Python tooling (pip, setuptools, wheel)
- development utilities (debugpy, pre-commit)
- Python dependencies required by the application framework, pinned to a specific major version.
Application source code itself is not part of the image.
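A sketch of this layer is shown below; the requirements file name and the copy-based install are assumptions for illustration, not necessarily how the base image handles it:

```dockerfile
# Illustrative sketch: an isolated virtualenv that becomes the default
# Python environment for everything running inside the container.
RUN python3 -m venv /opt/odoo/venv
ENV PATH="/opt/odoo/venv/bin:${PATH}"

RUN pip install --upgrade pip setuptools wheel \
    && pip install debugpy pre-commit

# Framework dependencies pinned per Odoo version (e.g. Odoo 18's
# requirements.txt). The file name here is hypothetical; compiling some of
# these packages relies on the build headers installed in the system layer.
COPY requirements-odoo-18.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
```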
Developer tooling (Node.js)
Node.js is included solely to support developer tooling.
In practice, it is used to run formatting and automation tools such as Prettier and its plugins.
It is not part of the runtime stack and is not involved in application execution.
By placing these tools in the base image, formatting behavior becomes consistent across all environments without requiring per-project setup.
User and permissions model
The image ensures the presence of a non-root user with UID/GID 1000, matching typical host user IDs.
This avoids permission issues when project files are mounted into the container and allows developers to work without elevated privileges.
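A minimal sketch of that step, assuming the stock ubuntu:noble user is reused; the actual image may set this up differently:

```dockerfile
# ubuntu:noble already ships a default "ubuntu" user with UID/GID 1000;
# on older bases it would be created explicitly, e.g.:
#   RUN groupadd --gid 1000 odoo && useradd --uid 1000 --gid 1000 -m odoo
ARG USERNAME=ubuntu

# Passwordless sudo keeps day-to-day use friction-free in a dev container.
RUN echo "${USERNAME} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}

USER ${USERNAME}
```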
Filesystem conventions
The image establishes a convention that all project files are mounted under:
/workspace
The base image does not populate this directory. It only guarantees that the environment expects application code, configuration, and scripts to appear there at runtime.
What this layer intentionally excludes
The infrastructure image deliberately does not include:
- application source code
- custom extensions or plugins
- environment-specific configuration.
This separation keeps the base image reusable and stable, while allowing projects to evolve independently.
Unified /workspace structure
On top of the base image, every project follows the same filesystem convention: all project-related files are mounted under a single directory — /workspace.
This is not an implementation detail; it is a design decision.
From the editor’s point of view, /workspace is the project. There are no secondary mounts, hidden paths, or split workspaces.
A typical layout looks like this:
/workspace
├── .devcontainer/
├── conf/
├── docker/
├── extra_addons/
├── odoo/
└── scripts/
One workspace, one mental model
Using a single top-level directory simplifies several things at once:
- the editor works with a single workspace root
- relative paths in configuration files remain stable
- debugging and navigation behave predictably.
There is no need to remember where the core framework lives, where custom code is mounted, or how paths are stitched together inside the container.
Clear separation of concerns
Each directory under /workspace has a narrow, explicit role:
- .devcontainer/ — Dev Container configuration: how the environment is started and integrated with VS Code.
- conf/ — runtime configuration files (application config, database config, tooling config).
- docker/ — Docker Compose files and optional project-level Docker overrides.
- extra_addons/ — project-specific extensions and custom modules.
- odoo/ — application framework source code, tracked as a Git submodule.
- scripts/ — helper scripts for common development tasks (initialization, updates, tests).
This structure keeps infrastructure, framework, and project code clearly separated, while still being visible in a single place.
Why the framework lives inside the workspace
Keeping the framework source code inside /workspace is intentional.
It allows developers to:
- inspect and debug framework code
- step through execution without jumping between unrelated paths
- apply temporary patches when needed during development.
At the same time, the framework remains isolated from the base image and can be versioned, updated, or replaced at the project level.
Configuration stays explicit
Configuration files are part of the project repository and live under conf/.
Nothing is hidden inside the image, and no configuration is generated implicitly. What the application runs with is always visible and version-controlled.
This makes environments easier to reason about and easier to reproduce.
Why this matters in practice
A unified /workspace layout reduces friction in daily work:
- fewer assumptions about paths
- fewer “where does this file live?” moments
- less editor and debugger configuration.
Developers can focus on code and behavior, not on reconstructing the environment in their heads.
Dev Containers as the entry point
The Dev Container is the entry point into the development environment.
It connects the infrastructure layer (base image + Docker Compose) with the developer’s editor and defines how the project is opened, initialized, and used on a daily basis.
The configuration lives in .devcontainer/devcontainer.json and describes three key things:
- how containers are started
- which service the editor attaches to
- what should happen when the environment is created.
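A trimmed-down devcontainer.json illustrating these three points; the service name, paths, and init script name are examples, not the template's exact contents:

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch)
{
  "name": "odoo-project",
  "dockerComposeFile": ["../docker/docker-compose.yml"],
  "service": "odoo",                  // the service the editor attaches to
  "workspaceFolder": "/workspace",
  "remoteUser": "ubuntu",
  "postCreateCommand": "bash scripts/init_dev_env.sh"  // hypothetical script name
}
```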
Docker Compose as the runtime definition
Instead of defining a single container, the Dev Container setup relies on Docker Compose.
This allows the development environment to be described as a small system rather than a standalone container:
- an application container
- a database container
- optional supporting services.
The Dev Container configuration references existing Compose files rather than replacing them.
This keeps the runtime definition declarative and reusable outside of VS Code if needed.
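For illustration, a minimal Compose file along these lines could look as follows; the image tag, credentials, and service names are placeholders:

```yaml
# docker/docker-compose.yml (illustrative sketch)
services:
  odoo:
    image: ghcr.io/borovlevas/dev-odoo-base:18.0   # hypothetical base image tag
    command: sleep infinity   # keep the service alive; Odoo itself is started from the editor
    volumes:
      - ..:/workspace:cached
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: odoo
      POSTGRES_PASSWORD: odoo
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```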
Attaching the editor to a running service
The editor attaches directly to the application service defined in Docker Compose.
From the editor’s perspective:
- /workspace becomes the workspace root
- the container user is a non-root developer user
- all tools run inside the container context.
There is no special “editor container” and no duplicated environment.
Environment initialization
When the container is created for the first time, a project-specific initialization script is executed.
Typical responsibilities at this stage include:
- generating local environment files
- preparing configuration defaults
- performing lightweight sanity checks.
This step is intentionally kept explicit and script-driven.
Nothing happens implicitly, and nothing is hidden inside the image.
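A sketch of what such a script can do; file names and checks are examples rather than the template's actual script:

```bash
#!/usr/bin/env bash
# scripts/init_dev_env.sh (illustrative sketch of a post-create hook)
set -euo pipefail

# Generate a local env file on first run only.
if [ ! -f conf/.env ]; then
    cp conf/.env.example conf/.env
    echo "Created conf/.env from conf/.env.example"
fi

# Install the git hooks defined in the repository.
pre-commit install

# Lightweight sanity check: is the database container reachable?
pg_isready -h db -p 5432 || echo "Warning: database is not reachable yet"
```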
Editor integration
The Dev Container configuration also defines the editor-side environment:
- required extensions
- interpreter paths
- formatting and tooling settings
- port forwarding for local access.
These settings are part of the repository and shared by the team.
As a result, every developer opens the project with the same editor configuration and the same tooling behavior.
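For example, the editor-side part of devcontainer.json can look roughly like this (extension IDs, settings, and ports are illustrative):

```jsonc
// Editor integration fragment of devcontainer.json (illustrative)
{
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-python.debugpy",
        "esbenp.prettier-vscode"
      ],
      "settings": {
        "python.defaultInterpreterPath": "/opt/odoo/venv/bin/python",
        "editor.formatOnSave": true
      }
    }
  },
  "forwardPorts": [8069]
}
```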
What a developer actually does
From a developer’s point of view, the workflow is reduced to:
- Clone the repository
- Open it in the editor
- Reopen in container
At that point, the environment is fully operational:
- containers are running
- the workspace is mounted
- tools are available
- configuration is in place.
The Dev Container does not add new concepts; it formalizes and automates existing ones.
Day-to-day developer workflow
Once the environment is up and running, daily development happens entirely inside the container.
There is no distinction between “local” and “container” workflows — the container is the development environment.
Running the application
The application is started directly from the editor using a predefined launch configuration.
This allows:
- running the server in a controlled way
- attaching the debugger immediately
- restarting the process without rebuilding containers.
From the developer’s perspective, this feels no different from working in a local virtual environment — except the environment is fully reproducible.
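A minimal launch configuration along these lines, assuming the paths described earlier; flags and file locations are examples, not the template's exact launch.json:

```jsonc
// .vscode/launch.json (illustrative sketch)
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Odoo: run server",
      "type": "debugpy",
      "request": "launch",
      "program": "/workspace/odoo/odoo-bin",
      "args": ["-c", "/workspace/conf/odoo.conf", "--dev=reload"],
      "console": "integratedTerminal",
      "justMyCode": false   // allows stepping into framework code
    }
  ]
}
```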
Working with code
All code lives under /workspace and is immediately visible to the editor.
Typical tasks include:
- implementing features in extra_addons/
- inspecting or temporarily adjusting framework code under odoo/
- navigating configuration in conf/.
There is no need to switch contexts or open multiple folders.
Everything relevant to development is available in a single workspace.
Updating the database and modules
Common repetitive tasks are handled through project scripts.
Updating modules or applying changes to the database is done via explicit commands rather than ad-hoc shell invocations. This keeps workflows consistent and reduces accidental mistakes.
Scripts are version-controlled alongside the project, making behavior transparent and reviewable.
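As an example of the idea, a module-update helper can be as small as the following; names, defaults, and paths are hypothetical:

```bash
#!/usr/bin/env bash
# scripts/update_module.sh (illustrative helper, not the project's exact script)
# Usage: scripts/update_module.sh <module_name> [database]
set -euo pipefail

MODULE="${1:?Usage: update_module.sh <module_name> [database]}"
DB="${2:-odoo_dev}"

# One-off update that stops after init, so the regular server can be restarted cleanly.
/workspace/odoo/odoo-bin \
    -c /workspace/conf/odoo.conf \
    -d "$DB" \
    -u "$MODULE" \
    --stop-after-init
```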
Debugging
Debugging is one of the main reasons for keeping everything inside the same workspace.
Breakpoints can be set:
- in project code
- in framework code
- across multiple modules.
Because the editor, runtime, and source code all live in the same environment, stepping through execution is predictable and stable.
Running tests
Tests are executed inside the same container and against the same environment used for development.
This avoids the common situation where:
- tests pass locally but fail elsewhere
- behavior differs due to mismatched dependencies.
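A typical invocation might look like this; the module and database names are placeholders:

```bash
# Run the tests of a single addon against a throwaway database,
# inside the same container used for development.
/workspace/odoo/odoo-bin \
    -c /workspace/conf/odoo.conf \
    -d test_db \
    -i my_module \
    --test-enable \
    --stop-after-init
```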
No hidden state
All state that matters is either:
- in version-controlled files
- or in explicitly managed volumes.
Restarting containers does not break the workflow, and rebuilding the environment does not require manual reconfiguration.
The environment is designed to be disposable, not fragile.
Trade-offs and scope
This setup is intentionally opinionated.
It optimizes for development experience, reproducibility, and onboarding speed, not for minimalism or production deployment.
A few important boundaries to be aware of:
- This environment is meant for development, not production. Security hardening, resource tuning, and deployment concerns are deliberately out of scope.
- Docker Compose is used as a pragmatic runtime definition. This is not a Kubernetes-first setup and does not try to simulate production orchestration.
- Images are not minimal. Stability, debuggability, and predictable tooling take priority over image size.
- The approach assumes a long-lived codebase. For quick experiments or throwaway prototypes, this structure may be unnecessarily heavy.
These trade-offs are explicit.
The goal is not to cover every possible scenario, but to provide a reliable default for teams working on real projects over time.
Conclusion
This setup emerged from a practical need: working on multiple projects, across multiple Odoo versions, without constantly rebuilding development environments from scratch.
By separating concerns clearly:
- a reusable infrastructure image
- a consistent project layout under /workspace
- and Dev Containers as the entry point
the development environment becomes predictable and disposable rather than fragile and stateful.
The biggest gain is not Docker itself, but consistency:
- consistent tooling
- consistent paths
- consistent workflows.
Once established, the environment fades into the background and lets developers focus on code instead of setup.
Both repositories are intentionally simple and open to adaptation.
If this approach resonates with your workflow, feel free to reuse it, adjust it, or build your own variant on top of the same principles.