I recently experimented with C++ development inside Docker containers on my Linux machine. Here's why I use Docker:
- Isolated environments with self-contained C++ compilers and libraries
- Reproducible builds that can be easily replicated on fresh systems
- Easy cleanup of project-related files when they're no longer needed
Docker images
Since I develop C++ projects for various target systems, I created multiple Docker images, each dedicated to a specific target:
- cppdev: Essential build tools and programs (gcc, cmake, git, etc.) for native Linux compilation
- mingwdev: Essential build tools for cross-compiling to Windows using mingw-w64
- devkitpro: Complete devkitPro installation for homebrew development on consoles like the Nintendo DS or Nintendo Switch
- n64dev: Development environment for the Nintendo 64 using libdragon
For most images, I differentiate between two tags: "essential" (containing only the compiler and essential programs) and "latest" (which includes commonly used packages like SDL2 pre-installed).
This is my Dockerfile for cppdev:essential:
# docker build -t cppdev:essential .
FROM ubuntu
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
cmake \
curl \
git \
nano \
vim \
wget \
&& rm -rf /var/lib/apt/lists/*
And this is my Dockerfile for cppdev:latest, based on the image above:
# docker build -t cppdev:latest .
FROM cppdev:essential
RUN apt-get update && apt-get install -y --no-install-recommends \
libfmt-dev \
libglm-dev \
libsdl2-dev \
&& rm -rf /var/lib/apt/lists/*
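Since cppdev:latest is built FROM cppdev:essential, the essential image needs to exist locally before the latest one can be built. As a minimal sketch (assuming the two Dockerfiles live in folders named after their tags and that there is a hello.cpp to compile; both names are only placeholders), building and smoke-testing the images could look like this:
cd path/to/cppdev/essential
docker build -t cppdev:essential .
cd ../latest
docker build -t cppdev:latest .
# quick sanity check: compile a trivial program from the host inside the container
cd path/to/project
docker run --rm --mount type=bind,src="$(pwd)",dst=/usr/src -w /usr/src cppdev:latest \
g++ -o hello hello.cpp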
I follow a similar approach for all other target systems. Some Dockerfiles are more complex when they require downloading, compiling, and installing dependencies from source. Here's my setup for MinGW-W64:
# docker build -t mingwdev:essential .
FROM ubuntu
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
cmake \
curl \
git \
mingw-w64 \
nano \
vim \
wget \
&& rm -rf /var/lib/apt/lists/*
# docker build -t mingwdev:latest .
FROM mingwdev:essential
RUN cd /usr/src \
&& git clone https://github.com/libsdl-org/SDL.git \
&& cd SDL \
&& git checkout release-2.32.10 \
&& mkdir /usr/build \
&& cd /usr/build \
&& cmake -DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32 -DCMAKE_TOOLCHAIN_FILE=/usr/src/SDL/build-scripts/cmake-toolchain-mingw64-x86_64.cmake /usr/src/SDL \
&& cmake --build . --parallel \
&& cmake --install . \
&& cd / \
&& rm -rf /usr/src/SDL /usr/build
RUN mkdir -p /usr/local/share/cmake/toolchains
COPY MinGW.cmake /usr/local/share/cmake/toolchains/MinGW.cmake
RUN cd /usr/src \
&& git clone https://github.com/g-truc/glm.git \
&& cd glm \
&& git checkout 1.0.2 \
&& mkdir /usr/build \
&& cd /usr/build \
&& cmake -DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32 -DCMAKE_TOOLCHAIN_FILE=/usr/local/share/cmake/toolchains/MinGW.cmake \
-DGLM_BUILD_TESTS=OFF -DBUILD_SHARED_LIBS=OFF /usr/src/glm \
&& cmake --build . --parallel \
&& cmake --install . \
&& cd / \
&& rm -rf /usr/src/glm /usr/build
This is the MinGW.cmake toolchain file that I put next to the Dockerfile for the mingwdev:latest image:
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_SYSROOT "/usr/x86_64-w64-mingw32")
set(CMAKE_FIND_ROOT_PATH "${CMAKE_SYSROOT}")
set(CMAKE_INSTALL_PREFIX "${CMAKE_SYSROOT}")
find_program(CMAKE_C_COMPILER NAMES x86_64-w64-mingw32-gcc REQUIRED)
find_program(CMAKE_CXX_COMPILER NAMES x86_64-w64-mingw32-g++ REQUIRED)
find_program(CMAKE_RC_COMPILER NAMES x86_64-w64-mingw32-windres windres REQUIRED)
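Before using the image on a real project, a quick sanity check can confirm that the cross-compiler is available and that SDL2 and glm actually ended up in the MinGW sysroot. A minimal sketch, assuming the default install layout of both libraries under the prefix used above:
docker run --rm mingwdev:latest x86_64-w64-mingw32-g++ --version
docker run --rm mingwdev:latest ls /usr/x86_64-w64-mingw32/lib/cmake/SDL2
docker run --rm mingwdev:latest ls /usr/x86_64-w64-mingw32/include/glm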
To configure, build, and install an existing CMake-based project via the CLI, I typically follow these steps:
cd path/to/project
docker run -it --rm --mount type=bind,src="$(pwd)",dst=/usr/src -w /usr/src mingwdev:latest
# inside the container
mkdir /usr/build
cd $_
cmake -DCMAKE_TOOLCHAIN_FILE=/usr/local/share/cmake/toolchains/MinGW.cmake /usr/src
cmake --build . --parallel
cmake --install .
You may also want to mount a local folder to the build directory inside Docker, or copy the relevant build artifacts to your host system using another method.
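For example, a local build directory can be bind-mounted over /usr/build so the build artifacts survive the container. This is just a sketch, and the build-mingw directory name is made up:
cd path/to/project
mkdir -p build-mingw
docker run --rm \
--mount type=bind,src="$(pwd)",dst=/usr/src \
--mount type=bind,src="$(pwd)/build-mingw",dst=/usr/build \
-w /usr/build mingwdev:latest \
bash -c "cmake -DCMAKE_TOOLCHAIN_FILE=/usr/local/share/cmake/toolchains/MinGW.cmake /usr/src && cmake --build . --parallel"
The resulting binaries (for example the .exe files) then end up in build-mingw on the host.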
Visual Studio Code + dev containers
Visual Studio Code has an extension that enables developing inside a container. In my projects, I typically configure multiple dev containers for different targets.
# .devcontainer/mingwdev-container/Dockerfile
FROM mingwdev:latest
# RUN ... if you want to install certain programs or libraries that are only relevant for this specific project
# The base image happens to provide a user "ubuntu" with UID 1000,
# which matches the UID of my local user.
# Switch to this user if you want to avoid new files being owned by root.
USER ubuntu
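Whether the UIDs actually match is easy to verify; this is only a quick check, not part of the dev container configuration itself:
# on the host
id -u
# inside a container based on the image
docker run --rm mingwdev:latest id -u ubuntu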
// .devcontainer/mingwdev-container/devcontainer.json
{
    "name": "MinGW-W64",
    "build": {
        "dockerfile": "Dockerfile"
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "bbenoist.Doxygen",
                "GitHub.copilot-chat",
                "GitHub.copilot",
                "ms-vscode.cmake-tools",
                "ms-vscode.cpptools",
                "slevesque.shader",
                "sumneko.lua",
                "xaver.clang-format",
                "ZixuanWang.linkerscript"
            ]
        }
    }
}
Note that many VS Code extensions must be installed inside the container—this is what the customizations section handles. Adjust it according to your needs.
Once the dev containers are configured, you can reopen your project in a container (F1 or Ctrl+Shift+P > Dev Containers: Reopen in Container). The first time you reopen the project in a container, it may take a moment to prepare the image.
VS Code C/C++ configuration
One final consideration, especially when targeting multiple platforms: VS Code needs to know about your various configurations (include paths, defines, etc.) for IntelliSense to work properly. To set this up, create a .vscode/c_cpp_properties.json file and add all relevant configurations. For example:
{
    "configurations": [
        {
            "name": "Windows (MinGW-W64)",
            "includePath": [
                "${workspaceFolder}"
            ],
            "defines": [],
            "compilerPath": "/usr/bin/x86_64-w64-mingw32-gcc",
            "cStandard": "c11",
            "cppStandard": "c++17",
            "intelliSenseMode": "windows-gcc-x64"
        }
    ]
}
With multiple configurations, you can switch the active configuration in the bottom-right corner of VS Code when you have a C/C++ file open.