
Paul Clegg

Originally published at clegginabox.co.uk

Demystifying Docker


Fairly frequently I stumble across a post in /r/PHP that's asking a question. I answer the question in my head before the comments have loaded. The top comment typically has the same word that's in my head - Docker.

The most recent question asked - how do you run multiple versions of PHP in a project?

Docker was the most upvoted comment. The alternatives ranged from installing every version locally to Laravel Herd, Laragon, custom bash scripts and web server configurations.

None of those alternatives appeal to me personally - but I'm also not here to tell you you're wrong. If your setup works, it works.

These frequent questions made me wonder if what looks like an aversion to Docker is really just a lack of familiarity. The recent outcry over Sail being removed from Laravel reinforced this (Sail isn't much more than a Dockerfile and a thin CLI wrapper around docker compose). If people were upset about losing that, maybe Docker itself isn't the problem - it's just that nobody's shown them what's behind the curtain.

So allow me to explain why "Docker" is so often the answer.

But it works on my machine


Back when I started this job - the only viable option to run a project locally was to install everything locally. There were tools like WAMP and LAMP that would get you most of the way there, but if you needed multiple versions of PHP or MySQL - good luck to you.

If you had one project that used Apache, one that used NGINX, both bound to port 80 - you'd have to remember to turn one off and the other one on.

If your project had any kind of worker or consumer, you'd have to remember to start it up every morning, and remind your colleagues that the reason "it's not working" is because they forgot to do the same.

Onboarding a new developer meant a full day (if you were lucky) of "just follow the README" followed by "oh yeah, you also need to do this, and this, and this."

When it came to working with other developers? "It works on my machine". Someone would push a new feature, your setup would break and it would take you half a day to find the cause - a different version of a PHP extension you installed 3 years ago.

Eventually your machine would be such a maze of dependencies and versions you'd wipe it all, start again from scratch and pray you could remember every step you needed to take to get it working again.

I'm sure Laragon and Laravel Herd have come a long way since those days. But one problem remains: they're still abstractions that hide complexity.

If you're using Laragon, you're developing on Windows. Your application almost certainly deploys to Linux. Laragon teaches you nothing about that environment (ask me how I know). It won't tell you that your code is about to break production because of a case-sensitive filename mismatch. It won't catch that you've installed a new PHP extension on your machine but not in the cloud. "It works on my machine" is solved - but what happens when your machine isn't the one that matters?

What you really want is parity. When your local environment matches staging, matches production, matches every other developer's machine, an entire category of problems simply disappears.

Virtualisation


Virtual Machines have been around since the 1960s. If you run one yourself, it's a bit like operating system inception: a computer inside a computer. (If you spin up an EC2 instance or a Droplet, you're essentially doing this on a remote server).

However, using VMs for local development didn't really become standard practice until Vagrant came along.

You could run an actual Linux VM on your Windows or Mac machine, define it in a Vagrantfile, and suddenly everyone on the team had the same environment. Pair it with something like Ansible and you could automate the provisioning. Setting up a new project moved from a day of frustration to typing vagrant up.

If you used the same Ansible playbooks for staging and production, you finally had environment parity.
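A typical setup was only a few lines. Here's a sketch of what a Vagrantfile might look like (the box name, port numbers and playbook path are illustrative, not from any real project):

```ruby
# Vagrantfile - everyone on the team boots the exact same base box
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  # Forward the guest's web server port to the host
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Provision with the same Ansible playbook used for staging/production
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```

Commit that file, tell a new starter to run vagrant up, and they'd get the same machine as everyone else.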

But it was slow. You were booting a full OS inside your own OS. It required gigabytes of RAM per project and minutes to start up. Every provisioning tweak meant waiting five minutes just to see if it worked.

Containerisation


The next step in this evolution was containers. You can think of containers as lightweight VMs. They are typically used to package up an application with a minimal version of an operating system.

By minimal I mean tiny: Debian is around 50MB, Ubuntu around 30MB, and Alpine Linux under 5MB.

$ docker run -it --name my-alpine alpine:latest /bin/sh
$ docker stats my-alpine

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
e1348853906b my-alpine 0.01% 1MiB / 19.56GiB 0.00% 1.13kB / 126B 639kB / 0B 1

Running a shell on Alpine Linux on a Mac - A 4MB download, less than 0.01% CPU usage and 1MB RAM. It started in under a second. No waiting for an OS to boot. No provisioning. Just running.

To me, the most impressive thing? I just ran a shell inside another operating system with a single command.

$ cat /etc/os-release
cat: /etc/os-release: No such file or directory

$ docker run --rm alpine:latest cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.23.2
PRETTY_NAME="Alpine Linux v3.23"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

⚠️

Calling containers lightweight VMs isn't technically accurate, but it works as a mental model. In reality, containers share the host's kernel and isolate processes using namespaces and cgroups, whereas a VM boots an entire guest operating system.

Because containers are so fast and so lightweight they open up new ways of working.

Remember the question of running multiple PHP versions?

# docker.php

<?php
echo "Hello from PHP " . phpversion() . "\n";

I created the file above on my local machine.

$ docker run --rm \
  -v "$PWD":/app \
  -w /app \
  php:7.4-cli php docker.php

Hello from PHP 7.4.33 
$ docker run --rm \
  -v "$PWD":/app \
  -w /app \
  php:8.5-cli php docker.php

Hello from PHP 8.5.1

The PHP images are larger: around 180MB. So not as instant on a slow connection. But I didn't install PHP 7.4. I didn't install PHP 8.5. No config changes, no messing with my shell. I asked Docker to run my script with those versions, then throw the containers away (--rm).

For those unfamiliar with Docker, let's demystify what just happened.

  • docker run: Hey Docker, start a container
  • --rm: When it's done, remove it, don't clutter up my machine
  • -v "$PWD":/app: Map the current directory on my local machine to the /app directory inside the container. This is the key bit - my machine has the file to run, the container has PHP
  • -w /app: Set the working directory inside the container
  • php:8.5-cli: Use this specific image; if I don't already have it, download it
  • php docker.php: The command to run (from the /app directory)

You can repeat this same process with loads of things.

Run composer install even without PHP installed on a machine.

$ docker run --rm \
    -v "$PWD":/app \
    composer/composer:latest install

Have a project that only works with older versions of Node?

$ docker run --rm \
  -v "$PWD":/app \
  -w /app \
  node:18-alpine npm run build

You don't need to install ffmpeg just to convert a MOV to an MP4.

$ docker run --rm \
    -v "$PWD":/tmp \
    mwader/static-ffmpeg:latest \
    -i demo.mov demo.mp4

Anyone reading this article could create the same docker.php, run the same docker run commands, and get precisely the same output, every time. Not all that impressive for a single echo statement, but the same holds whether it's a single echo or an entire framework.

This is great if someone has already built a container image with what you need. That's not always the case though. Thankfully it's easy enough to make your own.

Dockerfile

There's nothing inherently special about any of the images I used above.

You could create your own version of php:8.5-cli with a single Dockerfile:

FROM debian:trixie

# Install the compilers and libraries we need
RUN apt update && apt install -y pkg-config build-essential autoconf bison re2c libxml2-dev libsqlite3-dev

# Download the PHP 8.5 source code
ADD https://www.php.net/distributions/php-8.5.1.tar.gz /tmp/php.tar.gz

# Extract, Configure, Compile, and Install
RUN tar -xf /tmp/php.tar.gz -C /tmp \
    && cd /tmp/php-8.5.1 \
    && ./configure --disable-all --enable-cli --with-sqlite3 \
    && make -j $(nproc) \
    && make install

⚠️

Note: This is a drastic oversimplification for educational purposes. The official images on Docker Hub are maintained by experts - use them.

$ docker build -t my-php .
$ docker run my-php php -v

PHP 8.5.1 (cli) (built: Jan 9 2026 19:50:18) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.5.1, Copyright (c) Zend Technologies
    with Zend OPcache v8.5.1, Copyright (c), by Zend Technologies

Unlike Vagrant, where your starting point was always a blank operating system that you had to provision, Docker containers can be anything: from a minimal Alpine Linux OS to a full AI inference system like Whisper.

Installing PHP from source isn't something you're likely to be doing, so here's a more realistic use case.

The official PHP images don't come with every possible extension installed (for obvious reasons I hope). If you're using PHP in a project, you'll likely need to install some yourself.

If you try to install extensions manually, you often hit a wall of missing system libraries. You want gd? You need libpng-dev first. You want zip? You need libzip-dev first.
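The manual route looks something like this. This is only a sketch (the choice of extensions is just an example), using the docker-php-ext-install helper that the official PHP images ship with:

```dockerfile
FROM php:8.5-cli

# The hard way: install each extension's system libraries first,
# then compile the extensions with the helper from the official image
RUN apt-get update && apt-get install -y libpng-dev libzip-dev \
    && docker-php-ext-install gd zip
```

It works, but you have to know (or trial-and-error your way to) the right -dev package for every extension.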

Thankfully mlocati/php-extension-installer makes it simple.

FROM php:8.5-cli

# https://github.com/mlocati/docker-php-extension-installer
COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/local/bin/

# Install PHP extensions
RUN install-php-extensions bcmath opcache zip intl pcntl sockets pdo_pgsql redis curl

Run docker build or reference the Dockerfile from your compose.yaml and you've extended the official PHP 8.5 image by installing some extensions.
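For the compose route, a minimal compose.yaml that builds from that Dockerfile might look like this (the service name and paths are illustrative):

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    working_dir: /app
```

Now docker compose up builds the image if needed and starts the container, with your project directory mounted inside it.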

There is one other massive benefit to building your own images: Time.

If you have ever tried to install the grpc extension, you know the pain: it can take 15+ minutes to compile on a modern machine. Nobody needs to sit around waiting for that.

The beauty of Docker is that you can build the image once, push it to a registry (like Docker Hub), and then pull the finished product instantly.

I actually got so fed up with waiting for grpc to compile that I created a repository to pre-build it for every PHP version. Now, instead of waiting 15 minutes, I just change one line:

# Instead of compiling from scratch...
# FROM php:8.4-cli

# I use my pre-built image with gRPC already inside:
FROM clegginabox/php-grpc:8.4-cli

15 minutes saved every time I set up a project or edit the Dockerfile.


So why does any of this matter?

When you commit a Dockerfile to a repository, you aren't just saying "here's my code, good luck running it." You're saying "here is my code, and here is the exact environment it runs in."

If you've lived through the pain of inconsistent environments - the wasted hours, the "but it works on my machine", the slow VMs - Docker is the payoff.

So next time someone says "just use Docker," they're not being dismissive. They probably just remember what it was like before.
