Running a DevOps bootcamp in Africa taught me that the gap between knowing Docker commands and understanding containers is wider than most curricula admit.
I co-run a training program called CloudOps Academy. Over the past year, we've taken about thirty engineers through a structured DevOps curriculum. Most of them come from software development backgrounds. They can write code. They can deploy to Heroku or Railway. Some of them have even "used Docker before."
And yet, when we get to the Docker module, almost every single one hits the same walls. Not because they aren't smart, but because the way Docker is typically taught — run this command, here's a Dockerfile, now build and push — skips the mental model that makes it all click.
I want to walk through the three places where my students consistently stumble, because if you're learning Docker or teaching it, you're probably hitting these too.
Stumbling Block 1: They Think Containers Are Lightweight VMs
This is the most common misconception, and it's the most damaging. Almost every student comes in with the idea that a container is a small virtual machine. They've read blog posts that describe it that way. They've seen diagrams with neat boxes drawn around containers, looking exactly like VM diagrams but thinner.
The problem with this mental model is that it leads to wrong decisions. If you think a container is a VM, you'll try to SSH into it. You'll install debugging tools inside it. You'll treat it as a persistent environment where state accumulates. You'll run multiple processes inside one container because "that's what you do in a VM."
I spend the first hour of the Docker module on a single idea: a container is a process. It's a regular Linux process with a restricted view of the system. It can only see its own filesystem (thanks to mount namespaces), its own network stack (network namespaces), and its own process tree (PID namespaces). But it's still just a process running on the host kernel.
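You can demonstrate this on any Linux host in about thirty seconds. Here's a minimal sketch (the container name `sleeper` is mine, not part of the curriculum): start a container, then look for it in the host's process list.

```shell
# Start a container that just sleeps
docker run -d --name sleeper alpine sleep 300

# On a Linux host, the container's process shows up in the
# host's regular process table — it's just a process
ps -eo pid,comm | grep sleep

# Clean up
docker rm -f sleeper
```

The `sleep` process appears in the host's `ps` output because there is no guest kernel, no hypervisor — just a namespaced process.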
When this clicks, everything else follows. You stop treating containers as places you log into. You start treating them as things you throw away and recreate. You understand why data needs to live in volumes, not inside the container filesystem. You understand why a container that runs two processes is an antipattern, not a convenience.
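The volume point is easy to demonstrate too. A sketch, with a hypothetical volume name: data written into a named volume survives the container that wrote it.

```shell
# Create a named volume and write into it from a throwaway container
docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/msg'

# The first container is gone, but a brand-new one sees the data
docker run --rm -v demo-data:/data alpine cat /data/msg
# → hello
```

The container is disposable; the volume is not. That separation is the whole point.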
Stumbling Block 2: They Can Write a Dockerfile But Can't Debug a Build
Copy-pasting a Dockerfile from a tutorial is easy. Knowing what to do when the build fails on step 7 of 12 is a completely different skill.
Most of my students, when a Docker build fails, do one of two things: they Google the error message and try the first Stack Overflow answer, or they start over with a different base image. Neither approach builds understanding.
What I teach instead is layer thinking. Every line in a Dockerfile creates a layer. When a build fails, I want my students to identify which layer failed, run a shell in the last successful layer, and manually execute the failing command inside that environment. This is how you figure out that the apt-get install failed because you forgot to run apt-get update first in the same RUN instruction. Or that the npm install failed because the WORKDIR wasn't set and you're installing into the root directory.
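The mechanics of "getting into the last successful layer" look roughly like this. One caveat I flag for students: the classic builder prints intermediate layer IDs, while BuildKit (the modern default) hides them, so I have them disable BuildKit for this exercise. The layer ID below is hypothetical.

```shell
# Use the classic builder so intermediate layer IDs are printed
DOCKER_BUILDKIT=0 docker build -t myapp .
# ...build fails at step 7; the last successful step printed
# something like "---> 3f2c1ab9d0e4" (hypothetical ID)

# Start a shell in that intermediate layer
docker run -it 3f2c1ab9d0e4 sh

# Inside, re-run the failing command by hand and read the real error,
# e.g. apt-get install fails because the package index was never fetched:
apt-get install -y curl
# ...then confirm the fix: update and install in the same RUN instruction
apt-get update && apt-get install -y curl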
The exercise I give them is deliberately broken. I hand them a Dockerfile with four bugs: a missing dependency, a wrong COPY path, a port mismatch between EXPOSE and what the app actually listens on, and a CMD that references a file that doesn't exist in the container. They have to fix all four without Googling. Just docker build, read the error, get into the layer, poke around, fix it, and rebuild.
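To make the exercise concrete, here is a sketch of what such a broken Dockerfile might look like. This is illustrative only — the file names, ports, and app are hypothetical, not the actual exercise handout (the real one doesn't come with the bugs annotated).

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json .
# Bug 1 (missing dependency): the app imports a package
# that package.json doesn't list, so npm install won't fetch it
RUN npm install
# Bug 2 (wrong COPY path): the source actually lives in ./src
COPY app/ .
# Bug 3 (port mismatch): the app listens on 3000, not 8080
EXPOSE 8080
# Bug 4 (missing file): the entrypoint is server.js, not index.js
CMD ["node", "index.js"]
```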
It takes most students about 90 minutes. After that, they stop being afraid of build failures.
Stumbling Block 3: Networking Feels Like Black Magic
Docker networking confuses everyone. I've been doing this professionally for years and I still occasionally get tripped up by it.
The specific thing my students can't wrap their heads around: when you run two containers, they can't talk to each other by default. This surprises developers who are used to running services on localhost. In their mental model, everything on the same machine can talk to everything else. But containers have isolated network namespaces. Each one gets its own localhost.
I teach networking through a concrete scenario: a Node.js API that needs to talk to a Redis container. I walk them through three stages. First, I show them that using localhost:6379 from the Node container fails. Then I show them that putting both containers on the same Docker bridge network and using the container name as a hostname works. Then I show them the same setup in a docker-compose.yml, where the networking is implicit.
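The first two stages can be reproduced with nothing but the Redis image, since it ships with `redis-cli`. A sketch — the container and network names (`redis-server`, `appnet`) are mine:

```shell
# Stage 1: start Redis, then try to reach it from a second
# container via localhost. Each container has its own network
# namespace, so this fails.
docker run -d --name redis-server redis:7
docker run --rm redis:7 redis-cli -h 127.0.0.1 ping
# → Could not connect to Redis at 127.0.0.1:6379: Connection refused

# Stage 2: put both containers on the same user-defined bridge
# network. Docker's embedded DNS now resolves the container name.
docker network create appnet
docker network connect appnet redis-server
docker run --rm --network appnet redis:7 redis-cli -h redis-server ping
# → PONG
```

Name resolution only works on user-defined networks, not the default bridge — which is exactly the kind of detail that stays invisible until you run it yourself.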
The aha moment is always the second stage. When they see that redis-server as a hostname resolves to the Redis container's IP because Docker's internal DNS maps container names to IPs on the same network — that's when the fog lifts. From that point, they can reason about multi-container setups without memorizing commands.
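The third stage — the compose version — might look like this sketch (service names and ports are hypothetical). Compose puts every service on a shared default network, so the service name is the hostname with no extra configuration:

```yaml
# Hypothetical docker-compose.yml for the API + Redis pair
services:
  api:
    build: .
    environment:
      REDIS_HOST: redis   # the service name doubles as the hostname
    ports:
      - "3000:3000"
  redis:
    image: redis:7
```

Once students have seen stage two by hand, this file stops being magic: the implicit network is just the bridge network they created manually, with the naming done for them.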
What This Taught Me About Teaching
The gap between "I can run docker build" and "I understand what Docker is doing" takes about two weeks of focused work to close. Not two hours. Not a weekend tutorial.
The biggest mistake I see in DevOps education is treating tools as checklists. Run this command. Get this output. Move to the next tool. That produces engineers who can follow instructions but freeze when something unexpected happens.
I'd rather produce engineers who can fix a broken Dockerfile in 90 minutes than engineers who can recite the docker CLI from memory. One of those skills survives contact with production. The other doesn't.
Akum Blaise Acha is a Senior DevOps and Platform Engineer at Gozem and co-founder of CloudOps Academy, a DevOps training program building cloud engineering talent across Africa. He writes from Buea, Cameroon.