Have you ever wondered what actually happens under the hood when you run docker run? How do containers "talk" to each other while staying isolated?
Lately, I've been taking a "from scratch" approach to understanding container networking. I didn't start with Docker. I started with the Linux Kernel.
Here is the story of my journey through the layers of modern infrastructure.
🏗️ Phase 1: The Hard Way (Linux Primitives)
Before touching a single Dockerfile, I built a microservices environment using raw Linux commands.
The Ingredients:
- Network Namespaces: To create isolated network stacks for each service.
- Veth Pairs: Think of these as virtual "ethernet cables" connecting namespaces.
- Linux Bridges: Acting as virtual switches to manage traffic between services.
- Iptables & Routing: Manually configuring NAT and firewall rules to secure the frontend, backend, and database tiers.
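To give a flavour of what that involves, here is a minimal sketch of the kind of commands used for one service tier. The names, subnets, and ports below are placeholders for illustration, not my actual setup scripts:

```bash
# Isolated network stack for the "backend" tier
ip netns add backend

# A veth pair acts as a virtual ethernet cable: one end stays on the host,
# the other is moved into the namespace
ip link add veth-be type veth peer name veth-be-ns
ip link set veth-be-ns netns backend

# A Linux bridge plays the role of the virtual switch
ip link add br0 type bridge
ip link set br0 up
ip link set veth-be master br0
ip link set veth-be up

# Address the bridge (host side) and the namespace side, then bring them up
ip addr add 10.0.0.1/24 dev br0
ip netns exec backend ip addr add 10.0.0.2/24 dev veth-be-ns
ip netns exec backend ip link set veth-be-ns up
ip netns exec backend ip link set lo up

# Routing + NAT so the namespace can reach the outside world
ip netns exec backend ip route add default via 10.0.0.1
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE

# Example firewall rule: only the frontend subnet may talk to the database port
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.0.3 -p tcp --dport 5432 -j ACCEPT
```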
The Lesson: Building the network manually teaches you exactly how much "magic" Docker does for us. It's brittle and complex to maintain by hand, but the raw performance is impressively fast.
🐳 Phase 2: Optimization with Docker
Next, I migrated the stack to Docker. But I didn't just wrap the apps; I optimized them for production-grade reliability.
Key Highlights:
- Multi-Stage Builds: Shipping only a python:3.11-slim final stage to keep each image under 150MB.
- Embedded Healthchecks: Ensuring the API Gateway only routes traffic to "ready" services.
- Resource Constraints: Pinning CPU and Memory (e.g., 0.5 vCPU, 128MB RAM) to prevent noisy neighbor scenarios.
- Internal DNS: Leveraging Docker's embedded DNS for seamless service discovery.
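Here is a rough sketch of how those pieces fit together on the command line. The image names, ports, and healthcheck endpoint are placeholders, not the exact configuration from the project:

```bash
# Illustrative multi-stage build: dependencies are installed in the full image,
# and only the slim runtime layer ships in the final image
cat > Dockerfile <<'EOF'
FROM python:3.11 AS builder
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

FROM python:3.11-slim
COPY --from=builder /install /usr/local
COPY app/ /app/
CMD ["python", "/app/main.py"]
EOF
docker build -t demo/backend:latest .

# User-defined network: containers attached to it resolve each other by name
# through Docker's embedded DNS server (127.0.0.11 inside each container)
docker network create appnet

# Backend with a healthcheck and pinned resources (assumes curl exists in the image)
docker run -d --name backend --network appnet \
  --cpus 0.5 --memory 128m \
  --health-cmd "curl -f http://localhost:8000/health || exit 1" \
  --health-interval 10s --health-retries 3 \
  demo/backend:latest

# The gateway simply calls http://backend:8000 -- no hard-coded IPs anywhere
docker run -d --name gateway --network appnet \
  --cpus 0.5 --memory 128m -p 8080:8080 \
  demo/gateway:latest

# Verify the healthcheck reports "healthy" before routing traffic
docker inspect --format '{{.State.Health.Status}}' backend
```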
🌐 Phase 3: The Distributed Horizon (Multi-Host Networking)
The final challenge was scaling across two separate Linux hosts. This is where things got really interesting.
The Strategy:
- VXLAN Overlay: I manually established a VXLAN tunnel to bridge two different VM network segments.
- Docker Swarm: I initialized a Swarm cluster and deployed our stack using the overlay network driver.
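A condensed sketch of both steps; the IPs, VXLAN ID, and interface names are placeholders:

```bash
# --- Manual VXLAN tunnel between host A (192.168.1.10) and host B (192.168.1.20) ---
# On host A: encapsulate traffic for the 10.200.0.0/24 overlay inside UDP/4789
ip link add vxlan0 type vxlan id 42 remote 192.168.1.20 dstport 4789 dev eth0
ip addr add 10.200.0.1/24 dev vxlan0
ip link set vxlan0 up
# (mirror the commands on host B with remote 192.168.1.10 and 10.200.0.2/24)

# --- Docker Swarm with the overlay driver (which handles VXLAN encapsulation for you) ---
docker swarm init --advertise-addr 192.168.1.10               # on the manager
docker swarm join --token <worker-token> 192.168.1.10:2377    # on the worker
docker network create -d overlay app-net                      # back on the manager
docker service create --name backend --replicas 2 --network app-net demo/backend:latest
```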
The Result: A resilient, self-healing system. Even when my Manager node faced storage issues, Swarm's orchestration automatically failed over the services to the healthy Worker node. That is the power of distributed systems.
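You can reproduce that behaviour deliberately by draining a node and watching Swarm reschedule the tasks. The service name and node ID below are placeholders:

```bash
# See which node each task is running on
docker service ps backend

# Simulate losing the manager's capacity: drain it so no new tasks land there
docker node update --availability drain <manager-node-id>

# Re-check: Swarm reschedules the tasks onto the healthy worker
docker service ps backend
```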
📊 Performance Benchmark: Linux vs. Docker
I ran load tests using Apache Bench (AB) and the results were eye-opening:
- Linux Namespaces: ~54 Requests Per Second (RPS)
- Docker Compose: ~29 Requests Per Second (RPS)
While Docker's throughput was about 46% lower than the raw Linux setup, its portability, ease of scaling, and management tooling make it the clear winner for modern software engineering.
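For reference, the numbers came from simple Apache Bench runs of this shape; the request count, concurrency, and endpoint shown here are illustrative:

```bash
# 1,000 requests, concurrency of 10, against the public endpoint of each setup
ab -n 1000 -c 10 http://<gateway-ip>:8080/
# The "Requests per second" line in the output is the RPS figure quoted above
```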
💡 Key Takeaway
Understanding the "low-level" doesn't mean you should always build from scratchβit means you know exactly what to do when your high-level tools break.
If you're a DevOps or Backend Engineer, I highly recommend spending a day building a bridge and a namespace from scratch. It will change how you look at your docker-compose.yml forever.
I've documented the entire process, including setup scripts and architecture diagrams, in my GitHub repo.
Check out the full project here: GitHub