POTHURAJU JAYAKRISHNA YADAV
🐳 Containerizing a Python FastAPI Application with Docker (and Solving ARM vs x86 Architecture Issues)

When working with containerized applications, Docker usually makes deployments predictable and consistent. Once everything is packaged inside a container, the expectation is simple: “If it works on my machine, it should work everywhere.”

However, real-world environments sometimes introduce subtle issues — especially when applications are built on machines with different CPU architectures, such as x86 (AMD64) and ARM64.

Recently, I was containerizing a small Python FastAPI application, and I noticed something interesting. The Docker image worked perfectly on my Ubuntu server but behaved differently on another machine. After some investigation, the root cause turned out to be architecture differences between systems.

In this article, I'll walk through the entire process:

  • Containerizing a FastAPI application with Docker
  • Running the application using Docker Compose
  • Understanding ARM vs x86 architecture differences
  • Troubleshooting Docker daemon issues
  • Building multi-architecture images

If you work with Python, FastAPI, Docker, or cloud deployments, this guide will help you avoid some common pitfalls.

📁 Project Structure

For this example, we'll use a simple API service called task-api-service.

The project structure looks like this:

```
task-api-service/
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── main.py
└── app/
```

The FastAPI application exposes a REST API running on port 8080.

🐍 Step 1 — Writing the Dockerfile

The first step is creating a Dockerfile to containerize the application.

We start with the official Python slim image, which provides a lightweight base environment.

```dockerfile
FROM python:3.10-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update && apt-get install -y \
    libgl1 \
    libglib2.0-0 \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .

RUN pip install --upgrade pip \
    && pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```

Why these optimizations?

There are a few small improvements here that make the container more production-friendly.

1️⃣ Prevent Python cache files

```
PYTHONDONTWRITEBYTECODE=1
```

This prevents .pyc files from being created inside the container.

2️⃣ Better logging

```
PYTHONUNBUFFERED=1
```

This ensures Python's output is flushed immediately instead of buffered, so it appears in `docker logs` in real time.

3️⃣ Removing package cache

```
rm -rf /var/lib/apt/lists/*
```

This reduces the final image size.

🏗 Step 2 — Building the Docker Image

Once the Dockerfile is ready, we can build the image.

```bash
docker build -t task-api-service .
```

After building, verify the image:

```bash
docker images
```

You should see the new image listed.

🚀 Step 3 — Running the Container

Now we can run the container.

```bash
docker run -p 8088:8080 task-api-service
```

The application will now be accessible at `http://localhost:8088`.

Docker maps host port 8088 → container port 8080 (the `-p <host>:<container>` syntax).

⚙️ Step 4 — Using Docker Compose

Managing containers manually becomes inconvenient as applications grow.

This is where Docker Compose becomes useful.

Create a file called docker-compose.yml.

```yaml
version: "3.9"

services:
  fastapi-app:
    container_name: fastapi-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8088:8080"
    volumes:
      - .:/app
    restart: unless-stopped
```

📂 Why Use Volumes?

```yaml
volumes:
  - .:/app
```

This mounts the local project directory into the container.

Benefits include:

  • Instant reflection of code changes
  • Faster development workflow
  • No need to rebuild images repeatedly

If you run Uvicorn with the `--reload` flag (not enabled in the Dockerfile's CMD above), the server automatically restarts when files change.
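
For development, one way to enable auto-reload is to override the container command in Compose. This is a sketch, not part of the original setup; adjust the service name and module path to match your project:

```yaml
services:
  fastapi-app:
    # Overrides the Dockerfile CMD for development only
    command: uvicorn main:app --host 0.0.0.0 --port 8080 --reload
```

Combined with the bind mount above, edits on the host trigger a restart inside the container.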

▶️ Running with Docker Compose

Start the application:

```bash
docker compose up --build
```

Run it in the background:

```bash
docker compose up -d
```

Stop the containers:

```bash
docker compose down
```

⚠️ The Architecture Problem (ARM vs x86)

Everything worked perfectly on my Ubuntu machine.

To check the system architecture, I ran:

```bash
uname -m
```

Output:

```
x86_64
```

This means the system uses the x86-64 (AMD64) architecture.

However, many modern systems now use ARM architecture, including:

  • Apple Silicon Macs
  • AWS Graviton instances
  • Raspberry Pi servers

Running the same command on those systems usually returns:

```
arm64
```

(On many ARM Linux distributions, including those on Graviton instances, it is reported as `aarch64` instead.)

This difference can cause compatibility issues when Docker images are built on one architecture and run on another.
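
You can also check the architecture from inside Python, which is handy when debugging a running container. This uses only the standard library:

```python
import platform

# Reports the machine type of the host the interpreter runs on,
# e.g. "x86_64" on AMD64 systems or "arm64"/"aarch64" on ARM systems.
arch = platform.machine()
print(arch)
```

Running this inside the container tells you which architecture the image was actually built for, regardless of what the host reports.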

🔧 Forcing a Specific Docker Platform

Docker allows specifying the target architecture during builds.

For example:

```bash
docker build --platform linux/arm64 -t task-api-service .
```

This forces Docker to build an ARM-compatible image.

You can also define the platform inside Docker Compose.

```yaml
services:
  fastapi-app:
    build:
      context: .
    platform: linux/arm64
    ports:
      - "8088:8080"
```

🛠 Troubleshooting Docker Daemon Issues

During testing, I encountered an error like this:

```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
```

This usually means the Docker daemon is not running.

To verify Docker status:

```bash
sudo systemctl status docker
```

If the service is inactive, restart it:

```bash
sudo systemctl restart docker
```

Then confirm Docker is working:

```bash
docker ps
```

🔐 Fixing Docker Permission Issues

Another common issue occurs when Docker commands require sudo.

This happens because the user is not part of the `docker` group.

To fix it:

```bash
sudo usermod -aG docker $USER
```

Then apply the new group membership in the current shell (or log out and log back in):

```bash
newgrp docker
```

Now Docker commands can run without sudo.

🌍 Best Practice: Multi-Architecture Docker Images

Instead of building separate images for ARM and x86 systems, Docker allows multi-architecture builds.

Using Docker Buildx (if you haven't set one up yet, create and select a builder first with `docker buildx create --use`):

```bash
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myrepo/task-api-service:latest \
  --push .
```

This creates a single image tag compatible with multiple architectures; you can verify the per-platform manifests with `docker buildx imagetools inspect myrepo/task-api-service:latest`.

Official Docker images such as:

  • Python
  • Node.js
  • Nginx

all use this approach.

🧠 Final Thoughts

Docker makes it easy to package and deploy applications consistently, but architecture differences can sometimes introduce unexpected issues.

A few key lessons from this experience:

  • Always check system architecture using `uname -m`
  • Use Docker Compose for easier container management
  • Ensure the Docker daemon is running before troubleshooting builds
  • Consider multi-architecture images for production deployments

By understanding these concepts, you can ensure your containerized applications run smoothly across different environments, cloud platforms, and development machines.
