How I Reduced Docker Pull Time from 3 Minutes to 3 Seconds

Optimizing Docker performance often starts with small changes that lead to massive results. Slow image pulls can severely impact developer productivity, CI/CD efficiency, and deployment times. In my environment, pulling a single image used to take close to three minutes. After a series of systematic optimizations, I reduced that time to just three seconds.

This article walks through both the practical steps I followed and general best practices that can help any team achieve similar improvements.

Understanding the Bottleneck

When analyzing CI/CD pipelines, I discovered that over 30% of the total job time was spent pulling Docker images from a remote registry. The image was over 1 GB in size and included build dependencies, unused tools, and large OS layers.

Running:

time docker pull myregistry.com/app/backend:latest

showed sequential layer downloads and poor caching utilization. The problem wasn’t Docker itself but rather image design, caching strategy, and network latency.
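For reference, two standard Docker commands break down where the size comes from (shown here with the same image tag):

# List each layer with the instruction that created it and its size
docker history --no-trunc myregistry.com/app/backend:latest

# Total local image size in bytes
docker image inspect --format '{{.Size}}' myregistry.com/app/backend:latest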

Step 1: Optimize the Docker Image (Java Example)

The original Dockerfile was straightforward but inefficient:

FROM openjdk:11
COPY . /app
WORKDIR /app
RUN ./gradlew build
CMD ["java", "-jar", "build/libs/app.jar"]

This single-stage setup shipped the full JDK, the Gradle build tooling, and the project source alongside the runtime. The result was a 1.1 GB image.

The optimized version used a multi-stage build with a minimal runtime image:

# Stage 1: Build the application
FROM gradle:7.6-jdk17 AS builder
WORKDIR /app
COPY . .
RUN gradle clean build -x test

# Stage 2: Run the application
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /app/build/libs/app.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

Impact: Image size reduced to ~240 MB.
Approach: Separated build tools from runtime, reduced layers, and used a lightweight base image.
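A quick way to confirm the reduction is to rebuild under a throwaway tag and list it (the tag name here is just illustrative):

docker build -t backend:optimized .
docker images backend:optimized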

Step 2: Enable Layer Caching in CI/CD Pipelines

Next, I enabled caching for image layers across builds. Using Docker Buildx in GitHub Actions:

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: myregistry.com/app/backend:latest
    cache-from: type=registry,ref=myregistry.com/app/backend:cache
    cache-to: type=registry,ref=myregistry.com/app/backend:cache,mode=max

This allowed CI/CD pipelines to reuse existing layers instead of redownloading or rebuilding identical ones. Pull time dropped from 180 seconds to roughly 20 seconds.
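The same registry-backed cache can also be exercised straight from the Buildx CLI, which is handy for checking cache hits outside CI. A sketch reusing the image and cache references from the workflow above (the builder name is arbitrary; registry cache export requires a docker-container builder):

# Registry cache export needs a docker-container builder
docker buildx create --name ci-builder --use

docker buildx build \
  --tag myregistry.com/app/backend:latest \
  --cache-from type=registry,ref=myregistry.com/app/backend:cache \
  --cache-to type=registry,ref=myregistry.com/app/backend:cache,mode=max \
  --push \
  .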

Step 3: Set Up a Local Registry Mirror

To eliminate external network latency, I configured a local Docker registry mirror:

# Run registry:2 in pull-through (proxy) mode so it caches upstream layers
docker run -d -p 5001:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://myregistry.com \
  -v /opt/registry-mirror:/var/lib/registry \
  --name registry-mirror \
  registry:2

Then updated /etc/docker/daemon.json:

{
  "registry-mirrors": ["http://localhost:5001"]
}

From this point, frequently used images were served from the local cache rather than the remote registry, and pull time dropped to three seconds. (One caveat: the registry-mirrors setting only applies to Docker Hub pulls; images hosted on a private registry are cached by pulling them through the mirror's address, e.g. localhost:5001/app/backend:latest.)
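To confirm the daemon actually picked up the mirror, restart it and check the reported configuration (assuming a systemd-based host):

sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'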

Step 4: Apply Docker Image Optimization Best Practices

Beyond specific optimizations, several universal Dockerfile best practices contribute to faster image builds, smaller layers, and better cache utilization.

1. Choose a Minimal Base Image

Use small base images such as Alpine or Scratch instead of large distributions.

Example:

# Bad (large)
FROM ubuntu:20.04

# Good (small)
FROM alpine:3.19

Alpine is roughly 5 MB, compared to Ubuntu’s 70–100 MB footprint.
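The gap is easy to verify locally by pulling both tags and comparing the reported sizes:

docker pull alpine:3.19
docker pull ubuntu:20.04
docker images --format '{{.Repository}}:{{.Tag}}  {{.Size}}' | grep -E 'alpine|ubuntu'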

2. Use Multi-Stage Builds

Compile your application in one stage and copy only the final output into a minimal runtime stage.

Example:

# Stage 1: Build
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
# Disable cgo so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o myapp .

# Stage 2: Minimal runtime
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]

Result: Only the compiled binary is included in production, not build tools or dependencies.

3. Remove Unnecessary Files

Clear package caches and temporary files after installation to prevent bloat.

Example (Alpine):

RUN apk add --no-cache git && rm -rf /var/cache/apk/* /tmp/*

Example (Debian/Ubuntu):

RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

4. Combine RUN Instructions

Each RUN command creates a new layer. Combine related operations to reduce layers and image size.

Example:

# Bad
RUN apt-get update
RUN apt-get install -y git
RUN rm -rf /var/lib/apt/lists/*

# Good
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

5. Use .dockerignore

Prevent unnecessary files (e.g., .git, logs, and node_modules) from being sent to the Docker build context.

Example .dockerignore:

.git
node_modules
*.log
*.tmp

This ensures a smaller build context and faster build times.

Summary of Best Practices

  • Use small base images (e.g., Alpine, slim, scratch).
  • Apply multi-stage builds.
  • Remove caches and temporary files.
  • Use .dockerignore effectively.
  • Combine related commands to minimize layers.
  • Avoid copying unnecessary files into the image.

Step 5: (Optional) Use CDN Acceleration

For distributed or multi-region teams, host your registry (ECR, Harbor, or Artifactory) behind a CDN such as AWS CloudFront or Cloudflare. This ensures geographically optimized image pulls for global developers.
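From the client side the setup is transparent: assuming the CDN distribution is mapped to a hostname such as registry-cdn.example.com (hypothetical) that forwards to the origin registry, developers simply pull through that hostname:

# Hypothetical CDN-fronted hostname in front of the origin registry
docker pull registry-cdn.example.com/app/backend:latest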

Final Results

After implementing all these optimizations:

  • Pull time reduced from 3 minutes to 3 seconds.
  • CI/CD execution time improved by over 60%.
  • Network utilization became predictable and efficient.
  • Developer feedback loops became almost instantaneous.

Conclusion

Docker image optimization is often undervalued, but its impact is significant. Through smaller base images, efficient multi-stage builds, cache utilization, and registry mirroring, I reduced image pull time from minutes to seconds.

These practices don’t just improve performance — they scale across environments, pipelines, and teams, forming the foundation of efficient containerized systems.
