Akshat Gautam

Optimizing Docker Images for Size and Security: A Comprehensive Guide

Docker is a powerful tool that enables developers to containerize their applications and ensure consistency across various environments.

However, without careful consideration, Docker images can become bloated, slow, and vulnerable to security risks. In this guide, I’ll walk you through the strategies to optimize Docker images for both size and security, ensuring efficient and safe deployments.


Optimizing Docker Images for Size

The size of your Docker image directly affects how quickly it can be pulled and deployed. Smaller images mean shorter pipeline run times and lower artifact storage costs, so reducing image size is crucial for performance and resource efficiency.

At the end of this section, I will show you my portfolio website's image size being reduced by almost 96%!

Here’s how you can minimize your image size:

1) Use Official Minimal Base Images

When building Docker images, always start with an official base image. Instead of using a full-sized OS image like ubuntu, opt for lightweight versions like alpine or debian-slim. These minimal images contain only the essentials, significantly reducing the image size.

Taking the node image as an example, here are the image sizes for node:latest vs node:alpine:

[Image: size comparison of node:latest vs node:alpine]

That's almost 7 times bigger!

By using minimal base images, you avoid unnecessary packages, leading to faster builds and smaller images.
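
If you want to verify this yourself, pull both tags and compare the SIZE column; a quick sketch using standard Docker CLI commands:


# Pull both tags so they are available locally
docker pull node:latest
docker pull node:alpine

# List local images in the node repository and compare the SIZE column
docker images node
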

2) Minimize Layers

Each instruction in your Dockerfile (RUN, COPY, etc.) creates a new layer in the final image. Combining related commands into a single RUN instruction reduces the number of layers and, more importantly, lets cleanup steps (like removing the apt cache below) happen in the same layer, so the deleted files don't remain baked into an earlier one.

  • Instead of doing this


RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*


  • Do this


RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*



3) Exclude Unnecessary Files with '.dockerignore'

When building Docker images, Docker copies the entire context (everything in your project directory) into the image unless you specify otherwise. To prevent unnecessary files from being included, create a .dockerignore file.

  • Example .dockerignore


node_modules
.git
logs
tmp



This file works similarly to .gitignore.

4) Use Static Binaries and the 'scratch' Base Image

If your application can be compiled into a static binary, you can use the scratch base image, which is essentially an empty image. This leads to extremely small final images.

  • Example


FROM scratch
COPY myapp /
CMD ["/myapp"]



This works well for applications that don't need operating-system-level dependencies.
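
As a sketch, assuming a Go application named myapp, you would typically compile a statically linked binary before copying it into the scratch image:


# Disable CGO so the binary has no libc dependency and can run on scratch
CGO_ENABLED=0 go build -o myapp .
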

5) Multi-Stage Builds (Most Effective)

Multi-stage builds allow you to separate the build process from the runtime environment. This is especially useful when your application requires tools for compiling but doesn’t need them in the final image.

  • Example


# Stage 1: Build
FROM golang:1.16-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

# Stage 2: Runtime
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]



Quantitative Comparison

My portfolio website, built with React, was previously containerized using the node:14-alpine image, which is already smaller than node:latest.

  • The Dockerfile looked like this:


# Use an official Node runtime as a parent image
FROM node:14-alpine

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Build the React app
RUN npm run build

# Install a lightweight HTTP server to serve the app
RUN npm install -g serve

# Set the default command to serve the build folder
CMD ["serve", "-s", "build"]

# Expose the port the app will run on
EXPOSE 3000


  • The resulting image size was:

[Image: size of the image built from the original Dockerfile]

Much later, I learnt about Multi-Stage Builds and redesigned my Dockerfile.

  • The new Dockerfile looked like:


# Build environment (Stage - I)
FROM node:14-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production environment (Stage - II)
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]



Astonishingly, the new image size was...

[Image: size of the image built with the multi-stage Dockerfile]

The application worked exactly as before, and this version was much faster to spin up!

The difference was ~1079 MB, a decrease of almost 96%!

This illustrates the effect of Multi-Stage Builds.


Optimizing Docker Images for Security

1) Use Trusted and Official Base Images

Always use official base images from trusted sources like Docker Hub or your organization’s trusted registries. These images are regularly updated and are more secure compared to custom or unofficial images. Keep your base images up-to-date to mitigate any vulnerabilities.
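
One small habit that helps keep base images fresh: pass --pull when building, so Docker fetches the latest version of the base image instead of reusing a stale local copy (myimage is a placeholder tag):


# --pull always attempts to pull a newer version of the base image before building
docker build --pull -t myimage:latest .
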

2) Run Containers as Non-Root Users

Running containers as root can expose your host system to security risks. Create a non-root user inside the Dockerfile and configure your container to run under that user.

  • Example:


RUN adduser --disabled-password myuser
USER myuser



Such a simple change reduces the attack surface and improves security by limiting access to system resources.
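
A quick note: the flags differ between distributions. The example above uses the Debian-style adduser; on Alpine-based images, the BusyBox adduser takes -D to create the user without a password. A rough Alpine equivalent:


# BusyBox adduser on Alpine: -D creates the user without assigning a password
RUN adduser -D myuser
USER myuser
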

3) Scan Images for Vulnerabilities

Regularly scan your Docker images for known vulnerabilities using tools like:

  • Trivy: An open-source vulnerability scanner.
  • Docker Scan: Built into the Docker CLI.
  • Clair: A static analysis tool for discovering vulnerabilities.

These tools scan your images for outdated or insecure packages and alert you to potential threats.
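
For example, a basic scan with Trivy (assuming Trivy is installed and your image is tagged myimage:latest) looks like this:


# Scan a local image and report only high and critical severity findings
trivy image --severity HIGH,CRITICAL myimage:latest
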

4) Limit Network Exposure

Limit the network exposure of your container by restricting the ports and IP addresses it listens on. By default, Docker exposes ports to all interfaces. Bind them to localhost if external access is unnecessary.

  • Example:


docker run -p 127.0.0.1:8080:8080 myimage



This restricts access to the container’s services to the local machine only, preventing external access.

5) Secrets Management

Avoid hardcoding sensitive information like API keys or passwords directly into your Dockerfile or environment variables. Instead, use Docker secrets or external secrets management tools like AWS Secrets Manager or HashiCorp Vault.

  • Example Using Docker Secrets:


docker secret create my_secret secret.txt



Docker secrets ensure that sensitive data is only available to services that need it, without leaving traces in the container filesystem.
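
Keep in mind that Docker secrets are a Swarm feature, so they are consumed by services rather than plain docker run containers. A minimal sketch (myservice and myimage are placeholder names):


# The service can read the secret at /run/secrets/my_secret inside the container
docker service create --name myservice --secret my_secret myimage
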


Conclusion

By following these strategies, you can build Docker images that are both lightweight and secure. Optimizing for size helps reduce deployment times, save resources, cut costs, and improve performance, while security best practices protect your application and infrastructure from vulnerabilities.

Remember, containerization offers many advantages, but it also introduces new challenges. With thoughtful image optimization, you can leverage Docker to its full potential while maintaining a robust security posture.

Drop a like if you found the article helpful.
Follow me for more such content.

Happy Learning!


Exclusively authored by,

👨‍💻 Akshat Gautam

Google Certified Associate Cloud Engineer | Full-Stack Developer

Feel free to connect with me on LinkedIn.

Top comments (19)

Denys Bochko

Hey.
Nice article in general.
I disagree with you about stages and image size. I don't think what you are comparing is fair since you are using two different web servers: serve vs nginx.

The staged approach provides a way to organize your dockerfile for better readability and to pull files from other build stages that have already been processed, so it saves time for, let's say, production deployment. I don't think it's about sizes (and the Docker documentation does not mention it either).
Your staged approach is doing exactly that: you are building the app in stage one and then injecting it into your nginx container. So, if you need to create another container with, let's say, an httpd server, you can use the same files from stage one, making your deployment faster due to the already compiled app.

Your original dockerfile is doing the same thing, except for the usage of the serve server. I just checked: the nginx:alpine image size is 5 MB, and I am pretty sure that's much less space than the installed serve server. Your size has decreased since you are smartly using a much lighter server in a bare image.

Akshat Gautam

Thank you for your feedback and for bringing up these important points!

You’re absolutely right—multi-stage builds are primarily about organizing the Dockerfile for better readability, modularity, and deployment efficiency. However, if used smartly, they can also reduce the size of your Docker image.

For instance, I built two images based on the same base image: one using serve and another without it:
[Image: size comparison of the two images, with and without serve]

The use of serve created a difference of only ~8 MB.

It's not just about the size difference between serve and nginx; the real impact comes from removing the build environment and keeping only the environment necessary to serve the application. The size reduction from that alone makes the serve vs. nginx difference negligible in comparison.

Ultimately, it depends on the type of application you're containerizing. Spend some time experimenting with different images to find the best setup for your specific needs.

Stay Smart
Happy Learning!

Denys Bochko

Absolutely. I am currently primarily dealing with LAMP stack images, but will shortly need to do more digging on Node and Vue containers.
I find that working with compiled apps is more difficult at first because of how the dev env works, but then you learn it and optimize it.

Akshat Gautam

Thank you for sharing your experience! I really appreciate your insights. Working with compiled apps is definitely a learning curve. Your guidance is invaluable, and I’m grateful to learn from you. 🙏😊

Saurabh Rai

Hey @akshat_gautam, this is a nice article. I have a docker image for resume matcher over here that I need some help with. Can you help me with that?
It's open source and used for building resumes.
🔗: github.com/srbhr/Resume-Matcher

Akshat Gautam

Hey @srbhr

Thank you for your words, glad you liked it!

I just looked at the project repo and would love to contribute to it!

Saurabh Rai

💖

Bobby Iliev

Great article! Well done 👏

If you have the time, feel free to contribute to the open-source Docker ebook:

GitHub: bobbyiliev / introduction-to-docker-ebook (Free Introduction to Docker eBook), an open-source guide covering the basics of Docker and how to start using containers for SysOps, DevOps, and Dev projects.

Akshat Gautam

Thank you so much!
I’m glad you liked the article. I’d love to contribute to the open-source Docker ebook—sounds like a great opportunity! I’ll definitely check it out. 😊

Bobby Iliev

Thanks! I've also pushed your article to Daily.Dev so you could get some extra visibility.

Akshat Gautam

Oh, that's so nice of you.

Thank You :)

Christopher "Chief" Najewicz

Important to note: if you're going to use Alpine, it uses MUSL vs. GLIBC. Most developers won't care or need to know about this, but depending on what you're doing it can make a difference.
musl-libc.org/faq.html

Akshat Gautam

Yes, you said that right. Most developers won't need this, as for many basic applications, especially those written in higher-level languages (like Node.js, Python, or Go), the difference between MUSL and GLIBC doesn't matter.

But if you're working with low-level system code, some performance-intensive applications, or specific libraries that expect GLIBC, you may encounter issues with Alpine using MUSL.

Solution: you may need to switch back to a GLIBC-based distribution (like Debian or Ubuntu), as sketched below.
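
For example, sticking with the Node image used in this article, swapping the Alpine tag for the Debian-based slim tag gets you GLIBC while staying reasonably small:


# Debian-based (GLIBC) alternative to node:14-alpine (MUSL)
FROM node:14-slim
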

Thanks for sharing the information.

Happy Learning!

Martin Baun

Good job man. I personally use Watchtower.
Just make sure there's an environment variable WATCHTOWER_CLEANUP set to true, and it does the cleanup for you.

Akshat Gautam

Thank you for your words, glad you liked it.

This is a nice piece of information. For anyone who doesn't know, Watchtower is a tool for automating Docker container updates. It automatically monitors running containers for image updates and redeploys them if a new version is available. It's commonly used in production to keep containers up to date.

The WATCHTOWER_CLEANUP environment variable being set to true ensures that when Watchtower updates a container, it automatically removes old, unused images that are left behind.
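
For reference, a typical way to run Watchtower with cleanup enabled looks roughly like this (using the containrrr/watchtower image and mounting the Docker socket, as shown in its docs):


docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_CLEANUP=true \
  containrrr/watchtower
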

Explore more at Official Docs

Happy Learning!

JB

Solid article!

Akshat Gautam

Thank you very much @thegrumpyopsdude

Glad you liked it.

Happy Learning!

HighThinker

Good one!

Akshat Gautam

Thank you, glad you loved it!

Happy Learning :)
