Altair Lage

Dockerizing Java and Python Applications

Docker has transformed how we develop, test, distribute, and deploy applications across different execution environments. This article is a practical guide on how to Dockerize Java and Python applications, enabling you to build and deploy your applications from conception to the production environment. Here, I present all the fundamental concepts of Docker, the technical details of creating an image with your application, how to run it, and even how to optimize your deployment for your production environment.

What is Docker?

Docker is a platform that allows you to package applications into lightweight, portable containers, which facilitates creation, testing, and deployment.
A container is like an isolated "box" that holds everything your application needs to run: code or executables, libraries, dependencies, and configurations, ensuring it runs in any environment consistently and immutably. Regardless of where it's running—on your machine, on a server, or on another computer—using containers, your application will work the same way everywhere.

Before Docker, problems like “it works on my machine, but not on the server” were common. Docker emerged as the definitive solution to this dilemma, introducing standards and configurations that ensure there are no differences between environments. Furthermore, containers operate as a viable, low-cost alternative to hypervisor-based virtual machines, allowing more applications and workloads to run on the same hardware.

Fundamental Concepts

Docker Image: It is like a "class" in object-oriented programming. It represents the application package, along with its dependencies, libraries, code, and configurations. An image is created from a set of instructions defined in a Dockerfile and serves as the basis for execution.

Container: A running instance of a Docker image. It is like an "object" created from the "class" (image). It provides a lightweight execution environment, isolated from the host system and other containers. More than one container can be run, at the same time, using the same image.
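
For instance, the same image can back several containers running side by side (the image name below is hypothetical):

# Two independent containers created from the same image
docker run -d --name instance-1 my-app:1.0
docker run -d --name instance-2 my-app:1.0

# Both appear in the list of running containers
docker ps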

Docker Hub: The public registry maintained by Docker, where you can store and share Docker images. Registries function as centralized libraries or repositories for the application's "binary artifacts", the images themselves. Companies can also maintain private registries for their own images.

Dockerfile: A text file with instructions for building a Docker image. It is the backbone of any containerization process. It is like a "recipe" that describes all the necessary steps to create the application environment. Each instruction in the Dockerfile creates a new layer in the image, a concept crucial for optimization.

The main instructions in a Dockerfile are:

  • FROM: Defines the base image upon which the new image will be built.
  • WORKDIR: Defines the working directory inside the container for subsequent commands, such as COPY and RUN.
  • COPY: Copies files or directories from the host system to the container.
  • EXPOSE: A documentation instruction that informs Docker which port the container will listen on at runtime.
  • CMD and ENTRYPOINT: Define the default command executed when the container starts. The difference is that CMD is replaced entirely by any command passed to docker run, while ENTRYPOINT stays fixed and receives those arguments, and can only be replaced with the --entrypoint flag (see the example after this list).
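
A quick illustration of this difference, using a hypothetical image:

# With CMD: the command after the image name replaces the default entirely
docker run my-app:1.0 echo "this replaces CMD"

# With ENTRYPOINT: extra arguments are appended to the fixed command
docker run my-app:1.0 --extra-flag

# ENTRYPOINT can only be replaced explicitly
docker run --entrypoint /bin/sh -it my-app:1.0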

Basic Docker Workflow: Build, Run, and Push

The lifecycle of a Docker image and container follows a predictable workflow.
The first step is building the image, performed with the docker build command. This command reads the Dockerfile and generates the image, which can be named and tagged using the -t flag. For example, docker build -t my-app:1.0 . builds an image named my-app with the tag 1.0, using the current directory (.) as the build context.

After building, the image can be instantiated and run as a container using the docker run command.

For the application to be externally accessible, the container's ports must be mapped to the host system's ports using the -p flag. For example, docker run -p 8080:8080 my-app:1.0 runs the application (container) and makes it accessible on port 8080 of the host.

Finally, to share the image with other developers or for deployment in production environments, it must be sent to a registry using the docker push command.
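
Putting the full cycle together (the registry username and image names below are placeholders):

# 1. Build and tag the image locally
docker build -t my-app:1.0 .

# 2. Tag the image for the target registry (Docker Hub, in this example)
docker tag my-app:1.0 myuser/my-app:1.0

# 3. Authenticate and push
docker login
docker push myuser/my-app:1.0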

Data Persistence through Volumes

Containers are, by definition, ephemeral and disposable. This means that any data created inside a container is lost when it is stopped or removed. While this ephemeral nature is ideal for scaling stateless services, it presents a challenge for applications that need to persist data, such as databases or file systems.

The solution to this problem is the use of volumes. Volumes are the mechanism Docker offers for managing persistent data. The VOLUME instruction in a Dockerfile declares mount points whose state should be preserved, and at runtime the -v flag of docker run maps a directory on the host system to a directory inside the container, ensuring that data remains intact even if the container is recreated.

The use of volumes resolves a central contradiction in Docker's philosophy. While containers are designed to be disposable, many applications require persistent data. By decoupling data storage from the container's lifecycle, volumes allow the application to maintain its state while the container itself remains fully immutable and easily recreatable—a fundamental characteristic for large-scale deployment.
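
A minimal sketch of both approaches, using a PostgreSQL container as the stateful example:

# Bind mount: map a host directory into the container
docker run -d --name db -e POSTGRES_PASSWORD=example \
  -v /srv/pgdata:/var/lib/postgresql/data postgres:16

# Named volume: let Docker manage the storage location
docker volume create pgdata
docker run -d --name db2 -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data postgres:16

# Recreating the container keeps the data stored in "pgdata"
docker rm -f db2
docker run -d --name db2 -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data postgres:16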

Dockerizing Java Applications

The starting point for containerizing a Java application is packaging it as an executable JAR file. Shipping that JAR together with the Java runtime in an image makes the entire execution environment, including the Java version, consistently replicable, thereby eliminating the classic "works on my machine" problem.

Using Docker for Java facilitates the configuration of isolated environments for both development and production, in addition to being essential for automated testing and CI/CD (Continuous Integration / Continuous Delivery).

Most modern frameworks, such as Spring Boot, simplify this process. A Spring Boot application can be easily generated from the Spring Initializr.
With the project configured, Maven, a build automation system, can be used to compile the source code and package the application into a "fat JAR" or "uber JAR," which includes the application code and all its dependencies. This is done with the command mvn clean package, which generates the executable file in the project's target folder.

A good project structure is essential to facilitate maintenance, testing, and Dockerization. Always follow Java best practices to achieve a good project structure.

This is an example of a typical Java project structure with Maven:

my-spring-boot-app/
├── src/
│   ├── main/
│   │   └── java/
│   └── test/
├── pom.xml
├── Dockerfile
└── target/ (generated after the build)

Add a REST controller class under src/main/java if you want a test endpoint.

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyController {

    @GetMapping("/hello")
    public String sayHello() {
        return "Hello from Spring Boot!";
    }
}

To ensure your application works locally:

# Compile the project
mvn clean package

# Execute the generated JAR
java -jar target/my-spring-boot-app.jar

Your application should now be available at http://localhost:8080. Accessing http://localhost:8080/hello through your browser should display the message "Hello from Spring Boot!".
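
You can also verify the endpoint from the terminal:

curl http://localhost:8080/hello
# Expected response: Hello from Spring Boot!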

Basic Dockerfile for Java

This would be the most basic Dockerfile example for a Java application:

FROM openjdk:17-jdk-alpine  
WORKDIR /app  
COPY target/my-spring-boot-app.jar /app/app.jar  
EXPOSE 8080  
CMD ["java", "-jar", "/app/app.jar"]

In this example, the FROM openjdk:17-jdk-alpine instruction defines the base image, which contains a lightweight version of the Java Development Kit (JDK).
WORKDIR /app defines the container's working directory, and COPY moves the JAR file generated by the Maven compilation into the container.
The line EXPOSE 8080 documents the port being used by Spring Boot in the project, and the final CMD defines the command to execute the Java application.

In this case, the application needs to be compiled first using an IDE or Maven commands; only then can we execute Docker commands to create the image.

The image can be built with the command docker build -t my-spring-boot-app . (the -t flag assigns a name to the image, while the dot specifies the build context).

To run the application, the container is instantiated with docker run -p 8080:8080 my-spring-boot-app. The port mapping -p 8080:8080 ensures that traffic from port 8080 of the host system is directed to port 8080 inside the container. After execution, the application can be accessed via http://localhost:8080.

Multi-Stage Build Dockerfile

The previous example has a flaw: the application must be compiled first, using an IDE or Maven commands, before Docker can build the image. In other words, the machine creating the image needs to be perfectly configured with all prerequisites (Java, Maven) to compile the application.
In this way, we don't leverage Docker's main advantage: ensuring a consistent and immutable environment, regardless of where it's running.

The solution to this problem is the multi-stage build technique. This method uses multiple FROM commands within a single Dockerfile, allowing developers to separate the build environment from the runtime environment. The first stage uses a "fat" image with all compilation tools, while the final stage uses a minimal image, copying only the necessary artifacts.

# ======== Initial Stage ========
# Uses a base image with maven and JDK for build
FROM maven:3.8.7-openjdk-18-slim AS build

WORKDIR /app

# Copy only dependency files first
COPY pom.xml .
RUN mvn dependency:go-offline -B

# Copy the source code and build the application
COPY src src
RUN mvn clean package -DskipTests

# ======== Final Stage - Production Image ========
# JRE-only runtime image (the official openjdk repository publishes no 17-jre tag)
FROM eclipse-temurin:17-jre-alpine

# Argument for the JAR name
ARG JAR_NAME=app.jar

# Copy the JAR from the build stage
COPY --from=build /app/target/*.jar ${JAR_NAME}

# Expose the application port
EXPOSE 8080

# Command to execute the application
ENTRYPOINT ["java", "-jar", "/app.jar"]

This separation of stages brings three major benefits:

  • Drastic Image Size Reduction: The build stage can use a complete image, such as the Maven one, while the final stage is restricted to a lighter image, such as a slim or alpine JRE variant, discarding all unnecessary layers.
  • Enhanced Security: Excluding build tools minimizes the attack surface, as it removes resources that are not needed for application execution.
  • Better Cache Utilization: Docker caches each layer independently. If only the application's source code changes, the dependency layers are reused, significantly speeding up builds in CI/CD pipelines.

The necessity of compiling Java code to generate an executable makes the multi-stage build a fundamentally valuable technique for this language. The inherent separation between the environment that compiles the code (which requires a JDK and a build system) and the environment that executes it (which only needs a JRE) is a direct consequence of the language's compiled architecture, making this optimization a design pattern.

Optimized Multi-Stage Dockerfile for Production

We can further optimize the process to bring advantages in the production environment:

# === STAGE 1: BUILD ===
FROM maven:3.8.7-openjdk-18-slim AS build

WORKDIR /app

# Copy only dependency files first
COPY pom.xml .
RUN mvn dependency:go-offline -B

# Copy the source code and build
COPY src src
RUN mvn clean package -DskipTests

# === STAGE 2: PRODUCTION ===
# JRE-only runtime image (same note as above: openjdk has no 17-jre tag)
FROM eclipse-temurin:17-jre-alpine AS production

# curl is required by the HEALTHCHECK below and is not included in alpine images
RUN apk add --no-cache curl

# Create a non-root user for security
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

WORKDIR /app

# Copy only the necessary JAR
COPY --from=build /app/target/*.jar app.jar

# Switch to the non-root user
USER appuser

# Optimized JVM Configurations
ENV JAVA_OPTS="-Xmx512m -Xms256m -XX:+UseG1GC"

EXPOSE 8080

# Requires the Spring Boot Actuator dependency to expose /actuator/health
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8080/actuator/health || exit 1

# Configure the application entrypoint
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]

This example brings the following improvements:

  • Creation and use of a non-root user (appuser): Creates a non-root user inside the container and runs the application as that user, instead of the default Docker root user. This practice increases security by reducing risk if the container is compromised, as the non-root user has restricted permissions.
  • Custom JVM Definitions (JAVA_OPTS): Configures environment variables for JVM (Java Virtual Machine) options, such as memory limits and garbage collector usage. These optimizations help improve application performance and stability in production environments by better controlling resource usage.
  • HEALTHCHECK Configuration: Includes a HEALTHCHECK command, which periodically checks the application's status by accessing a specific HTTP endpoint to ensure the application is running correctly. This facilitates monitoring and can allow orchestrators like Kubernetes or Docker Swarm to automatically restart the container if it's having problems.
  • ENTRYPOINT with shell: Executes the JAR through a shell so that the JAVA_OPTS environment variable is expanded at startup. Using sh -c "java $JAVA_OPTS -jar app.jar" allows JVM parameters to be changed via the environment without rebuilding the image (see the example after this list).
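
For example, the JVM limits baked into the image can be adjusted at startup without rebuilding it:

# Override the default JAVA_OPTS defined in the Dockerfile
docker run -d -p 8080:8080 \
  -e JAVA_OPTS="-Xmx1g -Xms512m -XX:+UseG1GC" \
  my-spring-boot-app:latest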

Docker Commands for the Java Container

# Build the image
docker build -t my-spring-boot-app:latest .

# Run the container
docker run -d -p 8080:8080 --name my-app my-spring-boot-app:latest

# Check logs
docker logs my-app

# Run in interactive mode for debug
docker run -it -p 8080:8080 my-spring-boot-app:latest

Dockerizing Python Applications

The Dockerization of Python applications follows principles similar to Java's, but with an emphasis on the language's dynamism and runtime dependencies. Unlike Java, Python resolves modules and packages at runtime, so the interpreter must be able to find them inside the container.
Ensuring that the entire Python environment, including the interpreter version, libraries, and system packages, is consistently available is essential to eliminate environment problems.

Python Project Structure

A good project structure is essential to facilitate maintenance, testing, and Dockerization.

A common pattern is to keep the source code in an app/ directory, centralize dependencies in requirements.txt and requirements-dev.txt files, and keep tests in their own tests/ folder. Separating application code, dependency files, startup scripts, tests, and configuration into distinct directories ensures clarity, prevents import conflicts, and allows Docker to copy only what is necessary at build time.

Another essential practice is the use of the __init__.py file. Even if empty, it turns a folder into a recognizable Python package for the interpreter, ensuring that modules are imported correctly within the container. This small convention prevents problems like ModuleNotFoundError when the application is run in the isolated Docker environment.

Example:

my-python-project/
├── app/
│   ├── __init__.py
│   └── main.py
├── requirements.txt
├── requirements-dev.txt
├── Dockerfile
└── tests/

This arrangement allows only relevant files to be copied during the build and reduces the chance of "dirtying" the image with development artifacts, making it smaller, more secure, and more performant.

Practical Example: Flask Application

1. Basic Flask Application

Flask is perfect for demonstrating the fundamentals of a Dockerized Python application. As a lightweight, open-source microframework, it delivers only the essentials: HTTP routing, request/response handling, and middleware integration, without imposing a specific project structure or bundling features, such as database abstraction layers or form validation, that are common in more comprehensive frameworks like Django. This gives us the freedom to build simple APIs that can already run in production containers.

In the example below, we have a minimal application with two endpoints:

  • / → responds with a JSON message.
  • /health → returns the health status, useful for Docker or Kubernetes health checks.
# app/main.py
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def hello():
    return jsonify({
        "message": "Hello from Python Docker!",
        "status": "running"
    })

@app.route('/health')
def health():
    return jsonify({"status": "healthy"})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=False)

Some important technical points here:

  • host='0.0.0.0': Configures the application to accept connections from any address. Without this, the app would only respond to requests from inside the container.
  • debug=False: In production, debug mode must be disabled to prevent exposure of sensitive information.
  • /health: This endpoint is a de facto standard for automatic checks by orchestrators (health probes).

This simple structure is enough to bring up a functional Docker container ready to be accessed via a browser or testing tool like Postman.
This pattern allows exposing testable environments for orchestrators like Kubernetes, which can continuously check the service status via /health.
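
Once the application is running, locally or in the container built later in this section, both endpoints can be checked from the terminal:

curl http://localhost:5000/
# {"message": "Hello from Python Docker!", "status": "running"}

curl http://localhost:5000/health
# {"status": "healthy"}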

2. requirements.txt file

The requirements.txt file is the heart of reproducibility in Python. In the Python ecosystem, the requirements.txt file is similar to Maven's pom.xml or Node.js's package.json: it centralizes all the dependencies needed to run the application. In it, declare all libraries used—with versions locked to avoid future incompatibilities.

In Dockerized environments, this file is even more important, as it ensures the image is built with specific package versions, avoiding the classic problem of “it worked yesterday, but today it broke” due to unexpected updates. The recommended practice is to fix the exact versions of each library using ==. This makes the environment predictable and reproducible, especially when builds are run in CI/CD pipelines.

Example:

flask==2.3.3
gunicorn==21.2.0
requests==2.31.0
  • Flask: Lightweight framework for creating the API.
  • Gunicorn: Production WSGI server (detailed next).
  • Requests: Library for external HTTP calls, widely used in integrations.

During the image build, Docker executes:

pip install --no-cache-dir -r requirements.txt

The --no-cache-dir flag prevents local caching of packages, reducing the final image size.

After installing the dependencies, you can run the application locally with the command python app/main.py. The application will then be available at http://localhost:5000/.

3. Basic Dockerfile for Python

The basic Dockerfile uses official Python images, configures dependencies, and runs the app:

# Use official Python image
FROM python:3.11-slim

# Define the working directory
WORKDIR /app

# Install system dependencies if necessary
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY app/ ./app/

# Expose the application port
EXPOSE 5000

# Command to execute the application
CMD ["python", "app/main.py"]

The slim variant used in the example is a minimal version of the official Python image: it removes unnecessary packages and significantly reduces the final image size (the slim image is a small fraction of the size of the full one). When heavier native dependencies are needed (gcc, libpq, etc.), use the full image or add only the required packages via RUN apt-get install.

Docker builds the image in layers. If we copy the entire project before installing dependencies, any change in a code file would force the reinstallation of all libraries. To avoid this, we follow the order:

  1. Copy only requirements.txt.
  2. Install the dependencies.
  3. Only then copy the application code.

This flow ensures a clean environment, copied in stages to maximize the cache, and exposes the correct port for execution.

The image can be built with the command docker build -t my-app-python . (the -t flag assigns a name to the image, while the dot specifies the build context).

To run the application, the container is instantiated with docker run -d -p 5000:5000 --name my-app my-app-python. The parameter -p 5000:5000 ensures that port 5000 of the host is mapped to port 5000 of the container, making the application accessible at http://localhost:5000.

This is the most basic starting point for Dockerizing Python applications. In production, however, we replace CMD ["python", "app/main.py"] with Gunicorn, ensuring more robustness and the ability to handle multiple simultaneous requests.

4. Multi-Stage Python Dockerfile ready for production

For production images, use multi-stage builds, separating build dependencies from runtime ones, as in the example below.
The multi-stage build separates what is necessary to compile or install dependencies from what is actually needed to run the application. In practice, this means compiling (or resolving) everything in a "builder" container and copying only the final artifacts to the "production" container. The result is a smaller image with fewer system packages, fewer CVEs, and a more predictable startup.

Flask has its own built-in server (app.run()), but it's only designed for development. In production, it doesn't scale well, doesn't handle multiple processes efficiently, and can compromise security. This is where Gunicorn (Green Unicorn) comes in, a highly performant WSGI (Web Server Gateway Interface) server, developed specifically to run Python applications in production environments.

Key advantages of Gunicorn:

  • Multi-process: Supports multiple workers, better utilizing CPU cores.
  • Robustness: Efficiently handles concurrent connections, preventing bottlenecks.
  • Integration with Docker/Kubernetes: Allows health checks and automatic worker restarts.
  • Flexibility: Can be integrated with Nginx or Traefik, acting as a reverse proxy.

With Gunicorn, the same Flask application that was previously executed with:

python app/main.py

is now initialized in production like this:

gunicorn --bind 0.0.0.0:5000 --workers 4 app.main:app
  • --bind 0.0.0.0:5000: Exposes the service on port 5000 to the external world.
  • --workers 4: Creates four independent processes to handle simultaneous requests.
  • app.main:app: Indicates the module (app/main.py) and the Flask object (app) to be served.

This approach aims for stability, scalability, and resilience in running the application within containers.

Example:

# === STAGE 1: BUILD AND DEPENDENCIES ===
FROM python:3.11-slim AS builder

# Avoid prompts and reduce log noise
ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# Build dependencies (removed at the end of the stage)
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential gcc curl \
  && rm -rf /var/lib/apt/lists/*

# Create an isolated virtual environment (easier to copy later)
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /build

# Install Python dependencies
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt


# === STAGE 2: PRODUCTION ===
FROM python:3.11-slim AS production

# Avoid prompts and reduce log noise
ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# Install only the necessary runtime dependencies (curl is used by the HEALTHCHECK)
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the virtual environment from the builder stage
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Create a non-root user
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

WORKDIR /app

# Copy only necessary files
COPY app/ ./app/

# Configure application directory ownership
RUN chown -R appuser:appgroup /app
USER appuser

# Expose the application port (Flask/Gunicorn)
EXPOSE 5000

# Create a simple healthcheck for orchestrator integration
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:5000/health || exit 1

# Use Gunicorn for production
# - --workers: processes; initial rule: 2 * CPU + 1 (adjust according to load)
# - --threads: good for IO-bound; start with 2 to 4
# - --graceful-timeout: gracefully terminates requests in deployments
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "--threads", "2", "--timeout", "30", "--graceful-timeout", "20", "app.main:app"]

Using a decoupled venv (virtual environment) optimizes the image: dependencies do not need to be reinstalled in the final stage, and the runtime stays clean and fast. Combined with the non-root user, the integrated healthcheck, and Gunicorn, the container is lean, more secure, and ready for production. To learn more about venv, see the official documentation.

Commands for the Python container:

# Build with tag
docker build -t my-app-python:prod .

# Run with port mapping
docker run -d -p 5000:5000 --name my-app my-app-python:prod

# Logs (Gunicorn sends stdout/stderr correctly)
docker logs -f my-app

# Execute shell in the container for debug
docker exec -it my-app /bin/bash

5. Dockerfile for Django Application

Django applications usually require more native dependencies (like libpq for PostgreSQL). To learn more about Django applications, visit https://www.djangoproject.com/start/.

Use multi-stage builds with a virtual environment:

# === STAGE 1: BUILD ===
FROM python:3.11-slim AS builder

WORKDIR /app

# Install system dependencies for build
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Create the virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Install Python application dependencies
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# === STAGE 2: PRODUCTION ===
FROM python:3.11-slim AS production

# Install runtime dependencies
RUN apt-get update && apt-get install -y \
    libpq5 \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the virtual environment from the build container
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Create a non-root user
RUN groupadd -r django && useradd -r -g django django

# Set runtime environment variables (PATH was already configured above)
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    # Adjust to your Django settings module
    DJANGO_SETTINGS_MODULE=myproject.settings.production \
    # Default directory for collected static files
    STATIC_ROOT=/app/staticfiles

WORKDIR /app

# Copy the application
COPY . .

# Configure application directory permissions
RUN chown -R django:django /app
USER django

EXPOSE 8000

# Entrypoint script for migrations
COPY --chown=django:django entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "myproject.wsgi:application"]

Practices like creating a dedicated user, defining environment variables, adapting the entrypoint for migration commands, and using Gunicorn integrate security, portability, and performance.
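
The Dockerfile above references an entrypoint.sh that is not shown. A minimal sketch of what it could look like, assuming the myproject layout used in the Dockerfile, follows; adapt it to your own migration and static-file strategy:

#!/bin/sh
set -e

# Apply pending database migrations before starting the server
python manage.py migrate --noinput

# Collect static files into STATIC_ROOT
python manage.py collectstatic --noinput

# Hand control to the CMD defined in the Dockerfile (Gunicorn)
exec "$@"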

Best Practices

Image Optimization

  1. Use minimal base images (alpine, slim): They are smaller and expose a smaller attack surface.
  2. Use multi-stage builds: Separate build containers from runtime containers.
  3. Minimize layers: Combine RUN commands when possible.
  4. Create .dockerignore: Exclude unnecessary files.

Example:

# .dockerignore
.git
.gitignore
README.md
.env
.nyc_output
coverage
node_modules
npm-debug.log
Dockerfile*
docker-compose*
.dockerignore

Security

  1. Non-root user: Always run as a non-privileged user.
  2. Minimize the attack surface: Use minimal images.
  3. Scan for vulnerabilities: Use Docker Scout (the successor to docker scan) or similar tools.
  4. Secrets management: Use Docker secrets or runtime environment variables, never values baked into the image (see the example below).
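
For instance, secrets can be injected at runtime instead of being baked into the image (file and variable names below are illustrative):

# Pass a single secret through an environment variable at runtime
docker run -d -e DB_PASSWORD="$(cat ./secrets/db_password)" my-app:latest

# Or load a set of variables from a file that is excluded by .dockerignore
docker run -d --env-file .env.production my-app:latest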

Performance

  1. Layer caching: Copy dependency files first.
  2. Health checks: Implement health checks whenever possible.
  3. Resource limits: Configure CPU and memory limits.
# Run with resource limits
docker run -d \
  --memory="512m" \
  --cpus="1.0" \
  -p 8080:8080 \
  my-app:latest

Useful Docker Commands

# List containers
docker ps -a

# Remove stopped containers
docker container prune

# List images
docker images

# Remove unused images
docker image prune

# View logs in real-time
docker logs -f container-name

# Execute command in the container
docker exec -it container-name /bin/sh

# Inspect container
docker inspect container-name

# Check resource usage
docker stats container-name

Conclusion

Dockerizing Java and Python applications is more than just packaging code into containers—it's about building predictable, scalable, and reproducible environments to promote faster and more secure deliveries. This offers many benefits, such as portability, isolation, ease of deployment, and scalability. The techniques presented in this article, especially multi-stage builds, are fundamental for creating efficient and secure images.

There is no silver bullet for Dockerizing and running applications in production; each language and framework has its peculiarities. Java applications require attention to compilation, the choice between JDK and JRE, and JVM optimizations. Python, being an interpreted language, depends heavily on the correct management of dependencies and native libraries.

Always remember to:

  • Use appropriate and lean base images.
  • Implement multi-stage builds for production.
  • Follow security practices appropriate to the language and framework you are using.
  • Optimize layers and leverage Docker cache.
  • Monitor performance, memory, and logs at runtime.

I hope I've helped you prepare to Dockerize any Java or Python application efficiently, deliver containers ready for production, and face the next step: Automating builds, pushes, and deployments with CI/CD pipelines.
