
Ayooluwa Isaiah for Dash0

Originally published at dash0.com

Mastering Docker Logs: A Comprehensive Tutorial

You've just deployed a new feature. It's not on fire, but it's not quite right either. An API response is missing a field, and performance seems a bit off. Where do you begin to unravel the mystery? You start with the logs.

In a containerized environment, however, logging isn't always straightforward. Logs are ephemeral, dispersed across multiple containers, and can grow unmanageable without the right strategy.

This guide covers everything you need to know about Docker logs. We'll start with the simplest commands to view logs in real-time and progress to designing a robust, production-grade logging strategy for your entire containerized infrastructure.

Let's get started!

Quick start: the docker logs cheat sheet

For when you need answers now. Here are the most common commands you'll use every day.

  • View all logs for a container: docker logs <container>
  • Follow logs in real-time (tail): docker logs -f <container>
  • Show the last 100 lines: docker logs --tail 100 <container>
  • Show logs from the last 15 minutes: docker logs --since 15m <container>
  • View logs for a Docker Compose service: docker compose logs <service>
  • Follow logs for all Compose services: docker compose logs -f

Mastering the docker logs command

The docker logs command is your primary tool for inspecting container output. To get all logs currently stored for a container, simply provide its name or ID:

docker logs <container_name_or_id>

This dumps the entire log history of the specified container to your terminal, which is probably not what you're after.

For a container that's been running for a while, or one that's particularly noisy, this can mean scrolling through thousands of lines of output.

To pinpoint the information you need, you can use Docker's built-in filtering flags to narrow the output by time or by the number of lines.

Let's explore the most useful options next. Note that all options must come before the container name or ID:

docker logs [<options>] <container_name_or_id>
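If you don't remember a container's name or ID, docker ps lists everything that's running; a --format template (the one below is just one possible layout) trims the output to the columns you need:

docker ps --format 'table {{.Names}}\t{{.ID}}\t{{.Status}}'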

Filtering logs by time (--since and --until)

For more precise debugging, you can retrieve logs from a specific time frame using the following options:

  • --since: Shows logs generated after a specified point in time.
  • --until: Shows logs generated before a specified point in time.

With either flag, you can provide a relative time (like 10m for 10 minutes, 3h for 3 hours) or an absolute timestamp (such as 2025-06-13T10:30:00).

# Show logs from the last 30 minutes
docker logs --since 30m my-database
# Show logs from this morning, before 10 AM
docker logs --until 2025-06-13T10:00:00 my-database

You can also combine the two:

docker logs --since 2025-06-13T18:00:00 --until 2025-06-13T18:15:00 <container_name_or_id>

Tailing Docker container logs

While filtering helps you analyze past events, the most common task during live debugging is to see what's happening right now. For this, you need to "tail" the logs to get a real-time stream of the container's output.

To enable log tailing from a container, use the -f or --follow flag:

docker logs -f <container_name_or_id>

Note that unlike the tail -f command often used with log files, docker logs -f will first print the container's entire log history before it starts streaming new entries. The standard tail command, by contrast, only shows the last 10 lines by default.

For a container with a long history, this initial dump of information can be overwhelming. The most common and effective pattern is to combine --follow with --tail (or its shorthand -n). This gives you the best of both worlds: a small amount of recent history for context, followed by the live stream.

docker logs -f --tail 100 <container_name_or_id>

This command shows the last 100 lines for context and then streams any new logs in real-time. When you're ready to stop streaming, press Ctrl+C.

Searching Docker container logs

The docker logs command doesn't have a built-in search feature, but you can easily pipe its output to standard shell utilities like grep:

docker logs <container_name_or_id> | grep -i "ERROR"
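One thing to keep in mind: docker logs replays the container's stdout and stderr on your terminal's corresponding streams, so a plain pipe only passes stdout to grep. Redirecting stderr into stdout first ensures error output is searched too, and the same trick works while following:

# Search both stdout and stderr, and keep following new entries
docker logs -f <container_name_or_id> 2>&1 | grep -i "error"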

Managing logs in Docker Compose

A huge amount of Docker development happens with Docker Compose. Managing logs here is just as easy. The key is to use docker compose logs instead of docker logs.

The usage syntax is:

docker compose logs [options] [service...]

Where [service...] is an optional list of service names. The key concept to grasp is that a single service can be scaled to run across multiple containers.

When you request logs for a service, Docker Compose automatically aggregates the output from all containers belonging to that service.
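For example, if you scale a service to three replicas with docker compose up -d --scale image-provider=3 (reusing the image-provider service from the examples below), a single command still shows the combined output of all three containers, each line prefixed with its container name:

docker compose logs --tail 20 image-provider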

Let's look at a few common usage patterns.

Viewing logs for a single service

To see logs from just one service defined in your Compose file, specify the service name:

docker compose logs image-provider

You can also specify multiple service names:

docker compose logs image-provider shipping otel-collector

Docker Compose will color-code the output by service, making it easy to follow:

[Image: docker compose logs output, color-coded by service]

For ease of copying and pasting log lines, you'll want to include the --no-log-prefix flag:

docker compose logs --no-log-prefix <services>

Viewing logs for all services

To see an interleaved stream of logs from all services in your stack, run the command without a service name:

docker compose logs

Tailing and filtering

All the flags you learned for docker logs for tailing and filtering work with docker compose logs too:

docker compose logs --follow --tail 10 image-provider cart
docker compose logs --since '10m' db

Inspecting Docker logs with a GUI

If you prefer a graphical interface, these tools provide excellent alternatives to the command line.

Docker Desktop

The built-in dashboard in Docker Desktop has a Logs tab for any running container. It provides a simple, real-time view with basic search functionality.

[Image: Docker Desktop showing OpenTelemetry Collector logs]

Dozzle

Dozzle is a lightweight, web-based log viewer with a slick interface. It's incredibly easy to run as a Docker container itself:

docker run -d --name dozzle \
    -p 8888:8080 \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    amir20/dozzle:latest

Navigate to http://localhost:8888 in your browser to get a real-time view of all your container logs.

[Image: Viewing Docker logs in Dozzle]

Understanding how Docker logging works

Docker is designed to capture the standard output (stdout) and standard error (stderr) streams from the main process running inside a container. This means any console output from your application is automatically collected as logs.

A logging driver acts as the backend for these logs. It receives the streams from the container and determines what to do with them: store them in a file, forward them to a central service, or discard them.

The default logging driver is json-file. It captures the log streams and writes them to a JSON file on the host machine, typically located at /var/lib/docker/containers/<container-id>/<container-id>-json.log.

You can find the path to this file for any container:

docker inspect -f '{{.LogPath}}' <container_name_or_id>

Example output:

/var/lib/docker/containers/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac-json.log
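If you open that file, you'll find one JSON object per log message. A representative line looks like this (the values here are illustrative):

{"log":"Listening on port 8080\n","stream":"stdout","time":"2025-06-13T10:30:00.123456789Z"}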

Choosing a logging driver

While json-file is the default, Docker supports a variety of other logging drivers to suit different needs:

  • none: Disables logging entirely. Useful when logs are unnecessary or handled externally.

  • local: Recommended for most use cases. It offers better performance and more efficient disk usage than json-file.

  • syslog: Sends logs to the system's syslog daemon.

  • journald: Writes log output to the systemd journal.

  • fluentd, gelf, awslogs, gcplogs, etc.: Forward logs to external logging services or cloud platforms for centralized aggregation and analysis.

Configuring logging drivers

You configure Docker's logging defaults by editing the Docker daemon's configuration file at /etc/docker/daemon.json. If the file doesn't exist, create it first.

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "4",
    "compress": "true"
  }
}

The json-file driver's most significant drawback is that it does not rotate logs by default. Over time, these log files will grow indefinitely, which can consume all available disk space and crash your server.

The configuration above addresses this by telling Docker to:

  • Rotate log files when they reach 50MB (max-size).
  • Keep a maximum of four old log files (max-file).
  • Compress the rotated log files to save space (compress).

For most use cases, the local driver is a better choice than json-file. It uses a more efficient file format and has sensible rotation defaults built-in. You can configure it as follows:

{
  "log-driver": "local"
}

By default, the local driver retains 100MB of logs per container (as five 20MB files). You can customize this using the same log-opts as the json-file driver.
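For example, to tighten that cap (the values below are arbitrary), set the same options you saw earlier:

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "25m",
    "max-file": "3"
  }
}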

To configure other drivers like fluentd, syslog, or journald, consult the Docker logging documentation for their unique set of options.

After editing daemon.json, you must restart the Docker daemon for the changes to take effect for newly created containers. Existing containers need to be recreated to adopt the updated configuration.

sudo systemctl restart docker
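To confirm the change took effect, you can ask the daemon for its default driver and check which driver a particular container actually ended up with:

# Default logging driver configured on the daemon
docker info --format '{{.LoggingDriver}}'

# Driver used by a specific container
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container_name_or_id>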

You can override the global logging configuration for specific services directly in your docker-compose.yml file. This is useful for services that require special log handling:

<service_name>:
  image: <image_name>
  logging:
    driver: "local"
    options:
      max-file: "4"
      max-size: "50m"
      compress: "true"

To avoid repetition, you can use YAML anchors to define a logging configuration once and reuse it across multiple services.

x-default-logging: &logging
  driver: "local"
  options:
    max-size: "50m"
    max-file: "4"

services:
  <service_a>:
    logging: *logging

  <service_b>:
    logging: *logging
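Outside of Compose, the per-container equivalent is the --log-driver and --log-opt flags on docker run:

docker run -d \
  --log-driver local \
  --log-opt max-size=50m \
  --log-opt max-file=4 \
  <image_name>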

Understanding Docker's log delivery mode

When your application generates a log, it faces a fundamental choice: should it pause to ensure the log is safely delivered, or should it hand the log off quickly and continue its work? This is the core trade-off managed by Docker's log delivery mode, a crucial setting that lets you tune your logging for either maximum reliability or maximum performance.

Docker supports two modes for delivering logs from your container to the configured logging driver.

1. Blocking mode

In the default blocking mode, log delivery is synchronous. When your application emits a log, it must wait for the Docker logging driver to process and accept that message before it can continue executing.

This approach is best for scenarios where every log message is critical and you are using a fast, local logging driver like local or json-file.

With slower drivers (those that send logs over a network), blocking mode can introduce significant latency and even stall your application if the remote logging service is slow or unreachable.

2. Non-blocking mode

As an alternative, you can configure a non-blocking delivery mode. In this mode, log delivery is asynchronous. When your application emits a log, the message is immediately placed in an in-memory buffer, and your application continues running without any delay. The logs are then sent to the driver from this buffer in the background.

The trade-off for this mode is a risk of losing logs. If the in-memory buffer fills up faster than the driver can process logs, new incoming messages will be dropped.

To mitigate the risk of losing logs in non-blocking mode, you can increase the size of the in-memory buffer from its 1MB default:

{
  "log-driver": "awslogs",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "50m"
  }
}
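These options aren't limited to daemon.json; mode and max-buffer-size can also be set per container through the same log-opts mechanism (the 8m buffer below is an arbitrary example):

docker run -d \
  --log-driver local \
  --log-opt mode=non-blocking \
  --log-opt max-buffer-size=8m \
  <image_name>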

Centralizing Docker logs with OpenTelemetry

While using docker logs or a local log viewer is fine for development, production environments present a different challenge.

Manually accessing logs across multiple hosts doesn't scale and provides a fractured, incomplete picture. To gain visibility into such a system, you need a centralized logging strategy.

Modern applications are dynamic and distributed. Containers are ephemeral: they are created, destroyed, and replaced constantly. A centralized system captures their output, ensuring logs persist long after the container that created them is gone.

By consolidating your logs in an observability platform like Dash0, you gain the ability to perform complex searches across your entire infrastructure, build real-time dashboards to visualize trends, and correlate logs with other telemetry signals like metrics or traces.

One way to ship your Docker logs is via the OpenTelemetry Collector, which supports a variety of ways to collect logs from the host machine. You may be tempted to use the filelog receiver to read container log files, but this is rarely ideal for Docker environments.

A common and more effective approach is to set up fluentd as the Docker logging driver for your services. This lets Docker stream logs directly to a Collector instance without relying on file scraping.

Here's the configuration you need in your daemon.json:

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:8006",
    "tag": "opentelemetry-demo"
  }
}

This streams your container logs to the endpoint specified in fluentd-address and tags each record with opentelemetry-demo. The fluentd driver also attaches metadata to every record (such as container_id, container_name, and source), so you can easily filter out the relevant container logs.
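If you'd rather not change the daemon-wide default, the same driver settings can be scoped to individual services in your Compose file instead (this sketch reuses the address and tag from the daemon.json example above):

services:
  <service_name>:
    image: <image_name>
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "localhost:8006"
        tag: "opentelemetry-demo"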

You can then configure the fluentforward receiver to set up an endpoint at the fluentd-address specified above:

receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006

processors:
  batch:
  resourcedetection/system:
    detectors: [system]
    system:
      hostname_sources: [os]

exporters:
  otlphttp/dash0:
    endpoint: <your_dash0_endpoint>
    headers:
      Authorization: Bearer <your_dash0_token>
      Dash0-Dataset: <your_dash0_dataset>

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      processors: [batch, resourcedetection/system]
      exporters: [otlphttp/dash0]

Once you replace the Dash0 placeholders with your actual account values, you can run the OpenTelemetry Collector through Docker:

# Publish port 8006 so the fluentd logging driver on the host can reach the fluentforward receiver
docker run \
  -p 8006:8006 \
  -v $(pwd)/otelcol.yaml:/etc/otelcol-contrib/config.yaml \
  otel/opentelemetry-collector-contrib:latest

Then you'll start seeing your logs in the Dash0 interface.

[Image: Dash0 interface showing Docker logs]

Troubleshooting common issues with Docker container logs

Docker logging usually works seamlessly, but here are a couple of common issues you might run into, along with how to identify and resolve them.

1. docker logs shows no output

What's happening: Your application likely isn't writing to stdout or stderr. It might be logging directly to a file inside the container instead. Since Docker's logging drivers only capture standard output streams, it won't pick up logs written to internal files.

How to fix it: Ideally, update your application's logging configuration to write directly to stdout and stderr. If modifying the application isn't feasible, you can redirect file-based logs by creating symbolic links to the appropriate output streams in your Dockerfile.

# Example for an Nginx container
RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log

This ensures that even file-based logs are routed through Docker's logging mechanism.

2. Logging driver does not support reading

Error response from daemon: configured logging driver does not support reading

What's happening: Remote logging drivers such as awslogs, splunk, or gelf forward logs directly to an external system without storing anything locally. Normally, Docker caches the logs using its dual logging functionality. However, if this feature is disabled for the container, the docker logs command can't retrieve any output.

How to fix it: You need to ensure cache-disabled is false in the logging options. This tells Docker to send logs to the remote driver and keep a local copy for docker logs to use.

{
  "log-driver": "awslogs",
  "log-opts": {
    "cache-disabled": "false"
  }
}

Final thoughts

You've now journeyed from the basic docker logs command to understanding the critical importance of logging drivers, log rotation, and centralized logging strategies.

By mastering these tools and concepts, you're no longer just guessing when things go wrong. You have the visibility you need to build, debug, and run resilient, production-ready applications.

Whenever possible, structure your application's logs as JSON. A simple text line is hard to parse, but a JSON object with fields like level, timestamp, and message is instantly machine-readable, making your logs infinitely more powerful in any logging platform.
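For example, a structured line like the one below (the field names are just one common convention) can be filtered by level, service, or any other field without brittle regular expressions:

{"level":"error","timestamp":"2025-06-13T10:30:00Z","message":"failed to load product image","service":"image-provider"}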

Thanks for reading!
