Finny Collins
How to backup MySQL in Docker — 5 strategies that actually work

Running MySQL in Docker is easy to set up. Backing it up properly is where most people stumble. Containers are ephemeral by design, and a docker rm on the wrong container can wipe your data if you don't have a backup strategy in place. The default Docker setup doesn't do anything to protect your MySQL data beyond a named volume.

This article walks through five strategies for backing up MySQL in Docker. They range from quick manual dumps to fully automated solutions with remote storage and monitoring.

1. mysqldump via docker exec

The most common way to back up MySQL in Docker is running mysqldump inside the container itself. You don't need to expose any ports or install MySQL tools on the host. Docker gives you everything you need with docker exec.

Here's the basic command:

docker exec mysql-container mysqldump \
  -u root -p'yourpassword' \
  --single-transaction \
  --routines \
  --triggers \
  mydatabase > backup_$(date +%Y%m%d_%H%M%S).sql

The --single-transaction flag is critical for InnoDB tables. It takes a consistent snapshot without locking tables, so your application keeps running normally during the backup. The --routines and --triggers flags capture stored procedures and triggers that mysqldump skips by default.

To back up all databases at once:

docker exec mysql-container mysqldump \
  -u root -p'yourpassword' \
  --single-transaction \
  --all-databases > full_backup_$(date +%Y%m%d_%H%M%S).sql

Restoring is straightforward:

docker exec -i mysql-container mysql \
  -u root -p'yourpassword' mydatabase < backup_20260403_040000.sql

This works well for development and small databases where you're running backups by hand. It's simple, requires no extra setup and gives you a portable SQL file. But it's entirely manual: there's no scheduling, no compression and no remote storage.
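Plain SQL dumps compress very well, and the compression gap is easy to close by hand. The same docker exec dump can be piped straight through gzip on the host, so the uncompressed file never touches disk (a sketch; the container name, credentials and filenames are placeholders):

```shell
# Dump inside the container, compress on the host as the bytes arrive.
docker exec mysql-container mysqldump \
  -u root -p'yourpassword' \
  --single-transaction \
  mydatabase | gzip > "backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Restore by decompressing on the way back in.
gunzip < backup_20260403_040000.sql.gz | \
  docker exec -i mysql-container mysql -u root -p'yourpassword' mydatabase
```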

2. mysqldump from the host machine

If your MySQL container exposes its port to the host, you can run mysqldump from the host machine instead of going through docker exec. This requires a MySQL client installed locally and a port mapping in your container configuration. It's essentially the same dump operation, just initiated from outside the container.

Your Docker Compose file needs to map the port:

services:
  mysql:
    image: mysql:8
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: yourpassword
    volumes:
      - mysql-data:/var/lib/mysql

Then run mysqldump from the host:

mysqldump -h 127.0.0.1 -P 3306 \
  -u root -p'yourpassword' \
  --single-transaction \
  mydatabase > backup.sql

This approach is useful when the host has a different mysqldump version than the container. Some mysqldump flags and behaviors change between MySQL versions, and using the host binary lets you control exactly which version runs. It also integrates more naturally with existing backup scripts that already run on the host.
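To see which versions are actually in play before relying on the host binary, compare the two clients directly (the container name is a placeholder):

```shell
# Host client version vs. the version shipped inside the container.
mysqldump --version
docker exec mysql-container mysqldump --version
```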

The tradeoff is port exposure. In development, that's not a concern. In production, make sure port 3306 is bound to localhost only or sits behind a firewall.
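For example, binding the published port to the loopback interface in the Compose file keeps MySQL reachable from the host itself but not from other machines:

```yaml
services:
  mysql:
    image: mysql:8
    ports:
      - "127.0.0.1:3306:3306"   # reachable only from the host, not the network
```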

3. Backing up Docker volumes directly

Instead of dumping SQL, you can copy the raw MySQL data files from the Docker volume. This is a file-level (physical) backup. For large databases it can be faster than mysqldump because you're copying binary files instead of serializing rows into SQL text.

The critical requirement is that MySQL must be stopped for a consistent copy. Running a file-level backup against a live MySQL instance will almost certainly produce corrupted files.

Stop the container, copy the volume, then start it again:

docker stop mysql-container

docker volume inspect mysql-data --format '{{ .Mountpoint }}'

sudo cp -r /var/lib/docker/volumes/mysql-data/_data ./mysql-volume-backup

docker start mysql-container

If you're using bind mounts instead of named volumes:

docker stop mysql-container
tar czf mysql-backup-$(date +%Y%m%d).tar.gz ./mysql-data/
docker start mysql-container

This copies everything — all databases, user accounts, binary logs and server configuration. Restore means copying files back to the volume and starting the container. It's fast and complete. But the required downtime, even if brief, makes it impractical for production systems that can't afford interruptions.
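A variant worth knowing avoids reaching into /var/lib/docker at all: mount the named volume into a throwaway helper container and tar it from there. This is a sketch — volume and container names are placeholders, and MySQL must still be stopped first:

```shell
docker stop mysql-container

# Mount the volume read-only into a one-off Alpine container and tar it
# into the current directory on the host.
docker run --rm \
  -v mysql-data:/volume:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/mysql-data-$(date +%Y%m%d).tar.gz" -C /volume .

# Restore: untar the archive back into the volume, then start MySQL.
docker run --rm \
  -v mysql-data:/volume \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/mysql-data-20260403.tar.gz -C /volume

docker start mysql-container
```

This also works on hosts where you can't (or don't want to) use sudo to read Docker's internal directories.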

4. Cron-based automated mysqldump

The three strategies above are all manual. Someone has to remember to run the command. For production, you need backups running automatically on a schedule without human intervention.

The classic approach is wrapping mysqldump in a shell script and scheduling it with cron. Here's a script that handles compression, timestamps and basic retention:

#!/bin/bash
# Without pipefail, the exit status of the dump pipeline below would come
# from gzip, so a failed mysqldump could still look like a success.
set -o pipefail

BACKUP_DIR="/opt/backups/mysql"
CONTAINER="mysql-container"
DB_USER="root"
DB_PASS="yourpassword"
DATABASE="mydatabase"
RETENTION_DAYS=7

mkdir -p "$BACKUP_DIR"

FILENAME="$BACKUP_DIR/${DATABASE}_$(date +%Y%m%d_%H%M%S).sql.gz"

docker exec "$CONTAINER" mysqldump \
  -u "$DB_USER" -p"$DB_PASS" \
  --single-transaction \
  --routines \
  --triggers \
  "$DATABASE" | gzip > "$FILENAME"

if [ $? -eq 0 ]; then
  echo "Backup completed: $FILENAME"
else
  echo "Backup failed!" >&2
  exit 1
fi

find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete

Schedule it with cron to run daily at 4 AM:

0 4 * * * /opt/scripts/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1

This gets the job done for a single server with a single database. But it has real limitations:

  • No alerting when backups fail silently — you won't know unless you check logs
  • No built-in remote storage — backups live and die with the server
  • Managing multiple databases means duplicating and maintaining separate scripts

For a small side project, this might be enough. For anything you'd lose sleep over, the gaps start to matter.

5. Automated backup with Databasus

Databasus is a dedicated backup tool that supports MySQL among other databases. It handles scheduling, compression, remote storage, encryption and monitoring through a web interface. No shell scripts to maintain, no cron jobs to debug.

Install Databasus

With Docker:

docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest

Or with Docker Compose. Create a docker-compose.yml:

services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped

Then run:

docker compose up -d

Create your first backup

Open http://localhost:4005 in your browser and follow these steps:

  1. Add your database. Click "New Database" and enter your MySQL connection details — host, port, username and password. Databasus validates the connection before saving.
  2. Select storage. Choose where backups should go. Databasus supports local disk, S3, Cloudflare R2, Google Drive, SFTP and other targets through Rclone.
  3. Select schedule. Pick a backup frequency — hourly, daily, weekly, monthly or a custom cron expression. Set the exact time you want backups to run.
  4. Click "Create backup." Databasus validates your configuration and starts running backups on the schedule you defined. You'll get notifications through Slack, Telegram, email or Discord if something goes wrong.

Databasus also supports retention policies including time-based, count-based and GFS (Grandfather-Father-Son) for layered long-term history. Backup files are encrypted with AES-256-GCM. For teams and enterprise users, there are workspaces with role-based access control and audit logging to track who did what across your backup infrastructure.

Comparing the 5 strategies

Each strategy fits a different situation. Here's how they stack up across the features that matter most when your data is on the line:

| Strategy | Setup effort | Automated | Compression | Remote storage | Monitoring |
| --- | --- | --- | --- | --- | --- |
| mysqldump via docker exec | Minimal | No | Manual | No | No |
| mysqldump from host | Low | No | Manual | No | No |
| Docker volume backup | Medium | No | Manual | No | No |
| Cron + mysqldump script | Medium | Yes | Script-based | No | No |
| Databasus | Low | Yes | Built-in | Yes | Yes |

The first three strategies are good for manual, one-off backups during development or emergencies. Strategy 4 adds scheduling but leaves you responsible for everything else. Strategy 5 covers the full picture without custom scripting.

Common mistakes when backing up MySQL in Docker

Even with a solid strategy in place, there are recurring mistakes that catch people off guard. These aren't edge cases. They show up in production incidents regularly and they're all preventable.

  • Skipping --single-transaction. Without it, mysqldump acquires table-level locks during the dump. Your application stalls while the backup runs. For InnoDB tables this flag gives you a consistent snapshot without blocking writes.
  • Never testing restores. A backup you've never restored is a backup you can't trust. Schedule periodic test restores on a throwaway environment. It takes 10 minutes and can save you hours during a real incident.
  • Keeping backups only on the database server. If the server goes down, backups go with it. Always store at least one copy on remote storage — S3, a second VPS, anything off the same machine.
  • Running file-level copies on a live MySQL instance. Copying data files while MySQL is running almost always produces corrupted backups. Stop the container first or use a dump-based approach instead.
  • Storing database credentials in plain text. Backup scripts often contain passwords in the clear. Use environment variables, Docker secrets or a credentials file with restricted permissions instead.
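A periodic test restore can be scripted against a throwaway container, so it costs almost nothing to run. This is a sketch; the image tag, password, database name and dump file are placeholders:

```shell
# Spin up a scratch MySQL instance just for the restore test.
docker run -d --name restore-test \
  -e MYSQL_ROOT_PASSWORD=testpass mysql:8

# Wait until the server accepts connections.
until docker exec restore-test mysqladmin ping -uroot -ptestpass --silent; do
  sleep 2
done

# Single-database dumps don't include CREATE DATABASE, so create it first.
docker exec restore-test mysql -uroot -ptestpass \
  -e 'CREATE DATABASE mydatabase'
gunzip < latest_backup.sql.gz | \
  docker exec -i restore-test mysql -uroot -ptestpass mydatabase

# Sanity check, then throw the container away.
docker exec restore-test mysql -uroot -ptestpass -e 'SHOW TABLES' mydatabase
docker rm -f restore-test
```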

Which strategy should you pick?

The right approach depends on what you're protecting and how much maintenance you're willing to take on. Here's a rough guide:

| Use case | Recommended strategy | Reason |
| --- | --- | --- |
| Local development | mysqldump via docker exec | Quick, no setup overhead |
| Staging environment | Cron + mysqldump | Basic automation, acceptable risk |
| Small production database | Databasus | Monitoring and remote storage matter once data matters |
| Large production database | Databasus | Built-in compression and storage integration at scale |
| Team or enterprise | Databasus | Access management, audit logs and role-based permissions |
For anything you'd actually need to recover from, automate your backups and store them somewhere other than the database server. That's the principle that matters most, regardless of which specific tool you choose.
