Finny Collins
How to use mysqldump — the only MySQL backup guide you need

mysqldump is the command-line backup utility that ships with every MySQL installation. It produces logical backups — SQL statements that can recreate your databases, tables and data from scratch. If you've ever run MySQL in production, you've almost certainly used it at some point, even if just to grab a quick dump before running a migration.

The tool looks straightforward. You point it at a database, it spits out SQL. But there are plenty of details that trip people up — locking behavior, character sets, large database performance, restore gotchas. This guide walks through all of it with real examples you can copy and use today.


What mysqldump does and how it works

mysqldump connects to a running MySQL server and reads database contents, then outputs SQL statements that can reproduce the schema and data. For InnoDB tables (which is what most people use), it can take a consistent snapshot using --single-transaction, which starts a repeatable-read transaction and dumps everything without locking tables. Other connections keep reading and writing as usual.

For non-InnoDB tables (MyISAM, MEMORY, etc.), mysqldump needs to acquire locks to get a consistent view. It uses LOCK TABLES by default, which blocks writes during the dump. This distinction matters — if you have a mix of storage engines, the InnoDB tables get dumped without blocking, but MyISAM tables will cause brief write locks.

Unlike physical backup tools that copy raw data files, mysqldump works at the SQL level. This makes the output portable across MySQL versions and operating systems: a dump from MySQL 5.7 on Linux generally restores into MySQL 8 on macOS, needing at most minor syntax adjustments. The tradeoff is speed — reading and serializing every row through SQL is slower than copying files directly.

Basic syntax and your first backup

The simplest mysqldump command:

mysqldump -u root -p mydb > mydb_backup.sql

This connects to the mydb database as the root user and writes plain SQL to a file. The -p flag prompts for a password interactively. The output file contains CREATE TABLE statements followed by INSERT statements that reconstruct your data.
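The head of such a dump looks roughly like this — a sketch with a hypothetical users table; the real file also includes version-specific comments and session variable settings:

```sql
-- Header comments record the mysqldump and server versions.
-- CREATE TABLE rebuilds the schema, INSERT reloads the rows.
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
  `id` int NOT NULL AUTO_INCREMENT,
  `email` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

INSERT INTO `users` VALUES (1,'alice@example.com'),(2,'bob@example.com');
```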

To avoid the password prompt in scripts, you can pass the password inline (not recommended for security reasons) or use an options file:

mysqldump -u backup_user -pmysecretpassword -h localhost mydb > mydb_backup.sql

The better approach is a MySQL options file. Create ~/.my.cnf:

[mysqldump]
user=backup_user
password=mysecretpassword
host=localhost

Set permissions so only your user can read it:

chmod 600 ~/.my.cnf

Now mysqldump picks up credentials automatically:

mysqldump mydb > mydb_backup.sql
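One subtlety worth knowing: a [mysqldump] section applies only to the mysqldump binary. If you want the mysql client — which you'll use later for restores — to pick up the same credentials, put them in a [client] section instead, which all the standard MySQL command-line tools read:

```ini
[client]
user=backup_user
password=mysecretpassword
host=localhost
```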

For remote servers, specify host and port explicitly:

mysqldump -u backup_user -h db.example.com -P 3306 production > production_backup.sql

Single-transaction vs lock-based backups

This is probably the most important thing to understand about mysqldump. The behavior differs depending on your storage engine, and getting it wrong can lock your production database.

For InnoDB tables, always use --single-transaction:

mysqldump --single-transaction -u root -p mydb > mydb_backup.sql

This starts a long-running transaction with REPEATABLE READ isolation and dumps all data within that snapshot. No table locks, no blocked writes. Your application keeps running normally during the backup.
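Under the hood, this amounts to roughly the following statements issued before any data is read — an approximation, not the literal client output:

```sql
-- Approximately what mysqldump --single-transaction runs first:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- ...every table is then read within this one snapshot...
```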

Without --single-transaction, mysqldump uses LOCK TABLES to get consistency. This acquires read locks on every table being dumped, which blocks all writes until the dump finishes. On a large database, that can mean minutes of downtime.

For mixed-engine databases (InnoDB plus MyISAM), you have a decision to make. --single-transaction only guarantees consistency for InnoDB tables. MyISAM tables could change during the dump. If you need full consistency across all tables, use --lock-all-tables instead, but know that it blocks everything:

mysqldump --lock-all-tables -u root -p mydb > mydb_backup.sql

In practice, most modern MySQL installations are pure InnoDB, so --single-transaction is the right choice for the vast majority of cases.

Essential mysqldump flags

mysqldump has a lot of options. Here are the ones you'll actually use.

| Flag | What it does | Example |
| --- | --- | --- |
| --single-transaction | Consistent dump without locks (InnoDB) | mysqldump --single-transaction mydb |
| --all-databases | Dump every database on the server | mysqldump --all-databases |
| --databases db1 db2 | Dump specific databases | mysqldump --databases shop analytics |
| --routines | Include stored procedures and functions | mysqldump --routines mydb |
| --triggers | Include triggers (on by default) | mysqldump --triggers mydb |
| --events | Include scheduled events | mysqldump --events mydb |
| --no-data | Schema only, no row data | mysqldump --no-data mydb |
| --no-create-info | Data only, no CREATE statements | mysqldump --no-create-info mydb |
| --quick | Don't buffer rows in memory (on by default via --opt) | mysqldump --quick mydb |
| --set-gtid-purged=OFF | Skip GTID statements (useful for imports to non-replication setups) | mysqldump --set-gtid-purged=OFF mydb |
| --column-statistics=0 | Disable the column statistics query (fixes errors against older servers) | mysqldump --column-statistics=0 mydb |

A few of these deserve more context.

--routines and --events are not included by default. If your database uses stored procedures, functions or scheduled events, you need to explicitly ask for them. This catches people off guard — they dump, restore, and then wonder why their stored procedures vanished.

--set-gtid-purged=OFF is something you'll need when restoring to a server that doesn't use GTID replication. Without it, the dump includes SET @@GLOBAL.GTID_PURGED statements that fail on servers without GTID enabled.

--column-statistics=0 is a workaround for MySQL 8 client dumping a MySQL 5.7 server. The client tries to query column statistics that don't exist on 5.7. This flag disables that.

Backing up specific tables and databases

You don't always need to dump everything. mysqldump lets you target specific tables or databases.

Back up a single table:

mysqldump --single-transaction -u root -p mydb orders > orders_backup.sql

Back up multiple tables from the same database:

mysqldump --single-transaction -u root -p mydb orders order_items customers > related_tables.sql

Back up multiple databases:

mysqldump --single-transaction --databases shop analytics reporting > multiple_dbs.sql

The --databases flag is important. When you use it, the output includes CREATE DATABASE and USE statements, making the dump self-contained. Without it, you just get the table definitions and data, and you need to create the target database yourself before restoring.
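The difference shows up at the top of the dump. With --databases, each database section begins with roughly:

```sql
-- Present only when --databases or --all-databases is used
CREATE DATABASE IF NOT EXISTS `shop` DEFAULT CHARACTER SET utf8mb4;
USE `shop`;
```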

Back up everything on the server:

mysqldump --single-transaction --all-databases --routines --events > full_server_backup.sql

Exclude specific tables with --ignore-table (note: requires database.table format):

mysqldump --single-transaction mydb --ignore-table=mydb.huge_audit_log --ignore-table=mydb.session_data > backup_clean.sql

The --ignore-table syntax is a bit verbose — you have to repeat the database name for each excluded table. No wildcards either. If you need to exclude many tables, listing them all gets tedious, but that's how it works.
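One way to cut the repetition is to build the flag list in a small shell loop. A sketch with hypothetical table names — the last line echoes the assembled command so you can inspect it; drop the echo to actually run it:

```shell
DB="mydb"
EXCLUDED_TABLES="huge_audit_log session_data temp_import"

# Build one --ignore-table=db.table flag per excluded table
IGNORE_ARGS=()
for t in $EXCLUDED_TABLES; do
  IGNORE_ARGS+=("--ignore-table=${DB}.${t}")
done

# Echo first to verify the generated flags; remove the echo to run for real
echo mysqldump --single-transaction "$DB" "${IGNORE_ARGS[@]}"
```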

Compression

mysqldump doesn't have built-in compression. The output is always plain SQL text. To compress, you pipe through an external tool.

gzip (most common):

mysqldump --single-transaction -u root -p mydb | gzip > mydb_backup.sql.gz

zstd (faster, better compression ratios):

mysqldump --single-transaction -u root -p mydb | zstd > mydb_backup.sql.zst

xz (maximum compression, slower):

mysqldump --single-transaction -u root -p mydb | xz > mydb_backup.sql.xz

The compression ratio on SQL dumps is typically excellent because SQL text is highly repetitive. A 10GB uncompressed dump might compress to 1-2GB with gzip, or even less with zstd. For most production use, gzip or zstd are the best choices — they compress well without being painfully slow.

| Method | Speed | Compression ratio | Best for |
| --- | --- | --- | --- |
| gzip | Moderate | Good (~5-8x) | General use, wide compatibility |
| zstd | Fast | Very good (~6-10x) | Large databases, modern systems |
| xz | Slow | Excellent (~8-12x) | Archival, storage-constrained |

If you're compressing very large dumps, zstd with a higher compression level is probably the sweet spot:

mysqldump --single-transaction -u root -p mydb | zstd -9 > mydb_backup.sql.zst

Restoring from mysqldump backups

Restoring a mysqldump backup means replaying the SQL statements through the mysql client. Unlike PostgreSQL's pg_restore, there's no separate restore tool for MySQL. It's just piping SQL into the client.

Basic restore:

mysql -u root -p mydb < mydb_backup.sql

Restore a compressed backup:

gunzip < mydb_backup.sql.gz | mysql -u root -p mydb

Or with zstd:

zstd -d < mydb_backup.sql.zst | mysql -u root -p mydb

Restore a dump that includes CREATE DATABASE (created with --databases or --all-databases):

mysql -u root -p < full_backup.sql

No need to specify a database name here because the dump itself contains CREATE DATABASE and USE statements.

Clean restore (drop and recreate first):

mysql -u root -p -e "DROP DATABASE IF EXISTS mydb; CREATE DATABASE mydb;"
mysql -u root -p mydb < mydb_backup.sql

Speed up large restores. By default, MySQL commits after each statement and rebuilds indexes incrementally. For large restores, disabling some safety features temporarily can help:

mysql -u root -p mydb -e "
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
SET GLOBAL sync_binlog = 0;
SET GLOBAL foreign_key_checks = 0;
SET GLOBAL unique_checks = 0;
"

mysql -u root -p mydb < mydb_backup.sql

mysql -u root -p mydb -e "
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;
SET GLOBAL foreign_key_checks = 1;
SET GLOBAL unique_checks = 1;
"

This disables fsync on every commit, turns off binary log syncing, and skips foreign key and uniqueness checks during the load. It can make a large restore 2-5x faster. But remember to reset these values afterward and only use this approach when you trust the dump file.
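If you'd rather not flip server-wide settings, a gentler variant prepends session-scoped toggles to the dump itself: they affect only the restoring connection and reset automatically when it disconnects. Note this only covers foreign_key_checks and unique_checks — innodb_flush_log_at_trx_commit and sync_binlog are global-only and still need SET GLOBAL. A sketch, with a stand-in dump file so it is self-contained:

```shell
# Stand-in for a real mysqldump output so the example runs on its own
printf 'INSERT INTO t VALUES (1);\n' > mydb_backup.sql

# Session-scoped toggles apply only to the restoring connection
# and revert automatically when it closes
{
  echo 'SET SESSION foreign_key_checks = 0;'
  echo 'SET SESSION unique_checks = 0;'
  cat mydb_backup.sql
} > mydb_restore.sql

# Then restore as usual:
# mysql -u root -p mydb < mydb_restore.sql
```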

One thing that surprises people: mysqldump restores are single-threaded. There's no built-in parallel restore like PostgreSQL's pg_restore -j. For very large databases, this becomes a real bottleneck. Tools like myloader (from the mydumper project) offer parallel restore, but that's outside the scope of plain mysqldump.

Automating mysqldump with cron

Manual backups are fine for development. Production needs automation. Here's a practical backup script:

#!/bin/bash
set -euo pipefail  # abort if mysqldump or gzip fails, instead of saving a truncated dump

BACKUP_DIR="/var/backups/mysql"
DB_NAME="production"
DB_USER="backup_user"
DB_HOST="localhost"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

mkdir -p "$BACKUP_DIR"

mysqldump --single-transaction --routines --events \
    -u "$DB_USER" -h "$DB_HOST" "$DB_NAME" \
    | gzip > "$BACKUP_DIR/${DB_NAME}_${TIMESTAMP}.sql.gz"

find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete

Schedule it to run daily at 3 AM:

crontab -e
# Add this line:
0 3 * * * /opt/scripts/backup_mysql.sh >> /var/log/mysql_backup.log 2>&1
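If a dump ever runs longer than the interval between cron runs, overlapping copies can pile up. Assuming util-linux flock is available, wrapping the script in the crontab entry skips a run while the previous one still holds the lock:

```shell
# -n = non-blocking: exit immediately instead of queueing behind the lock
0 3 * * * flock -n /var/lock/mysql_backup.lock /opt/scripts/backup_mysql.sh >> /var/log/mysql_backup.log 2>&1
```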

This covers the basics. But as soon as you need more — backup verification, failure alerts, uploading to cloud storage, encryption, team visibility — the shell script approach starts falling apart. You end up writing more and more glue code, and each piece is another thing that can silently break.
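A first step toward verification can still live in shell: check that the archive is non-trivially sized and that gzip can decompress it end to end. A sketch, with a stand-in archive and a deliberately tiny threshold so it is self-contained — a real script would set MIN_BYTES based on expected dump size:

```shell
BACKUP_FILE="mydb_backup.sql.gz"

# Stand-in archive; in the real script this is the file mysqldump just wrote
printf 'CREATE TABLE demo (id INT);\n' | gzip > "$BACKUP_FILE"

# 1. Refuse suspiciously small files -- a failed dump is often near-empty
MIN_BYTES=10
SIZE=$(wc -c < "$BACKUP_FILE")
[ "$SIZE" -ge "$MIN_BYTES" ] || { echo "backup too small: $SIZE bytes"; exit 1; }

# 2. gzip -t decompresses the whole stream and verifies its checksum
gzip -t "$BACKUP_FILE" || { echo "backup is corrupt"; exit 1; }

echo "backup passed basic checks"
```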

Limitations of mysqldump

mysqldump is the standard tool, but it has real constraints:

  • No parallel dump. It's single-threaded. Large databases take a long time to dump, and there's no way to split the work across CPU cores like PostgreSQL's pg_dump -Fd -j.
  • No incremental backups. Every run dumps everything. Changed one row in a 200GB database? You're dumping all 200GB again.
  • Plain SQL only. No binary archive format, no selective restore from a single file. If you need one table from a 50GB dump, you have to parse through the entire file or use text tools to extract it.
  • Single-threaded restore. Restoring large backups is slow, and there's no built-in parallel option.
  • No point-in-time recovery. A mysqldump backup captures one moment. Combined with binary logs, you can get PITR, but mysqldump alone doesn't do it.
  • Memory usage on large tables. Without --quick, mysqldump buffers entire result sets in memory before writing. It is enabled by default via --opt, but scripts that pass --skip-opt without re-adding --quick lose that protection.

For databases under 10-20GB with daily backup schedules and acceptable RPO, mysqldump works well. But once your data grows or your availability requirements tighten, you'll want something more capable.

Databasus — a simpler way to handle MySQL backups

By this point, the pattern is clear. mysqldump handles the core dump, but the operational layer around it — scheduling, compression, retention, cloud uploads, encryption, notifications — is on you. That's a lot of bash scripting and a lot of failure modes.

Databasus is an open-source tool for MySQL backup automation. It's self-hosted, runs in Docker and provides a web dashboard for managing backups across databases. It works for individuals running a single MySQL instance and for teams managing dozens of databases with compliance requirements.

Installing Databasus

Docker run:

docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest

Docker Compose:

services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped
Then start the container:

docker compose up -d

Creating your first backup

Once Databasus is running, open http://localhost:4005 in your browser and follow these steps:

  1. Add your database. Click "New Database" and enter your MySQL connection details — host, port, database name and credentials. Databasus validates the connection before saving.

  2. Select storage. Choose where backups go. Options include local disk, Amazon S3, Cloudflare R2, Google Drive, SFTP, Dropbox and more. For cloud storage, enter your credentials and Databasus handles the upload automatically after each backup.

  3. Select schedule. Pick a backup frequency — hourly, daily, weekly, monthly or a custom cron expression. Set the exact time to run during low-traffic windows.

  4. Click "Create backup". Databasus validates the configuration and starts the schedule. Compression, encryption, retention policies and failure notifications all work out of the box without writing any scripts.

Databasus supports MySQL 5.7 through 9, handles AES-256-GCM encryption, provides GFS retention policies and sends notifications via Slack, Telegram, Discord, email or webhooks.

Getting started

For quick dumps or small databases, mysqldump with --single-transaction and gzip is all you need:

mysqldump --single-transaction --routines --events -u root -p mydb | gzip > backup.sql.gz

Restore with:

gunzip < backup.sql.gz | mysql -u root -p mydb

Learn the flags, understand the locking behavior and practice restoring. A backup you've never restored from is just a file taking up disk space. And when you outgrow cron scripts and need scheduled backups, cloud storage, encryption, team access and monitoring, Databasus picks up where mysqldump leaves off.
