Below is a quick and dirty solution for keeping a database synced with a master copy over SSH that you can set up in 30 minutes on existing databases.
It's a much easier setup (but insecure) than a proper real-time MySQL replication one. Additionally, you have full control over the timing of syncing, as you can use a cronjob to initiate the sync process.
The mysqldump commands are optimised to work with MySQL instances running inside a Docker container.
Create a file named make-snapshots.sh with the content below on your master server, then make it executable with chmod +x ./make-snapshots.sh.
#!/bin/bash
# Read the last snapshotted ID
source ./last_tablename_id
# Remove the old snapshot
rm -f dump_tablename.sql
# Write snapshot
mysqldump --protocol=TCP --ssl-mode=DISABLED -u username -t --where "id > $LAST_TABLENAME_ID" dbname tablename > dump_tablename.sql
# Remove the last snapshot ID file
rm -f last_tablename_id
# Extract and write last ID
echo "export LAST_TABLENAME_ID=`grep -oP "\),\((\d+)" dump_tablename.sql | tac | head -1 | cut -b 4-`" > last_tablename_id
Before running your first snapshot, create last_tablename_id with a starting ID of 0:

echo "export LAST_TABLENAME_ID=0" > last_tablename_id

This will ensure that the first snapshot contains all existing data, i.e. the mysqldump WHERE clause will equal WHERE id > 0.
Thanks to the --where parameter, mysqldump backs up only the rows in a table that meet the given criteria. After each mysqldump backup, the script checks what the last exported ID in the table was and saves it in the last_tablename_id file. Each subsequent run reads that number and includes only the rows that have not been exported yet.
grep -oP "\),\((\d+)" searches for occurrences of strings like ),(123 inside the dump. Then tac reverses grep's output, head -1 takes just the first line, and cut removes the leading ),( characters, leaving the number from the first column (which is assumed to be an id - a primary key).
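As a quick illustration, here is the same pipeline run against a made-up one-line dump (the table data below is hypothetical):

```shell
# A made-up mysqldump extended-INSERT line; the ids are 1, 2 and 3
echo "INSERT INTO tablename VALUES (1,'a'),(2,'b'),(3,'c');" > dump_tablename.sql

# grep emits each "),(<id>" boundary on its own line, tac reverses them,
# head -1 keeps the last match, cut strips the leading "),(" characters
grep -oP "\),\((\d+)" dump_tablename.sql | tac | head -1 | cut -b 4-
# prints 3
```

Note that the first row's id (1) never matches the pattern, since no ),( precedes it - which is fine here, because only the highest id is needed.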
Add a daily snapshot creation at 2:00 AM by running crontab -e and adding the row below to your crontab.
0 2 * * * cd ~/master && ./make-snapshots.sh
On your slave server, i.e. the one that will pull the newest data from the master database, create a file named pull-snapshots.sh and make it executable with chmod +x ./pull-snapshots.sh.
#!/bin/bash
# Remove last snapshot
rm -f dump_tablename.sql
# Pull a snapshot from the master server
scp firstname.lastname@example.org:~/master/dump_tablename.sql .
# Import the snapshot
mysql --protocol=TCP --ssl-mode=DISABLED -u username dbname < dump_tablename.sql
For scp to run without prompting for a password, you need to set up key-based authentication between the two servers.
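Setting that up is a one-time step; a typical sketch using OpenSSH (the key type and paths are assumptions, and the hostname is the example one from the script above):

```shell
# On the slave server: create a key pair without a passphrase so cron can use it
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Install the public key on the master server (asks for the password one last time)
ssh-copy-id -i ~/.ssh/id_ed25519.pub firstname.lastname@example.org

# Verify: this should now copy the dump without any password prompt
scp firstname.lastname@example.org:~/master/dump_tablename.sql .
```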
Add the line below to the slave's crontab so it runs 30 minutes later than the master's, e.g. at 2:30 AM.
30 2 * * * cd ~/your/script/path && ./pull-snapshots.sh
Voilà! Your database is now synced with the master!
- don't rely on grep: if there weren't any new entries in the database, then last_tablename_id will be missing the ID, e.g. LAST_TABLENAME_ID=. It's better to rely on a mysql query just before running the mysqldump
- create a special user in the slave's DB that is only authorised to do INSERT in your database, in case your master server is compromised and someone has injected harmful SQL into the dump_*.sql files. You don't want to run DROP DATABASE, etc.
- check for errors in the MySQL dump and do not overwrite the last_tablename_id file if there was an error (to keep the last successfully exported ID)
- if you need anything more, then just use a proper replication setup
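For the first point above, a lightweight stopgap (a sketch, not the original script) is to default an empty or missing LAST_TABLENAME_ID to 0 with shell parameter expansion; the mysql one-liner in the comment is a hypothetical alternative that asks the database directly, as the point suggests:

```shell
# Read the last snapshotted ID only if the file exists
if [ -f last_tablename_id ]; then . ./last_tablename_id; fi
# Fall back to 0 when the variable is unset or empty,
# so the next mysqldump exports everything rather than nothing
LAST_TABLENAME_ID=${LAST_TABLENAME_ID:-0}

# Alternative sketch: query MySQL directly instead of grepping the dump, e.g.
# LAST_TABLENAME_ID=$(mysql -N -u username -e "SELECT COALESCE(MAX(id), 0) FROM dbname.tablename")
```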