DEV Community


MySQL replication over SSH

Łukasz Wolnik
React / React Native / JavaScript / IoT developer

The slave database server pulls only the latest additions from the master database over SSH

Below is a quick and dirty solution for keeping a database synced with a master copy over SSH that you can set up in 30 minutes on existing databases.

It's a much easier setup (though insecure) than a proper real-time MySQL replication setup. Additionally, you have full control over the timing of each sync, since a cron job initiates the syncing process.

The mysql and mysqldump commands below are optimised to work with MySQL instances running inside a Docker container.

Master setup

Create a file named make-snapshots.sh with the content below on your master server, then make it executable with chmod +x ./make-snapshots.sh.

#!/bin/bash
export MYSQL_PWD=PasswordMaster1
# Read the last snapshotted ID
source last_tablename_id
# Remove the old snapshot (-f so the very first run doesn't fail)
rm -f dump_tablename.sql
# Write a snapshot containing only rows newer than the last exported ID
mysqldump --protocol=TCP --ssl-mode=DISABLED -u username -t --where "id > $LAST_TABLENAME_ID" dbname tablename > dump_tablename.sql
# Extract the last exported ID and overwrite the marker file
echo "export LAST_TABLENAME_ID=$(grep -oP "\),\((\d+)" dump_tablename.sql | tac | head -1 | cut -b 4-)" > last_tablename_id

Before running your first snapshot, create last_tablename_id with the following content:

export LAST_TABLENAME_ID=0

This ensures that the first snapshot contains all existing data, since mysqldump will effectively run with WHERE id > 0.
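A minimal sketch of the marker-file round trip, using a throwaway file in the current directory (no MySQL needed for this illustration):

```shell
#!/bin/bash
# Seed the marker file, exactly as suggested above before the first run
echo "export LAST_TABLENAME_ID=0" > last_tablename_id

# This is what make-snapshots.sh does before calling mysqldump
source last_tablename_id
where_clause="id > $LAST_TABLENAME_ID"

echo "$where_clause"   # first run: "id > 0", i.e. every row is included
```

On the second run the file would contain the last exported ID instead of 0, so only newer rows would match.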

How does the snapshot bit work?

It works thanks to the --where parameter passed to mysqldump, which tells it to back up only the rows in a table that meet the given criteria.

After each mysqldump backup, the script checks what the last exported ID from the table was and saves it in the last_tablename_id file. Each subsequent run reads that number and only includes rows that have not been exported yet.

The grep -oP "\),\((\d+)" pattern searches for occurrences of strings like ),(123 inside the dump. Then tac reverses grep's output, head -1 takes just the first line, and cut strips the leading ),( characters, leaving the number of the first column (which is assumed to be id, a primary key).
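To see the pipeline in action, you can feed it a sample extended INSERT line like the ones mysqldump emits (the table contents here are made up):

```shell
#!/bin/bash
# A fabricated one-line extended INSERT, as mysqldump -t produces by default
sample="INSERT INTO tablename VALUES (1,'alice'),(2,'bob'),(3,'carol');"

# Same pipeline as in make-snapshots.sh: find the ),(N groups, take the last
# one, and strip the leading "),(" to leave the id of the final inserted row
last_id=$(echo "$sample" | grep -oP "\),\((\d+)" | tac | head -1 | cut -b 4-)

echo "$last_id"   # 3
```

Note this relies on mysqldump's default extended-insert format; with --skip-extended-insert there are no ),( groups to match.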

Crontab

Add a daily snapshot creation at 2:00 AM by running crontab -e and adding the row below to your crontab.

0 2 * * * cd ~/master && ./make-snapshots.sh

Slave setup

On your slave server, i.e. the one that will pull the newest data from the master database, create a file named pull-snapshots.sh and make it executable with chmod +x.

#!/bin/bash
export MYSQL_PWD=PasswordSlave1
# Remove the last snapshot (-f so the very first run doesn't fail)
rm -f dump_tablename.sql
# Pull the latest snapshot from the master server
scp username@99.0.0.1:~/master/dump_tablename.sql .
# Apply the snapshot to the local (slave) database
mysql --protocol=TCP --ssl-mode=DISABLED -u username dbname < dump_tablename.sql

For scp to run without prompting for a password, you need to set up key-based authentication between the two servers.
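As a sketch (assuming the master host and path from the script above), that setup can look like this, run from the slave:

```shell
# On the slave: generate a key pair (no passphrase, so cron can use it)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Copy the public key to the master so scp can log in without a password
ssh-copy-id username@99.0.0.1

# Verify: this should print the dump file name without prompting
ssh username@99.0.0.1 "ls ~/master/dump_tablename.sql"
```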

Crontab

Add the line below to your crontab so it runs 30 minutes after the master's, e.g. at 2:30 AM.

30 2 * * * cd ~/your/script/path && ./pull-snapshots.sh

Voilà! Your database is now synced with the master!

Further improvements:

  • don't rely on grep: if there weren't any new entries in the table, last_tablename_id would be missing the ID, e.g. LAST_TABLENAME_ID=. It's more reliable to query mysql directly just before running mysqldump.
  • create a special user in the slave's DB that is only authorised to INSERT into your database, in case your master server is compromised and someone has injected harmful SQL into the dump_*.sql files. You don't want to run DROP DATABASE, etc.
  • check for errors in the mysqldump run and do not overwrite the last_tablename_id file if there was an error (to keep the last successfully exported ID)
  • if you need anything more than this, just use a proper replication setup
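The first two of those improvements could look something like this sketch; get_last_id and save_last_id are hypothetical helper names, and id is assumed to be the primary key:

```shell
export MYSQL_PWD=PasswordMaster1

# Ask MySQL directly for the highest id instead of grepping the dump;
# COALESCE returns 0 when the table is empty
get_last_id() {
  mysql --protocol=TCP --ssl-mode=DISABLED -u username -N -B \
    -e "SELECT COALESCE(MAX(id), 0) FROM tablename" dbname
}

# Overwrite the marker file only when we got a sane, numeric answer,
# so a failed query can't wipe out the last successfully exported ID
save_last_id() {
  local id="$1"
  case "$id" in
    ''|*[!0-9]*) echo "refusing to save invalid id: '$id'" >&2; return 1 ;;
    *) echo "export LAST_TABLENAME_ID=$id" > last_tablename_id ;;
  esac
}
```

Calling save_last_id "$(get_last_id)" just before mysqldump would then replace the grep pipeline entirely.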
