Recently, my primary server unexpectedly went offline due to unplanned maintenance at the data center. The downtime led to a significant loss of traffic and ad revenue, and I couldn’t intervene in time. This experience made me realize the importance of a robust failover system, especially for situations when immediate action isn’t possible—like during the night or while traveling.
I sought a straightforward yet dependable solution to ensure nearly 100% uptime, even if one server failed. The goal was to minimize configuration complexity while maximizing reliability. To achieve this, I configured automated site replication across two virtual servers and deployed a failover Load Balancer to seamlessly redirect traffic to the backup server in case of primary server downtime.
This tutorial is designed for anyone looking to quickly set up a failover system without needing extensive knowledge of network configurations. The steps are straightforward and easy to follow, ideal for users familiar with basic SSH commands. Whether you’re a developer, a website owner, or someone new to server management, this guide will help you save time and ensure your project’s reliability.
What Is a Load Balancer For?
A load balancer receives incoming requests and distributes them among multiple servers. It continuously monitors each server's status and automatically removes any that go offline, ensuring users are always directed to a functional server.
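To make the idea concrete, here is a minimal bash sketch of the failover logic a load balancer performs internally (purely illustrative — Cloudflare will do this for us later in the tutorial; the IP addresses are placeholders):
#!/bin/bash
# Probe the primary first; fall back to the backup if it does not answer with HTTP 200
PRIMARY="203.0.113.10"
BACKUP="203.0.113.20"
for SERVER in "$PRIMARY" "$BACKUP"; do
  CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "http://$SERVER/")
  if [ "$CODE" = "200" ]; then
    echo "Routing traffic to $SERVER"
    break
  fi
  echo "$SERVER is down (HTTP $CODE), trying the next one"
done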
Tools and Services Used
- Two virtual machines on DigitalOcean: Provides a scalable and reliable cloud environment.
- Cloudflare for Load Balancer: Ensures efficient traffic distribution and failover support.
- Debian 11 for the virtual machines: A stable and widely used Linux distribution.
- ispmanager (trial): A user-friendly control panel to simplify server management.
Create Droplets
When setting up a Droplet, consider the following key factors:
- Location: Choose a region that is geographically close to your target audience to reduce latency.
- Operating System: Debian 11 x64, as this tutorial is specifically tailored for this version.
- RAM: At least 2 GB, which is the minimum requirement for the smooth operation of ispmanager.
The screenshot shows an example of the minimal configuration.
Choose Authentication Method: Password
I created two virtual machines right away to demonstrate the process. If you already have a live website, you only need one backup virtual machine.
Installing ispmanager
Before installing ispmanager, you need to adjust the hostname to meet the control panel’s requirements. To do this, edit the following file:
sudo nano /etc/hostname
Here you can see the hostname we set when creating the virtual machine: in my case, it’s debian-s1 and debian-s2. Simply add .ltd at the end to form a valid domain name.
sudo nano /etc/hosts
In this file, you need to find the line that specifies the virtual machine name you set when creating it. For example, we'll use debian-s1 here, but you can replace it with your own:
127.0.1.1 debian-s1 debian-s1
127.0.0.1 localhost
Also, add the .ltd suffix to the first entry, and you'll end up with something like this:
127.0.1.1 debian-s1.ltd debian-s1
127.0.0.1 localhost
After that, restart the hostname service so the change takes effect:
sudo systemctl restart systemd-hostnamed
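To confirm the change took effect, you can print the fully qualified hostname — it should now include the .ltd suffix (debian-s1.ltd in my case):
hostname -f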
Now the system is ready for the control panel installation.
We install wget to download the installer:
apt install wget
Download the ispmanager installer:
wget https://download.ispmanager.com/install.sh -O install.sh
Run the installation:
sh install.sh
Wait until the following message appears:
Which version would you like to install ?
b) beta version - has the latest functionality
s) stable version - time-proved version
Type s and press Enter.
Next, the installer will ask which edition to install. Choose:
1) Ispmanager-lite,pro,host with recommended software
For the choice "Which web server would you like to install?" select:
1) Nginx + Apache
For "Choose database server for ispmanager's internal data," select:
2) MySQL (recommended when maintaining a large number of sites and users)
The panel installation and component configuration will now begin. This process may take about 10 minutes, so feel free to grab a coffee.
After the installation is complete, the console will display a message with your IP:
=================================================
ispmanager-lite-common is installed
Go to the "https://67.205.135.37:1500/ispmgr" to login
Login: root
Password: <root password>
=================================================
Connecting a Website
Next, log in to the control panel at https://your_server_IP:1500/ispmgr. You can activate the trial version of the control panel at https://www.ispmanager.com/.
After logging in, create a user with the same name as the one on your primary server from which the files will be copied.
Configuring the Backup Server to Connect to the Primary Server
Before setting up synchronization, you need to create an SSH key that the backup server will use to access the primary server. On the backup server, run:
ssh-keygen -t rsa -b 4096 -C "rsync_backup"
You can leave the passphrase blank.
Next, send this key to the primary server:
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@IP-address_of_main_server
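If you want to verify the key was installed, run a one-off command over SSH — it should execute without asking for a password:
ssh -p 22 root@IP-address_of_main_server "echo connection OK"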
Great! Your backup server can now connect to the primary server without a password. It's time to create bash scripts for automatic copying.
Setting Up Automatic Copying
To automatically copy the website files and database, you need to create two scripts and configure a cron job. Here’s how:
File Synchronization
Create a folder for the scripts:
mkdir -p /root/scripts/
Create a script for copying files:
nano /root/scripts/copy_files.sh
With the following content:
#!/bin/bash
# Pull the site files from the primary server over SSH
rsync -avz -e "ssh -p 22" root@IP-address_of_main_server:/var/www/USER_FOLDER/data/www/DOMAIN.LTD/ /var/www/USER_FOLDER/data/www/DOMAIN.LTD/
Replace the placeholders with your data:
- IP-address_of_main_server
- USER_FOLDER
- DOMAIN.LTD
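For illustration, with a primary server at 203.0.113.10, a panel user named john, and the domain example.com (all example values), the command would look like this:
rsync -avz -e "ssh -p 22" root@203.0.113.10:/var/www/john/data/www/example.com/ /var/www/john/data/www/example.com/
Note the trailing slashes: they tell rsync to copy the contents of the directory rather than the directory itself.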
If you want to exclude certain data from synchronization, add these options after rsync:
- --exclude 'wp-config.php' to exclude a specific file
- --exclude 'wp-content/cache/' to exclude a folder's contents
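Put together, a copy script that skips both of these might look as follows (same placeholders as above; the optional --delete flag also removes files on the backup that no longer exist on the primary, turning the copy into an exact mirror):
#!/bin/bash
# Mirror the site files, skipping the local config and the cache
rsync -avz --delete \
  --exclude 'wp-config.php' \
  --exclude 'wp-content/cache/' \
  -e "ssh -p 22" \
  root@IP-address_of_main_server:/var/www/USER_FOLDER/data/www/DOMAIN.LTD/ \
  /var/www/USER_FOLDER/data/www/DOMAIN.LTD/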
Database Synchronization
Create a script for copying the database:
nano /root/scripts/backup_db.sh
With the following content:
#!/bin/bash
# Settings
PRIMARY_HOST="192.168.1.1"
PRIMARY_HOST_PORT="22"
PRIMARY_HOST_USER="root"
PRIMARY_DB_USER="db-user"
PRIMARY_DB_PASSWORD="******"
PRIMARY_DB="data-base-name"
SECONDARY_HOST="localhost"
SECONDARY_DB_USER="db-user"
SECONDARY_DB_PASSWORD="******"
SECONDARY_DB="data-base-name"
# Directory whose contents are cleared after a successful import (e.g. the site cache) — adjust or remove to suit your setup
TARGET_DIR="/var/www/USER_FOLDER/data/www/DOMAIN.LTD/wp-content/cache"

echo "$(date) - Starting database synchronization"

# Checking the availability of the primary server
if ! ssh -q -p "$PRIMARY_HOST_PORT" "$PRIMARY_HOST_USER@$PRIMARY_HOST" exit; then
    echo "$(date) - Primary server ($PRIMARY_HOST) is unreachable. Sync aborted."
    exit 1
fi

# Exporting data from the primary server
echo "$(date) - Exporting data from primary server..."
ssh -p "$PRIMARY_HOST_PORT" "$PRIMARY_HOST_USER@$PRIMARY_HOST" "mysqldump -u $PRIMARY_DB_USER -p'$PRIMARY_DB_PASSWORD' $PRIMARY_DB" > "/tmp/$PRIMARY_DB.sql"
if [ $? -ne 0 ]; then
    echo "$(date) - Failed to export data from primary server. Sync aborted."
    exit 1
fi

# Importing data to the secondary server
echo "$(date) - Importing data to secondary server..."
mysql -h "$SECONDARY_HOST" -u "$SECONDARY_DB_USER" -p"$SECONDARY_DB_PASSWORD" "$SECONDARY_DB" < "/tmp/$PRIMARY_DB.sql"
if [ $? -eq 0 ]; then
    echo "$(date) - Database synchronization completed successfully."
    # Clear the target directory so the backup does not serve stale content
    if [ -d "$TARGET_DIR" ]; then
        rm -rf "${TARGET_DIR:?}"/*
        echo "Contents of $TARGET_DIR have been removed."
    else
        echo "Directory $TARGET_DIR does not exist."
    fi
else
    echo "$(date) - Database synchronization failed."
    exit 1
fi

# Deleting the temporary file
rm -f "/tmp/$PRIMARY_DB.sql"
Note: This script is not suitable for large databases or systems with frequent updates. It works best for resources with occasional updates.
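If your database is larger than /tmp comfortably holds, one possible variation (a sketch reusing the same variables as backup_db.sh) is to stream the dump straight into the local MySQL server, skipping the temporary file; --single-transaction keeps the dump consistent for InnoDB tables without locking them:
ssh -p "$PRIMARY_HOST_PORT" "$PRIMARY_HOST_USER@$PRIMARY_HOST" \
  "mysqldump --single-transaction -u $PRIMARY_DB_USER -p'$PRIMARY_DB_PASSWORD' $PRIMARY_DB" \
  | mysql -h "$SECONDARY_HOST" -u "$SECONDARY_DB_USER" -p"$SECONDARY_DB_PASSWORD" "$SECONDARY_DB"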
Setting Up a CRON Job for Synchronization
Set the frequency of the synchronization scripts based on how fresh you want the backup server data to be. For my site, which updates only during the day, I chose daily synchronization during off-hours.
To create a cron job, run:
nano /var/spool/cron/crontabs/root
Add the following lines to the end of the file:
## Daily MySQL copy (5:00 AM Bangkok time)
0 22 * * * bash /root/scripts/backup_db.sh
## Daily site copy (5:05 AM Bangkok time)
5 22 * * * bash /root/scripts/copy_files.sh
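If you want a record of each run, you can redirect the scripts' output to log files instead — the echo messages in backup_db.sh will then be written to the log:
## Daily MySQL copy with logging (5:00 AM Bangkok time)
0 22 * * * bash /root/scripts/backup_db.sh >> /var/log/sync_db.log 2>&1
## Daily site copy with logging (5:05 AM Bangkok time)
5 22 * * * bash /root/scripts/copy_files.sh >> /var/log/sync_files.log 2>&1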
For an initial copy of the site and its database, run these commands once:
bash /root/scripts/backup_db.sh
bash /root/scripts/copy_files.sh
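A quick way to confirm the initial file copy completed is an rsync dry run (the -n flag): with both sides in sync, it should list no files to transfer:
rsync -avzn -e "ssh -p 22" root@IP-address_of_main_server:/var/www/USER_FOLDER/data/www/DOMAIN.LTD/ /var/www/USER_FOLDER/data/www/DOMAIN.LTD/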
Congratulations! If you’ve reached this point, you now have a fully operational backup server. The last step is to activate the Load Balancer for failover in case the primary server goes down.
Setting Up the Load Balancer
This section explains how to set up a Load Balancer using Cloudflare. I assume your site is already connected to Cloudflare and DNS records are configured. If not, complete the basic setup first.
To make the system activate only when the primary server fails, follow these steps:
- Create one Load Balancer;
- Configure two server pools;
- Add one endpoint per pool.
Go to the Traffic / Load Balancing section for your site.
Creating Monitors
First, create a monitor that checks a server's status via a URL. Enter a URL that is not cached and exercises the essential components of the system (e.g., database connection, working scripts, nginx). For my setup, I use the login page. Don't forget to check "Don't verify SSL/TLS certificates (insecure)", since the servers are reached directly by IP and may not present a valid certificate for your domain.
Creating Endpoints
Endpoints are the connected servers. It’s essential to create one endpoint for each server: one as the primary and the other as the failover.
During Load Balancer setup, you can customize names for the Pool Name and Endpoint Name fields for convenience. Ensure the Endpoint Address fields contain the IP addresses of your primary and backup servers. In the Header Value field, enter your domain in the format domain.ltd (e.g., example.com). These settings will ensure proper traffic distribution.
Also, select the previously created monitor for server health checks. In the Health Check Regions field, specify the region where most of your audience is located to ensure health checks are performed close to users. Set the Health Threshold value to 1, meaning the pool is considered available with at least one working server.
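Before saving, you can simulate what the monitor will see from each endpoint with curl (run it once per server; the address and path placeholders are yours to fill in). The -k flag mirrors the "Don't verify SSL/TLS certificates" option, and the Host header plays the role of the Header Value field; an HTTP 200 response means Cloudflare will consider the endpoint healthy:
curl -k -s -o /dev/null -w "%{http_code}\n" -H "Host: DOMAIN.LTD" https://IP_of_the_server/path_used_by_the_monitor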
Creating the Load Balancer
Finally, consolidate everything into a unified system. Below are screenshots of the working configuration:
Hostname
Enter your main domain in the format domain.ltd.
Endpoints
Assign the pool containing your primary server to "Endpoints in this Load Balancer" and the pool with your backup server to "Fallback Pool".
Monitors
Add the previously created monitor.
Traffic Steering
Leave the default setting "Off: Cloudflare will route pools in failover order."
Click Next at each step until you reach the Save button.
Congratulations! Now you can test the setup: disable the primary server and ensure the system correctly redirects traffic to the backup server. If you have any questions, feel free to leave them in the comments—I’ll be happy to help!
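One way to run that test, assuming nginx serves the site on the primary as installed above: stop the web server on the primary, poll the domain until the backup starts answering (failover kicks in once the monitor's check interval elapses), then bring the primary back up.
# On the primary server: take the site down
systemctl stop nginx
# From any machine: poll the domain and watch traffic shift to the backup
watch -n 10 "curl -s -o /dev/null -w '%{http_code}\n' https://DOMAIN.LTD/"
# On the primary server: bring it back
systemctl start nginx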