<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Davor Jovanović</title>
    <description>The latest articles on DEV Community by Davor Jovanović (@davorj94).</description>
    <link>https://dev.to/davorj94</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F913235%2F111deef8-e814-476a-8764-fd6576fde01d.jpeg</url>
      <title>DEV Community: Davor Jovanović</title>
      <link>https://dev.to/davorj94</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/davorj94"/>
    <language>en</language>
    <item>
      <title>Self-host - Part 3 - MySQL and PostgreSQL Database Backup to Local Drive</title>
      <dc:creator>Davor Jovanović</dc:creator>
      <pubDate>Tue, 13 Aug 2024 09:01:23 +0000</pubDate>
      <link>https://dev.to/davorj94/self-host-part-3-mysql-and-postgresql-database-backup-to-local-hard-drive-53ke</link>
      <guid>https://dev.to/davorj94/self-host-part-3-mysql-and-postgresql-database-backup-to-local-hard-drive-53ke</guid>
      <description>&lt;p&gt;This blog will be the third in the three-part series (maybe more, we will see) of self-hosting. In the &lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;first part&lt;/a&gt;, we have explained how to start and secure your self-hosted server. In the &lt;a href="https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c"&gt;second part&lt;/a&gt; we addressed zero-downtime deployment using Docker Swarm. This third part will discuss backing up our PostgreSQL and MySQL databases without downtime.&lt;/p&gt;

&lt;p&gt;This topic earned its place in the series because backups play a huge part in keeping an application reliable and resilient to system and database failures. Do note that you should go through &lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;part 1&lt;/a&gt; and &lt;a href="https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c"&gt;part 2&lt;/a&gt; of this series first, as this article continues the code presented in those parts and might be hard to follow otherwise.&lt;/p&gt;

&lt;p&gt;First things first, &lt;strong&gt;why back up at all?&lt;/strong&gt; Well, the straightforward answer is &lt;strong&gt;because we want to preserve database data even if the server fails&lt;/strong&gt;. Namely, we don't want to force our end-users to, for example, input their data again every time something goes wrong with the database (and something can always go wrong, starting from bad scripts that accidentally delete the database, to server failure without recovery). Therefore, to provide the best UX possible, we need to store data from the databases in multiple places, as that reduces the chance of everything being lost at once. We want to make our data resilient.&lt;/p&gt;

&lt;p&gt;In this article, in the spirit of previous articles in this series, we will tackle backing up MySQL and PostgreSQL databases running as Docker services, without any downtime of our production application: saving data from both databases to the local disk, and restoring that data as necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why local backup, and not S3 or any other cloud storage?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In essence, you can store backed-up data wherever you decide, as long as it is accessible when you need to restore it. In the spirit of this series, the goal is to reduce costs as much as possible, and storage from cloud providers, as cheap as it might be, still &lt;strong&gt;costs&lt;/strong&gt;, which, if you remember from previous parts of this series, we try to avoid as much as possible. We will therefore explain how to store everything on your local disk (or an external drive if you like); if you decide to use remote storage instead, the same principles apply, with the only difference being that you would send the files to remote cloud storage instead of saving them locally.&lt;/p&gt;

&lt;p&gt;Now that we have explained the reasoning behind backing up our databases, let's proceed with the implementation, namely, how we will achieve it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup services in Swarm cluster
&lt;/h2&gt;

&lt;p&gt;To achieve our goal, we first need to define the services in our docker-compose file. Let's say we have MySQL and PostgreSQL databases in our production application, something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  mysqldb:
    image: "mysql:8.0"
    restart: always
    env_file:
      - "path to env file"
    volumes:
      - dbdata:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
    networks:
      - mysql-network

  pgdb:
    image: "postgres:16.3"
    restart: always
    env_file:
      - "path to env file"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
    networks:
      - pg-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the code above, there are MySQL and PostgreSQL services that we can use to store whichever data we decide. Now, let's define database backup services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MySQL:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  mysqldb-backup:
    image: fradelg/mysql-cron-backup
    volumes:
      - ./backup/mysqldb:/backup
    env_file:
      - "path to env file"
    environment:
      - MYSQL_HOST=mysqldb
      - MYSQL_USER=root
      - MYSQL_PASS=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE="database name here"
      - MAX_BACKUPS=1
      - INIT_BACKUP=1
      - TIMEOUT=60s
      - CRON_TIME=0 01 * * *
      - GZIP_LEVEL=6
      - MYSQLDUMP_OPTS=--no-tablespaces
    restart: unless-stopped
    deploy:
      mode: global
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
    networks:
      - mysql-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, we use &lt;strong&gt;fradelg/mysql-cron-backup&lt;/strong&gt;, which can be found &lt;a href="https://github.com/fradelg/docker-mysql-cron-backup" rel="noopener noreferrer"&gt;here&lt;/a&gt;. This service connects to our &lt;code&gt;mysqldb&lt;/code&gt; service (note that both are on the same docker network) and performs a backup every day at 01:00 (as can be seen from &lt;code&gt;CRON_TIME&lt;/code&gt;, where &lt;code&gt;0 01 * * *&lt;/code&gt; means minute 0 of hour 1). You can take a look at the docker image's documentation for the details, but the main thing to note from the code above is that the backup lands in the bind-mounted &lt;code&gt;./backup/mysqldb&lt;/code&gt; directory next to our docker-compose file. If you have read the &lt;a href="https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c"&gt;previous article&lt;/a&gt;, you will know that this docker-compose file is placed in &lt;code&gt;/app&lt;/code&gt; on our remote server. We have also specified &lt;code&gt;MAX_BACKUPS=1&lt;/code&gt;, as we don't want to store multiple backup files on the remote server and use up too much of its storage. The &lt;code&gt;deploy&lt;/code&gt; section refers to Swarm cluster behavior, so our cluster knows how to handle this service.&lt;/p&gt;

&lt;p&gt;Note that this image is convenient enough and gzips everything to our specified backup folder, and after one backup, the contents of the folder would, for example, look as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;latest.sql.gz&lt;/li&gt;
&lt;li&gt;202408120100.sql.gz&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And we would always have &lt;strong&gt;latest.sql.gz&lt;/strong&gt; linked to the latest backup for convenience.&lt;/p&gt;
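&lt;p&gt;If you want to check that a dump is actually usable before you ever need it for a restore, a quick integrity test on the gzip archive helps. This is a minimal sketch; the path assumes the &lt;code&gt;./backup/mysqldb&lt;/code&gt; bind mount from the compose file above.&lt;/p&gt;

```shell
# verify_backup: check that a gzipped SQL dump is readable without
# extracting it. Prints "valid" or "corrupted-or-missing".
verify_backup() {
    if gzip -t "$1" 2>/dev/null; then
        echo "valid"
    else
        echo "corrupted-or-missing"
    fi
}

# Path assumed from the ./backup/mysqldb volume mapping above.
verify_backup "./backup/mysqldb/latest.sql.gz"
```

&lt;p&gt;Note that &lt;code&gt;gzip -t&lt;/code&gt; only verifies the archive itself; it says nothing about the SQL inside.&lt;/p&gt;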

&lt;p&gt;&lt;strong&gt;PostgreSQL:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  pgdb-backup:
    image: prodrigestivill/postgres-backup-local
    volumes:
      - ./backup/pgdb:/backups
    env_file:
      - "path to postgres envs here"
    environment:
      - POSTGRES_HOST=pgdb
      - SCHEDULE=@daily
      - BACKUP_KEEP_DAYS=4
      - BACKUP_KEEP_WEEKS=0
      - BACKUP_KEEP_MONTHS=0
      - HEALTHCHECK_PORT=8080
    restart: unless-stopped
    deploy:
      mode: global
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
    networks:
      - pg-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the case of PostgreSQL, we use the image &lt;strong&gt;prodrigestivill/postgres-backup-local&lt;/strong&gt;, which can be found &lt;a href="https://github.com/prodrigestivill/docker-postgres-backup-local" rel="noopener noreferrer"&gt;here&lt;/a&gt;. This service will connect to our &lt;code&gt;pgdb&lt;/code&gt; service and perform the backup &lt;strong&gt;@daily&lt;/strong&gt;, namely, once a day at midnight (take a look at the &lt;a href="https://pkg.go.dev/github.com/robfig/cron?utm_source=godoc#hdr-Predefined_schedules" rel="noopener noreferrer"&gt;predefined cron schedules&lt;/a&gt;). Please look into the documentation for the specific features of this image, but for our use case, it is enough to keep each backup for a maximum of 4 days. As in the case of MySQL, the &lt;code&gt;deploy&lt;/code&gt; section refers to Swarm cluster behavior, so our cluster knows how to handle this service.&lt;/p&gt;

&lt;p&gt;Same as with the &lt;code&gt;mysqldb-backup&lt;/code&gt; service, this image conveniently gzips everything to our specified backup folder, sorting the dumps into folders such as &lt;strong&gt;daily&lt;/strong&gt;, &lt;strong&gt;last&lt;/strong&gt;, &lt;strong&gt;weekly&lt;/strong&gt;, and &lt;strong&gt;monthly&lt;/strong&gt;. Consult the &lt;a href="https://github.com/prodrigestivill/docker-postgres-backup-local" rel="noopener noreferrer"&gt;image documentation&lt;/a&gt; for the specific folder structure, but basically, this service also provides the latest backup as well as dailies for our use case.&lt;/p&gt;

&lt;p&gt;Now that we have defined our services in the Swarm cluster, which will automatically back up the MySQL and PostgreSQL databases every night and store the dumps in the &lt;strong&gt;/app/backup&lt;/strong&gt; directory on our remote server, we need to tackle transferring those files and folders to our local storage so we can keep them for possible future restoration of data in our production environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transferring the files and folders to our local machine
&lt;/h2&gt;

&lt;p&gt;To transfer all folders and files to our local machine, we will use a simple &lt;code&gt;rsync&lt;/code&gt; command to connect to our remote server and transfer everything necessary on our backup machine:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;execute_backup.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SCRIPT_DIR="$(cd "$(dirname "$0")" &amp;amp;&amp;amp; pwd)"

# Path to the log file
LOG_FILE="$SCRIPT_DIR/cronjob.log"

# Check if the log file exists and is a regular file
if [ -f "$LOG_FILE" ]; then
    # Truncate the log file to remove all content
    &amp;gt; "$LOG_FILE"
    echo "Log file cleared successfully."
else
    echo "Log file not found or is not a regular file."
fi

echo "Current date and time: $(date)"

. "$SCRIPT_DIR/input-data/credentials.txt"

rsync -arvzP --rsh="ssh -p $remote_port" --delete "$remote_username@$remote_ip:/app/backup" "$SCRIPT_DIR/bkp-app"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, the backup process is pretty straightforward. We create a log file for the cron job (which we will cover in the next section) so we can see whether the transfer was successful. After that, we source the &lt;strong&gt;credentials.txt&lt;/strong&gt; file, which contains all the data necessary to connect to the remote server and has the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;remote_ip={ip goes here}
remote_port={port goes here}
remote_username={username goes here}
pass={password for remote machine goes here}
pg_username={postgres username}
pg_db={postgres database name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterward, we use the &lt;code&gt;rsync&lt;/code&gt; command to transfer everything from &lt;strong&gt;/app/backup&lt;/strong&gt; on our remote server to our local directory &lt;strong&gt;bkp-app&lt;/strong&gt;. Voila! The files and folders previously created by the backup services on our remote server are now ready and set on our local machine.&lt;/p&gt;
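&lt;p&gt;Since &lt;strong&gt;credentials.txt&lt;/strong&gt; holds a plaintext password, it is worth locking the file down so only its owner can read it. A small sketch, assuming the &lt;strong&gt;input-data&lt;/strong&gt; layout used in this article and GNU coreutils:&lt;/p&gt;

```shell
# Restrict credentials.txt to the owning user only (mode 600),
# since it contains a plaintext password for the remote machine.
CRED_FILE="./input-data/credentials.txt"
mkdir -p "$(dirname "$CRED_FILE")"
touch "$CRED_FILE"
chmod 600 "$CRED_FILE"
# Show the resulting mode (GNU stat syntax).
stat -c '%a' "$CRED_FILE"
```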

&lt;p&gt;Note: For flags used in the &lt;code&gt;rsync&lt;/code&gt; command, please look at &lt;a href="https://linux.die.net/man/1/rsync" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that, even if we have all the files and folders required for restoration, we need to set up some additional logic in the following section to have it all automated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding cron jobs
&lt;/h2&gt;

&lt;p&gt;To have everything automated and transfer all backups once daily to our local machine, we need to set up a local cron job, which will run once daily. For convenience, we can run the following script:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;add_local_cronjob.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CURRENT_DIR=$(pwd)
SCRIPT_NAME="execute_backup.sh"
SCRIPT_PATH="$CURRENT_DIR/$SCRIPT_NAME"

# Get the local timezone offset, e.g. "+0200"
OFFSET=$(date +%z)
OFFSET_SIGN=$(printf '%s' "$OFFSET" | cut -c 1)
# Force base 10 so hours such as "08" and "09" are not
# rejected as invalid octal numbers by bash arithmetic
OFFSET_HOURS=$((10#$(printf '%s' "$OFFSET" | cut -c 2-3)))
# We want the transfer to run at 07:00 on the remote server
# (assumed to be on UTC), so convert that hour to local time;
# the modulo keeps the result in the valid 0-23 range
CRON_HOUR=$(( (7 $OFFSET_SIGN OFFSET_HOURS + 24) % 24 ))

CRON_TIME="0 $CRON_HOUR * * *"
CRON_JOB="$CRON_TIME $SCRIPT_PATH &amp;gt;&amp;gt; $CURRENT_DIR/cronjob.log 2&amp;gt;&amp;amp;1"

# Check if the cron job already exists
(crontab -l | grep -F "$CRON_JOB") &amp;amp;&amp;gt; /dev/null

if [ $? -eq 0 ]; then
    echo "Cron job already exists. No changes made."
else
    # Add the new cron job
    (crontab -l; echo "$CRON_JOB") | crontab -
    echo "Cron job added."
fi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script presented above adds a cron job that runs every day at the local time corresponding to 07:00 on the remote server and executes the &lt;code&gt;execute_backup.sh&lt;/code&gt; shell script. Also, note that we redirect all output to the &lt;strong&gt;cronjob.log&lt;/strong&gt; file so we can check whether the script executed correctly.&lt;/p&gt;
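&lt;p&gt;The hour arithmetic from &lt;strong&gt;add_local_cronjob.sh&lt;/strong&gt; can be sketched as a standalone function. Note two assumptions: the remote server's clock is on UTC, and the &lt;code&gt;10#&lt;/code&gt; prefix is there because bash arithmetic would otherwise reject offsets like "08" as invalid octal numbers.&lt;/p&gt;

```shell
# local_hour_for TARGET OFFSET: convert an hour on the remote server
# (assumed to run on UTC) into the local hour the cron job should fire at.
# OFFSET is the output of `date +%z`, e.g. "+0200" or "-0800".
local_hour_for() {
    target=$1
    offset=$2
    sign=$(printf '%s' "$offset" | cut -c 1)
    # 10# forces base 10, so "08" and "09" do not fail as octal
    hours=$((10#$(printf '%s' "$offset" | cut -c 2-3)))
    echo $(( (target $sign hours + 24) % 24 ))
}

local_hour_for 7 +0200   # a UTC+2 machine runs the job at 9 local time
```

&lt;p&gt;Minute-granularity offsets (e.g. +0530) are ignored here, just as in the original script.&lt;/p&gt;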

&lt;p&gt;Note: This script should only be run once on the local machine.&lt;/p&gt;

&lt;p&gt;Now, if you read through &lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;part 1&lt;/a&gt; of this series, you will know that we set up 2FA using Google Authenticator on our remote server. That means a cron job cannot connect to the server automatically, as there is no way for it to enter the code from the Google Authenticator application. To work around that authentication, we can add another cron job, but this time on the remote server, with the following script:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;initialize_remote_backup.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source "./input-data/credentials.txt"

rsync -arvzP --rsh="ssh -p $remote_port" "./input-data/credentials.txt" "$remote_username@$remote_ip:/app/" 

ssh $remote_username@$remote_ip -p $remote_port "bash -s" &amp;lt;&amp;lt; 'ENDSSH'

source "/app/credentials.txt"

cron_command1="59 06 * * * sed -i 's/^auth[[:space:]]\+required[[:space:]]\+pam_google_authenticator\.so[[:space:]]\+debug[[:space:]]\+nullok/# &amp;amp;/' /etc/pam.d/sshd"
cron_command2="01 07 * * * sed -i 's/^#[[:space:]]\+auth[[:space:]]\+required[[:space:]]\+pam_google_authenticator.so[[:space:]]\+debug[[:space:]]\+nullok/auth required    pam_google_authenticator.so debug nullok/' /etc/pam.d/sshd"

echo -e "$pass\n$cron_command1\n$cron_command2" | sudo -S crontab -

rm "/app/credentials.txt"
ENDSSH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the script above, when running &lt;strong&gt;initialize_remote_backup.sh&lt;/strong&gt;, we are basically disabling 2FA at &lt;strong&gt;06:59&lt;/strong&gt; in the morning and bringing it back at &lt;strong&gt;07:01&lt;/strong&gt;. Namely, at &lt;strong&gt;07:00&lt;/strong&gt;, we can connect to the server to transfer backup files to our local machine without entering a code from the Google Authenticator application. Be aware that piping to &lt;code&gt;sudo -S crontab -&lt;/code&gt; replaces root's entire crontab, so merge with the output of &lt;code&gt;crontab -l&lt;/code&gt; first if root already has other cron jobs.&lt;/p&gt;
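&lt;p&gt;To see what the two &lt;code&gt;sed&lt;/code&gt; commands actually do, here is the same toggle applied to a scratch copy of the PAM line (never the real &lt;strong&gt;/etc/pam.d/sshd&lt;/strong&gt;), assuming the line matches the one configured in part 1 and GNU sed is available:&lt;/p&gt;

```shell
# Demonstrate the 2FA toggle on a throwaway file instead of /etc/pam.d/sshd.
pam_copy=$(mktemp)
echo 'auth required pam_google_authenticator.so debug nullok' > "$pam_copy"

# 06:59 job: comment the 2FA line out ("&" in the replacement is the
# whole matched line)
sed -i 's/^auth[[:space:]]\+required[[:space:]]\+pam_google_authenticator\.so.*/# &/' "$pam_copy"
cat "$pam_copy"

# 07:01 job: restore the original line so 2FA is enforced again
sed -i 's/^#[[:space:]]*auth[[:space:]]\+required[[:space:]]\+pam_google_authenticator\.so.*/auth required pam_google_authenticator.so debug nullok/' "$pam_copy"
cat "$pam_copy"
```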

&lt;p&gt;Now that we have an automated process of backing up the databases using shell scripting and local, as well as remote cron jobs, we can proceed with instructions on restoration of backed-up data if necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database restoration
&lt;/h2&gt;

&lt;p&gt;Two scripts are concerned with restoration. The first, which we will name &lt;strong&gt;local_restore.sh&lt;/strong&gt;, uses the &lt;code&gt;rsync&lt;/code&gt; command to transfer the backed-up files and folders from our local machine to the remote server before running the restoration there. The second, which we will name &lt;strong&gt;remote_restore.sh&lt;/strong&gt;, restores the databases from the files and folders already on the remote server; that is, it does not transfer anything from our local machine before executing the restore commands. Let's see what these scripts look like, and it will make more sense:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;local_restore.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source "./input-data/credentials.txt"
SCRIPT_DIR="$(cd "$(dirname "$0")" &amp;amp;&amp;amp; pwd)"

rsync -arvzP --rsh="ssh -p $remote_port" "$SCRIPT_DIR/input-data/credentials.txt" "$SCRIPT_DIR/bkp-app/backup" "$remote_username@$remote_ip:/app/" 

# Execute script on the remote server
ssh $remote_username@$remote_ip -p $remote_port "bash -s" &amp;lt;&amp;lt; 'ENDSSH'

source "/app/credentials.txt"

task_id=$(echo "$pass" | sudo -S -E docker stack ps swarm_stack_name --filter "desired-state=running" | grep mysqldb-backup | awk '{print $1}' | head -n 1)

mysqldb_container_id=$(echo "$pass" | sudo -S -E docker inspect --format="{{.Status.ContainerStatus.ContainerID}}" $task_id)

echo "$pass" | sudo -S chown -R root:user /app/backup/mysqldb

# Run restore script according to documentation
echo "$pass" | sudo -S docker container exec $mysqldb_container_id /restore.sh /backup/latest.sql.gz


task_id=$(echo "$pass" | sudo -S -E docker stack ps swarm_stack_name --filter "desired-state=running" | grep pgdb-backup | awk '{print $1}' | head -n 1)

db_container_id=$(echo "$pass" | sudo -S -E docker inspect --format="{{.Status.ContainerStatus.ContainerID}}" $task_id)

# Run restore script according to documentation
echo "$pass" | sudo -S -E docker exec $db_container_id /bin/sh -c "zcat /backups/last/latest.sql.gz | psql --username $pg_username --dbname $pg_db --host pgdb &amp;gt; /dev/null 2&amp;gt;&amp;amp;1"

echo "All done!"

rm "/app/credentials.txt"
ENDSSH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the script above, we first source all variables in &lt;strong&gt;credentials.txt&lt;/strong&gt; and then use the &lt;code&gt;rsync&lt;/code&gt; command to transfer the backed-up files from the local machine's &lt;strong&gt;bkp-app/backup&lt;/strong&gt; folder to &lt;strong&gt;/app/backup&lt;/strong&gt; on the remote machine. After transferring the files, we open a bash session on our remote server and source the &lt;strong&gt;credentials.txt&lt;/strong&gt; file there. For both &lt;strong&gt;mysqldb-backup&lt;/strong&gt; and &lt;strong&gt;pgdb-backup&lt;/strong&gt;, to execute scripts in their respective containers, we need to find their Swarm task IDs (&lt;code&gt;task_id&lt;/code&gt; in the script above) and, from those, their container IDs (&lt;code&gt;mysqldb_container_id&lt;/code&gt; and &lt;code&gt;db_container_id&lt;/code&gt; in the script above). After finding the container IDs, we can execute commands inside the containers. &lt;br&gt;
For &lt;code&gt;mysqldb-backup&lt;/code&gt;, according to the &lt;a href="https://github.com/fradelg/docker-mysql-cron-backup?tab=readme-ov-file#restore-using-a-docker-command" rel="noopener noreferrer"&gt;documentation for restoration&lt;/a&gt;, we need to run the &lt;strong&gt;restore.sh&lt;/strong&gt; script inside the container, which will handle everything for us.&lt;br&gt;
For &lt;code&gt;pgdb-backup&lt;/code&gt;, &lt;a href="https://github.com/prodrigestivill/docker-postgres-backup-local?tab=readme-ov-file#restore-using-the-same-container" rel="noopener noreferrer"&gt;their documentation&lt;/a&gt; is a bit unclear about restoration, so this command is used in the script above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "$pass" | sudo -S -E docker exec $db_container_id /bin/sh -c "zcat /backups/last/latest.sql.gz | psql --username $pg_username --dbname $pg_db --host db &amp;gt; /dev/null 2&amp;gt;&amp;amp;1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: &lt;code&gt;$pass&lt;/code&gt;, &lt;code&gt;$pg_db&lt;/code&gt;, and &lt;code&gt;$pg_username&lt;/code&gt; are sourced from &lt;strong&gt;credentials.txt&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;After executing the script, both MySQL and PostgreSQL databases should be restored to the local version from our machine. &lt;/p&gt;

&lt;p&gt;Note: Part &lt;code&gt;echo "$pass" |&lt;/code&gt; is applied to allow the use of the &lt;code&gt;sudo&lt;/code&gt; command for docker (as sudo is required for docker on the remote server), where &lt;code&gt;$pass&lt;/code&gt; is sourced from &lt;strong&gt;credentials.txt&lt;/strong&gt; file that is included in the script.&lt;/p&gt;
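&lt;p&gt;To make the &lt;code&gt;grep&lt;/code&gt;/&lt;code&gt;awk&lt;/code&gt; task-lookup step clearer, here it is exercised against canned &lt;code&gt;docker stack ps&lt;/code&gt; output, so no live Swarm cluster is needed; the IDs and stack name below are invented purely for illustration:&lt;/p&gt;

```shell
# Canned `docker stack ps` output (IDs and names are made up to
# illustrate the parsing step from local_restore.sh).
sample='ID             NAME                     IMAGE
abc123def456   stack_mysqldb-backup.1   fradelg/mysql-cron-backup
789ghi012jkl   stack_pgdb-backup.1      prodrigestivill/postgres-backup-local'

# Same pipeline as in the restore script: match the service name,
# take the first column, keep only the first matching task.
task_id=$(printf '%s\n' "$sample" | grep mysqldb-backup | awk '{print $1}' | head -n 1)
echo "$task_id"   # -> abc123def456
```

&lt;p&gt;On a real cluster, the resulting task ID is then fed to &lt;code&gt;docker inspect&lt;/code&gt; to obtain the container ID, exactly as in the script above.&lt;/p&gt;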

&lt;p&gt;In case we want to restore from the latest data which is already present in the remote server, we should modify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rsync -arvzP --rsh="ssh -p $remote_port" "$SCRIPT_DIR/input-data/credentials.txt" "$SCRIPT_DIR/bkp-app/backup" "$remote_username@$remote_ip:/app/" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rsync -arvzP --rsh="ssh -p $remote_port" "$SCRIPT_DIR/input-data/credentials.txt" "$remote_username@$remote_ip:/app/" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;namely, remove the part where the &lt;strong&gt;backup&lt;/strong&gt; folder is transferred to the remote server. In the code, this modified script is called &lt;strong&gt;remote_restore.sh&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We have successfully restored both databases and our end users will be very happy! Hooray!&lt;/p&gt;

&lt;h2&gt;
  
  
  Onboarding new local machine
&lt;/h2&gt;

&lt;p&gt;Let's explain what the process for a new machine should look like before wrapping up.&lt;/p&gt;

&lt;p&gt;Say you have cloned the repository from GitHub with all the code explained above; your file structure should look like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker-compose.yaml (with databases and services for backing up)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;input-data&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;credentials.txt (you should create this file if it is non-existent)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;add_local_cronjob.sh&lt;/li&gt;

&lt;li&gt;execute_backup.sh&lt;/li&gt;

&lt;li&gt;initialize_remote_backup.sh&lt;/li&gt;

&lt;li&gt;local_restore.sh&lt;/li&gt;

&lt;li&gt;remote_restore.sh&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If it is a first-time setup, the new local machine should run these scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add_local_cronjob.sh&lt;/li&gt;
&lt;li&gt;initialize_remote_backup.sh&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Script &lt;strong&gt;add_local_cronjob.sh&lt;/strong&gt; schedules &lt;strong&gt;execute_backup.sh&lt;/strong&gt;, which handles everything related to backing up, while &lt;strong&gt;initialize_remote_backup.sh&lt;/strong&gt; instructs the remote server to turn off 2FA for two minutes so our local machine can connect and transfer the backed-up files. Onboarding a new machine is complete once these two scripts have been run.&lt;/p&gt;

&lt;p&gt;Script &lt;strong&gt;local_restore.sh&lt;/strong&gt; should be called when we want to restore databases on the remote server from data on our &lt;em&gt;local&lt;/em&gt; machine. Script &lt;strong&gt;remote_restore.sh&lt;/strong&gt; should be called when we want to restore databases from backed-up data on our &lt;em&gt;remote&lt;/em&gt; machine.&lt;/p&gt;

&lt;p&gt;Note that the new machine should be provided with proper data for &lt;strong&gt;credentials.txt&lt;/strong&gt; before adding cron jobs and running restorations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;We have explained how database backup and restoration can be achieved, with all data saved to our local disk (internal or external hard drive). We have also tackled restoring those databases after an outage on the remote server, or whatever other issue might occur with the databases, so end users don't have to input all their data again every time something happens.&lt;/p&gt;

&lt;p&gt;The benefit of this approach is, of course, that we don't pay/subscribe for our local hard drive storage (or we buy it once and that is it), and we have much more control when we have all data from our databases physically at our fingertips and not on some cloud somewhere. It gives a greater sense of control over our data and, in the end, provides us with an option for our startup to do cost-efficient backing up of our databases.&lt;/p&gt;

&lt;p&gt;Once our startup application generates meaningful revenue, we can start paying for storage at some cloud provider and transfer all the data there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Useful links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;Part 1 of this series&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c"&gt;Part 2 of this series&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/fradelg/docker-mysql-cron-backup" rel="noopener noreferrer"&gt;fradelg/mysql-cron-backup image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/prodrigestivill/docker-postgres-backup-local" rel="noopener noreferrer"&gt;prodrigestivill/postgres-backup-local image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linux.die.net/man/1/rsync" rel="noopener noreferrer"&gt;Rsync documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>database</category>
      <category>shell</category>
      <category>server</category>
    </item>
    <item>
      <title>Self-host - Part 2 - Zero-Downtime Deployment using Docker Swarm</title>
      <dc:creator>Davor Jovanović</dc:creator>
      <pubDate>Thu, 18 Jul 2024 14:12:59 +0000</pubDate>
      <link>https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c</link>
      <guid>https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c</guid>
      <description>&lt;p&gt;This blog will be the second in the three-part series (maybe more, we will see) of self-hosting. In the &lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;first part&lt;/a&gt;, we have explained how to start and secure your self-hosted server. This second part will address zero-downtime deployment using Docker Swarm. The &lt;a href="https://dev.to/davorj94/self-host-part-3-mysql-and-postgresql-database-backup-to-local-hard-drive-53ke"&gt;third part&lt;/a&gt; will discuss backing up our databases without downtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why zero-downtime deployment? And why should we worry about it that much?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's answer these questions as straightforwardly as possible. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You don't want users looking at some error page while you are deploying your application updates.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Namely, you will have updates in your application, and regardless of those updates, you don't want users to be affected. Ideally, you want them to simply switch to a more updated version when ready, and that is exactly what we are going to work on here with the help of Docker Swarm.&lt;br&gt;
Although there are multiple options for zero-downtime deployment, one of the most common being "blue-green deployment", most of them require multiple servers. Basically, for blue-green deployment, you need two servers with the same environment behind a load balancer; once you upload your updates to one server, the load balancer switches all traffic over to it. And if something goes wrong, there must be logic for reverting to the old server (which can get quite complex and is easier said than done, especially when databases are involved). &lt;br&gt;
As we are still building an application that doesn't yet generate revenue, we don't want to pay for multiple servers. That is why we want all of that logic on one server, in virtual environments, while achieving the same behavior as blue-green deployments. This is where Docker Swarm comes into the picture, as it is quite simple to use and has zero-downtime deployment implemented out of the box. &lt;br&gt;
In this article, we will explore how to automate zero-downtime deployment using Docker Swarm and shell scripts, so we don't have to pay for already existing cloud solutions, at least in the beginning until our super application starts generating meaningful revenue.&lt;/p&gt;

&lt;p&gt;The goal here is to allow only certain machines and users to deploy application updates. If we decide, we can restrict new users who have started working on our application code (the project is growing) to committing to git only, and when we decide (once a week, or once every two weeks, for example), we deploy all of that code from the machine that is allowed to deploy. This gives us a sense of control over what is live on the server and what is still in the making. Remember, we implemented 2FA in the previous article, and we will work with it in this article too. The end goal we want to achieve: run &lt;code&gt;./deploy.sh&lt;/code&gt; from your machine, enter the 2FA code, and once it is done, the new changes should just magically appear in production.&lt;/p&gt;

&lt;p&gt;To achieve automatic deployment, we will turn to shell scripting, and we will need to complete the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initiate Docker Swarm on the server&lt;/li&gt;
&lt;li&gt;Read credentials file and clone repositories from GitHub&lt;/li&gt;
&lt;li&gt;Write down the versions for the current deployment in a txt file that will be synced with the remote server&lt;/li&gt;
&lt;li&gt;Build all containers in our local machine and save them with proper versions&lt;/li&gt;
&lt;li&gt;Transfer built containers from our local machine to the remote server&lt;/li&gt;
&lt;li&gt;Start containers on the remote server with zero-downtime deployment&lt;/li&gt;
&lt;li&gt;Clean up builds and version files from both the remote server and the local machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will tackle each of these steps in the following text, but at this moment, let's get familiar with what the file structure will look like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;deploy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;input-data&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;credentials.txt&lt;/li&gt;
&lt;li&gt;repositories.txt&lt;/li&gt;
&lt;li&gt;repo-list.txt&lt;/li&gt;
&lt;li&gt;envs.zip&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;repositories&lt;/strong&gt; (will be populated once a script is run)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scripts&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;build_containers.sh&lt;/li&gt;
&lt;li&gt;cleanup.sh&lt;/li&gt;
&lt;li&gt;clone_repos.sh&lt;/li&gt;
&lt;li&gt;start_containers.sh&lt;/li&gt;
&lt;li&gt;transfer_containers.sh&lt;/li&gt;
&lt;li&gt;write_versions.sh&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;output&lt;/strong&gt; (will be populated once a script is run)&lt;/li&gt;
&lt;li&gt;deploy.sh&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;docker-compose.prod.yaml&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's break down what these files and directories represent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;docker-compose.prod.yaml&lt;/strong&gt; is a compose file written with Swarm in mind for production; it has a &lt;code&gt;deploy&lt;/code&gt; key and will eventually be transferred to the remote server. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;deploy&lt;/strong&gt; directory contains all the logic required to automate deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;input-data&lt;/strong&gt; is the directory that contains txt files that are loaded by the shell scripts from the &lt;strong&gt;scripts&lt;/strong&gt; directory. Those text files contain all the information needed to clone the repositories, connect to the remote server, and conduct the deployment. Note that &lt;strong&gt;envs&lt;/strong&gt; is in &lt;em&gt;zip&lt;/em&gt; format, basically because it is encrypted as a layer of security. Once deployment starts, the user will be prompted to decrypt the environment variables before they are used.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;repositories&lt;/strong&gt; directory is a temporary directory and will be cleaned up once deployment is completed. That directory contains all cloned repositories to read data from and build Docker images for the current deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scripts&lt;/strong&gt; directory contains all necessary scripts to automate deployment. We will talk about each of those separately.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have moved these out of the way, we can proceed with each step where we will explain everything in a bit more detail.&lt;/p&gt;
&lt;h3&gt;
  
  
  Initiate Docker Swarm Cluster
&lt;/h3&gt;

&lt;p&gt;Okay, so first things first, make sure you have &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker installed&lt;/a&gt;; once that is done, we need to initialize the Docker Swarm cluster on our remote server. The command is quite straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker swarm init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, type the &lt;em&gt;sudo&lt;/em&gt; password (remember from &lt;a href="https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94"&gt;part 1&lt;/a&gt;, we don't allow usage of Docker without sudo). Once you have done that, you will get a message like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Swarm initialized: current node ({**node id**}) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token {**some token**} {**ip address:port**}

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should copy this text and save it somewhere safe, in case we ever need to add more remote servers to the cluster. For now, we don't need that: this node is the manager, meaning this server decides how many instances/replicas run on worker nodes (nodes that blindly follow the manager). As we only have one server, the manager manages itself. &lt;/p&gt;

&lt;p&gt;Now that we have initialized the Swarm node on our remote server, we can proceed with deployment. First, you should start thinking about what services you have in docker-compose, how many replicas you need, and what the strategy for those replicas is (for example, what happens if something goes wrong while updating a version). The answers vary based on the project and its requirements. I will show you my example of a Rust service, for development and production environments.&lt;/p&gt;

&lt;p&gt;This is an example of a Rust service for &lt;strong&gt;development&lt;/strong&gt; and doesn't have any deployment strategy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  api:
    build:
      context: ../api
    ports:
      - 3001:3001
    volumes:
      - ../api/migrations:/usr/api/migrations
      - ../api/src:/usr/api/src
    env_file:
      - ../api/.env
    depends_on:
      db: # Postgres service named "db"
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3001/api/health"]
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now here is the version for production, with a deployment section for Docker Swarm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  api:
    image: "api:0.0.${API_VERSION}"
    restart: always
    env_file:
      - ./envs/api.env
    healthcheck:
      test: ["CMD", "/health_check"]
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 5s
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 2
        order: start-first
        failure_action: rollback
    networks:
      - pg-network
      - main-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we now have a &lt;strong&gt;deploy&lt;/strong&gt; config which determines how we want our service to behave in production. The mode is &lt;em&gt;replicated&lt;/em&gt; with &lt;em&gt;2&lt;/em&gt; replicas, meaning we want 2 running replicas of our API service at any given moment. We have also added an update strategy for the replicas: they are updated in parallel (two instances can be updated at the same time), and the new version starts before the old one is stopped (both the updated and non-updated versions run at the same time until the new version is considered healthy, at which point Swarm shuts down the previous version). We also stated that we want to roll back to the previous version if something goes wrong. For more options to suit your specific project needs, visit &lt;a href="https://docs.docker.com/engine/swarm/services/" rel="noopener noreferrer"&gt;this page&lt;/a&gt; and see the configs that might suit you.&lt;/p&gt;
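&lt;p&gt;One option worth knowing about: when &lt;code&gt;failure_action: rollback&lt;/code&gt; triggers, Swarm uses a separate &lt;code&gt;rollback_config&lt;/code&gt; section if you define one. The fragment below is a hypothetical sketch (not taken from my production file) of how that could look:&lt;/p&gt;

```yaml
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 2
        order: start-first
        failure_action: rollback
      # Hypothetical addition: controls how Swarm rolls replicas
      # back when the update fails its health checks
      rollback_config:
        parallelism: 2
        order: stop-first
```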

&lt;p&gt;Note the part &lt;code&gt;${API_VERSION}&lt;/code&gt; in the &lt;strong&gt;image&lt;/strong&gt; key. We will discuss that part in detail in the section about transferring containers to remote servers.&lt;/p&gt;

&lt;p&gt;Note also that there is no &lt;strong&gt;depends_on&lt;/strong&gt; key in Swarm mode, because it is not supported by the Swarm cluster. Swarm assumes the &lt;strong&gt;db&lt;/strong&gt; service will always be up and running in the cluster.&lt;/p&gt;

&lt;p&gt;Now that we have clarified the difference between the Swarm cluster and pure docker-compose, we can proceed with the actual deployment script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy.sh
&lt;/h2&gt;

&lt;p&gt;Let's begin with the actual deployment script, its steps, and flow. Contents of the &lt;strong&gt;deploy.sh&lt;/strong&gt; file are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get the directory of the current script
script_dir=$(dirname "$0")

# Clone repositories
"$script_dir/scripts/clone_repos.sh"

cd "$(pwd)"

# Write new versions
"$script_dir/scripts/write_versions.sh"

cd "$(pwd)"

# Build docker images
"$script_dir/scripts/build_containers.sh"

cd "$(pwd)"

"$script_dir/scripts/transfer_containers.sh"

cd "$(pwd)"

"$script_dir/scripts/start_containers.sh"

cd "$(pwd)"

"$script_dir/scripts/cleanup.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's go step by step through this script and explain what all of these scripts have as their content and purpose.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning repositories
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/clone_repos.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The purpose of this script is to clone all repositories listed in the &lt;strong&gt;input-data&lt;/strong&gt; txt files and make them available in the &lt;strong&gt;repositories&lt;/strong&gt; directory. The contents of this script are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get the directory of the current script
script_dir=$(dirname "$0")

# Clone repositories
source "$script_dir/../input-data/repositories.txt"
source "$script_dir/../input-data/repo-list.txt"

if [ -d "$script_dir/../repositories" ]; then
    rm -rf "$script_dir/../repositories"
    echo "All files in repositories have been removed."
fi
mkdir -p "$script_dir/../repositories"

# Loop through each variable
for var in "${repositories[@]}"; do
    # Get the value of the current variable
    value="${!var}"

    REPO_NAME=$(basename -s .git "$value")

    git clone "$value" "$script_dir/../repositories/$REPO_NAME"

done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice these two lines in &lt;code&gt;clone_repos.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source "$script_dir/../input-data/repositories.txt"
source "$script_dir/../input-data/repo-list.txt"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These lines source the &lt;em&gt;txt&lt;/em&gt; files that define our variables. Their contents are:&lt;/p&gt;

&lt;p&gt;repositories.txt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API_REPO={URL to your repository}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;repo-list.txt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repositories=("API_REPO")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;URLs from repositories.txt are used to clone those repositories into the &lt;strong&gt;repositories&lt;/strong&gt; directory. Note that you should have SSH keys configured (or an OAuth2 token included in the URL) so cloning works without a username/password prompt.&lt;br&gt;
repo-list.txt simply mirrors the contents of repositories.txt. It exists solely for convenience, so we can loop through the repositories while cloning them into the &lt;strong&gt;repositories&lt;/strong&gt; directory.&lt;/p&gt;
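&lt;p&gt;The indirection between the two files can be confusing at first, so here is a minimal, self-contained sketch (with a made-up URL) of how &lt;code&gt;${!var}&lt;/code&gt; resolves a variable &lt;em&gt;name&lt;/em&gt; from repo-list.txt to the URL defined in repositories.txt. Note that &lt;code&gt;${!var}&lt;/code&gt; and arrays are Bash features, so the scripts should run under &lt;code&gt;bash&lt;/code&gt;, not plain &lt;code&gt;sh&lt;/code&gt;:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical stand-ins for repositories.txt and repo-list.txt
API_REPO="https://example.com/example/api.git"  # from repositories.txt
repositories=("API_REPO")                       # from repo-list.txt

for var in "${repositories[@]}"; do
    value="${!var}"                     # indirect expansion: name to URL
    REPO_NAME=$(basename -s .git "$value")
    echo "Would clone $value into repositories/$REPO_NAME"
done
```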

&lt;p&gt;Now that we have all repositories cloned, we can proceed to the next step.&lt;/p&gt;
&lt;h3&gt;
  
  
  Writing versions
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/write_versions.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The aim of this script is straightforward: go into each cloned repository, get its version, and write it down in the &lt;strong&gt;output&lt;/strong&gt;/versions.txt file (which will be used on the remote server). The version is determined by the number of commits. For this flow, a simple and practical way to version Docker images is to set the patch version of the built image to the total number of commits on the main branch of the git repository. That way, we get automatic versioning tied to the latest code change in git.&lt;/p&gt;

&lt;p&gt;Contents of this script are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script_dir="$(pwd)/scripts"

repositories_path="$script_dir/../repositories"

output_file="$script_dir/../output/versions.txt"

if [ -f "$output_file" ]; then
    # Remove the file
    rm "$output_file"
fi


# We are going to a directory that is cloned from the previous script
cd "$repositories_path/api"

# This command gets us the total number of commits for the main 
# branch
API_VERSION=$(git rev-list --count main)
echo "API_VERSION: $API_VERSION"

# Make output directory and write versions.txt file
mkdir -p "$script_dir/../output"
touch "$output_file"

# Write API version to a txt file
echo "API_VERSION=$API_VERSION" &amp;gt;&amp;gt; "$output_file"
# Version COMPOSE file
echo "COMPOSE_VERSION=$(($(date +%s%N)/1000000))" &amp;gt;&amp;gt; "$output_file"
echo "SWARM_STACK_NAME=demo_stack" &amp;gt;&amp;gt; "$output_file"

unzip "$script_dir/../input-data/envs.zip" -d "$script_dir/../output/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the code above, we can see that, apart from writing the API version in the &lt;strong&gt;versions.txt&lt;/strong&gt; file, we also write the compose file version and the Swarm stack name, and unzip all env variables into the output directory (for easier sync with the remote server). The reason we version the compose file is that if we change something in it, we want the Swarm cluster to deploy from the latest version; with a static name, we could accidentally deploy with a previous compose version, which we are trying to avoid. The Swarm stack name is also something we want control over: if we ever decide to change it, we should be able to do that from the deployment script.&lt;/p&gt;
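&lt;p&gt;To make the timestamp trick concrete, here is a minimal sketch (assuming GNU &lt;code&gt;date&lt;/code&gt;, as on Linux) of how the unique compose file name is derived; millisecond precision means two deployments will practically never collide:&lt;/p&gt;

```shell
# Millisecond timestamp, the same expression used in write_versions.sh
COMPOSE_VERSION=$(($(date +%s%N)/1000000))

# The compose file for this deployment gets a unique, sortable name
compose_file="docker-compose.${COMPOSE_VERSION}.yaml"
echo "$compose_file"
```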

&lt;p&gt;Now that we have the latest versions of our code and compose file written down, we can proceed with the actual build of our Docker images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Docker images
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/build_containers.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script only builds images based on the specified repository and saves them to the &lt;strong&gt;output&lt;/strong&gt;/images directory as &lt;strong&gt;.tar&lt;/strong&gt; files. The goal of saving these images in the specified &lt;strong&gt;output&lt;/strong&gt; directory is to transfer them as .tar files to our remote server and load them from there. Take a look at &lt;a href="https://docs.docker.com/reference/cli/docker/image/save/" rel="noopener noreferrer"&gt;saving images&lt;/a&gt; and &lt;a href="https://docs.docker.com/reference/cli/docker/image/load/" rel="noopener noreferrer"&gt;loading images&lt;/a&gt; with Docker. The contents of this script are the following for our API image and repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script_dir="$(pwd)/scripts"

source "$script_dir/../output/versions.txt"

repositories_path="$script_dir/../repositories"

# API

cd "$repositories_path/api"

API_IMAGE="api:0.0.$API_VERSION"

docker build -f Dockerfile.prod -t "$API_IMAGE" .

cd "$script_dir"

images_path="$(pwd)/../output/images"

if [ -d "$images_path" ]; then
    rm -f "$images_path"/*
    echo "All files in $images_path have been removed."
else
    mkdir -p "$images_path"
fi


docker save -o "$(pwd)/../output/images/api.tar" "$API_IMAGE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have more images to build, just repeat the code from the &lt;code&gt;# API&lt;/code&gt; line. Also, note that we are using &lt;code&gt;$API_VERSION&lt;/code&gt; to version our Docker image in the script. That way we will be able to instruct our compose file to use the proper image version, once loaded on the server.&lt;/p&gt;
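&lt;p&gt;If the list of services grows, repeating that block gets noisy. A small helper function (a hypothetical refactor, not part of the scripts above) keeps each extra image down to one line; the &lt;code&gt;DOCKER&lt;/code&gt; variable is overridable, e.g. &lt;code&gt;DOCKER=echo&lt;/code&gt; for a dry run:&lt;/p&gt;

```shell
# Hypothetical helper: build one image from its repository directory and
# save it as a .tar file. DOCKER defaults to the real docker binary.
DOCKER="${DOCKER:-docker}"

build_and_save() {
    repo_dir="$1"
    image="$2"
    tar_out="$3"
    (
        # Subshell, so the cd does not leak into the caller
        cd "$repo_dir" || return 1
        $DOCKER build -f Dockerfile.prod -t "$image" .
    )
    $DOCKER save -o "$tar_out" "$image"
}

# Usage (one line per service):
# build_and_save "$repositories_path/api" "api:0.0.$API_VERSION" "$images_path/api.tar"
```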

&lt;h3&gt;
  
  
  Transfer the output directory to the remote server
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/transfer_containers.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might be wondering why we are transferring this image (or images, if you have more applications) to a remote server and not to some container registry like &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;DockerHub&lt;/a&gt; or the &lt;a href="https://www.digitalocean.com/pricing/container-registry" rel="noopener noreferrer"&gt;Digital Ocean registry&lt;/a&gt;. The answer is simple and quite expected... you guessed it... &lt;strong&gt;costs&lt;/strong&gt;. One might argue that container registries are quite cheap and free tiers are available, and that is correct, but since we are at the beginning of our application, we want full control and minimal running costs. By transferring images to the remote server and loading them there, we skip container registries entirely, along with the need to pay for them (or to worry about surpassing the free tier). This way, we control everything, and I would always advocate this approach if you are using only one server, with a small team, and a not-yet-profitable application.&lt;/p&gt;

&lt;p&gt;Contents of this script are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script_dir="$(pwd)/scripts"

source "$script_dir/../input-data/credentials.txt"

output_folder="$script_dir/../output"
input_folder="$script_dir/../input-data"

source "$output_folder/versions.txt"

cp "$script_dir/../../docker-compose.prod.yaml" "$output_folder/docker-compose.$COMPOSE_VERSION.yaml"

rsync -arvzP --rsh="ssh -p $remote_port" "$output_folder/" "$script_dir/../input-data/credentials.txt" "$remote_username@$remote_ip:/app"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that in this script, we are using the previously written &lt;strong&gt;versions.txt&lt;/strong&gt; file from the &lt;strong&gt;output&lt;/strong&gt; directory. From it we read &lt;code&gt;$COMPOSE_VERSION&lt;/code&gt; and use it to name the compose file, so the compose file for the current deployment is unique on the server. &lt;/p&gt;

&lt;p&gt;We are also using the &lt;em&gt;rsync&lt;/em&gt; command to transfer everything in the &lt;strong&gt;output&lt;/strong&gt; directory to &lt;strong&gt;/app&lt;/strong&gt; on the server, along with &lt;strong&gt;credentials.txt&lt;/strong&gt; (that file contains the &lt;em&gt;sudo&lt;/em&gt; password, so we can run commands on the remote server without being prompted for it; remember, we need &lt;em&gt;sudo&lt;/em&gt; to run Docker commands there). For more information about the &lt;em&gt;rsync&lt;/em&gt; command and the flags used, visit &lt;a href="https://linux.die.net/man/1/rsync" rel="noopener noreferrer"&gt;this page&lt;/a&gt;. Note that our &lt;strong&gt;output&lt;/strong&gt; directory now has the following contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;envs

&lt;ul&gt;
&lt;li&gt;api.env&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;images

&lt;ul&gt;
&lt;li&gt;api.tar&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;docker-compose.1721118119140.yaml (where the number is version/timestamp)&lt;/li&gt;

&lt;li&gt;versions.txt&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If you have 2FA set on the remote server (as we do), remember that at this moment you will be prompted to enter your one-time code from the application.&lt;/p&gt;

&lt;p&gt;Okay, so we have transferred everything that we need to the server. It is time to connect to the server and deploy a new version of our image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start containers in production
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/start_containers.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The goal of this script is to connect to a remote server and execute commands to load all Docker images on the server and deploy them to the Swarm cluster. Contents of this script are as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script_dir="$(pwd)/scripts"

# Load all variables from credentials.txt
source "$script_dir/../input-data/credentials.txt"

# Connect to the remote server and run the bash script defined
# below (everything up to the closing ENDSSH marker)
ssh $remote_username@$remote_ip -p $remote_port "bash -s" &amp;lt;&amp;lt; 'ENDSSH'
# From here, the server bash script begins

# Go to /app directory on the server
cd /app

# Load variables that were previously transferred to the server
source "./versions.txt"
source "./credentials.txt"

# Define a path to the images directory
IMAGE_DIR="/app/images"

# Loop through each .tar file in the directory
for tar_file in "$IMAGE_DIR"/*.tar; do
    if [ -f "$tar_file" ]; then
        echo "Loading image from $tar_file..."
        # Load the .tar file as an image on the server
        echo "$pass" | sudo -S docker load -i "$tar_file"
    else
        echo "No .tar files found in $IMAGE_DIR"
    fi
done

echo "All images have been loaded."

# Define API_VERSION to be used in docker-compose
export API_VERSION="${API_VERSION}"

# This is where we deploy all versions to Swarm cluster
echo "$pass" | sudo -S -E docker stack deploy --prune --detach=false -c "docker-compose.$COMPOSE_VERSION.yaml" "$SWARM_STACK_NAME"

# Removing all the files from the /app directory
rm -f docker-compose.*.yaml

rm -rf "/app/images"

rm -rf "/app/envs"

rm "/app/credentials.txt"

rm "/app/versions.txt"

# Wait to consolidate deployment before continuing
sleep 60

# Prune all that is not used (previous versions of images and
# volumes) so we clean after our deployment and do not bloat
# server with unused image versions and volumes
echo "$pass" | sudo -S -E docker system prune -f
echo "$pass" | sudo -S -E docker image prune -a -f
echo "$pass" | sudo -S -E docker volume prune -a -f

# End executing remote server script
ENDSSH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that all the &lt;code&gt;echo "$pass" |&lt;/code&gt; parts of the script automate entering the sudo password before each Docker command that needs sudo. The &lt;code&gt;$pass&lt;/code&gt; variable is sourced from &lt;strong&gt;credentials.txt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As you can see, the goal is simply to deploy everything to the Swarm cluster by reading the current version of the compose file. Think of that compose file as instructions for the Swarm cluster to re-deploy whatever has changed. The command &lt;code&gt;export API_VERSION="${API_VERSION}"&lt;/code&gt; introduces an environment variable before Swarm reads the compose file. That variable works in conjunction with this line in the compose file: &lt;code&gt;image: "api:0.0.${API_VERSION}"&lt;/code&gt;. That way, we always read the latest written version and deploy it to the Swarm stack.&lt;/p&gt;

&lt;p&gt;We are using &lt;code&gt;--detach=false&lt;/code&gt; because we want to block the current terminal until deployment is completed. Note that we have removed all files from the &lt;strong&gt;/app&lt;/strong&gt; directory; after cleaning, it is literally &lt;strong&gt;blank&lt;/strong&gt;, including &lt;strong&gt;envs&lt;/strong&gt;. You might wonder why that is, and here is the thing: the environment variables used by our containers in the Swarm cluster are loaded into memory, so we don't need to keep them written in a file on the server. As soon as we deploy to the Swarm stack with the compose instructions, and thereby load the env variables from the file for the first time, we are free to delete the files. Yes, even if Swarm needs to restart the containers in production or start more of them, those env variables remain in place in memory and do not need to be written in a file.&lt;/p&gt;

&lt;p&gt;Now that deployment is successful, and we see changes in production, it is time to run one last script before wrapping up the deployment process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clean everything from the current deployment
&lt;/h3&gt;

&lt;p&gt;Command from deploy.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"$script_dir/scripts/cleanup.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the last command that we are running in the deployment process, and it is one of the simpler ones. Contents of this script are as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script_dir="$(pwd)/scripts"

rm -rf "$script_dir/../output"
rm -rf "$script_dir/../repositories"

docker volume prune -a -f

docker image prune -a -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are removing the &lt;strong&gt;output&lt;/strong&gt; and &lt;strong&gt;repositories&lt;/strong&gt; directories that were generated by the current deployment. Afterward, we prune all Docker-related artifacts from our local machine, so multiple deployments don't bloat our disk and everything stays clean. Note that we are pruning everything with the &lt;em&gt;-a&lt;/em&gt; flag here, but you can specify exactly which images and volumes you want to remove based on your current project setup.&lt;/p&gt;
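&lt;p&gt;If pruning everything with &lt;em&gt;-a&lt;/em&gt; is too aggressive for your setup (for example, you want to keep a few older images around for a quick local rollback), you can select old tags yourself. The sketch below is a hypothetical helper that, given a keep count and a list of &lt;code&gt;api:0.0.N&lt;/code&gt; style tags, prints every tag except the newest ones; you could then feed that list to &lt;code&gt;docker image rm&lt;/code&gt;:&lt;/p&gt;

```shell
# Hypothetical helper: print all but the $1 newest tags, assuming the
# patch number (third dot-separated field) is the version, as in api:0.0.42
keep_newest() {
    keep="$1"
    shift
    printf '%s\n' "$@" | sort -t. -k3 -n -r | tail -n +"$((keep + 1))"
}

# Example: keep the 2 newest api tags, list the rest for removal
old_tags=$(keep_newest 2 api:0.0.41 api:0.0.42 api:0.0.43)
echo "$old_tags"
# Removal would then be:
# for tag in $old_tags; do docker image rm "$tag"; done
```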

&lt;p&gt;After cleaning up, we have completed our automated deployment process. Now, if we run &lt;code&gt;./deploy.sh&lt;/code&gt; from the &lt;strong&gt;deploy&lt;/strong&gt; directory, it should all work like a charm (well, it should, at least on my Ubuntu 22.04 machine).&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;We have successfully deployed our API application to a Docker Swarm cluster on our remote server using shell scripting. The main benefit of this approach, and why I am advocating it, is &lt;strong&gt;costs&lt;/strong&gt;: we offload everything to our local machine, so there is no need to pay for registries and CI/CD actions (or worry about surpassing a free tier), as we can use our machine however, and for however long, we want. Another big benefit is full control over who can deploy and who can contribute. If we start adding people to our team once the application begins generating revenue, we can decide that only our machine can connect to the remote server and complete a deployment, while others contribute to Git and write code. We can also allow multiple machines to deploy; it is up to us, and that is the main point: we get to decide, and we don't have to pay for every decision. Once our application scales to many more containers, we can think about more complex tools like Kubernetes, or paying for a container registry. For starters, this is enough; let's not over-engineer (more shell scripting, please!) and overpay without practical necessity.&lt;/p&gt;

&lt;p&gt;As mentioned at the beginning of this article, stay tuned for the third and final part (maybe) of this series, where we will tackle automatic backups of both Postgres and MySQL databases to our local hard drive, and restoring the backed-up data (more shell scripting, of course!).&lt;/p&gt;

&lt;p&gt;Let me know in the comments if I should share the GitHub repository of the code presented here, or if there is something that is not properly explained and clarified. Your feedback means a lot.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>shell</category>
      <category>compose</category>
      <category>server</category>
    </item>
    <item>
      <title>Self-host - Part 1 - Securing your remote server</title>
      <dc:creator>Davor Jovanović</dc:creator>
      <pubDate>Wed, 19 Jun 2024 22:41:40 +0000</pubDate>
      <link>https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94</link>
      <guid>https://dev.to/davorj94/self-host-part-1-securing-your-remote-server-3l94</guid>
      <description>&lt;p&gt;This blog will be the first in the three-part series (maybe more, we will see) of self-hosting. In the first part, we will explain how to start and secure your self-hosted server. The &lt;a href="https://dev.to/davorj94/self-host-part-2-zero-downtime-deployment-using-docker-swarm-2o3c"&gt;second part&lt;/a&gt; will address zero-downtime deployment using Docker Swarm. In the &lt;a href="https://dev.to/davorj94/self-host-part-3-mysql-and-postgresql-database-backup-to-local-hard-drive-53ke"&gt;third part&lt;/a&gt;, we will discuss backing up your databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is this all about? Why self-hosting?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's say you are a developer, which you most likely are. Let's say you get an idea for an application that you want to build. You need to host that application somewhere, as your home computer most likely doesn't have a stable internet connection or a static IP, since those are usually (read: always) changed dynamically by your ISP.&lt;/p&gt;

&lt;p&gt;Okay, so you have an idea for an application, you want to try it out under your terms, and what is your first instinct? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLOUD&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;AWS&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;GOOGLE&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;SERVICES&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;REGISTRIES&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;ACTIONS&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;CI/CD&lt;/strong&gt;!&lt;br&gt;
&lt;strong&gt;MORE CLOUD SERVICES&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;And many more...&lt;/p&gt;

&lt;p&gt;Now, there is a catch in all of those little things/services/conveniences: &lt;strong&gt;the cloud is expensive&lt;/strong&gt;. For everything covered in this part and the future parts of this series, you will be able to find equivalent services on AWS, Google Cloud, etc. Of course you will, but it may cost you quite a bit, and more so the more services you take under your belt.&lt;/p&gt;

&lt;p&gt;Now, don't get me wrong, I am not against using cloud services (although I think they are a bit costlier than they should be). I am simply stating that you should &lt;strong&gt;minimize&lt;/strong&gt; the costs of &lt;strong&gt;everything possible&lt;/strong&gt; until you get some revenue from your application. Once you start getting revenue and you stop being the sole developer working on your app, I am telling you, it will be a breeze to scale both vertically and horizontally (okay, horizontally is a bit more involved, but still, it won't be that difficult). When there is money coming in from an application, everything about development gets easier: you might hire a DevOps engineer (if you are one, then congrats, you might hire a developer to write you an app for your impeccable infrastructure), more developers, etc., you get the point.&lt;/p&gt;

&lt;p&gt;Therefore, to conclude the big &lt;strong&gt;why&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;There is no point in paying large chunks of money for the development of an app that is still not generating any revenue. The infrastructure an app runs on should be paid for from its profit. Therefore, this series is focused on gathering the knowledge to reduce the costs of development and MVPs until you get some meaningful profits. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, enough chit-chat, let's get the server working!&lt;/strong&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Why is a server needed?
&lt;/h3&gt;

&lt;p&gt;As we have previously explained, the server must be bought, and that is a plain infrastructure problem. You cannot really control your network connection, whether you lose electricity in your apartment, or whether your ISP changes your home IP address. We are trying to make application infrastructure cheap, but by no means do we want to trade that off against application uptime. We don't want our users to be unable to access our application; that is where we draw the line. Therefore, you must buy a remote server. We are not getting into free 60-day trials from Google Cloud, or any other free trial. Why, you ask? Considering that your server will be up longer than that, you might end up paying more than if you had paid the lower price from the beginning.&lt;/p&gt;

&lt;p&gt;After much research, at the time of writing this blog, the winner is simply &lt;a href="https://www.hetzner.com/cloud/" rel="noopener noreferrer"&gt;Hetzner&lt;/a&gt;. The ratio of costs and quality is simply the best at this moment (not promoted, I promise). &lt;/p&gt;

&lt;p&gt;Okay, so we will go with Hetzner. Specifically, I will take a server that costs 6.30€ (at the time of writing this blog) and has the following specifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;8GB RAM&lt;/li&gt;
&lt;li&gt;4vCPU&lt;/li&gt;
&lt;li&gt;80GB Disk Storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which, in my opinion, according to the current market, is a pretty good deal. You can go with even lower specifications if you want, but these will work just fine for me.&lt;/p&gt;


&lt;h3&gt;
  
  
  Buying the server
&lt;/h3&gt;

&lt;p&gt;Once we have decided which server to buy, we shall proceed with its configuration, as presented below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip4fd2fig3exlda5tuu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip4fd2fig3exlda5tuu1.png" alt="Choosing OS and Country" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Germany is closest to me and Ubuntu 22.04 is just fine for me, note that you can choose a different version.&lt;/p&gt;

&lt;p&gt;Next, we will choose which server we want from the provided options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74uhogawo87pswi2cux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74uhogawo87pswi2cux.png" alt="Choosing machine configuration" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After deciding on the strength of our machine, we shall proceed with its SSH configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqimdff5a7uhjfu5i459c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqimdff5a7uhjfu5i459c.png" alt="Generating ssh keys on Hetzner" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should add a public SSH key from your local machine (don't worry, public SSH keys are free to share with others). If you don't, then you will receive an e-mail with the root user password, which you don't really want. There is no need to involve a third party in generating your login credentials. This way, when you add your public SSH key, you will receive no e-mail, and security engineers will be proud.&lt;br&gt;
To check what your public SSH key is, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then simply copy/paste from the terminal and you are good to go.&lt;/p&gt;

&lt;p&gt;Once we have completed setting up the machine, we can start SSH connection to its terminal from our local machine with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@{your server ip}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should answer any prompt that might occur for the first SSH connection (for fingerprint). That prompt is received only once, and if you get it on any following SSH connections, you are most likely a victim of a Man in the Middle attack, just so you know what to Google if that happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, let's make our server secure!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Update everything to the latest version
&lt;/h3&gt;

&lt;p&gt;It is important to keep everything on the server up to date, as newer versions patch, among other things, security flaws. Therefore, we always want to operate with the latest versions of our software.&lt;/p&gt;

&lt;p&gt;To update everything, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, once you have upgraded everything, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /var/run/reboot-required
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you get &lt;strong&gt;/var/run/reboot-required&lt;/strong&gt; as a response from the last command, that means you should reboot your machine (duh!). To reboot, simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and wait for your machine to reboot. Note that you can also reboot from your provider's dashboard; all major providers allow for that.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Change the password for the root user
&lt;/h3&gt;

&lt;p&gt;In the following steps, we will disable the root user completely, but I wanted to show you how you can first change the root user's password. To change it, type the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and simply enter a new password when prompted.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Create a non-root user
&lt;/h3&gt;

&lt;p&gt;It is important to get rid of the root user as soon as possible, as the root user has permission to do whatever it wants. Since we are root at the moment, we don't type sudo for anything, but if someone malicious were to reach our server (we certainly hope that is not going to happen!), we want them to reach it at most as some other user. Namely, if they want to tamper with some system configuration, they need to type sudo and know the sudo password (which we will create and make hard to figure out).&lt;/p&gt;

&lt;p&gt;Okay, let's create a non-root user by typing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;adduser {username you want}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then type a new password (make sure it is a hard-to-guess password; use some random generator or whatever, as it will be the one you type when using &lt;strong&gt;sudo&lt;/strong&gt;) and also fill in answers for the questions about user information. After that, the new user is created. Remember, &lt;strong&gt;keep this password somewhere safe&lt;/strong&gt;, it will be needed for future endeavors.&lt;/p&gt;

&lt;p&gt;Then we should add this user to the sudo group with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;usermod -aG sudo {username you have chosen}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check it by typing &lt;code&gt;groups {username you have chosen}&lt;/code&gt; and see if a chosen username is in the sudo group. If you see your chosen username and &lt;strong&gt;sudo&lt;/strong&gt; as output, then we are good to go.&lt;/p&gt;

&lt;p&gt;Now, we need to enable the newly created user to connect from our local machine via SSH (the previously added SSH key works only for the root user). We will accomplish that by exiting the current session on the remote server (just type &lt;code&gt;exit&lt;/code&gt; and you are out), and logging in with our newly created user by typing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {chosen username}@{server ip}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will be prompted to type our newly created user's password, because we don't have SSH keys configured for this user yet. Type in the password and enter the remote machine's terminal.&lt;br&gt;
To enable SSH login for the new user, first we need to get our local machine's public SSH key (remember, it is &lt;code&gt;cat ~/.ssh/id_rsa.pub&lt;/code&gt;), and then type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir .ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano .ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and simply paste the public key that you printed in your local machine's terminal. You can add as many public SSH keys as you want to the authorized_keys file.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Disable password login
&lt;/h3&gt;

&lt;p&gt;Now that we have configured SSH login (&lt;strong&gt;do not do this step if you haven't configured SSH login&lt;/strong&gt;, you might lock yourself out of the server and then need to go into rescue mode from the dashboard), we should disable password login completely, so we shut down all those brute-force attacks that try to guess our password and enter our machine. Trust me, an SSH key is &lt;strong&gt;much&lt;/strong&gt; harder to guess. &lt;br&gt;
To disable password login, type the following into your server terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the document, find &lt;code&gt;#PasswordAuthentication&lt;/code&gt;, uncomment and set it to "no".&lt;/p&gt;
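&lt;p&gt;For reference, after the edit the relevant part of &lt;code&gt;sshd_config&lt;/code&gt; should look roughly like this (&lt;code&gt;PubkeyAuthentication&lt;/code&gt; is typically already &lt;strong&gt;yes&lt;/strong&gt; by default; it is shown here only for clarity):&lt;/p&gt;

```
PubkeyAuthentication yes
PasswordAuthentication no
```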

&lt;p&gt;After that, you need to restart the SSH service for changes from &lt;code&gt;sshd_config&lt;/code&gt; to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service ssh restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here on forward, password login is disabled entirely, and we are much safer from brute force attacks on our host machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Disable root login
&lt;/h3&gt;

&lt;p&gt;In step 2, when we changed the password for the root user, we mentioned that we would disable the root user from logging in entirely, and we are going to do that now.&lt;/p&gt;

&lt;p&gt;Go to the same sshd_config file by typing &lt;code&gt;sudo nano /etc/ssh/sshd_config&lt;/code&gt; and set &lt;code&gt;PermitRootLogin&lt;/code&gt; to &lt;strong&gt;no&lt;/strong&gt; to disable root login entirely, regardless of whether the SSH key or password login method is used.&lt;/p&gt;

&lt;p&gt;Again, you need to restart the SSH service for changes from &lt;code&gt;sshd_config&lt;/code&gt; to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service ssh restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From now on, nobody can log in as the root user, so even if someone reaches our server, they still have to figure out our user's password (which we made super hard to guess) to run commands as root. That is the whole philosophy around sudo and why you shouldn't use the root user by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  6) Network and firewall policies
&lt;/h3&gt;

&lt;p&gt;You should configure your firewall settings and close all unnecessary ports. For example, for web applications, usually only ports 80 (HTTP) and 443 (HTTPS) are needed, as well as port 22 for SSH connection, which means that all other ports can be closed.&lt;/p&gt;

&lt;p&gt;Closing ports can be done from the provider dashboard, like in the Hetzner example below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8z5m2tgddszolysnv0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8z5m2tgddszolysnv0o.png" alt="Hetzner firewall configuration" width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or by using &lt;a href="https://ubuntu.com/server/docs/firewalls" rel="noopener noreferrer"&gt;ufw&lt;/a&gt; for Ubuntu, which comes with it as the default firewall configuration tool.&lt;/p&gt;

&lt;p&gt;Whichever method you decide on, close all unused ports. If you are not sure yet what app will be hosted, or whether any will be hosted at all, close everything except &lt;code&gt;22&lt;/code&gt; for SSH login.&lt;/p&gt;
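&lt;p&gt;If you go the &lt;code&gt;ufw&lt;/code&gt; route, a minimal sketch for the web-app case above could look like this (make sure your SSH port is allowed before enabling, or you may lock yourself out):&lt;/p&gt;

```shell
# Allow SSH first so we don't lock ourselves out
sudo ufw allow 22/tcp
# Allow web traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Deny everything else inbound, allow outbound, then turn the firewall on
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable
```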

&lt;h3&gt;
  
  
  7) Change the default ssh port
&lt;/h3&gt;

&lt;p&gt;Optionally, you can change the default &lt;code&gt;22&lt;/code&gt; port which you use to log in. Automated scripts usually target port 22 by default, so this can be another small layer of hassle for any malicious request. Note, though, that whichever other port you decide on (preferably above 1024, to avoid potential conflicts with other services, but it is up to you) can be quickly figured out by a port scan, so treat this mainly as a minor obstacle for malicious requests, not real security. To set a custom port, type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and change &lt;code&gt;Port 22&lt;/code&gt; to whichever number you want. Let's say, for example, that we want to change it to &lt;code&gt;1602&lt;/code&gt;, then we would have that line written as &lt;code&gt;Port 1602&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Afterward, do not forget to update the firewall configuration (previous step) and set SSH port to be whatever you have written instead of 22.&lt;/p&gt;

&lt;p&gt;Note that now you will have to log in to the remote server using -p (short flag for port), as we are using a non-standard port. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {username}@{your server ip} -p {your chosen port number}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid this tedious writing of port and username every time we try to connect to a remote server via SSH, we can add configuration to our local machine to let it know with which user we want to log in when we type &lt;code&gt;ssh {your server ip}&lt;/code&gt;. To update that configuration, type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd .ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Type the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host {your remote host ip}

&amp;amp;nbsp; Port {your custom SSH port}

&amp;amp;nbsp; User {username of remote server}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and save and exit. With that configuration in place, the next time you want to log in to your remote server, just type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {your server ip}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, note that if you have multiple SSH keys, you can specify which key to use with the &lt;code&gt;IdentityFile&lt;/code&gt; option, pointing at the private key file you want to identify with.&lt;/p&gt;
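&lt;p&gt;For example, a complete entry with a named alias and an explicit key could look like this (the alias &lt;code&gt;myserver&lt;/code&gt; and the key file name are placeholders, pick your own):&lt;/p&gt;

```
Host myserver
  HostName {your server ip}
  Port {your custom SSH port}
  User {username of remote server}
  IdentityFile ~/.ssh/id_rsa
```

&lt;p&gt;With this in place, &lt;code&gt;ssh myserver&lt;/code&gt; is all you need to type.&lt;/p&gt;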

&lt;h3&gt;
  
  
  8) Configure automatic updates
&lt;/h3&gt;

&lt;p&gt;It is good to enable automatic package updates on your server. To achieve that, we will use the &lt;strong&gt;unattended-upgrades&lt;/strong&gt; package, so type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install unattended-upgrades
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dpkg-reconfigure unattended-upgrades
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and hit &lt;strong&gt;yes&lt;/strong&gt;. After that, upgrades will be automatic on the remote server.&lt;/p&gt;
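&lt;p&gt;To verify it took effect, you can check &lt;code&gt;/etc/apt/apt.conf.d/20auto-upgrades&lt;/code&gt;, which after answering yes should contain roughly:&lt;/p&gt;

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```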

&lt;h3&gt;
  
  
  9) Add fail2ban package
&lt;/h3&gt;

&lt;p&gt;You should also add the &lt;a href="https://en.wikipedia.org/wiki/Fail2ban" rel="noopener noreferrer"&gt;fail2ban&lt;/a&gt; package to prevent brute-force attacks. Namely, this package temporarily bans IPs that make too many repeated failed login attempts, and therefore creates a lot of hassle for automated scripts trying various combinations to enter your server (an SSH key is hard to brute-force by itself), so this package will increase security drastically. To add it, type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install fail2ban
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that you can customize its behavior, but usually, defaults are enough, at least in the beginning.&lt;/p&gt;
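&lt;p&gt;If you later do want to customize it, the usual approach is to create &lt;code&gt;/etc/fail2ban/jail.local&lt;/code&gt; rather than editing the shipped defaults. The values below are just an illustrative sketch, not a recommendation:&lt;/p&gt;

```
[sshd]
enabled  = true
# Ban an IP for 1 hour after 5 failed logins within 10 minutes
maxretry = 5
findtime = 10m
bantime  = 1h
```

&lt;p&gt;After changing it, restart the service with &lt;code&gt;sudo systemctl restart fail2ban&lt;/code&gt;.&lt;/p&gt;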

&lt;h3&gt;
  
  
  10) Add 2FA using Google Authenticator
&lt;/h3&gt;

&lt;p&gt;Adding two-factor authentication has its pros and cons. The pro is that it is safe: nobody can access your remote server without the code that is available only in the authenticator app on your mobile. The cons are that automated tools might have a hard time connecting to your remote server, for example GitHub Actions (there are some actions that kind of allow you to type in a code for other actions to run, but that is all shady and of low stability), and therefore for each future deploy you need to be present with the authentication code from your app. Also, it is tedious to type the auth code every time you log in to the server.&lt;/p&gt;

&lt;p&gt;Don't get me wrong, I use the authenticator app for remote servers, it is just that you need to be aware of the pros and cons before making an educated decision to use it.&lt;/p&gt;

&lt;p&gt;So, how can we enable 2FA in our remote server?&lt;/p&gt;

&lt;p&gt;Simply follow the &lt;a href="https://ubuntu.com/tutorials/configure-ssh-2fa#1-overview" rel="noopener noreferrer"&gt;step-by-step instructions&lt;/a&gt; for Ubuntu about configuring the 2FA. &lt;/p&gt;

&lt;p&gt;Now, this step-by-step guide didn't quite work for me properly, as it didn't prompt me for auth code once I tried to SSH into the remote server. Therefore, after digging a bit more, the following configuration needed to be changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then visually scan this config file and make sure the following lines are present (anywhere in the file, they just need to be there):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UsePAM yes
PasswordAuthentication no
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
PermitEmptyPasswords no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /etc/pam.d/sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and visually scan it to make sure it contains this config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Standard Un*x authentication.
#@include common-auth

# Require authenticator, if not configured then allow
auth    required    pam_google_authenticator.so debug nullok
auth    required    pam_permit.so
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this setup, your 2FA should work as expected and you should be prompted to add an authenticator code the next time you try to SSH to a remote server.&lt;/p&gt;




&lt;p&gt;Also, for good practice, go to the remote server and type the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd .ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 600 authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are restricting read/write permissions to the owner of the file only, to make sure other users cannot change it without special permission. This is especially useful if you have multiple people working on the application and you don't want just anyone to be able to lock everyone else out of the server, accidentally or intentionally.&lt;/p&gt;
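&lt;p&gt;You can see what this does on any Linux machine with a quick throwaway experiment (the path below is just an example, not your real SSH directory):&lt;/p&gt;

```shell
# Create a throwaway file and restrict it to owner read/write only,
# exactly as we did for authorized_keys above
mkdir -p /tmp/ssh_perm_demo
touch /tmp/ssh_perm_demo/authorized_keys
chmod 600 /tmp/ssh_perm_demo/authorized_keys

# Print the octal permission bits
stat -c '%a' /tmp/ssh_perm_demo/authorized_keys   # → 600
```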




&lt;p&gt;Note: You can also restrict connections to specific IPs or to a VPN, but that is not feasible for a home setup, as we don't really have static IPs, so let's leave it as an option here.&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;We have discussed why we would want to self-host our application and how to set up a remote server from scratch. We have also outlined a step-by-step guide to making your remote server secure and controllable only from your local machine. &lt;br&gt;
This is quite enough to get started with remote servers and get yourself up and running in the self-hosted world. Note that you don't have to buy a remote server for development, as you can develop on your local machine; buy one only when you want to provide end users with a stable app, or, namely, a production environment.&lt;/p&gt;

&lt;p&gt;In the next part of this series, we will focus on deploying our application (in my case, a web application) using Docker Swarm with zero-downtime deployment. We will also look into how we can omit container registries and establish communication directly between our local machine and remote server (mainly to reduce costs, because, as you remember, our app shouldn't be too much of an expense until it starts to generate revenue once it changes the world).&lt;/p&gt;

&lt;h4&gt;
  
  
  Useful links:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=Q1Y_g0wMwww" rel="noopener noreferrer"&gt;Syntax - Self Host 101&lt;/a&gt; - Highly recommended&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=Q1Y_g0wMwww&amp;amp;list=PLLnpHn493BHHAxTeLNUZEDLYc8uUwqGXa" rel="noopener noreferrer"&gt;Syntax - Self Host 101 Playlist&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com/server/docs/firewalls" rel="noopener noreferrer"&gt;Ubuntu Firewalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gcore.com/learning/how-to-change-ssh-port/" rel="noopener noreferrer"&gt;Change SSH Port on Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com/tutorials/configure-ssh-2fa#1-overview" rel="noopener noreferrer"&gt;2FA Setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverfault.com/questions/1073593/ssh-public-key-authentication-with-google-authenticator-still-asks-for-password:" rel="noopener noreferrer"&gt;2FA Additional Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
