There are many approaches to hosting a website nowadays, and many rely on major cloud providers such as AWS or Azure. These fully fledged cloud platforms offer a wide range of services for different use cases. While these services provide major benefits—like managed infrastructure, scalability, and support—for many projects they are simply overkill.
This often leads to analysis paralysis, where it feels like you need a separate service for everything. Do you need logs? Use a managed service. Do you need a database? Use a managed service. Do you need to back up your server? Use a managed service. All of this quickly adds up to unnecessary costs, which can be a dealbreaker for smaller projects.
In my case, I just wanted to deploy a WordPress blog while maintaining control over the server, with as little management and software installation as possible, and keeping costs low. I initially considered using an AWS EC2 instance for hosting the blog.
Let's do a quick calculation:
A t3.small instance (2 vCPUs, 2 GiB RAM) costs about $15 per month, plus around $3.80 per month for a general-purpose gp3 volume, for a total of approximately $18.80 (around €16) per month.
There are many VPS providers out there. I chose Hetzner because I’m based in Europe and prefer using infrastructure hosted within the EU. With Hetzner, I was able to provision an Ubuntu server with even better specs for about €5 per month, including a 10GB volume that I use to back up my database.
Table of Contents
- The Goal
- Create a Hetzner Server
- The WordPress application
- The NGINX configuration
- Configuring CI/CD using GitHub Actions
- Testing the deployment
- Moving to Production
- Conclusions
The Goal
My goal was to have a usable WordPress blog that required as little maintenance as possible from my side, alongside a simple CI/CD pipeline to help with deployment and potentially automate updates for WordPress and MariaDB.
I chose to use Docker — more specifically, Docker Compose — for the two main components I wanted to deploy: WordPress and MariaDB. I also used Nginx as the web server, and since I store my code repository on GitHub, I chose GitHub Actions for the CI/CD part.
For all other maintenance tasks — database backups, SSL certificate provisioning, and secrets management — I decided to use simple Bash scripts.
Without further ado, let’s jump into the setup.
Create a Hetzner Server
- Create a server in the Hetzner console
- Add your personal SSH key in the Hetzner console so you can log in. An example of how to create an SSH key can be found here.
- During the creation process, if you scroll down to the Cloud Config section, you can add a script that will run only once during the server initialization. You can use this script to install the software required for the web server, such as NGINX, Docker, UFW for the firewall, and Certbot for SSL certificate provisioning.
- Example cloud-start.yaml script (this cloud-init format is what Hetzner's Cloud Config field expects, but it can easily be adapted into a standard Bash script):
#cloud-config
package_update: true
package_upgrade: true

packages:
  - git
  - curl
  - ufw
  - fail2ban
  - unattended-upgrades
  - nginx
  - certbot
  - python3-certbot-nginx

write_files:
  - path: /etc/fail2ban/jail.local
    content: |
      [DEFAULT]
      bantime = 1h
      findtime = 10m
      maxretry = 5

      [sshd]
      enabled = true
      port = ssh
      logpath = %(sshd_log)s
      backend = %(sshd_backend)s

  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: ubuntu:ubuntu
    content: |
      Host github.com
        HostName github.com
        User git
        IdentityFile /home/ubuntu/.ssh/github_deploy
        StrictHostKeyChecking yes

  - path: /etc/sudoers.d/nginx-reload
    permissions: '0440'
    content: |
      ubuntu ALL=(ALL) NOPASSWD: /bin/cp /home/ubuntu/app/nginx/nginx.dev.conf /etc/nginx/sites-available/wordpress.conf, /bin/cp /home/ubuntu/app/nginx/nginx.prod.conf /etc/nginx/sites-available/wordpress.conf, /usr/sbin/nginx -t, /bin/systemctl reload nginx

  - path: /usr/local/bin/print-setup-instructions
    permissions: '0755'
    content: |
      #!/bin/bash
      echo "========================================"
      echo " Cloud-init setup complete"
      echo "========================================"
      echo ""
      echo " Add this deploy key to your GitHub repo:"
      echo " (Settings → Deploy keys → Add deploy key)"
      echo ""
      cat /home/ubuntu/.ssh/github_deploy.pub
      echo ""
      echo " Then on this server, clone your repo:"
      echo " git clone git@github.com:<org>/<repo>.git ~/app"
      echo " cd ~/app && docker compose up -d"
      echo "========================================"

runcmd:
  # Docker (via official apt repository)
  - apt-get install -y ca-certificates curl gnupg
  - install -m 0755 -d /etc/apt/keyrings
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  - chmod a+r /etc/apt/keyrings/docker.gpg
  - echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list
  - apt-get update
  - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  - systemctl enable docker
  - systemctl start docker
  # Generate deploy key for ubuntu user and add GitHub to known_hosts
  - mkdir -p /home/ubuntu/.ssh
  - chown ubuntu:ubuntu /home/ubuntu/.ssh
  - chmod 700 /home/ubuntu/.ssh
  - sudo -u ubuntu ssh-keygen -t ed25519 -C "hetzner-deploy" -f /home/ubuntu/.ssh/github_deploy -N ""
  - sudo -u ubuntu ssh-keyscan -t ed25519 github.com >> /home/ubuntu/.ssh/known_hosts
  - chmod 600 /home/ubuntu/.ssh/known_hosts
  - chown ubuntu:ubuntu /home/ubuntu/.ssh/known_hosts
  # Nginx site
  - mkdir -p /var/www/wordpress
  - mkdir -p /var/www/certbot
  - chown www-data:www-data /var/www/wordpress
  - ln -sf /etc/nginx/sites-available/wordpress.conf /etc/nginx/sites-enabled/wordpress.conf
  - rm -f /etc/nginx/sites-enabled/default
  # Firewall
  - ufw default deny incoming
  - ufw default allow outgoing
  - ufw allow OpenSSH
  - ufw allow 80/tcp
  - ufw allow 443/tcp
  - ufw --force enable
  # Services
  - systemctl enable fail2ban
  - systemctl start fail2ban
  - dpkg-reconfigure -f noninteractive unattended-upgrades
  # Print deploy key instructions to console log
  - /usr/local/bin/print-setup-instructions
- Create the server and verify that the cloud-init script executed successfully.
ssh -i ~/.ssh/<your_hetzner_key> root@<server-ip>
sudo cat /var/log/cloud-init-output.log
Note: Make sure that TCP port 22 is allowed in both the Hetzner firewall and the server firewall. The command ufw allow OpenSSH from the previous step enables SSH access on the server side.
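Once logged in, you can also sanity-check that the firewall and core services came up as the cloud-init script intended. A minimal check, assuming UFW was enabled by the script above:

sudo ufw status verbose                   # should list OpenSSH, 80/tcp and 443/tcp as allowed
sudo systemctl status nginx --no-pager    # nginx should be installed and running
docker --version                          # confirms the Docker apt packages installed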
The WordPress application
Now that the server is ready and everything is set up, we can move on to the WordPress site. Let’s assume that, in this case, we are not interested in modifying the site’s code, so we can simply use the official Docker images. Since we also need a database, we can use the official MariaDB image as well. We can then use Docker Compose to bundle these two services together, as shown below:
services:
  db:
    image: mariadb:12
    restart: unless-stopped
    env_file: /etc/wordpress.env
    volumes:
      - db_data:/var/lib/mysql
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10" # 10 × 50MB = 500MB per service
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  wordpress:
    image: wordpress:php8.4-fpm-alpine
    restart: unless-stopped
    depends_on: [db]
    env_file: /etc/wordpress.env
    ports:
      - "127.0.0.1:9000:9000"
    volumes:
      - /var/www/wordpress:/var/www/html
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10" # 10 × 50MB = 500MB per service
    healthcheck:
      test: ["CMD", "php", "-r", "exit(0);"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s

volumes:
  db_data:
This Docker Compose file also includes some additional configuration for logging (we only want to store up to 500 MB of logs for each service to avoid overloading the server), health checks, and the wiring of the wordpress.env file.
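Once the stack is up, you can confirm that the health checks pass before wiring anything else on top. A minimal check using standard Docker commands (the container name assumes the compose project directory is named app, which is also why the backup script later refers to app-db-1):

cd ~/app && docker compose ps    # STATUS should show "(healthy)" for both services
docker inspect --format '{{.State.Health.Status}}' app-db-1    # prints "healthy" once MariaDB is ready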
To generate the wordpress.env file, you can run the following script on the server. Note that <mysql_password> and <wp_db_password> must be the same value, since both refer to the wordpress database user: the first is the password MariaDB sets for that user, and the second is the password WordPress uses to connect as that user. In general, it is better to store secrets separately from the server itself, but for the purposes of this hands-on example, this approach should be sufficient.
#!/usr/bin/env bash
set -euo pipefail

if [[ $# -ne 3 ]]; then
  echo "Usage: $0 <mysql_root_password> <mysql_password> <wp_db_password>" >&2
  exit 1
fi

MYSQL_ROOT_PASSWORD="$1"
MYSQL_PASSWORD="$2"
WORDPRESS_DB_PASSWORD="$3"

sudo tee /etc/wordpress.env > /dev/null <<EOF
MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=${MYSQL_PASSWORD}
WORDPRESS_DB_HOST=db
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD}
EOF

sudo chmod 600 /etc/wordpress.env
echo "Secrets written to /etc/wordpress.env"
The NGINX configuration
Let's now add our web server configuration. I like to use NGINX, but there are many other options, such as Apache, Caddy, or lighttpd. The following configuration serves static files directly and forwards PHP requests to the PHP-FPM container. This config is not production-ready, as it exposes HTTP on port 80 without SSL or a domain name, and it is missing several other security considerations.
# Rate limit zones: track by IP, 10MB storage each
limit_req_zone $binary_remote_addr zone=general:10m rate=20r/s;  # 20 req/s for general traffic
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;     # 5 req/min for login

server {
    listen 80;
    server_name _;

    limit_req zone=general burst=40 nodelay;  # Allow bursts of 40 reqs, no queuing delay
    limit_req_status 429;                     # Return "Too Many Requests" on limit hit
    server_tokens off;                        # Hide nginx version from responses

    # Security headers
    add_header X-Content-Type-Options "nosniff" always;                   # Prevent MIME sniffing
    add_header X-Frame-Options "SAMEORIGIN" always;                       # Block clickjacking
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;  # Limit referrer leakage

    root /var/www/wordpress;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;  # Standard WordPress permalink handling
    }

    location = /wp-config.php {
        deny all;  # Block access to DB credentials file
    }

    location = /wp-login.php {
        limit_req zone=login burst=3 nodelay;  # Strict rate limit on login page
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    }

    location = /xmlrpc.php {
        deny all;  # Block xmlrpc — common brute-force/DDoS vector
    }

    location ~ \.php$ {
        # Pass all PHP files to PHP-FPM
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff2)$ {
        expires 30d;     # Cache static assets for 30 days
        access_log off;  # Skip logging for static files
    }
}
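Before pointing CI/CD at this config, it is worth validating and loading it by hand once, using the same commands the pipeline will later run (the sites-available path and symlink were prepared by the cloud-init script above):

sudo cp ~/app/nginx/nginx.dev.conf /etc/nginx/sites-available/wordpress.conf
sudo nginx -t                  # syntax check; must pass before reloading
sudo systemctl reload nginx    # zero-downtime reload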
Configuring CI/CD using GitHub Actions
For the deployment pipeline, we are going to keep things simple. We will detect changes to the docker-compose.yaml and nginx.dev.conf files using Git, redeploy the application only if there are Docker-related changes, and reload the web server only if there are NGINX-related changes. The GitHub Actions pipeline should look like this:
name: Deploy

on:
  push:
    branches: [main]
    paths:
      - 'nginx/**'
      - 'docker-compose.yaml'
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Detect changed files
        id: changes
        run: |
          BEFORE="${{ github.event.before }}"
          if git cat-file -e "${BEFORE}^{commit}" 2>/dev/null; then
            git diff --name-only "$BEFORE" "${{ github.sha }}" > /tmp/changed.txt
          elif git rev-parse HEAD~1 2>/dev/null; then
            git diff --name-only HEAD~1 HEAD > /tmp/changed.txt
          else
            # First commit — treat everything as changed
            git show --name-only --format="" HEAD > /tmp/changed.txt
          fi
          echo "nginx=$(grep -c '^nginx/' /tmp/changed.txt || true)" >> $GITHUB_OUTPUT
          echo "compose=$(grep -c '^docker-compose.yaml' /tmp/changed.txt || true)" >> $GITHUB_OUTPUT

      - name: Set up SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H ${{ secrets.SERVER_HOST }} >> ~/.ssh/known_hosts

      - name: Pull latest code on server
        run: |
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_HOST }} "cd ~/app && git pull"

      - name: Reload nginx config
        if: steps.changes.outputs.nginx != '0' || github.event_name == 'workflow_dispatch'
        run: |
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_HOST }} \
            "sudo cp ~/app/nginx/nginx.${{ vars.NGINX_ENV }}.conf /etc/nginx/sites-available/wordpress.conf \
            && sudo nginx -t \
            && sudo systemctl reload nginx"

      - name: Restart Docker stack
        if: steps.changes.outputs.compose != '0' || github.event_name == 'workflow_dispatch'
        run: |
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_HOST }} \
            "cd ~/app && docker compose up -d --remove-orphans"

      - name: Verify site is accessible
        run: |
          echo "Waiting for site to be ready..."
          sleep 10
          HTTP_STATUS=$(curl -o /dev/null -s -w "%{http_code}" --max-time 10 http://${{ secrets.SERVER_HOST }})
          echo "HTTP status: $HTTP_STATUS"
          if [[ "$HTTP_STATUS" == "000" ]]; then
            echo "::error::Site is unreachable (connection failed)"
            exit 1
          elif [[ "$HTTP_STATUS" =~ ^5 ]]; then
            echo "::error::Site returned server error $HTTP_STATUS"
            exit 1
          else
            echo "Site is accessible (HTTP $HTTP_STATUS)"
          fi
Before running the workflow for the first time and verifying that it behaves as expected, we need to ensure that the following items have been properly configured:
GitHub Actions → Server SSH key (lets CI/CD deploy your code)
GitHub Actions runs in the cloud and needs to SSH into your server to deploy. You create this key pair on the server and authorize the public half, then give the private half to GitHub as a secret so your workflow can authenticate.
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/github_actions_deploy -N ""
cat ~/.ssh/github_actions_deploy.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/github_actions_deploy
Copy the above private key to GitHub → Settings → Secrets → Actions → SSH_PRIVATE_KEY.
Server → GitHub SSH key (lets the server pull your code)
During deployment, the server runs git pull. GitHub needs to trust the server to read your repo. This key was already generated automatically by cloud-start.yaml (ssh-keygen … -f /home/ubuntu/.ssh/github_deploy); you just need to register its public half with GitHub.
cat /home/ubuntu/.ssh/github_deploy.pub
Copy the above key to GitHub → Your repo → Settings → Deploy keys → Add deploy key.
Add remaining GitHub Actions secrets
Go to GitHub → Settings → Secrets → Actions and add:
| Type | Name | Value |
|---|---|---|
| Secret | SSH_PRIVATE_KEY | detailed above |
| Secret | SERVER_HOST | server IP |
| Secret | SSH_USER | ubuntu (the deploy user set up by cloud-init) |
| Variable | NGINX_ENV | dev |
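If you prefer the command line over the web UI, the same values can be set with the GitHub CLI. This is a sketch assuming gh is installed and authenticated against your repo:

gh secret set SSH_PRIVATE_KEY < ~/.ssh/github_actions_deploy
gh secret set SERVER_HOST --body "<server-ip>"
gh secret set SSH_USER --body "<ssh-user>"
gh variable set NGINX_ENV --body "dev"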
Testing the deployment
Let’s commit our code and run the pipeline. In the GitHub Actions tab, we should see the Deploy workflow running. If everything goes as planned, the workflow should complete successfully.
If you navigate to http://your-server-ip, you should be redirected to a WordPress configuration page where you need to set up your admin account. Once that is done, go to http://your-server-ip/wp-admin, log in with your credentials, and you should see the admin dashboard.
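You can also run a quick check from the command line instead of the browser; a fresh install typically answers with a redirect into the installer:

curl -I http://<your-server-ip>/    # expect HTTP 200 or a 302 redirect to wp-admin/install.php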
Congratulations! You have just deployed a fully functional full-stack application without modifying a single line of application code.
Moving to Production
There are a few additional considerations we need to address to make this setup truly production-ready. We need to purchase a domain, configure DNS, provision SSL certificates, and optionally set up automatic database backups. Let’s go through these steps.
Registering a New Domain
There are many domain providers available, such as GoDaddy, Namecheap, and Squarespace Domains. I use AWS frequently, so I’ll use my AWS account to provision a domain, but the same general steps apply to any domain registrar.
Once you have purchased your domain, you need to configure it with a DNS provider (in this example, we’ll use Hetzner DNS). This can be done in two steps:
First, ensure that your domain points to the appropriate name servers. In your domain provider’s dashboard, there should be an option to edit the name servers and point them to Hetzner’s name servers. In AWS, this can be done from the domain management page by selecting Actions → Edit Name Servers and adding the following Hetzner name servers:
helium.ns.hetzner.de, hydrogen.ns.hetzner.com, and oxygen.ns.hetzner.com.
Next, create DNS records in Hetzner that map your domain name to your server's IP address. Log in to Hetzner, navigate to the DNS section, and add a new DNS zone using your domain name. Then create two A records (one for the bare domain and one for the www subdomain) that point to your server's public IP address.
Once this is configured, your website should become reachable using the domain you purchased. Keep in mind that DNS propagation can take some time, so changes may not become visible immediately.
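To check whether propagation has reached you, you can query DNS directly (dig ships with most distributions; the domain below is a placeholder):

dig +short your-domain.com A      # should print your server's public IP
dig +short NS your-domain.com     # should list the Hetzner name servers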
Provision server SSL certificates
Provisioning an SSL certificate for your server using Certbot is very straightforward. You only need your domain name and an email address for registration and certificate management. Run the following script on your server:
#!/usr/bin/env bash
set -euo pipefail

DOMAIN="${1:-}"   # default to empty so the usage check below fires instead of tripping set -u
EMAIL="${2:-}"

if [[ -z "$DOMAIN" || -z "$EMAIL" ]]; then
  echo "Usage: ./scripts/provision-ssl.sh <domain> <email>"
  exit 1
fi

# Ensure webroot directory exists for ACME challenge
sudo mkdir -p /var/www/certbot

# Stop nginx temporarily so certbot can bind to port 80
sudo systemctl stop nginx || true

# Obtain certificate (standalone mode for first-time provisioning)
sudo certbot certonly \
  --standalone \
  --email "$EMAIL" \
  --agree-tos \
  --no-eff-email \
  -d "$DOMAIN" \
  -d "www.$DOMAIN"

# Start nginx with the new certificate in place
sudo systemctl start nginx

echo "Certificate issued for $DOMAIN"
echo "Nginx started with SSL enabled."
Now we also need to redirect traffic from port 80 to port 443. This needs to be configured in the NGINX configuration file. Make sure that both your server firewall and the Hetzner firewall allow inbound traffic on port 443.
In my case, I created a separate nginx.prod.conf file with several additional production-specific configurations, but to keep this already lengthy post concise, I will only include the HTTP to HTTPS redirection section here:
# ─── HTTP → HTTPS Redirect ────────────────────────────────────────────
# Redirects all plain HTTP traffic to HTTPS, except ACME challenges
# needed for Let's Encrypt certificate renewal.
server {
    listen 80;
    listen [::]:80;
    server_name your-domain.com www.your-domain.com;

    # Let's Encrypt certificate renewal endpoint
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://your-domain.com$request_uri;
    }
}

# ─── WWW → Non-WWW Redirect ───────────────────────────────────────────
# Canonicalises URLs by redirecting www to the bare domain for SEO.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.your-domain.com;

    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    return 301 https://your-domain.com$request_uri;
}

# ─── Main HTTPS Server ────────────────────────────────────────────────
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name your-domain.com;

    # ── SSL / TLS ──────────────────────────────────────────────────────
    # Certificate paths (managed by Certbot / Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    # Only allow modern TLS versions; SSLv3/TLS 1.0/1.1 are insecure.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Reuse TLS sessions to speed up repeated handshakes.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;
    ssl_session_tickets off;

    # wordpress and php related rules
    # ......
}
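After reloading nginx with the production config, a quick handshake test confirms that only modern TLS versions are accepted and the redirects work (openssl s_client is part of the standard openssl package):

openssl s_client -connect your-domain.com:443 -tls1_2 </dev/null | head -5    # should connect
curl -I https://www.your-domain.com    # should return a 301 to the bare domain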
Automatic database backup
This step is optional. If you completed all the previous steps, your site should already be production-ready. However, it is considered a best practice to regularly back up your database. Most major cloud providers offer automated backups as part of their managed database services, but you can also implement this yourself with a simple script.
In this example, I provisioned an external 10 GB volume, which costs around fifty cents per month, and attached it to my server to store backups.
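For reference, attaching the volume looks roughly like this on a fresh Hetzner volume. The device path below is an example: check yours under /dev/disk/by-id/, and skip the mkfs step if Hetzner already formatted the volume for you:

sudo mkfs.ext4 /dev/disk/by-id/scsi-0HC_Volume_12345678    # format once, only if the volume is empty
sudo mkdir -p /mnt/backups
sudo mount /dev/disk/by-id/scsi-0HC_Volume_12345678 /mnt/backups
echo '/dev/disk/by-id/scsi-0HC_Volume_12345678 /mnt/backups ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab    # persist across reboots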
The following script provides the database backup functionality (details on how to run this backup automatically are in the script comments):
#!/bin/bash
set -eo pipefail

# ─── Usage ────────────────────────────────────────────────────────────────────
# This script supports two modes:
#
# 1. Manual / one-off backup (no cleanup):
#
#        ./backup.sh --now
#
#    Use this before risky operations (plugin updates, migrations, etc.).
#    Creates a snapshot labelled "manual" and leaves all existing backups untouched.
#
# 2. Scheduled backup (with cleanup):
#
#        ./backup.sh --scheduled
#
#    Creates a snapshot labelled "weekly" then deletes backups older than
#    RETENTION_DAYS days. Intended to be called by cron, not manually.
#
# To schedule it, add a cron job on the server:
#
#        sudo crontab -e
#
# Then add:
#        0 2 * * 0 /root/app/scripts/backup.sh --scheduled >> /var/log/backup.log 2>&1
#
# Cron syntax: minute hour day-of-month month day-of-week
# The above runs every Sunday at 2:00 AM.

# ─── Config ───────────────────────────────────────────────────────────────────
ENV_FILE="/etc/wordpress.env"
BACKUP_DIR="/mnt/backups/mysql"
CONTAINER_NAME="app-db-1"
RETENTION_DAYS=60

# ─── Usage ────────────────────────────────────────────────────────────────────
usage() {
  echo "Usage: $0 [--now | --scheduled]"
  echo ""
  echo "  --now          Run an immediate one-off backup"
  echo "  --scheduled    Run a scheduled backup with retention cleanup"
  exit 1
}

# ─── Load credentials from env file ──────────────────────────────────────────
load_credentials() {
  if [[ ! -f "$ENV_FILE" ]]; then
    echo "[ERROR] Env file not found at $ENV_FILE"
    exit 1
  fi
  MYSQL_DATABASE=$(grep ^MYSQL_DATABASE "$ENV_FILE" | cut -d '=' -f2)
  MYSQL_USER=$(grep ^MYSQL_USER "$ENV_FILE" | cut -d '=' -f2)
  MYSQL_PASSWORD=$(grep ^MYSQL_PASSWORD "$ENV_FILE" | cut -d '=' -f2)
  if [[ -z "$MYSQL_DATABASE" || -z "$MYSQL_USER" || -z "$MYSQL_PASSWORD" ]]; then
    echo "[ERROR] Missing credentials in $ENV_FILE"
    exit 1
  fi
}

# ─── Run the dump ─────────────────────────────────────────────────────────────
run_backup() {
  local label=$1
  local date
  date=$(date +"%Y-%m-%d_%H-%M-%S")
  local backup_file="${BACKUP_DIR}/${MYSQL_DATABASE}_${label}_${date}.sql.gz"

  mkdir -p "$BACKUP_DIR"
  echo "[$(date)] Starting $label backup..."

  # Verify the container is running before attempting the dump
  if ! docker inspect --format '{{.State.Running}}' "$CONTAINER_NAME" 2>/dev/null | grep -q true; then
    echo "[ERROR] Container '$CONTAINER_NAME' is not running"
    exit 1
  fi

  # Pass password via env var — avoids exposing it in the process list.
  # --single-transaction ensures a consistent snapshot without locking tables (safe for InnoDB while live).
  # set -eo pipefail (at the top) ensures a failed docker exec propagates through the pipe.
  docker exec -e MYSQL_PWD="$MYSQL_PASSWORD" "$CONTAINER_NAME" \
    mariadb-dump \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    -u "$MYSQL_USER" "$MYSQL_DATABASE" \
    | gzip > "$backup_file"

  # Verify the backup is not empty
  if [[ ! -s "$backup_file" ]]; then
    echo "[ERROR] Backup file is empty, something went wrong"
    rm -f "$backup_file"
    exit 1
  fi

  echo "[$(date)] Backup saved: $backup_file ($(du -h "$backup_file" | cut -f1))"
}

# ─── Cleanup old backups ──────────────────────────────────────────────────────
cleanup() {
  echo "[$(date)] Removing backups older than $RETENTION_DAYS days..."
  find "$BACKUP_DIR" -name "*.sql.gz" -mtime +"$RETENTION_DAYS" -delete
  echo "[$(date)] Current backups:"
  ls -lh "$BACKUP_DIR"
}

# ─── Modes ────────────────────────────────────────────────────────────────────
mode_now() {
  load_credentials
  run_backup "manual"
  echo "[$(date)] Done."
}

mode_scheduled() {
  load_credentials
  run_backup "weekly"
  cleanup
  echo "[$(date)] Done."
}

# ─── Entry point ──────────────────────────────────────────────────────────────
case "$1" in
  --now)        mode_now ;;
  --scheduled)  mode_scheduled ;;
  *)            usage ;;
esac
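A backup is only as good as its restore path, so here is a minimal restore sketch, assuming the same container name and env file as in the script above. Run it as root, since /etc/wordpress.env is root-owned, and replace the file name with an actual snapshot:

source /etc/wordpress.env    # works because the file is plain KEY=value pairs
gunzip < /mnt/backups/mysql/wordpress_weekly_<date>.sql.gz \
| docker exec -i -e MYSQL_PWD="$MYSQL_PASSWORD" app-db-1 mariadb -u "$MYSQL_USER" "$MYSQL_DATABASE"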
Conclusions
This post covered quite a lot: VPS deployments, Docker, CI/CD using GitHub Actions, Bash scripting, DNS configuration, SSL certificate provisioning, and more. However, there are two key points I want to highlight:
The first point is that, for many projects, a simple VPS is more than enough, and there is often no need to rely on a full suite of managed cloud services. It is important to first understand who your users are, what their needs look like, and then choose the solution that best fits those needs from both a cost and long-term maintenance perspective.
The second point is that the fundamentals matter. Understanding the topics covered above can be extremely valuable when moving on to more advanced areas such as cloud platforms, Kubernetes, or more sophisticated CI/CD workflows. Knowing how to bring a site into production from start to finish provides a strong foundation that makes these more complex topics easier to understand.
Thank you for reading!