Christiana Otoboh

Deploying a Containerized WordPress App on AWS with Docker, EBS & S3 Backups

Introduction

In this project, I deployed a containerized WordPress application on an AWS EC2 instance using Docker. The setup includes a MySQL database, persistent storage with EBS, and automated backups to S3.
The goal wasn't just to get WordPress running; it was to understand how real-world deployments handle data persistence, networking, and automation.

Prerequisites

Before following along, make sure you have the following in place:

  • An active AWS account
  • Basic familiarity with the Linux command line
  • A key pair created in AWS (needed to SSH into your EC2 instance)
  • Basic understanding of what Docker is (you don't need to be an expert)

Note: Everything in this project is done on a free-tier eligible EC2 instance.
Just be mindful to stop or terminate resources when you're done to avoid unexpected charges.
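
If you use the AWS CLI, stopping or terminating the instance looks like this (the instance ID below is a placeholder; substitute your own):

```bash
# Stop the instance (it can be restarted later, but attached EBS storage still accrues charges)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Terminate the instance permanently once you are done with the project
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```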

Project Overview

Here’s what I built:

  • A Linux EC2 instance hosted on AWS
  • Docker installed and configured using a Bash script
  • WordPress and MySQL running as Docker containers
  • Persistent storage using an attached EBS volume
  • Automated MySQL backups uploaded to an S3 bucket
  • Secure access configured via Security Groups

⚙️Step 1: Provisioning the EC2 Instance and EBS Volume via the AWS Console

With the architecture in mind, the first step was setting up the core infrastructure.
Launching the EC2 Instance
(Not covered in detail here — the process is fairly straightforward. The default configuration suffices, with one exception: make sure to configure the security group as follows.)

  • Port 22 → for SSH access
  • Port 80 → for web access
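
The same two inbound rules can be added from the AWS CLI if you prefer (the security group ID is a placeholder):

```bash
# Allow SSH (22) and HTTP (80) from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```

For anything beyond a demo, restrict port 22 to your own IP (e.g. --cidr x.x.x.x/32) rather than 0.0.0.0/0.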

Creating the EBS Volume
(Also not covered in detail, but equally straightforward — just ensure the volume is provisioned in the same region as your EC2 instance.)

Attaching the EBS Volume to the EC2 Instance
Once the volume is created, attach it to the instance by following these steps:

  1. Select the EC2 instance in the AWS Console.
  2. Click Actions, then navigate to Storage → Attach Volume.
  3. From the dropdown, select the volume you just created.
  4. Choose a device name from the dropdown — any available option works.
  5. Click Attach Volume to confirm.
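
The console steps above map to a single CLI call (both IDs are placeholders):

```bash
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```

Note that on Nitro-based instances (most current instance types), the volume appears inside the OS as an NVMe device such as /dev/nvme1n1 regardless of the device name chosen here; that is why the provisioning script in Step 2 refers to /dev/nvme1n1.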

I then connected to the instance via SSH and handled everything else from the terminal.

🐳 Step 2: Installing Docker & Setting up Configurations

Using the Bash script below, I automated the entire environment setup. Specifically, the script:

  • Installed Docker and its plugins, including Docker Compose
  • Started and enabled the Docker service
  • Added the ubuntu user to the Docker group, allowing Docker commands to run without sudo
  • Mounted the EBS volume to the filesystem
  • Used a bind mount so MySQL writes data directly to the EBS volume, ensuring data survives container restarts and is not tied to the container lifecycle
  • Installed the AWS CLI in preparation for the S3 backup process

The goal of this script was to handle everything in one pass: Docker installation, EBS volume mounting, and AWS CLI setup, so the instance is fully ready before any containers are launched.

#!/bin/bash
set -e

echo "============================================================"
echo "Provisioning script is now running"
echo "============================================================"

# Status of docker on the server

if command -v docker &>/dev/null || sudo systemctl is-active --quiet docker; then
  echo "docker $(docker --version) is installed and running. Skipping docker installation"
else
  echo "Docker is not installed, installing docker ................"

  #Update packages
    sudo apt update
    sudo apt install ca-certificates curl gnupg lsb-release -y

    #Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg

    # Set up the Docker apt repository
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    #Install the Docker Engine packages
    sudo apt update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

    # Add the ubuntu user to the docker group (so no sudo needed)
    sudo usermod -aG docker ubuntu

      # Enable and start Docker
    sudo systemctl enable docker
    sudo systemctl start docker
echo "docker $(docker -v) has been installed......"
fi



# Install aws cli

echo "============================================"
echo "Installing aws cli"
echo "============================================"

if command -v aws &>/dev/null; then
  echo "aws CLI $(aws --version) is installed"
else

  # Download the official AWS CLI v2 installer. I am using Ubuntu
  curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip"

  # Unzip it
  sudo apt install unzip -y
  unzip /tmp/awscliv2.zip -d /tmp

  # Run the installer
  sudo /tmp/aws/install

  # Verify installation
  aws --version

  # Cleanup
  rm -rf /tmp/awscliv2.zip /tmp/aws
fi

# Format the EBS volume and make it a filesystem 
echo "==========================================="
echo "Mounting EBS Volume"
echo "==========================================="

if mountpoint -q /mnt/ebs; then
  echo "EBS Volume is mounted"
else
  echo "EBS Volume is not mounted.... Mounting EBS Volume"

  # Format the EBS volume (WARNING: mkfs wipes the volume; only run this on a fresh, empty volume)
  sudo mkfs -t ext4 /dev/nvme1n1

  #Create a mount point where the EBS Volume will appear
  sudo mkdir -p /mnt/ebs

  # Mount the volume to the mount point (folder) and add an fstab entry so the mount survives reboots
  sudo mount /dev/nvme1n1 /mnt/ebs
  echo '/dev/nvme1n1 /mnt/ebs ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
fi



# Create a folder on the mounted EBS volume for mysql to write to
sudo mkdir -p /mnt/ebs/mysql-data

# The mysql container runs as a user with UID 999; grant it ownership of the data directory so it can write to it
sudo chown -R 999:999 /mnt/ebs/mysql-data

echo "Provisioning script completed!"

Running the Script

Before executing, make the script executable by running:
chmod +x scriptname.sh

Then execute it:
./scriptname.sh

Once the script has finished running, verify that everything was set up correctly. These checks are not strictly necessary if the script runs without errors, but they are worth doing, especially if you are just starting out, as they build confidence that everything is in place before moving on.

Confirm Docker is installed:
docker --version

Next, confirm the Docker daemon is accessible:

docker ps

On first run, docker ps might return the following error:

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

Log out of your server and SSH back in, then run docker ps again; the error should be resolved.
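
The re-login is needed because Linux only re-reads group membership at login. If you would rather not reconnect, newgrp starts a subshell with the new group already active:

```bash
# Check that the ubuntu user was added to the docker group
groups ubuntu

# Start a subshell with the docker group applied, then retry
newgrp docker
docker ps
```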

Confirm the EBS volume is mounted:

df -h /mnt/ebs

Confirm the AWS CLI is installed:

aws --version

▶️ Step 3: Getting the Containers Running

I defined the services using a Docker Compose file to run:

  • A WordPress container (frontend)
  • A MySQL container (database)

Both containers communicate over Docker's default network. I also ensured that the MySQL volume was mapped to the mounted EBS volume, so all database data is written to persistent storage.

For security reasons, I stored the database credentials in a .env file and referenced them as variables inside the Docker Compose file.
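
For reference, the .env file lives next to the Compose file and might look like this (the values are placeholders; choose your own):

```env
MYSQL_ROOT_PASSWORD=change-me-root
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=change-me
```

Docker Compose automatically loads a file named .env from the project directory. Keep it out of version control (add it to .gitignore).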

services:
  db:
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - /mnt/ebs/mysql-data:/var/lib/mysql
  wordpress:
    depends_on:
      - db
    image: wordpress:php8.2-apache 
    ports:
      - "80:80"

    restart: unless-stopped
    environment:
      WORDPRESS_DB_HOST: db:3306                                                  
      WORDPRESS_DB_USER: ${MYSQL_USER}                                       
      WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}                                      
      WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
networks:
  default: {}

I then started the containers by running:
docker compose up -d

To confirm the containers were created and running:
docker ps

If successful, your WordPress application should now be accessible via the EC2 instance's public IP address. Make sure the URL starts with http, not https: only port 80 was opened in the security group, so an https URL will make the site appear unreachable.

☁️ Step 4: Automating Backups to S3

To improve reliability, I implemented a backup strategy that:

  • Exports the MySQL database using mysqldump
  • Compresses the backup file
  • Uploads it to an S3 bucket using the AWS CLI

This ensures data can be restored even if the instance fails.

Prerequisites: S3 Bucket & IAM Role
Before creating the backup script, I set up two things:

  1. Created an S3 bucket to store the backups.
  2. Created an IAM role for authentication, which I attached to the EC2 instance. This allows the instance to upload to S3 without needing to hardcode credentials — I find this the easiest and most secure approach.

When creating the IAM role, make sure:

  • Trusted entity type is set to AWS Service
  • Use case is set to EC2
  • The role is granted full S3 access or a custom S3 bucket policy — without this, the upload will fail.
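
If you go the custom-policy route instead of full S3 access, a minimal policy scoped to the backup bucket could look like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-backup-bucket",
        "arn:aws:s3:::your-backup-bucket/*"
      ]
    }
  ]
}
```

s3:ListBucket applies to the bucket ARN and s3:PutObject to the objects inside it, which is why both resources are listed; the backup script needs both to upload the file and then verify it.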

To attach the IAM role to the instance, select the EC2 instance, go to Actions → Security → Modify IAM Role, select the role you created from the dropdown, and save.

The Backup Script

#!/bin/bash

echo "========================================"
echo        "Starting backup.sh script"
echo "========================================"

echo "Setting up environment variables"

set -e        # stop if anything fails
set -a        # start exporting variables
source .env   # load variables from .env
set +a        # stop exporting

DB_CONTAINER="ubuntu-db-1"              # MySQL container name (Compose names it <project>-db-1)
# MYSQL_USER, MYSQL_PASSWORD and MYSQL_DATABASE are already loaded from .env above
S3_BUCKET="s3://sca-wordpress-demo"     # s3 bucket name

# Build a timestamped filename; the backup is written locally to /tmp before uploading to s3
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="mysql-backup-${TIMESTAMP}.sql.gz" 
TEMP_DIR="/tmp"
FULL_BACKUP_PATH="$TEMP_DIR/$BACKUP_FILE"



# Run the docker exec to run the mysql dump
echo "================================================="
echo "Taking the mysql backup locally"
echo "================================================"

if docker exec "$DB_CONTAINER" sh -c \
  "exec mysqldump \
  --single-transaction --set-gtid-purged=OFF --no-tablespaces \
  -u$MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE" | gzip > "$FULL_BACKUP_PATH"; then
  echo "Dump taken successfully"
else
  echo "Dump not successful"
  exit 1
fi


# Upload to s3 bucket
# I created an IAM role with an s3 bucket access, attached it to the EC2 Instance. This takes care of verification

echo "================================================"
echo                "Uploading to s3 bucket"
echo "================================================"

if aws s3 cp "$FULL_BACKUP_PATH" "$S3_BUCKET/"; then
  echo "Upload to s3 was completed successfully"

  # Confirm the backup file is in the s3 bucket
  if aws s3 ls "$S3_BUCKET/$BACKUP_FILE" &>/dev/null; then
    echo "$BACKUP_FILE is in $S3_BUCKET"
  else
    echo "Upload seemed to succeed but file not found in bucket"
    exit 1
  fi
else
  echo "Upload FAILED"
  exit 1
fi


echo "=================================================="
echo " Upload completed"
echo "=================================================="

Note: Replace the S3 bucket name with your own bucket name before running the script.

Make the script executable:
chmod +x backup.sh

Then run it:
./backup.sh
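
To make the backup truly hands-off, you can schedule the script with cron. A sketch (the path and schedule are assumptions; adjust them to your setup):

```bash
# Open the crontab editor
crontab -e

# Then add this line: run backup.sh daily at 02:00,
# cd-ing first so the script can find its .env file
0 2 * * * cd /home/ubuntu && ./backup.sh >> /home/ubuntu/backup.log 2>&1
```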

Confirm your backup was uploaded successfully:

aws s3 ls s3://your-bucket-name/
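
A backup is only useful if you can restore it. A minimal restore sketch, assuming the container name and variables from the scripts above (the bucket and backup filename are examples):

```bash
# Download a backup from S3
aws s3 cp s3://your-bucket-name/mysql-backup-20250101_020000.sql.gz /tmp/

# Decompress it and pipe it back into the MySQL container;
# the MYSQL_* variables are already set inside the container's environment
gunzip -c /tmp/mysql-backup-20250101_020000.sql.gz | \
  docker exec -i ubuntu-db-1 sh -c 'exec mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"'
```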

📌 Key Takeaways

From this project, I learned:

  • How WordPress and MySQL communicate across containers using Docker's default network
  • Why databases need persistent storage and how a bind mount to an EBS volume keeps data alive beyond the container lifecycle
  • How to automate an entire server setup — Docker installation, volume mounting, and CLI configuration — using a single Bash script
  • How to implement a real backup strategy using mysqldump, compression, and S3 uploads, authenticated securely via an IAM role
  • How Security Groups act as a firewall, and why only exposing the ports you actually need matters

Conclusion

This project gave me hands-on experience with deploying a real-world application using cloud and container technologies. More importantly, it helped me understand the why behind key concepts like persistence, networking, and security.

If you're learning DevOps or Cloud Engineering, I highly recommend building something like this; the lessons stick much better when things break and you fix them yourself.

💬 Feel free to share feedback or suggestions.
