Beyond the Single Process: Architecting Laravel Containers for the Cloud
A deep dive into multi-process Docker builds using Supervisord, Nginx, and Reverb.
Preface
This guide serves as a technical walkthrough for architecting a Laravel application environment on the cloud. The focus is on the core building blocks required to run a modern PHP application, including real-time features and background workers, in a provider-agnostic manner. Whether deploying to AWS, GCP, or a private VPS, these architectural patterns remain consistent.
This is an introductory reference intended to explain how various system components integrate within a containerized environment. While it establishes a secure and functional foundation, it is not an exhaustive production hardening checklist.
A basic familiarity with Docker and Linux terminal commands is assumed. For those new to these tools, this post is structured to explain the logic behind each configuration, making it easier to research specific commands as they appear.
Contents
VM base setup (cloud-agnostic)
- System update and timezone synchronization
- Docker engine installation and configuration
- Security hardening with UFW and SSH
- Persistent storage and log rotation setup
- The VM bootstrap script
Container architecture
- The single-container multi-process rationale
- Orchestration with Supervisord
Docker image design
- Dependency isolation with multi-stage builds
- The Application runtime image (PHP-FPM, Nginx, Supervisor)
- Optimizing for Alpine Linux
Nginx configuration
- PHP-FPM integration via TCP loopback
- Static asset handling and security headers
- WebSocket proxying for Laravel Reverb
Process management with Supervisord
- Service orchestration and boot priorities
- Managing PHP-FPM and Nginx
- Scaling background queue workers
- Running the Reverb server
Runtime initialization
- Resolving volume permissions at boot
- Automated migrations and schema synchronization
- Production caching and optimization
- The entrypoint script execution
Building and running the container
- Image creation and layer caching
- Launching with environment files and volumes
- Deployment strategies: Single-container vs. Docker Compose
Moving toward production
- Key architectural transitions for scale and security
VM setup script
The base virtual machine is intentionally kept simple. The goal is to prepare a secure, minimal host that can run Docker reliably, regardless of the cloud provider.
Different Linux distributions ship with different default packages. The commands below assume Ubuntu, which is commonly used on AWS and GCP. If you use another distribution, adjust the package manager and service names accordingly.
To keep things straightforward, we use a single bash script to bootstrap the VM. This script handles:
- System updates and timezone configuration
- Docker installation and configuration
- Firewall rules with UFW
- SSH hardening
- Creation of Docker volumes for persistent data and logs
Before creating the VM, generate an SSH key and attach it during instance creation:
ssh-keygen -t ed25519 -C "username"
The comment (-C flag) in the previous command is optional, but using your username helps some providers (like GCP) automatically create a matching system user with sudo privileges.
Make sure to update the USERNAME variable in the script below so Docker commands can be run without sudo.
#!/bin/bash
# Update system and set timezone
sudo apt-get update -y
sudo timedatectl set-timezone UTC
# Add Docker official GPG key:
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
# Install Docker (latest stable)
sudo apt-get update -y
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Enable Docker at boot
sudo systemctl enable docker
sudo systemctl start docker
# Configure Docker log rotation
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
EOF
sudo systemctl restart docker
# Change this to your VM user
USERNAME=karim
# Create user if it does not exist
if ! id "$USERNAME" >/dev/null 2>&1; then
sudo useradd -m -s /bin/bash "$USERNAME"
fi
# Ensure docker group exists
if ! getent group docker >/dev/null; then
sudo groupadd docker
fi
# Add user to docker group
sudo usermod -aG docker "$USERNAME"
# Firewall setup (optional on managed cloud firewalls like GCP)
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 443/tcp
sudo ufw allow 80/tcp
sudo ufw --force enable
# SSH hardening
sudo apt-get install -y openssh-server
SSHD_CONFIG="/etc/ssh/sshd_config"
sudo cp "$SSHD_CONFIG" "${SSHD_CONFIG}.bak.$(date +%F-%T)"
sudo sed -i 's/^#PermitRootLogin.*/PermitRootLogin no/' "$SSHD_CONFIG"
sudo sed -i 's/^#PasswordAuthentication.*/PasswordAuthentication no/' "$SSHD_CONFIG"
sudo sed -i 's/^#PubkeyAuthentication.*/PubkeyAuthentication yes/' "$SSHD_CONFIG"
sudo systemctl restart ssh
# Docker volumes for persistent data
sudo docker volume create laravel-storage
sudo docker volume create laravel-sqlite3
sudo docker volume create laravel-supervisord
Root SSH access is disabled to ensure all privileged actions are tied to individual users and recorded in system logs. This enhances accountability and facilitates auditing. Logs can later be shipped off the VM to guarantee immutability.
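Note that the sed substitutions above only rewrite directives that are still commented out (the `^#` anchor); directives already set in sshd_config are left untouched. The effect is easy to verify on a throwaway copy rather than the real file:

```shell
# Scratch file standing in for /etc/ssh/sshd_config
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#PermitRootLogin prohibit-password
#PasswordAuthentication yes
#PubkeyAuthentication yes
EOF

# Same substitutions as the bootstrap script: each one uncomments the
# directive and forces the hardened value.
sed -i 's/^#PermitRootLogin.*/PermitRootLogin no/' "$cfg"
sed -i 's/^#PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"
sed -i 's/^#PubkeyAuthentication.*/PubkeyAuthentication yes/' "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

If a directive on your distribution ships uncommented, the pattern will not match; check the resulting file (or `sshd -T` output) before logging out of your session.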
At this point, the VM is secured, Docker is installed, and persistent storage is ready. We can now focus on containerizing the Laravel application itself.
Docker setup
A typical Docker container is designed to run a single long-lived process and does not include an init system such as systemd. In this setup, however, the Laravel application requires multiple cooperating processes:
- Nginx to serve HTTP traffic and static assets
- PHP-FPM to execute the Laravel application
- Queue workers to handle background jobs
- Reverb for WebSocket connections
To manage these processes inside a single container, we use Supervisord, a lightweight process manager that starts, restarts, and supervises each service.
This approach is intentional and works well when:
- You are limited to running a single container (shared hosting, simple VPS setups)
- You are optimizing for simplicity over the strict one-process-per-container convention
For larger systems, these processes are often split into separate containers.
Context and design choices
For this guide, we use SQLite to keep the setup simple and self-contained. The goal is to focus on container structure and process management rather than database provisioning. The same approach applies to MySQL or PostgreSQL, with additional service configuration.
The application relies on Docker named volumes for persistent data, such as:
- The SQLite database file
- Laravel storage and cache directories
- Supervisor logs
Docker named volumes are created and managed by the Docker engine. At build time, these volumes do not exist yet, and when they are mounted at runtime, they are owned by root by default. As a result, file permissions cannot be finalized during the image build.
Because of this, permission changes are intentionally deferred to container startup. The image is built without assuming ownership of persistent paths, and permissions are applied at runtime once the volumes are mounted.
# Dockerfile
FROM composer:2.7 AS vendor
# Application source directory inside the container
WORKDIR /app/api
# Copy only dependency files to leverage Docker layer caching
COPY ./api/composer.json ./api/composer.lock ./api/artisan ./
# Install PHP dependencies
RUN composer install \
--optimize-autoloader \
--prefer-dist \
--no-interaction \
--no-scripts
# Copy the rest of the application source
COPY ./api ./
# Create SQLite database file (permissions handled at runtime)
RUN touch database/database.sqlite
Without a multi-stage build, Composer would live in the final image. Any change to the application source would invalidate the Docker cache and force Composer install to run again. Since dependency installation is one of the slowest steps, even small code changes would significantly slow down both local builds and CI pipelines.
By using a dedicated build stage, code changes are applied quickly, builds remain predictable, and the final image does not include Composer or other build-time tools.
Runtime image
The final runtime image must balance efficiency with the functional requirement of running multiple services. While standard PHP-FPM images are slim, they lack the necessary components to serve web traffic or manage background workers. Using an Alpine Linux base reduces the final footprint, while adding Nginx and supervisord allows the container to operate as a self-contained unit capable of handling requests and queues simultaneously.
# Dockerfile
# Use Alpine for a minimal production footprint
FROM php:8.4-fpm-alpine AS backend
WORKDIR /var/www/html/laravel
# Install system binaries required for the multi-process architecture
RUN apk add --no-cache \
nginx \
supervisor \
libzip-dev \
zip \
unzip \
mariadb-dev \
linux-headers
# Core PHP extensions for Laravel's queue and file handling
RUN docker-php-ext-install zip bcmath pcntl
# Bring in the pre-built vendor folder to keep the final image clean of build tools
COPY --from=vendor /app/api .
# Inject Nginx configuration to override default settings
COPY path-to-nginx-config /etc/nginx/http.d/default.conf
This multi-stage transition ensures the final image only contains the binaries and application code necessary for execution. By excluding Composer and temporary build files, the image size remains small, which in general reduces deployment times and the container's overall attack surface.
The inclusion of pcntl is specific to Laravel’s needs. It allows the queue workers to listen for “stop” or “restart” signals from the host system, ensuring background jobs shut down gracefully. This architectural choice transforms a standard PHP container into a robust application server that manages its own lifecycle without requiring external process managers on the host VM.
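The graceful-shutdown behavior that pcntl enables can be illustrated with a plain shell analogue (purely illustrative, not Laravel's actual worker code): a process that traps the termination signal gets a chance to finish its current work instead of being killed mid-task.

```shell
# A signal-aware process: on SIGTERM it runs a cleanup handler and exits
# cleanly instead of dying outright. Laravel's queue:work uses pcntl to do
# the same, completing the in-flight job before stopping.
result=$(sh -c 'trap "echo draining; exit 0" TERM; kill -TERM $$; echo unreachable')
echo "$result"
```

The subshell prints only `draining`; the `echo unreachable` line never runs because the trap handler exits first. Without the trap, the process would be terminated immediately, which for a queue worker would mean a half-processed job.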
Nginx configuration
Nginx acts as the primary entry point for all incoming traffic to the container. Its role is to efficiently distinguish between static assets, dynamic PHP requests, and persistent WebSocket connections. By serving images, CSS, and JavaScript directly from the file system, Nginx reduces the load on the PHP engine. For dynamic content, it functions as a reverse proxy, translating HTTP requests into a format that PHP-FPM can process.
The configuration for PHP processing must ensure that only valid scripts are executed while correctly passing path information to Laravel’s router. Using a TCP connection for the FastCGI gateway provides a stable communication channel between the web server and the PHP processor within the container’s network stack.
# Nginx config
server {
server_name api.example.com;
listen 80;
# Point to the Laravel public directory for entry points
root /var/www/html/laravel/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
index index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
# Security measure to prevent processing non-existent files
try_files $uri =404;
# Define the pattern for capturing script names and path info
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# Communication via TCP loopback on the standard PHP-FPM port
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
# Map the absolute file path and path info for the PHP interpreter
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
# Deny access to hidden system files
location ~ /\.(?!well-known).* {
deny all;
}
# Handle WebSocket upgrades for Laravel Reverb
location /app/ {
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header SERVER_PORT $server_port;
proxy_set_header REMOTE_ADDR $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Enable connection upgrading for persistent protocols
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
# Route traffic to the Reverb server running on port 8080
proxy_pass http://127.0.0.1:8080;
}
}
The try_files directive acts as a security gate, stopping the request immediately if the requested .php file is missing, which prevents the server from attempting to pass invalid data to the PHP engine. The inclusion of fastcgi_split_path_info and the PATH_INFO parameter is necessary for applications that rely on complex URI routing, as it allows Laravel to interpret data passed after the script extension in the URL.
The /app/ block is specifically tailored for Reverb; it passes the necessary Upgrade and Connection headers to the backend, allowing the HTTP request to transition into the WebSocket stream. This ensures that the same port used for standard web traffic can also facilitate real-time, bi-directional communication.
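One refinement worth knowing about: hardcoding Connection "Upgrade" treats every request through /app/ as an upgrade attempt. A common nginx pattern instead derives the Connection header from the client's own Upgrade header via a map block. A sketch (the map must live in the http context, e.g. in nginx.conf, outside the server block shown above):

```nginx
# In the http {} context: resolve $connection_upgrade from the client's
# Upgrade header — "upgrade" when one is present, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the /app/ location, reference the mapped value:
#     proxy_set_header Upgrade $http_upgrade;
#     proxy_set_header Connection $connection_upgrade;
```

This keeps plain HTTP requests to the same location working normally while still allowing WebSocket handshakes to pass through.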
Process management with Supervisord
# Dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
A container’s lifecycle is tied to its primary foreground process. In an environment requiring multiple services, a process manager is necessary to orchestrate the startup and health of independent components. Supervisord serves this purpose by acting as the main entry point, monitoring the execution of Nginx, PHP-FPM, and Laravel-specific workers. This ensures that if a single service fails, it can be restarted automatically without causing the entire container to exit.
[supervisord]
# Required to prevent the container from shutting down immediately
nodaemon=true
logfile=/var/log/supervisord/supervisord.log
logfile_maxbytes=50MB
loglevel=info
[program:php-fpm]
# Foreground execution is necessary for Supervisord to monitor the PID
command=php-fpm -F
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisord/php-fpm.log
stdout_logfile_maxbytes=20MB
stopwaitsecs=60
# Lower priority ensures PHP starts before the web server
priority=5
user=root
[program:laravel-queues]
# Parallel processes allow for concurrent background job handling
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/laravel/artisan queue:work
autostart=true
autorestart=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/laravel/storage/logs/queue.log
stopasgroup=true
killasgroup=true
stopwaitsecs=60
user=www-data
priority=7
stdout_logfile_maxbytes=20MB
[program:nginx]
command=nginx -g "daemon off;"
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisord/nginx.log
stdout_logfile_maxbytes=20MB
stopwaitsecs=60
user=root
priority=6
[program:reverb]
command=php /var/www/html/laravel/artisan reverb:start
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisord/reverb.log
stdout_logfile_maxbytes=20MB
stopwaitsecs=60
user=www-data
priority=6
The configuration uses nodaemon=true to keep the container active. By assigning different priority values, the system controls the boot order, ensuring the PHP engine is ready to accept FastCGI requests before Nginx begins listening for traffic. This orchestration allows the container to function as a complete application server, managing both the web interface and the background infrastructure required for queues and real-time WebSockets.
The numprocs setting for the queue worker demonstrates how the environment can be scaled internally. Increasing this number allows the container to process multiple background tasks simultaneously, which is essential for high-volume applications where mail sending or image processing shouldn’t block the main request cycle.
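As a rough shell analogue of numprocs=4 (purely illustrative, not Supervisord itself), the same command can be launched as four independent background processes, each handling its own slice of the work concurrently:

```shell
# Launch four independent "workers" in the background, as Supervisord does
# when numprocs=4, then wait for all of them to finish.
outdir=$(mktemp -d)
for i in 1 2 3 4; do
    ( echo "worker $i handled a job" > "$outdir/worker_$i" ) &
done
wait

# Count the jobs handled across all worker processes
handled=$(cat "$outdir"/worker_* | wc -l)
echo "jobs handled: $handled"
rm -rf "$outdir"
```

In the real setup, each worker process independently polls the same queue backend, so the effective throughput scales with numprocs up to the limits of the database and CPU.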
Runtime initialization
# Dockerfile
COPY run.sh /var/www/html/laravel/run.sh
# Expose ports
EXPOSE 80
# Start services using supervisor
CMD ["/bin/sh", "/var/www/html/laravel/run.sh"]
The transition from a static Docker image to a running container requires a bridge to handle environment-specific setup. Because Docker volumes for storage and databases are often mounted at runtime, their contents are not available during the build phase. This creates a gap where file permissions are reset to the host’s root user, and the application state is unoptimized. An initialization script serves as the container’s entrypoint, ensuring the environment is prepared, secure, and performant before the processes start.
#!/bin/sh
set -e
# Change to the application directory
cd /var/www/html/laravel
# Runtime permission correction for mounted volumes
chown -R www-data:www-data database storage bootstrap/cache
chmod -R 775 storage bootstrap/cache
chmod 664 database/database.sqlite
# Create symbolic link for public file access
php artisan storage:link || true
# Synchronize database schema with current code version
php artisan migrate --force
# Rebuild optimized caches (config, routes, events) to reduce bootstrap filesystem overhead
php artisan optimize:clear
php artisan optimize
# Hand off execution to Supervisord as the long-running process
exec supervisord -n -c /etc/supervisor/conf.d/supervisord.conf
The script uses chown and chmod at the moment of boot to resolve ownership conflicts introduced by Docker’s volume mounting system. Since the www-data user inside the container must write to the SQLite database and log files, these permissions cannot be baked into the image itself; they must be applied once the persistent storage is attached.
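What those numeric modes actually grant can be checked in isolation. The sketch below uses a scratch directory standing in for the mounted volume paths (the names are illustrative):

```shell
# Scratch layout mimicking the mounted storage directory and SQLite file
demo=$(mktemp -d)
mkdir -p "$demo/storage"
touch "$demo/database.sqlite"

# 775: owner and group get read/write/execute; everyone else read/execute.
chmod -R 775 "$demo/storage"
# 664: owner and group can read/write the file; everyone else read-only.
chmod 664 "$demo/database.sqlite"

# stat -c '%a' (GNU coreutils) prints the octal mode back
dir_mode=$(stat -c '%a' "$demo/storage")
db_mode=$(stat -c '%a' "$demo/database.sqlite")
echo "storage=$dir_mode database=$db_mode"
rm -rf "$demo"
```

Group write access is the important part: PHP-FPM and the queue workers run as www-data, and both need to append to logs and write to the SQLite file at runtime.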
Running php artisan migrate --force ensures that the database schema is always up to date with the application code. The --force flag is mandatory in production to bypass the interactive confirmation prompt that would otherwise stall the container's startup. Following the migration, the optimize commands consolidate various configuration, route, and view files into cached PHP arrays. This reduces the number of file system reads required for every HTTP request, significantly improving response times in high-traffic cloud environments.
The final line uses exec to replace the shell process with Supervisord. This ensures that Supervisord becomes PID 1 inside the container, allowing it to receive termination signals directly from the Docker engine for a graceful shutdown.
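The PID-preserving behavior of exec is easy to verify with a small shell experiment: a shell that execs a new command keeps its original PID, which is exactly what lets Supervisord inherit PID 1 from the entrypoint script.

```shell
# A shell prints its own PID, then execs a new shell that prints its PID.
# Because exec replaces the process image in place, both PIDs match.
out=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(echo "$out" | head -n 1)
second=$(echo "$out" | tail -n 1)
echo "before exec: $first, after exec: $second"
```

If run.sh ended with a plain `supervisord ...` call instead, the shell would stay alive as PID 1, and Docker's SIGTERM on shutdown would reach the shell rather than Supervisord, skipping the graceful stop of the managed services.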
Building and running the container
The lifecycle of a Dockerized application is divided into two distinct phases: the Build phase, where the immutable blueprint (the image) is created, and the Run phase, where that blueprint is instantiated as an active environment. By separating these, the heavy lifting of dependency installation and system configuration happens once, allowing the application to be deployed across multiple servers instantly.
1. Building the Image
The build process executes every instruction in the Dockerfile to create a tagged version of the application. Using the --tag (or -t) flag allows for versioning, which is critical for rollbacks.
# Build the image from the current directory
docker build -t laravel-app .
The initial build may take several minutes as it downloads the Alpine base and installs PHP extensions. However, Docker’s layer caching ensures that subsequent builds are near-instant unless the composer.json or system-level requirements are modified. This speed is vital for a responsive CI/CD pipeline.
2. Launching the Container
Once the image is ready, it is launched using the docker run command. This stage connects the static image to the outside world by mapping ports, injecting secrets via environment files, and attaching persistent storage volumes.
# Launch the container with environment variables and persistent volumes
docker run --name laravel-app -p 80:80 \
--env-file .env \
-v laravel-storage:/var/www/html/laravel/storage \
-v laravel-sqlite3:/var/www/html/laravel/database \
-v laravel-supervisord:/var/log/supervisord \
laravel-app:latest
The -v (volume) flags are essential for data persistence. Without them, any data written to the SQLite database or any uploaded files in the storage directory would be lost when the container is removed. By mounting named volumes, the data lives on the host VM independently of the container's lifecycle.
Choosing Your Deployment Path
This setup provides a robust foundation for small-to-medium cloud deployments where simplicity and resource efficiency are priorities. By bundling the web server, PHP engine, and background workers into a single container, you minimize the overhead of managing multiple network bridges and service dependencies.
Single-Container Setup (Current)
Suited for: MVPs, Shared Hosting, Simple VPS
Scaling: Vertical (Larger VM)
Complexity: Low, One Dockerfile to manage
Persistence: Local Named Volumes
Multi-Container (Microservices)
Suited for: High-traffic Apps, Large Teams
Scaling: Horizontal (Scale workers independently)
Complexity: High, Requires Orchestration (Kubernetes/Swarm)
Persistence: Managed Cloud Storage (S3, RDS)
Moving Toward Production
While this guide focuses on the core building blocks, moving toward a “production-hardened” state involves a few key transitions:
Orchestration with Docker Compose: Transitioning from manual commands to Docker Compose allows the entire stack, including networks, volumes, and environment variables, to be defined in a single docker-compose.yml file. This ensures the infrastructure is version-controlled and can be launched with a single command.
Externalize the Database: For high availability, move away from SQLite in a volume to a managed service like AWS RDS or GCP Cloud SQL.
Secrets Management: Instead of local .env files, use cloud-native secret managers to inject credentials at runtime.
SSL/TLS: The current setup uses port 80. In production, Nginx should be configured for port 443 with SSL certificates, or sit behind a Cloud Load Balancer that handles SSL termination.
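As a starting point for the first transition above, a minimal docker-compose.yml mirroring the earlier docker run command might look like the following sketch (image and volume names are carried over from this guide; the volumes are marked external because the bootstrap script already created them):

```yaml
services:
  app:
    image: laravel-app:latest
    ports:
      - "80:80"
    env_file: .env
    volumes:
      - laravel-storage:/var/www/html/laravel/storage
      - laravel-sqlite3:/var/www/html/laravel/database
      - laravel-supervisord:/var/log/supervisord
    restart: unless-stopped

volumes:
  laravel-storage:
    external: true
  laravel-sqlite3:
    external: true
  laravel-supervisord:
    external: true
```

With this in place, `docker compose up -d` replaces the long docker run invocation, and the port mappings, env file, and volumes become version-controlled alongside the application code.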
