Most Laravel teams have a deployment ritual that sounds reasonable until you think about it carefully. Push to main. SSH into the server. Run `git pull && composer install`. Fire off `php artisan migrate`. Restart the queue. Hope the app comes back cleanly. It often does. When it does not, you are debugging in production, in front of users, mid-request.
Zero-downtime Laravel deployments with GitHub Actions break that cycle. The strategy borrows Envoyer’s core mechanic (timestamped release directories and an atomic symlink swap) and pairs it with a CI/CD pipeline that runs your full test suite before anything reaches production, verifies the live deployment against a real health endpoint, and rolls back automatically when that check fails. No Envoyer license. No Laravel Forge dependency. Full control over every step.
One scoping note before we get into it: this article assumes a server that is already provisioned, Nginx-configured, and accessible over SSH with a deploy user. If you are building that foundation, start with the complete Laravel production deployment guide first. This article picks up where that one leaves off.
## Why the “Pull and Pray” Approach Eventually Breaks
The problem with a naive `git pull` + `composer install` deploy is not that it is lazy. The problem is that it is a long, destructive operation performed against a live directory. Composer can take 30 to 90 seconds to resolve and install packages. During that window, your application is in an inconsistent state. Old PHP files from the previous release sit alongside whatever files git has already updated. If a new class is referenced by code that has already landed on disk but Composer has not yet installed its package, you get a fatal error. Users get a 500.

Migrations compound the problem. Running `php artisan migrate` against a live database means the schema is changing while requests are in flight. An `ALTER TABLE` that adds a `NOT NULL` column can acquire a table lock on MySQL. Even on PostgreSQL, you are relying on the migration being fast enough that no concurrent request trips over the constraint change mid-transaction.
Queue workers are the third failure mode, and the one teams notice last. Workers running old code against a migrated schema produce type errors, silent write failures, or jobs that retry indefinitely. We have seen this cause lost inference jobs (queued API calls to Anthropic or OpenAI) that were mid-retry when the schema changed underneath them. By the time someone noticed, the queue had thousands of failed jobs and no useful error message beyond a column-not-found exception.
The fix is conceptually simple: prepare the new release while the old one is still serving traffic, then swap in a single atomic operation.
## How Zero-Downtime Deployments Actually Work
Everything depends on the directory structure on the server. Instead of a single `/var/www/myapp` directory modified in place, you maintain three:

```text
/var/www/myapp/
├── current -> releases/20260502143012_a1b2c3d   (symlink)
├── releases/
│   ├── 20260502143012_a1b2c3d/
│   ├── 20260430091201_f4e5d6a/
│   └── 20260428181345_9c8b7a6/
└── shared/
    ├── .env
    └── storage/
```
`current` is a symlink. Nginx’s document root points at `/var/www/myapp/current/public`. Each deployment creates a new timestamped directory in `releases/`, prepares it fully (dependencies installed, assets built, framework caches generated), and then swaps `current` to point to the new directory. The swap itself is a single filesystem rename call. From Nginx’s perspective, it is instantaneous.
Shared state lives outside all release directories in `shared/`. Your `.env` file and the `storage/` directory are symlinked into each release. Uploads, runtime cache files, and logs persist across every deployment.
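The layout above can be provisioned once with a short script. This is a sketch, not part of the article's deployment scripts: the `APP_DIR` default here is a local demo path so you can dry-run it, whereas on a real server it would be `/var/www/myapp`, owned by your deploy user.

```shell
#!/usr/bin/env bash
# One-time provisioning sketch for the releases/ + shared/ layout.
# APP_DIR defaults to a local demo directory; on a real server it
# would be /var/www/myapp (an assumption, matching the article).
set -euo pipefail

APP_DIR="${APP_DIR:-$PWD/myapp}"

mkdir -p "$APP_DIR/releases"

# Laravel expects these storage subdirectories to exist at runtime.
mkdir -p "$APP_DIR/shared/storage/app/public"
mkdir -p "$APP_DIR/shared/storage/framework/cache"
mkdir -p "$APP_DIR/shared/storage/framework/sessions"
mkdir -p "$APP_DIR/shared/storage/framework/views"
mkdir -p "$APP_DIR/shared/storage/logs"

# Provisioned once, then symlinked into every release by deploy.sh.
touch "$APP_DIR/shared/.env"

echo "Shared layout ready under $APP_DIR"
```

Run it before the first deploy; every later release only symlinks into this structure and never writes to it directly.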
The Nginx configuration for this pattern uses `$realpath_root` rather than `$document_root`. This is not optional:
```nginx
server {
    listen 443 ssl http2;
    server_name myapp.com;

    root /var/www/myapp/current/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```
`$realpath_root` resolves the symlink before handing the path to PHP-FPM. Without it, OPcache may hold inode references to files in the old release directory after the swap, serving stale bytecode until the cache expires or the process restarts. Reloading PHP-FPM (not restarting it) after the symlink swap flushes OPcache’s resolved paths gracefully, with no dropped connections.
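What `$realpath_root` buys you can be demonstrated with plain shell: build the `current -> releases/<timestamp>` layout in a scratch directory and resolve the path the way Nginx does before passing it to PHP-FPM. The release name below is a throwaway demo value.

```shell
# Demo of symlink resolution: $document_root hands PHP-FPM the
# symlinked path, while $realpath_root resolves it first.
set -euo pipefail

demo=$(readlink -f "$(mktemp -d)")
mkdir -p "$demo/releases/20260502143012_a1b2c3d/public"
ln -s "$demo/releases/20260502143012_a1b2c3d" "$demo/current"

# What $document_root would pass through: the shared symlinked path.
echo "document_root:  $demo/current/public"

# What $realpath_root passes: the concrete release directory, so
# OPcache keys on real paths that change with every deploy.
echo "realpath_root:  $(readlink -f "$demo/current/public")"
```

Because each release resolves to a distinct real path, PHP-FPM sees the new files as new entries rather than serving stale bytecode under the unchanged `current` path.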
## Setting Up the GitHub Actions Pipeline
The pipeline has two jobs: test and deploy. The deploy job does not run unless tests pass. GitHub’s environment feature gives you an optional manual approval gate for production, useful for regulated contexts. The GitHub Actions documentation on deployment environments covers protection rules in detail.
A reliable test suite is the actual quality gate between your code and production. Building robust test factories gives you the factory-backed integration test coverage that makes a CI gate meaningful rather than ceremonial.
Here is the complete workflow:
```yaml
# .github/workflows/deploy.yml
name: Test and Deploy

on:
  push:
    branches: [main]

env:
  PHP_VERSION: '8.3'

jobs:
  test:
    name: Run Test Suite
    runs-on: ubuntu-latest

    services:
      mysql:
        image: mysql:8.0
        env:
          MYSQL_ROOT_PASSWORD: root
          MYSQL_DATABASE: testing
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3
        ports:
          - 3306:3306

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup PHP ${{ env.PHP_VERSION }}
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ env.PHP_VERSION }}
          extensions: mbstring, pdo, pdo_mysql, redis, pcov
          coverage: pcov

      - name: Cache Composer dependencies
        uses: actions/cache@v4
        with:
          path: vendor
          key: composer-${{ hashFiles('**/composer.lock') }}
          restore-keys: composer-

      - name: Install Composer dependencies
        run: composer install --no-progress --prefer-dist --no-interaction

      - name: Prepare test environment
        run: |
          cp .env.testing .env
          php artisan key:generate

      - name: Run migrations against test database
        run: php artisan migrate --force
        env:
          DB_CONNECTION: mysql
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          DB_DATABASE: testing
          DB_USERNAME: root
          DB_PASSWORD: root

      - name: Execute test suite
        run: php artisan test --parallel
        env:
          DB_CONNECTION: mysql
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          DB_DATABASE: testing
          DB_USERNAME: root
          DB_PASSWORD: root

  deploy:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: test
    environment: production
    concurrency:
      group: production-deploy
      cancel-in-progress: false

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run deployment script via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_KEY }}
          port: ${{ secrets.DEPLOY_PORT }}
          script: /home/deploy/scripts/deploy.sh ${{ github.sha }}

      - name: Verify health check
        run: |
          sleep 15
          HTTP_STATUS=$(curl --silent --output /dev/null \
            --write-out "%{http_code}" \
            --max-time 10 \
            https://${{ secrets.APP_DOMAIN }}/up)
          if [ "$HTTP_STATUS" -ne 200 ]; then
            echo "Health check returned HTTP $HTTP_STATUS — triggering rollback"
            exit 1
          fi
          echo "Deployment verified: HTTP $HTTP_STATUS"

      - name: Rollback on failure
        if: failure()
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_KEY }}
          port: ${{ secrets.DEPLOY_PORT }}
          script: /home/deploy/scripts/rollback.sh
```
Two things to highlight. The `concurrency` block with `cancel-in-progress: false` prevents a rapid second push from interrupting a deployment that is already in flight. Two concurrent deploys racing to swap the same symlink is a scenario you do not want to debug at 2am. The `needs: test` key creates a hard dependency: no green tests, no deploy job runs, full stop.
## The Deployment Script: Symlink-Based Releases
The `deploy.sh` script does the actual work on the production server. It accepts the commit SHA from GitHub Actions so the release directory is traceable back to a specific commit.
```bash
#!/usr/bin/env bash
set -euo pipefail

COMMIT_SHA="${1:-HEAD}"
APP_DIR="/var/www/myapp"
RELEASES_DIR="$APP_DIR/releases"
SHARED_DIR="$APP_DIR/shared"
CURRENT_LINK="$APP_DIR/current"
RELEASE_PATH="$RELEASES_DIR/$(date +%Y%m%d%H%M%S)_${COMMIT_SHA:0:7}"
KEEP_RELEASES=5

echo "==> Creating release directory"
mkdir -p "$RELEASES_DIR"

echo "==> Cloning at $COMMIT_SHA"
git clone --depth 1 git@github.com:yourorg/yourapp.git "$RELEASE_PATH"
cd "$RELEASE_PATH"
# A depth-1 clone only contains the branch tip; fetch the exact commit
# in case a newer push landed on main while this deploy was queued.
git fetch --depth 1 origin "$COMMIT_SHA"
git checkout "$COMMIT_SHA"

echo "==> Linking shared resources"
rm -rf "$RELEASE_PATH/storage"
ln -sfn "$SHARED_DIR/storage" "$RELEASE_PATH/storage"
ln -sfn "$SHARED_DIR/.env" "$RELEASE_PATH/.env"

echo "==> Installing Composer dependencies"
composer install \
  --no-dev \
  --no-interaction \
  --prefer-dist \
  --optimize-autoloader \
  --quiet

echo "==> Building frontend assets"
npm ci --silent
npm run build

echo "==> Running database migrations"
php artisan migrate --force

echo "==> Warming framework caches"
php artisan optimize

echo "==> Swapping symlink atomically"
ln -s "$RELEASE_PATH" "${CURRENT_LINK}.next"
mv --no-target-directory "${CURRENT_LINK}.next" "$CURRENT_LINK"

echo "==> Reloading PHP-FPM"
sudo systemctl reload php8.3-fpm

echo "==> Restarting long-running services"
php "$CURRENT_LINK/artisan" reload

echo "==> Pruning old releases (keeping $KEEP_RELEASES)"
cd "$RELEASES_DIR" && ls -t | tail -n "+$((KEEP_RELEASES + 1))" | xargs --no-run-if-empty rm -rf

echo "==> Release complete: $RELEASE_PATH"
```
The symlink swap line deserves its own explanation because the naive version contains a subtle bug. `ln -sfn "$RELEASE_PATH" "$CURRENT_LINK"` looks correct, but when `$CURRENT_LINK` already resolves to a directory (which it does after the first deploy), some `ln` implementations create the new symlink inside the target directory rather than replacing the link itself. The `ln` followed by `mv --no-target-directory` pattern sidesteps this entirely. On Linux, `mv` on the same filesystem is a kernel-level atomic rename. There is no window between the unlink and the new symlink creation where a request could resolve to a missing path.
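You can reproduce both swap variants in a scratch directory. The paths below are demo values; the key observation is where the symlink ends up in each case.

```shell
# Demo of the symlink-swap pitfall versus the safe two-step pattern.
set -euo pipefail

demo=$(mktemp -d)
mkdir -p "$demo/releases/old" "$demo/releases/new"
ln -s "$demo/releases/old" "$demo/current"

# Buggy variant: without -n/-T semantics, GNU ln follows the existing
# link to the old release directory and drops a stray "new" symlink
# inside it, leaving "current" still pointing at the old release.
ln -sf "$demo/releases/new" "$demo/current"
ls -l "$demo/releases/old"
rm -f "$demo/releases/old/new"   # clean up the misplaced link

# Safe variant: stage a fresh link, then rename it over the old one.
# rename(2) on the same filesystem is atomic, so there is no moment
# at which "current" does not exist.
ln -s "$demo/releases/new" "$demo/current.next"
mv --no-target-directory "$demo/current.next" "$demo/current"
readlink "$demo/current"
```

After the safe swap, `readlink` shows `current` pointing at the new release, with no intermediate state in which the path was missing.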
`php artisan optimize` is the Laravel 12 shorthand for running `config:cache`, `route:cache`, `view:cache`, and `event:cache` in sequence. Run it inside the new release directory, before the symlink swap, so the first request to the new release hits a fully warm cache rather than triggering cold compilation on a live request.
## Migrations, Cache Warming, and Service Restarts
Notice that `php artisan migrate --force` runs before the symlink swap. This is deliberate, and it introduces a constraint that every team using this pattern must internalise.
> **Production Pitfall:** When migrations run before the symlink swap, both old and new application code handle requests simultaneously during the migration window. The old release is still `current` while the database schema changes. This means every migration you write must be strictly backward-compatible with the previous release. Adding a nullable column: safe. Dropping a column the old code still reads: immediate 500s in production on every affected query during the migration window. Renaming a column that old code references: same result, every time.
>
> The pattern that works is two-phase migration: deploy one adds the new column (old code ignores it, new code writes to it), deploy two removes the old column after the transition is complete and no live code references it. This discipline is enforced by services like PlanetScale’s branching workflow and documented explicitly in the Laravel deployment documentation. Treat it as a hard rule, not a guideline.
For service restarts, `php artisan reload` is the correct command in Laravel 12. It sends a graceful termination signal to queue workers, Laravel Reverb servers, and Octane processes in a single call. Workers finish their current job before exiting. Supervisor (or your process monitor of choice) detects the exit and starts a fresh worker process, which picks up `/var/www/myapp/current/artisan`, now pointing at the new release.

Supervisor configuration for your worker pool should use the `current` symlink path, not a hardcoded release directory:
```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; Supervisor passes this command to exec directly (no shell), so keep
; it on a single line rather than using backslash continuations.
command=php /var/www/myapp/current/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/supervisor/laravel-worker.log
stopwaitsecs=3620
```
`stopwaitsecs` must exceed `--max-time`. If Supervisor sends SIGTERM and the worker has not exited within `stopwaitsecs` seconds, it escalates to SIGKILL. A SIGKILL mid-job is a lost job. Set `stopwaitsecs` to `--max-time` + 20 at minimum.
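This rule is cheap to enforce automatically. A sketch of a CI guard that parses the two values out of the worker config and fails when the SIGKILL window is too tight (the config path in the usage line is an assumption, and a single worker program per file is assumed):

```shell
# Guard: fail when stopwaitsecs does not cover queue:work --max-time.
set -euo pipefail

check_worker_timeouts() {
  local conf="$1" max_time stop_wait
  # Pull the numeric values out of the supervisor config.
  max_time=$(grep -oE 'max-time=[0-9]+' "$conf" | grep -oE '[0-9]+')
  stop_wait=$(grep -E '^stopwaitsecs=' "$conf" | cut -d= -f2)
  if [ "$stop_wait" -lt $((max_time + 20)) ]; then
    echo "FAIL: stopwaitsecs=$stop_wait but --max-time=$max_time needs >= $((max_time + 20))" >&2
    return 1
  fi
  echo "OK: stopwaitsecs=$stop_wait covers --max-time=$max_time"
}
```

Invoke it as `check_worker_timeouts /etc/supervisor/conf.d/laravel-worker.conf` in a pipeline step, and a too-small `stopwaitsecs` never reaches production.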
If you are running Laravel Horizon for queue management, the `reload` command handles it. Horizon monitors its own worker pool and restarts processes automatically after they exit gracefully. No Supervisor configuration changes are needed for Horizon workers specifically, though you will still want Supervisor managing the `horizon` process itself.
## Health Checks and Automatic Rollback
By default, the health check route is served at `/up` and returns HTTP 200 when the application has booted without exceptions, and HTTP 500 otherwise. It is registered in `bootstrap/app.php` via the `withRouting()` method:
```php
<?php
// bootstrap/app.php

use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Exceptions;
use Illuminate\Foundation\Configuration\Middleware;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(
        web: __DIR__.'/../routes/web.php',
        commands: __DIR__.'/../routes/console.php',
        health: '/up',
    )
    ->withMiddleware(function (Middleware $middleware) {
        //
    })
    ->withExceptions(function (Exceptions $handler) {
        //
    })->create();
```
When HTTP requests are made to this route, Laravel dispatches a DiagnosingHealth event, allowing you to perform additional health checks. Within a listener for this event, you can check your application’s database or cache status, and throwing an exception will cause the endpoint to return 500.
For production, register a listener that verifies database connectivity and your primary cache store. A bootstrap-only health check catches a broken Service Container but misses a misconfigured database driver, which is a common failure mode after an environment variable change.
The pipeline waits 15 seconds after the SSH deploy step before hitting `/up`. That window absorbs PHP-FPM’s reload time and the brief period before Nginx resolves the new symlink on the next request cycle. Adjust it upward if your server is under high load during deployments.
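If the fixed sleep proves flaky, a polling variant is a reasonable alternative: retry `/up` until it answers 200 or a deadline passes. This is a sketch, not the workflow above; the URL, deadline, and retry interval are assumptions to tune.

```shell
# Poll a health endpoint until it returns HTTP 200 or a deadline
# (in seconds) expires. Intended as a drop-in for the sleep + curl
# step in the verify stage.
set -euo pipefail

wait_for_health() {
  local url="$1" deadline="${2:-60}" status="" start=$SECONDS
  while [ $((SECONDS - start)) -lt "$deadline" ]; do
    status=$(curl --silent --output /dev/null --write-out '%{http_code}' \
      --max-time 5 "$url" || true)
    if [ "$status" = "200" ]; then
      echo "Healthy after $((SECONDS - start))s"
      return 0
    fi
    sleep 3
  done
  echo "Health check failed after ${deadline}s: last status '$status'" >&2
  return 1
}
```

In the workflow, replace the fixed-sleep step with `wait_for_health "https://myapp.com/up" 60`; a non-zero exit still fails the job and triggers the rollback step.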
The rollback script is intentionally simple. It does not run migrations, does not install dependencies, and does not build assets. It only swaps the symlink back and restarts services:
```bash
#!/usr/bin/env bash
set -euo pipefail

APP_DIR="/var/www/myapp"
RELEASES_DIR="$APP_DIR/releases"
CURRENT_LINK="$APP_DIR/current"

echo "==> Identifying previous release"
PREVIOUS=$(ls -t "$RELEASES_DIR" | sed -n '2p')

if [ -z "$PREVIOUS" ]; then
  echo "ERROR: No previous release found. Cannot roll back."
  exit 1
fi

echo "==> Rolling back to: $PREVIOUS"
ln -s "$RELEASES_DIR/$PREVIOUS" "${CURRENT_LINK}.rollback"
mv --no-target-directory "${CURRENT_LINK}.rollback" "$CURRENT_LINK"

echo "==> Reloading PHP-FPM"
sudo systemctl reload php8.3-fpm

echo "==> Restarting services on previous release"
php "$APP_DIR/current/artisan" reload

echo "==> Rollback complete"
```
After rollback, `current` points to the second-most-recent release. Workers restart and load that release’s code. If the health check failure was caused by a broken migration (not just a code error), the rollback restores application behaviour but the database schema remains in its post-migration state. Plan for this. Automated rollback is a traffic restoration tool, not a database recovery mechanism. If you need genuine point-in-time database state restored, you need a pre-migration snapshot.
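A snapshot step can be bolted onto `deploy.sh` immediately before the migrate step. This is a sketch under stated assumptions: the database name (`myapp`), the MySQL credentials file, and the backup directory are all placeholders, and a managed database provider's native snapshot API is usually the better tool when you have one.

```shell
# Pre-migration snapshot sketch for deploy.sh. Call take_snapshot
# "$COMMIT_SHA" just before `php artisan migrate --force`.
set -euo pipefail

snapshot_name() {
  # Timestamp + short SHA, so dumps sort alongside release directories.
  echo "pre_migrate_$(date +%Y%m%d%H%M%S)_${1:0:7}.sql.gz"
}

take_snapshot() {
  local sha="$1" backup_dir="${2:-/var/backups/myapp}"
  local out="$backup_dir/$(snapshot_name "$sha")"
  mkdir -p "$backup_dir"
  # --single-transaction gives a consistent dump of InnoDB tables
  # without taking locks that would block live traffic.
  mysqldump --defaults-extra-file=/etc/mysql/backup.cnf \
    --single-transaction myapp | gzip > "$out"
  echo "==> Snapshot written to $out"
}
```

Prune old dumps on the same schedule as old releases, and restore from the matching dump when a rollback also needs the schema rolled back.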
## Secrets, Environments, and the .env Problem
Never commit `.env` to your repository. The shared `.env` file on the server should be provisioned once, manually or through a secrets management tool, and then left in place across all deployments. This is one area where a well-defined infrastructure provisioning strategy pays off directly. The Infrastructure as Code guide for Laravel teams covers managing server state deterministically, including initial environment file provisioning and ongoing drift prevention.
Configure these secrets in GitHub under Settings > Secrets and variables > Actions, scoped to the production environment:
| Secret | Purpose |
| --- | --- |
| `DEPLOY_HOST` | Production server IP or hostname |
| `DEPLOY_USER` | SSH user with write access to releases dir |
| `DEPLOY_KEY` | Private SSH key (Ed25519 recommended) |
| `DEPLOY_PORT` | SSH port (default: 22) |
| `APP_DOMAIN` | Application domain for the health check curl |
The deploy user should own `/var/www/myapp/releases/` and have a narrow sudoers entry permitting only the `systemctl reload php8.3-fpm` command. Do not deploy as root. This is non-negotiable.
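One way to provision that narrow sudoers entry is a drop-in file under `/etc/sudoers.d/`. This is a sketch: the username, PHP version, and file name are assumptions, and the target path below defaults to a local file for a dry run (on the server it would be `/etc/sudoers.d/deploy-fpm-reload`, written as root).

```shell
# Provisioning sketch for a minimal sudoers drop-in. SUDOERS_FILE
# defaults to a local path for a dry run; the real target would be
# /etc/sudoers.d/deploy-fpm-reload.
set -euo pipefail

SUDOERS_FILE="${SUDOERS_FILE:-./deploy-fpm-reload}"

cat > "$SUDOERS_FILE" <<'EOF'
# Allow the deploy user to reload PHP-FPM -- and nothing else.
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl reload php8.3-fpm
EOF

chmod 0440 "$SUDOERS_FILE"
echo "Wrote $SUDOERS_FILE"
```

Always validate the result with `visudo -cf "$SUDOERS_FILE"` before relying on it; a syntax error in a sudoers file can lock sudo out entirely.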
For teams running multiple environments (staging and production), create separate GitHub environments with separate secret sets. Keeping your local development stack as close to production as possible reduces the chance that an environment-specific failure makes it to the pipeline at all. The Laravel AI development stack setup guide covers environment parity for teams running AI-integrated Laravel applications, where model API keys, inference configuration, and queue drivers all need consistent configuration across environments.
## Running the Full Pipeline
With `.github/workflows/deploy.yml` committed, `deploy.sh` and `rollback.sh` on the server at `/home/deploy/scripts/`, and GitHub secrets configured, the deployment cycle is:
1. Developer pushes to `main`
2. GitHub Actions starts the `test` job with a MySQL service container
3. Pest/PHPUnit runs against a fresh migrated test database in CI
4. On green, the `deploy` job starts (pending any environment approval rules you have set)
5. SSH executes `deploy.sh` with the commit SHA on the production server
6. A new timestamped release directory is created, and the repository is cloned at that exact commit
7. Composer installs production dependencies, and assets build
8. Migrations run against the production database (old release still serving)
9. `php artisan optimize` warms the framework cache inside the new release directory
10. The symlink atomically swaps to the new release
11. PHP-FPM reloads, and services restart gracefully via `php artisan reload`
12. GitHub Actions waits 15 seconds, then hits `https://yourdomain.com/up`
13. HTTP 200: deployment succeeds, and the pipeline passes
14. Non-200: `rollback.sh` executes, the symlink reverts, and services restart on the previous release
The cycle from push to verified production deployment runs in roughly 3 to 5 minutes, depending on Composer install time and asset build complexity. That is fast enough to ship multiple times per day without ceremony or coordination overhead.
The git pull approach worked for years. But it is a manual process dressed up as a system, and manual processes fail the same way every time: at the worst possible moment, on the highest-traffic day of the quarter. This pipeline is not perfect either. It is a single-server pattern that needs adaptation when you scale horizontally, and at that point, purpose-built tools like Deployer PHP or a blue-green deployment model via your load balancer justify the added complexity. For teams running one to three application servers, this handles the vast majority of production deployments cleanly, automatically, and with a recoverable failure path. That is the bar to clear.
If your deployment pipeline grows to include pre-deploy validation steps such as AI inference middleware health checks or token budget verification, the job structure here accommodates additional pipeline steps naturally. The Laravel AI middleware guide covering token tracking and rate limiting describes middleware patterns that translate directly into pre-deployment validation hooks for applications that depend on AI provider availability.