Running TopVideoHub across 9 Asia-Pacific regions means the cron fetcher, the PHP app, and the SQLite database all need to behave identically in development and on the actual LiteSpeed servers. Docker made that guarantee possible.
Here is how I containerized the stack.
Why Docker for PHP+SQLite?
SQLite lives in a single file, so there is no network database to wire up. The challenge is everything around it: PHP 8.3, the correct extensions (pdo_sqlite, curl, gd), and a web server that behaves like LiteSpeed in production.
With Docker we get:
- Reproducible builds — every developer gets the same PHP version and extensions
- Multi-stage images — keep the final image small by discarding build tools
- Volume mounts — the SQLite .db file persists outside the container
- Dev/prod parity — no more "works on my machine" for regional cron behaviour
Multi-Stage Dockerfile
# Stage 1: Composer dependencies
FROM php:8.3-cli-alpine AS deps
RUN apk add --no-cache curl git unzip
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader --no-interaction
# Stage 2: Production image
FROM php:8.3-fpm-alpine AS production
# Install required extensions (plus the sqlite3 CLI,
# used by the backup script and debug commands below)
RUN apk add --no-cache \
libpng-dev libjpeg-turbo-dev freetype-dev \
sqlite sqlite-dev curl-dev && \
docker-php-ext-configure gd \
--with-freetype --with-jpeg && \
docker-php-ext-install \
pdo_sqlite gd curl opcache
# OPcache tuning for production
RUN { \
echo 'opcache.enable=1'; \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.max_accelerated_files=4096'; \
echo 'opcache.validate_timestamps=0'; \
} > /usr/local/etc/php/conf.d/opcache.ini
WORKDIR /var/www/html
# Copy only what the app needs
COPY --from=deps /app/vendor ./vendor
COPY app/ ./app/
COPY public/ ./public/
COPY templates/ ./templates/
COPY cron/ ./cron/
# Data directory is a volume — never baked into the image
RUN mkdir -p data && chown www-data:www-data data
USER www-data
EXPOSE 9000
CMD ["php-fpm"]
The key rule: never copy data/ into the image. The SQLite database file lives on a Docker volume so it persists across container restarts and upgrades.
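One way to enforce that rule is a .dockerignore, so data/ can never leak into the build context even by accident. A minimal sketch — the exact entries depend on your repo layout:

```
# .dockerignore — keep runtime data and local clutter out of the build context
data/
backups/
*.db
*.db.bak
vendor/
.git/
docker-compose.yml
```

With vendor/ excluded, the only copy of the dependencies in the image is the one produced by the deps stage, which keeps builds deterministic.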
docker-compose.yml for Local Dev
version: '3.9'
services:
app:
build:
context: .
target: production
volumes:
# Source files mounted for hot-reload in dev
- ./app:/var/www/html/app:ro
- ./public:/var/www/html/public:ro
- ./templates:/var/www/html/templates:ro
# Persistent data volume
- tvh_data:/var/www/html/data
environment:
SITE_NAME: TopVideoHub
FETCH_REGIONS: "US,GB,JP,KR,TW,SG,VN,TH,HK"
DB_PATH: /var/www/html/data/videos.db
networks:
- tvh
nginx:
image: nginx:1.27-alpine
ports:
- "8080:80"
volumes:
- ./public:/var/www/html/public:ro
- ./docker/nginx.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
- app
networks:
- tvh
cron:
build:
context: .
target: production
volumes:
- tvh_data:/var/www/html/data
environment:
FETCH_REGIONS: "US,GB,JP,KR,TW,SG,VN,TH,HK"
DB_PATH: /var/www/html/data/videos.db
command: >
sh -c 'while true; do
php /var/www/html/cron/fetch_videos.php;
sleep 14400;
done'
networks:
- tvh
volumes:
tvh_data:
networks:
tvh:
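Because the app and cron containers share one SQLite file on tvh_data, concurrent access is a real possibility: the cron writer can collide with PHP-FPM readers. A sketch of the PDO bootstrap both containers could share — the connect() helper and the pragma values are my assumptions, not code from the repo:

```php
<?php
// db.php — shared PDO bootstrap (hypothetical helper, not from the repo)
function connect(): PDO
{
    $pdo = new PDO('sqlite:' . getenv('DB_PATH'));
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // WAL lets the cron writer and PHP-FPM readers coexist on one file
    $pdo->exec('PRAGMA journal_mode=WAL;');

    // Wait up to 5s for a lock instead of failing with "database is locked"
    $pdo->exec('PRAGMA busy_timeout=5000;');

    return $pdo;
}
```

WAL mode is persistent per database file, so setting it once in a shared bootstrap covers every connection from either container.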
Nginx Config (Dev Stand-In for LiteSpeed)
server {
listen 80;
root /var/www/html/public;
index index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_pass app:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Cache static assets
location ~* \.(css|js|png|jpg|webp|svg|woff2)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
}
Handling the LiteSpeed Gap
Production uses LiteSpeed, not Nginx. Two differences matter:
- Cache headers — Wrap LiteSpeed-specific rules in <IfModule LiteSpeed> so Nginx/Apache silently skip them.
- lscache/ directory — This directory only exists on LiteSpeed. In Docker, it simply will not be created and nothing breaks.
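The first point looks roughly like this in the production .htaccess — a sketch, with an illustrative TTL rather than the real cache rules:

```apache
# .htaccess — only LiteSpeed evaluates this block; Nginx/Apache skip it
<IfModule LiteSpeed>
    RewriteEngine On
    # Tell LSCache to cache responses for anonymous visitors
    RewriteRule .* - [E=Cache-Control:max-age=10800]
</IfModule>
```

In the Docker/Nginx stack this block is inert, which is exactly the point: one .htaccess can ship to every environment.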
<?php
// Detect runtime environment
const IS_LITESPEED = PHP_SAPI === 'litespeed';
function setCacheHeader(string $pageType): void
{
$ttl = match($pageType) {
'home' => 10800,
'category' => 10800,
'watch' => 21600,
'search' => 600,
default => 3600,
};
if (IS_LITESPEED) {
// LiteSpeed reads this to populate its page cache
header("X-LiteSpeed-Cache-Control: public,max-age={$ttl}");
} else {
header("Cache-Control: public, max-age={$ttl}, stale-while-revalidate=" . ($ttl * 2));
}
}
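In the front controller this has to run before any output is sent. A usage sketch — the URI-to-page-type mapping here is hypothetical:

```php
<?php
// public/index.php (sketch)
require __DIR__ . '/../app/cache.php'; // wherever setCacheHeader() lives

$pageType = match (true) {
    $_SERVER['REQUEST_URI'] === '/'                     => 'home',
    str_starts_with($_SERVER['REQUEST_URI'], '/watch')  => 'watch',
    str_starts_with($_SERVER['REQUEST_URI'], '/search') => 'search',
    default                                             => 'category',
};

setCacheHeader($pageType); // headers must precede any body output
```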
SQLite Volume: Backup Strategy
#!/bin/bash
# backup_db.sh — Run from host, safe during container operation
DOCKER_VOLUME=$(docker volume inspect tvh_data -f '{{ .Mountpoint }}')
BACKUP_DIR="./backups"
mkdir -p "$BACKUP_DIR"
# SQLite .backup command is safe with open connections
docker exec tvh-app-1 \
sqlite3 /var/www/html/data/videos.db \
".backup '/var/www/html/data/videos.db.bak'"
cp "${DOCKER_VOLUME}/videos.db.bak" \
"${BACKUP_DIR}/videos-$(date +%Y%m%d-%H%M%S).db"
echo "Backup complete: $(du -sh ${BACKUP_DIR}/videos-*.db | tail -1)"
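Restoring is the same dance in reverse. A sketch, assuming the container and volume names from the compose file above, and run as root from the host:

```
#!/bin/bash
# restore_db.sh <backup-file> — stop writers before swapping the file in
set -euo pipefail
BACKUP_FILE="$1"

# Stop the containers that hold the database open
docker compose stop cron app

# Copy the backup over the live file on the named volume
DOCKER_VOLUME=$(docker volume inspect tvh_data -f '{{ .Mountpoint }}')
cp "$BACKUP_FILE" "${DOCKER_VOLUME}/videos.db"

docker compose start app cron

# Sanity check: the database should be readable again
docker compose exec app sqlite3 data/videos.db 'PRAGMA integrity_check;'
```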
Build and Run
# Build production image
docker build --target production -t tvh:latest .
# Start dev stack
docker compose up -d
# Watch cron logs
docker compose logs -f cron
# Run cron manually
docker compose exec cron php /var/www/html/cron/fetch_videos.php
# Check database size
docker compose exec app sqlite3 data/videos.db 'SELECT COUNT(*) FROM videos;'
Results
Local dev for TopVideoHub now starts with a single docker compose up. The same PHP version, extensions, and SQLite build run everywhere. Regional cron testing with the full 9-region FETCH_REGIONS list runs locally without touching production API quota.
The multi-stage build keeps the final image at 95MB. SQLite on a named volume means zero data loss during image upgrades.
This is part of the "Building TopVideoHub" series, documenting the architecture behind a video discovery platform covering 9 Asia-Pacific regions.