Deploynix

Posted on • Originally published at deploynix.io

Scaling Laravel from 1 to 100,000 Users: A Deploynix Infrastructure Playbook

Scaling is not something you do all at once. It is a series of incremental changes, each driven by a specific bottleneck that appears as your application grows. The architecture that serves your first 100 users should look nothing like the one that serves 100,000 — but you should not build for 100,000 on day one.

This playbook walks through five distinct growth stages, each with specific infrastructure changes, cost expectations, and the signals that tell you it is time to move to the next stage. Every recommendation is practical and implementable on Deploynix using the cloud providers it supports: DigitalOcean, Vultr, Hetzner, Linode, and AWS.

Stage 1: Launch (1 to 1,000 Users)

Infrastructure

One server. That is it.

Provision a single App server through Deploynix. This server runs everything: Nginx, PHP-FPM, your database (MySQL, MariaDB, or PostgreSQL), Valkey for caching and queues, and your queue worker processes.

Recommended specs:

  • 2 vCPU, 4GB RAM (Hetzner CX32: ~$7/month, DigitalOcean: ~$24/month)
  • NVMe SSD storage
  • Ubuntu LTS

Configuration priorities:

  • PHP OPcache enabled (it is by default on Deploynix-provisioned servers)
  • MySQL buffer pool sized to ~50% of available RAM
  • Valkey maxmemory set to 512MB with allkeys-lru eviction
  • 2-4 PHP-FPM workers
  • 1-2 queue worker processes
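
A sketch of those priorities as config fragments (file paths and the exact values are assumptions for a 4GB server; adjust to your distribution and workload):

```ini
# /etc/mysql/conf.d/tuning.cnf (path varies by distribution)
[mysqld]
innodb_buffer_pool_size = 2G   # ~50% of a 4GB server

# /etc/valkey/valkey.conf
maxmemory 512mb
maxmemory-policy allkeys-lru

# /etc/php/8.3/fpm/pool.d/www.conf (PHP version path is an assumption)
pm = static
pm.max_children = 4
```

Static PHP-FPM sizing is a deliberate choice at this scale: four workers on 4GB leaves headroom for the database and Valkey on the same box.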

What to Focus On

At this stage, your time is better spent on application code than infrastructure. But establish good habits:

  • Set up monitoring. Deploynix provides real-time monitoring for CPU, memory, disk, and load average on paid plans. Even on the free tier, keep an eye on server resource usage and consider upgrading to access health alerts as your application grows.
  • Configure automated backups. Set up daily database backups to S3, DigitalOcean Spaces, or Wasabi through Deploynix. Test a restore.
  • Use caching from the start. Cache database queries that power your homepage, your most-viewed pages, and any data that changes infrequently. Use Valkey as your cache driver.
  • Set up SSL. Deploynix handles Let's Encrypt certificates automatically. There is no excuse for serving any page over HTTP.
  • Enable zero-downtime deployments. Even with one server, zero-downtime deploys mean your users never see a maintenance page.
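
For the caching habit, a framework-level sketch of what "cache from the start" looks like in Laravel (the `Post` model and cache key are illustrative; assumes your cache store points at Valkey via the redis driver):

```php
use Illuminate\Support\Facades\Cache;

// Cache the homepage's post list for 10 minutes instead of querying on every request
$posts = Cache::remember('homepage:posts', now()->addMinutes(10), function () {
    return Post::latest()->take(10)->get();
});
```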

Cost

$7-24/month for the server, plus your domain and any third-party services. This is the cheapest your infrastructure will ever be.

When to Move to Stage 2

  • CPU or memory consistently above 70% during normal traffic
  • Database queries slowing down due to memory pressure
  • You need queue workers to process more jobs but cannot add workers without impacting web performance

Stage 2: Growing (1,000 to 10,000 Users)

Infrastructure Changes

Separate the database onto its own server. This single change often doubles your effective capacity because the database and PHP-FPM stop competing for memory.

Server layout:

  • 1x Web/App server (2 vCPU, 4GB RAM)
  • 1x Database server (2 vCPU, 8GB RAM)

The database server gets more RAM because database performance scales directly with how much data fits in the buffer pool.

Steps on Deploynix:

  1. Provision a new Database server (choose MySQL, MariaDB, or PostgreSQL)
  2. Migrate your data to the new server
  3. Update your site's environment variables to point to the new database server's private IP
  4. Add a firewall rule on the database server through Deploynix's firewall management to allow connections only from your web server's IP address
  5. Deploy and verify
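
Step 2 is typically a dump-and-restore over the private network. A hedged sketch for MySQL/MariaDB (the IP, database name, and user are placeholders; run during a quiet window, or set up replication first if you need a zero-downtime cutover):

```shell
# On the current server: dump, then restore to the new database server's private IP
mysqldump --single-transaction --routines --triggers app_db > app_db.sql
mysql -h 10.0.0.5 -u app_user -p app_db < app_db.sql

# Then point the site at the new host (step 3), e.g. in .env:
#   DB_HOST=10.0.0.5
```

`--single-transaction` gives a consistent InnoDB snapshot without locking tables for the duration of the dump.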

Application Optimizations

With more users come more opportunities for performance issues:

  • Eager load relationships. Use with() on every query that touches related models. Enable Model::preventLazyLoading() in your AppServiceProvider (development only) to catch N+1 queries.
  • Add database indexes. Run EXPLAIN on your slowest queries and add indexes for columns used in WHERE, ORDER BY, and JOIN clauses.
  • Implement response caching. For pages that are the same for every user (pricing pages, documentation, public profiles), cache the entire response.
  • Use queues for everything non-essential. Emails, webhooks, analytics events, PDF generation — anything that does not need to happen before the HTTP response should be queued.
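
The eager-loading bullet can be sketched in code (assumes a `Post` model with an `author` relation):

```php
// app/Providers/AppServiceProvider.php
use Illuminate\Database\Eloquent\Model;

public function boot(): void
{
    // Throw on lazy loads outside production so N+1 queries surface during development
    Model::preventLazyLoading(! $this->app->isProduction());
}

// In a controller: one query for posts plus one for authors,
// instead of one query per row
$posts = Post::with('author')->latest()->paginate(20);
```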

Cost

$15-50/month total depending on provider. Hetzner offers the most aggressive pricing at this tier.

When to Move to Stage 3

  • Queue backlog growing during peak hours
  • Web response times degrading when heavy jobs are processing
  • You need more queue throughput than a single server provides

Stage 3: Established (10,000 to 30,000 Users)

Infrastructure Changes

Add a dedicated Worker server and separate the cache.

Server layout:

  • 1x Web server (2 vCPU, 4GB RAM)
  • 1x Database server (4 vCPU, 16GB RAM — upgraded)
  • 1x Cache server (2 vCPU, 4GB RAM)
  • 1x Worker server (2 vCPU, 4GB RAM)

Why separate the cache now: You are preparing for Stage 4, which involves multiple web servers. Multiple web servers require a shared, external cache. Separating it now also gives Valkey dedicated memory, eliminating evictions caused by memory pressure from other processes.

Why add a worker server: Dedicated workers mean background jobs never impact web request performance. You can run more queue worker processes through Deploynix and tune their configuration independently.

Application Optimizations

  • Session storage. Move sessions from file to database or Valkey. This is required before you can use multiple web servers.
  • File storage. Move uploaded files to S3 or an S3-compatible service (DigitalOcean Spaces, Wasabi). Local file storage does not work across multiple servers.
  • Query optimization. At this scale, slow queries become noticeable. Review your slow query log weekly. Consider read-through caching for expensive queries.
  • Rate limiting. Implement rate limiting on your API endpoints and form submissions to prevent abuse from consuming resources.
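
The session and file-storage moves are mostly configuration. A hedged `.env` sketch (Laravel's redis driver speaks to Valkey; variable names follow recent Laravel releases, where older versions use `CACHE_DRIVER` and `FILESYSTEM_DRIVER`, and the IP is a placeholder):

```ini
SESSION_DRIVER=redis    # sessions in Valkey, shared across future web servers
CACHE_STORE=redis
QUEUE_CONNECTION=redis
FILESYSTEM_DISK=s3      # uploads to S3 / Spaces / Wasabi, not local disk
REDIS_HOST=10.0.0.6     # cache server's private IP
```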

Cost

$40-120/month depending on provider and server specs. At this point, Hetzner's pricing advantage becomes very compelling — the entire four-server setup costs roughly what a single medium server costs on DigitalOcean.

When to Move to Stage 4

  • Single web server CPU is consistently above 70%
  • You need high availability (cannot afford downtime)
  • Traffic spikes cause timeouts because a single web server cannot handle the concurrency

Stage 4: Scaling (30,000 to 70,000 Users)

Infrastructure Changes

Add a load balancer and multiple web servers.

Server layout:

  • 1x Load Balancer
  • 2-3x Web servers (2-4 vCPU, 4-8GB RAM each)
  • 1x Database server (8 vCPU, 32GB RAM — upgraded again)
  • 1x Cache server (2 vCPU, 8GB RAM — upgraded)
  • 1-2x Worker servers (2-4 vCPU, 4-8GB RAM)

Load balancer configuration on Deploynix:

  • Method: Least Connections (best for requests with varying processing times)
  • Health checks: verify each web server is responding
  • SSL termination: at the load balancer level
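
Deploynix configures the load balancer for you, but for intuition, the equivalent Nginx sketch looks roughly like this (IPs are placeholders, and the real config also carries the SSL certificates and health-check tuning):

```nginx
upstream web_backend {
    least_conn;                                  # route to the least-busy server
    server 10.0.0.11:80 max_fails=3 fail_timeout=10s;
    server 10.0.0.12:80 max_fails=3 fail_timeout=10s;
}

server {
    listen 443 ssl;                              # SSL terminates here
    server_name example.com;

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-*` headers matter: Laravel needs them (via trusted proxies) to generate correct HTTPS URLs behind the balancer.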

Critical preparation before going multi-server:

  • Sessions stored in Valkey or database (not file)
  • All file uploads stored externally (S3-compatible storage)
  • No application state stored on individual web servers
  • Queue configuration uses the shared Valkey server

The Deployment Story

Deploynix deploys to all web servers in your site configuration. Zero-downtime deployment at this scale means:

  1. New code is prepared on each web server
  2. The load balancer continues routing to old processes
  3. Processes are swapped atomically on each server
  4. Old processes finish their in-flight requests
  5. The transition is seamless to users
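
The atomic swap in step 3 is typically a symlink rename. A self-contained local demo of the idea (throwaway paths; `mv -T` is GNU coreutils, and Deploynix's actual release layout may differ):

```shell
set -e
dir=$(mktemp -d)
mkdir -p "$dir/releases/v1" "$dir/releases/v2"
ln -s "$dir/releases/v1" "$dir/current"     # live code is whatever "current" points at

# Prepare the new symlink beside the old one, then rename over it.
# rename(2) is atomic, so readers of "current" never see a half-deployed state.
ln -s "$dir/releases/v2" "$dir/current.new"
mv -T "$dir/current.new" "$dir/current"

readlink "$dir/current"
```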

Application Optimizations

  • CDN for static assets. Offload CSS, JavaScript, images, and other static files to a CDN. This dramatically reduces the load on your web servers.
  • Database read replicas. If your workload is read-heavy (most web applications are), add a read replica and route read queries to it. Laravel's database configuration supports read/write splitting natively.
  • Cache warming. For frequently accessed data, implement cache warming (pre-loading the cache after deployment) rather than relying on cache-on-miss.
  • Connection pooling. With multiple web servers, the total number of database connections increases. Monitor connection usage and consider connection pooling if you approach limits.
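
Read/write splitting is built into Laravel's database layer. A sketch of `config/database.php` (IPs are placeholders; `sticky` keeps a request reading from the primary after it writes, so users always see their own writes):

```php
// config/database.php
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['10.0.0.21'],   // read replica
    ],
    'write' => [
        'host' => ['10.0.0.20'],   // primary
    ],
    'sticky' => true,
    // ...remaining options (database, username, charset) as in the default config
],
```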

Cost

$100-400/month depending on provider and specs. The database server is your biggest expense at this stage.

When to Move to Stage 5

  • Database server CPU or I/O consistently at capacity
  • Need even higher availability with geographic distribution
  • Traffic exceeding what 3 web servers can handle

Stage 5: At Scale (70,000 to 100,000+ Users)

Infrastructure Changes

At this scale, you are optimizing every tier and adding redundancy.

Server layout:

  • 1x Load Balancer (consider provider-managed LB for higher reliability)
  • 4-6x Web servers
  • 1x Database server (primary, high-spec: 16 vCPU, 64GB RAM)
  • 1x Database read replica
  • 1x Cache server (dedicated, 4 vCPU, 16GB RAM)
  • 2-3x Worker servers (sized based on job complexity)
  • Optional: 1x Meilisearch server (if full-text search is a core feature)

Advanced Strategies

Database optimization at scale:

  • Partitioning large tables by date or tenant
  • Archiving old data to reduce active dataset size
  • Query result caching with tag-based invalidation
  • Connection pooling with a tool like ProxySQL
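
For the partitioning bullet, a MySQL sketch (table and column names are illustrative; note that MySQL requires the partition key to be part of every unique key, including the primary key):

```sql
-- Partition a large append-only table by month
ALTER TABLE page_views
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2025_01 VALUES LESS THAN (TO_DAYS('2025-02-01')),
        PARTITION p2025_02 VALUES LESS THAN (TO_DAYS('2025-03-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );

-- Archiving then becomes cheap: drop a whole partition instead of DELETEing rows
ALTER TABLE page_views DROP PARTITION p2025_01;
```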

Queue architecture:

  • Multiple queue names with different priorities (high, default, low)
  • Dedicated workers for different queue names
  • Job batching for related operations
  • Horizon for queue monitoring (if not already using it)
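
In Laravel terms, queue priorities are just named queues plus the worker's flag ordering (the job class names here are hypothetical):

```php
// Dispatch to named queues based on urgency
ProcessPaymentJob::dispatch($order)->onQueue('high');
GenerateReportJob::dispatch($report)->onQueue('low');

// Worker command (run via your process manager):
// drains "high" before "default" before "low"
//   php artisan queue:work redis --queue=high,default,low
```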

Caching strategy:

  • Multi-layer caching: application cache (Valkey), HTTP cache (Nginx), and CDN
  • Cache tags for granular invalidation
  • Distributed caching if a single Valkey server is insufficient
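
Cache tags (second bullet) let you invalidate a whole family of keys in one call. A sketch, assuming a taggable store such as the redis driver backed by Valkey, and an illustrative `Product` model:

```php
use Illuminate\Support\Facades\Cache;

// Read-through cache, grouped under the "products" tag
$product = Cache::tags(['products'])->remember("product:{$id}", 3600, function () use ($id) {
    return Product::findOrFail($id);
});

// After a bulk import or price change, flush every tagged entry at once
Cache::tags(['products'])->flush();
```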

Consider Octane:
If you have not already, this is the scale where Laravel Octane (with FrankenPHP, Swoole, or RoadRunner) provides meaningful benefits. Octane keeps your application bootstrapped in memory, eliminating the per-request boot cost. Deploynix supports Octane deployment with all three drivers. The performance difference at this concurrency level can reduce the number of web servers you need.

Cost

$300-1,000+/month depending on provider, specs, and whether you use Hetzner (lower end) or AWS (higher end). At this scale, the annual infrastructure cost difference between Hetzner and AWS can be $5,000-10,000.

Cross-Cutting Concerns at Every Stage

Backups

Your backup strategy should evolve with your architecture:

  • Stage 1-2: Daily database backups to external storage
  • Stage 3+: Daily full backups, hourly incremental backups
  • Stage 4+: Test restores monthly, maintain documented recovery procedures

Deploynix supports automated backups to AWS S3, DigitalOcean Spaces, Wasabi, and custom S3-compatible storage.

Monitoring

Monitoring becomes more critical as complexity increases:

  • Stage 1-2: Server-level metrics (CPU, memory, disk) and error tracking
  • Stage 3+: Application-level metrics (queue depth, response times, cache hit ratio)
  • Stage 4+: Cross-server correlation, load balancer health, deployment impact tracking

Security

Security does not change with scale — it matters from day one:

  • SSH key authentication only
  • Firewall rules on every server (Deploynix sets secure defaults during provisioning; configure inter-server rules through the firewall management interface)
  • Regular security updates
  • Encrypted credentials
  • SSL on all endpoints

DNS and SSL

  • Use Deploynix's SSL auto-provisioning from the start
  • For wildcard certificates, Deploynix supports DNS validation through Cloudflare, DigitalOcean, AWS Route 53, and Vultr
  • As you add servers, SSL termination typically moves to the load balancer

The Most Important Rule

Do not skip stages. Every premature optimization costs you time, money, and complexity. A four-server architecture for 500 users is not "being prepared" — it is burning budget on infrastructure management when you should be building features.

Monitor your current stage. Identify the specific bottleneck. Make the targeted change that addresses it. Verify the improvement. Only then consider the next stage.

Conclusion

Scaling from 1 to 100,000 users is a journey of five stages, each solving specific bottlenecks that emerge at specific scales. The path is predictable: start with a single server, separate the database, add dedicated workers and cache, introduce load balancing with multiple web servers, and finally optimize every tier with read replicas, CDN, and advanced caching.

Deploynix makes each transition manageable by providing purpose-built server types (App, Web, Database, Cache, Worker, Load Balancer, Meilisearch) that are provisioned with optimized defaults for their role. Firewall rules, deployment coordination, and monitoring work across your entire infrastructure regardless of how many servers you run.

The application changes at each stage — from eager loading and caching early on, to session externalization and file storage migration for multi-server, to read replicas and connection pooling at scale — are equally important. Infrastructure without application optimization is like buying a faster car and leaving the parking brake on.

Scale deliberately, measure constantly, and build only what your current growth demands.
