Deploynix

Posted on • Originally published at deploynix.io
7 Server Types, 7 Use Cases: Picking the Right Architecture for Your App

One of the most consequential decisions you'll make for your application is how to structure your servers. Run everything on a single box and you'll hit a ceiling. Split things up the wrong way and you'll add complexity without benefit. Get it right and your application scales smoothly, each component optimized for its specific role.

Deploynix offers seven purpose-built server types, each configured differently based on its role in your architecture: App, Web, Database, Cache, Worker, Meilisearch, and Load Balancer. Whether you're deploying Laravel, WordPress, Statamic, a static site, or a frontend framework like Next.js or Nuxt, these server types give you the building blocks for the right architecture. This guide explains what each type does, when to use it, and how they fit together in real-world architectures.

Server Type 1: App Server

The App server is the all-in-one option. It runs your web server (Nginx), PHP-FPM (or Octane), your database, cache, and queue workers — everything your application needs on a single machine.

What's Installed

  • Nginx

  • PHP (8.0–8.4, your choice) with PHP-FPM or Octane (FrankenPHP, Swoole, or RoadRunner)

  • MySQL, MariaDB, or PostgreSQL

  • Valkey (Redis-compatible cache)

  • Node.js and NPM

  • Composer

  • WP-CLI (for WordPress projects)

  • Supervisor (manages queue workers and custom daemons)

  • Certbot (for SSL certificate management)

  • Cron scheduler

When to Use It

The App server is the right choice when:

  • You're starting out. Every application starts on a single server. There's no shame in this — a well-configured single server can handle substantial traffic.

  • Your traffic is moderate. If your application serves a few hundred concurrent users or fewer, a single App server is likely sufficient.

  • You want simplicity. One server means one thing to manage, one thing to monitor, one thing to back up. Complexity has costs.

  • Your budget is limited. One server is cheaper than four servers, even if the one server is larger.

When to Move On

You'll know it's time to split off components when:

  • Database queries compete with web requests for CPU

  • Queue jobs impact web response times

  • You need to scale web capacity independently of database capacity

  • Your single server's disk, memory, or CPU is consistently maxed

Example Sizing

A 4 vCPU / 8 GB RAM App server can comfortably run an application with moderate traffic, a MySQL database under 10 GB, Valkey caching, and a few queue workers. This is the starting point for most applications.

Server Type 2: Web Server

The Web server runs your web-facing application without a local database or cache. It's the front-end of a split architecture — it handles HTTP requests, runs PHP, and connects to separate Database and Cache servers over the network.

What's Installed

  • Nginx

  • PHP (8.0–8.4) with PHP-FPM or Octane

  • Node.js and NPM

  • Composer

  • WP-CLI (for WordPress projects)

  • Supervisor (for daemons and optional queue workers)

  • Certbot (for SSL certificates)

What's Not Installed

  • No database server

  • No cache server (connects to a separate Cache server over the network)

When to Use It

Web servers are the right choice when you've split your database and cache onto dedicated servers. The Web server focuses entirely on handling HTTP requests, which means:

  • All CPU and memory are dedicated to serving web requests

  • You can scale web capacity horizontally by adding more Web servers

  • Each Web server is stateless (sessions and cache are on the Cache server), making them interchangeable

Horizontal Scaling

This is the key benefit of Web servers. When your single App server's web capacity is maxed but your database and cache are fine, you don't need a bigger server — you need more web servers behind a load balancer. Two 2 vCPU Web servers behind a Load Balancer often handle more traffic than a single 4 vCPU App server, at a similar cost.

Server Type 3: Database Server

The Database server runs your database engine and nothing else. It's configured and optimized for database workloads.

What's Installed

  • MySQL, MariaDB, or PostgreSQL (you choose during provisioning)

  • Optimized database configuration for the server's resources

What's Different

A dedicated Database server has its configuration tuned for database workloads:

  • Memory allocation. The database engine's buffer pool (InnoDB's innodb_buffer_pool_size for MySQL, shared_buffers for PostgreSQL) is sized to use a larger percentage of available RAM, since there's no PHP-FPM or web server competing for memory.

  • Disk I/O. Without web server and application code competing for disk, the database gets full disk throughput for queries, writes, and temporary tables.

  • CPU dedication. Complex queries, sorts, and joins get full access to CPU without competition from PHP processing.
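
For PostgreSQL, the memory-allocation point above might look like the following — illustrative values for an 8 GB dedicated server, not Deploynix's actual generated config:

```ini
# postgresql.conf — illustrative sizing for a dedicated 8 GB database server
shared_buffers = 2GB            # main shared cache; ~25% of RAM is a common starting point
effective_cache_size = 6GB      # planner hint: total RAM usable for caching, incl. OS page cache
work_mem = 32MB                 # per-sort / per-hash-join memory for complex queries
maintenance_work_mem = 512MB    # used by VACUUM and index builds
```

On an App server sharing RAM with PHP-FPM and Nginx, these values would have to be far more conservative.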

When to Use It

Separate your database when:

  • Database queries are the bottleneck and you want to give the database more resources

  • You need to scale web and database independently

  • You don't want database backups to impact web server performance

  • You need database replication or failover capability

  • Multiple Web servers need to connect to the same database

Sizing Guidance

For MySQL, a good rule of thumb is to allocate 70–80% of the server's RAM to innodb_buffer_pool_size on a dedicated Database server. If your database fits in the buffer pool, most reads come from memory instead of disk, dramatically improving query performance.
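
As a sketch, that rule of thumb on an 8 GB dedicated MySQL server might translate to something like this (values are assumptions, not Deploynix's defaults):

```ini
# my.cnf — illustrative sizing for a dedicated 8 GB MySQL server
[mysqld]
innodb_buffer_pool_size = 6G        # ~75% of RAM, per the rule of thumb above
innodb_log_file_size    = 512M      # a larger redo log smooths write-heavy workloads
innodb_flush_method     = O_DIRECT  # bypass the OS page cache; the buffer pool is the cache
```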

Server Type 4: Cache Server

The Cache server runs Valkey (Redis-compatible) and is optimized for in-memory workloads.

What's Installed

  • Valkey (Redis-compatible)

Why a Dedicated Cache Server

Laravel uses cache for multiple purposes:

  • Application cache. Results of expensive queries, API responses, computed values

  • Session storage. User sessions (when using the Redis/Valkey session driver)

  • Queue backend. Job payloads waiting to be processed

  • Real-time broadcasting. Reverb uses Redis/Valkey for pub/sub

When all of these share a cache instance on an App server that's also running PHP-FPM and MySQL, memory contention becomes a problem. The cache server, PHP, and the database all want RAM.

A dedicated Cache server solves this by giving Valkey its own memory pool. A 2 GB Cache server means 2 GB dedicated to caching, sessions, and queues — not shared with anything else.
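
In a Laravel application, pointing cache, sessions, and queues at a dedicated Cache server is just environment configuration — Valkey speaks the Redis protocol, so the standard redis driver works unchanged. A sketch (the host IP and password are placeholders, and variable names follow recent Laravel conventions; older versions use CACHE_DRIVER):

```ini
# .env — cache, sessions, and queues all use the dedicated Valkey server
CACHE_STORE=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis

REDIS_HOST=10.0.0.5          # private IP of the Cache server (placeholder)
REDIS_PORT=6379
REDIS_PASSWORD=your-valkey-password
```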

When to Use It

Separate your cache when:

  • You're running multiple Web servers (they all need to share the same cache and sessions)

  • Your cache dataset is large enough that it competes with other services for memory

  • You want cache persistence across web server deployments

  • Queue throughput is important and you don't want queue operations competing with web request caching

Sizing Guidance

Size your Cache server based on your cache dataset size plus overhead. If your application caches 500 MB of data, a 1 GB Valkey server provides comfortable headroom. Monitor memory usage and scale up before you hit the limit — Valkey's eviction policy handles overflow, but eviction means your cache is less effective.
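
In Valkey's own configuration, the memory ceiling and eviction behavior are two directives — illustrative values for a 1 GB Cache server:

```ini
# valkey.conf — illustrative memory settings for a 1 GB Cache server
maxmemory 768mb                  # leave headroom below total RAM for process overhead
maxmemory-policy allkeys-lru     # evict least-recently-used keys when memory is full
```

One caution: if the same instance backs your queues, allkeys-lru can evict pending job payloads. Queue keys typically carry no TTL, so volatile-lru (which only evicts keys with an expiry set) is the safer choice on a shared instance.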

Server Type 5: Worker Server

The Worker server runs queue workers and scheduled tasks without serving web traffic.

What's Installed

  • PHP (8.0–8.4, for running queue workers and artisan commands)

  • Composer

  • Supervisor (manages queue workers and custom daemons)

  • Cron scheduler

What's Not Installed

  • No Nginx (no web traffic served)

  • No Node.js

  • No database server

  • No cache server (connects to a separate Cache server for queue backend)

When to Use It

Worker servers are the right choice when:

  • Queue jobs consume significant CPU (image processing, PDF generation, data imports, API calls)

  • You don't want job processing to impact web response times

  • You need to scale queue processing independently of web capacity

  • You have many queue workers that compete with PHP-FPM workers for resources

Scaling Workers

Worker servers scale horizontally. If one Worker server can process 100 jobs per minute but you're generating 200 jobs per minute, add a second Worker server. Both connect to the same queue backend (Valkey/Redis, database, SQS, or Beanstalkd) and process jobs in parallel. Each worker process is configurable with its own timeout, retry count, sleep interval, and even PHP version.
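
Under the hood this is Supervisor running `php artisan queue:work` processes. A hypothetical worker definition — paths, counts, and flag values are illustrative, not what Deploynix generates:

```ini
; /etc/supervisor/conf.d/app-worker.conf — illustrative queue worker definition
[program:app-worker]
command=php /var/www/app/artisan queue:work redis --sleep=3 --tries=3 --timeout=90
numprocs=4                                    ; four parallel worker processes
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
stopwaitsecs=120                              ; on deploy/restart, give in-flight jobs time to finish (> --timeout)
user=www-data
```

Adding a second Worker server means running the same program definition on another machine pointed at the same queue backend.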

This is one of the clearest scaling patterns in Laravel: web traffic generates jobs, and worker capacity determines how quickly those jobs are processed. Scaling each independently lets you match capacity to demand.

Isolation Benefits

Worker servers provide isolation between web requests and background processing. A queue job that consumes 100% CPU for 30 seconds doesn't slow down a single web request, because they're running on different machines. This isolation is especially important for applications that process user uploads, generate reports, or run data-intensive operations in the background.

Server Type 6: Meilisearch Server

The Meilisearch server runs Meilisearch, the search engine that integrates with Laravel Scout.

What's Installed

  • Meilisearch (configured and running as a systemd service)

  • Nginx (as a reverse proxy for secure HTTPS access)

  • Certbot (for SSL certificates)

Why Meilisearch Gets Its Own Server

Meilisearch is a resource-intensive application that performs best with dedicated resources. It builds and maintains search indexes in memory, and indexing operations consume significant CPU. Running Meilisearch alongside your web application means search indexing and search queries compete with web requests for resources.

On a dedicated server, Meilisearch gets full access to CPU for indexing and full access to memory for search indexes. The result is faster indexing and faster search queries. Nginx is configured as a reverse proxy so your application connects to Meilisearch over HTTPS with a secure, auto-generated master key.
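
The reverse-proxy arrangement is conceptually simple: Meilisearch listens only on localhost (port 7700 by default), and Nginx terminates TLS in front of it. A minimal sketch, assuming a placeholder domain — not Deploynix's generated config:

```nginx
# nginx — illustrative HTTPS reverse proxy in front of Meilisearch
server {
    listen 443 ssl;
    server_name search.example.com;            # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/search.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/search.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:7700;      # Meilisearch bound to localhost only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```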

When to Use It

A dedicated Meilisearch server makes sense when:

  • Your search index is large (millions of records)

  • Search is a core feature of your application

  • Indexing operations are frequent (real-time indexing on model changes)

  • You want search performance to be consistent regardless of web server load

Integration with Laravel Scout

Laravel Scout's Meilisearch driver connects to the Meilisearch server over the network. Configuration is straightforward — set the MEILISEARCH_HOST environment variable to your Meilisearch server's address. Your Laravel application sends indexing and search requests to the Meilisearch server, which handles them independently.
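
On the Laravel side, the Scout configuration reduces to a few environment variables (the host and key below are placeholders):

```ini
# .env — illustrative Laravel Scout configuration
SCOUT_DRIVER=meilisearch
MEILISEARCH_HOST=https://search.example.com   # your Meilisearch server's address
MEILISEARCH_KEY=your-master-key               # the auto-generated master key
```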

Server Type 7: Load Balancer

The Load Balancer server distributes incoming traffic across multiple Web servers.

What's Installed

  • Nginx (configured as a reverse proxy and load balancer)

  • Certbot (for SSL termination)

Load Balancing Methods

Deploynix supports three load balancing algorithms:

Round Robin. Requests are distributed evenly across backend servers in rotation. Server A, then Server B, then Server A, then Server B. Simple and effective when your servers are identical in capacity.

Least Connections. Requests are sent to the server with the fewest active connections. This naturally adapts to uneven load — if one server is handling a slow request, new requests go to the other server. Better than Round Robin when request processing times vary significantly.

IP Hash. Requests from the same client IP always go to the same server. This provides sticky sessions without application-level session management. Useful when you can't use a shared session store, though using a Cache server for sessions is generally preferred.
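
In Nginx terms — which the Load Balancer server is built on — these three methods map to upstream directives. A minimal sketch with placeholder IPs:

```nginx
# Illustrative upstream block; backend IPs are placeholders
upstream app_backend {
    # No directive here = Round Robin (Nginx's default).
    least_conn;            # Least Connections; use ip_hash; for IP Hash instead
    server 10.0.0.11;
    server 10.0.0.12;
}
```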

Backend Configuration

Each backend server in the load balancer pool is individually configurable:

  • Weight. Assign a weight to each backend to control traffic distribution. A server with weight 3 receives three times the traffic of a server with weight 1. Useful when your servers have different capacities.

  • Backup. Mark a server as a backup — it only receives traffic when all primary servers are unavailable. A cost-effective way to add failover without doubling your infrastructure.

  • Down status. Manually mark a server as down to remove it from rotation without deleting it. Useful for maintenance windows.
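
All three options correspond to Nginx's per-server parameters inside the upstream block — an illustrative sketch with placeholder IPs, not Deploynix's generated config:

```nginx
# Illustrative per-backend parameters
upstream app_backend {
    server 10.0.0.11 weight=3;     # receives 3x the traffic of a weight=1 server
    server 10.0.0.12 weight=1;
    server 10.0.0.13 backup;       # only used when all primary servers are unavailable
    server 10.0.0.14 down;         # removed from rotation, e.g. for maintenance
}
```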

When to Use It

A Load Balancer makes sense when:

  • You have multiple Web servers and need to distribute traffic

  • You want SSL termination at the load balancer level (so backend servers don't handle SSL)

  • You need health checking to automatically remove unhealthy servers from rotation

  • You're scaling horizontally and need a single entry point for your application

Health Checking

The Load Balancer exposes a /health endpoint for external monitoring and tracks backend server availability. If a backend server fails to respond, Nginx automatically removes it from rotation until it recovers. You can also manually mark servers as down for planned maintenance. Combined with the backup server feature, this gives you a resilient traffic distribution layer.
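
The automatic removal described above is how open-source Nginx's passive health checking behaves, controlled by two per-server parameters. Whether Deploynix uses these exact values is an assumption; the mechanism itself is standard Nginx:

```nginx
# Illustrative passive health checking: after 3 failed attempts,
# a backend is skipped for 30 seconds before being retried.
upstream app_backend {
    server 10.0.0.11 max_fails=3 fail_timeout=30s;
    server 10.0.0.12 max_fails=3 fail_timeout=30s;
}
```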

Architecture Examples

Let's look at how these server types combine for real-world Laravel applications.

Solo Project / Early Stage

  • 1x App Server (4 vCPU / 8 GB RAM)

Total: 1 server. Everything runs on one box. Simple, affordable, sufficient for most early-stage applications.

Growing SaaS Application

  • 1x Web Server (4 vCPU / 8 GB RAM)

  • 1x Database Server (4 vCPU / 8 GB RAM)

  • 1x Cache Server (2 vCPU / 4 GB RAM)

  • 1x Worker Server (2 vCPU / 4 GB RAM)

Total: 4 servers. Web, database, cache, and queue processing are separated. Each can be scaled independently.

High-Traffic Application

  • 1x Load Balancer (2 vCPU / 2 GB RAM)

  • 3x Web Servers (4 vCPU / 8 GB RAM each)

  • 1x Database Server (8 vCPU / 32 GB RAM)

  • 1x Cache Server (4 vCPU / 8 GB RAM)

  • 2x Worker Servers (4 vCPU / 8 GB RAM each)

  • 1x Meilisearch Server (4 vCPU / 8 GB RAM)

Total: 9 servers. Load-balanced web tier, powerful database, dedicated cache, parallel worker processing, and fast search.

Content Platform with Search

  • 1x Web Server (4 vCPU / 8 GB RAM)

  • 1x Database Server (4 vCPU / 16 GB RAM)

  • 1x Cache Server (2 vCPU / 4 GB RAM)

  • 1x Meilisearch Server (4 vCPU / 8 GB RAM)

Total: 4 servers. Search is a first-class concern with its own dedicated resources.

Scaling Patterns

As your application grows, here's the typical scaling progression:

  1. Start with App Server. Everything on one box.

  2. Separate the database. The database is usually the first bottleneck. Move it to a dedicated Database Server.

  3. Add a Cache Server. Shared sessions and cache for a potential second web server.

  4. Add Worker Servers. Move queue processing off the web server.

  5. Add a Load Balancer and more Web Servers. Scale web capacity horizontally.

  6. Add Meilisearch if needed. When search becomes a core feature.

Each step adds complexity, so don't prematurely optimize. Run on a single App server until you have evidence that splitting will solve a real problem. Monitor your server metrics on Deploynix to identify which resource is the bottleneck, and scale that component.

Conclusion

The right architecture depends on your application's specific workload, traffic patterns, and budget. Deploynix's seven server types give you the building blocks to construct the right architecture at each stage of growth, from a single App server to a fully distributed, load-balanced infrastructure. And with support for Laravel, WordPress, Statamic, static sites, and modern frontend frameworks (React, Vue, Next.js, Nuxt, Svelte, SvelteKit, Angular), these server types work for any stack.

Every server type is managed through the same Deploynix dashboard with the same deployment pipeline, monitoring, and team permissions. Whether you manage one server or twenty, the experience is consistent.

Start simple. Monitor your metrics. Scale what needs scaling. Deploynix gives you the tools for every stage.

Get started at https://deploynix.io.
