Serverless has been the darling of the infrastructure world for the better part of a decade. The pitch is irresistible: no servers to manage, automatic scaling to zero, pay only for what you use, and infinite scalability. For certain workloads, serverless genuinely delivers on these promises. For Laravel applications, the reality is considerably more nuanced.
This is not a hit piece on serverless. AWS Lambda, Laravel Vapor, and Bref have legitimate use cases and have helped many teams solve real problems. But the hype around serverless has created a distorted perception of its tradeoffs, and many Laravel developers adopt it without fully understanding what they are giving up.
Let us separate the marketing from the engineering reality.
The Serverless Promise
The core serverless proposition for Laravel typically involves one of two approaches:
Laravel Vapor is the first-party serverless deployment platform for Laravel, built by the Laravel team. It deploys your application to AWS Lambda, uses API Gateway for HTTP routing, SQS for queues, and integrates with RDS for databases and ElastiCache for Redis. Vapor abstracts away most of the AWS complexity and provides a familiar deployment experience.
Bref is an open-source project that provides PHP runtimes for AWS Lambda. You can deploy Laravel on Bref using the Serverless Framework or AWS SAM. Bref gives you more control than Vapor but requires more AWS knowledge.
Both approaches share the same underlying infrastructure: AWS Lambda functions that spin up on demand, execute your PHP code, and shut down when idle.
The promise is compelling:
- No servers to provision, patch, or monitor.
- Automatic scaling from zero to thousands of concurrent requests.
- Pay-per-invocation pricing that theoretically costs nothing when your app has no traffic.
- Managed infrastructure backed by AWS's published 99.95% availability SLA for Lambda.
In practice, every one of these promises comes with caveats that matter for Laravel applications.
Cold Starts: The Tax on Every Request
Cold starts are the single most discussed problem with serverless PHP, and for good reason. When a Lambda function has not been invoked recently, AWS needs to spin up a new execution environment: download your deployment package, initialize the PHP runtime, bootstrap your Laravel application, and then handle the request.
For a typical Laravel application, this cold start adds 500 milliseconds to 2 seconds to the first request. Once an instance is warm, subsequent requests to it are fast, particularly with Octane or Bref's event-driven mode, which keep the framework booted between requests. But cold starts happen more frequently than you might expect:
- After periods of low traffic. Lambda reclaims idle instances after roughly 5 to 15 minutes of inactivity. If your SaaS application has quiet periods (nights, weekends), the first user in each quiet period pays the cold start penalty.
- During traffic spikes. When concurrent requests exceed your current instance count, Lambda spins up new instances — each with a cold start. A traffic spike that would be seamless on a traditional server with Octane causes hundreds of cold starts on Lambda.
- Across multiple Lambda functions. A typical Vapor deployment creates separate Lambda functions for web requests, queue workers, and artisan commands. Each function has its own cold start behavior.
Vapor mitigates this with a "warming" feature that periodically pings your Lambda functions to keep instances alive. But warming adds cost (you are paying for invocations that serve no users), does not help during traffic spikes (you cannot warm instances you do not know you will need), and adds complexity to your deployment configuration.
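For reference, warming is a single setting in vapor.yml. A minimal sketch, with illustrative values (the id, name, and warm count are placeholders, not recommendations):

```yaml
id: 12345
name: my-app
environments:
  production:
    memory: 1024
    # Keep 10 container instances warm by invoking them periodically.
    # Each warming ping is a billed invocation that serves no user.
    warm: 10
```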
On a traditional server managed by Deploynix — especially with Laravel Octane using FrankenPHP, Swoole, or RoadRunner — your application is always warm. Workers are always running. The first request of the day is just as fast as the millionth.
Cost at Scale: The Curve That Bends the Wrong Way
Serverless pricing looks magical at low traffic: you pay fractions of a cent per request, and idle applications cost nearly nothing. This is genuinely great for side projects, internal tools, and applications with very low or very spiky traffic.
But the cost curve for serverless bends the wrong way at scale.
AWS Lambda pricing is based on invocation count and execution duration (measured in GB-seconds). For a Laravel application handling a reasonable volume of traffic, let us do the math:
- 100,000 requests per day at an average execution time of 200ms with 512MB memory.
- Lambda cost: roughly 100,000 x 0.2s x 0.5GB = 10,000 GB-seconds per day.
- At AWS's pricing ($0.0000166667 per GB-second after the free tier): approximately $0.17 per day, or about $5 per month, plus roughly $0.60 per month in invocation charges ($0.20 per million requests).
- Add API Gateway costs ($3.50 per million requests): another $10.50 per month.
- Add SQS for queue processing, CloudWatch for logging, RDS for the database, ElastiCache for Redis, and a NAT gateway (required for Lambda functions in a VPC to reach the internet) at roughly $32 per month before data processing charges.
The Lambda compute itself is cheap at this volume; it is the fixed-cost services around it that add up. The total for a moderately-trafficked Laravel application on Lambda easily reaches $200 to $400 per month once those supporting services are included. Vapor's own subscription adds another $39 per month on top of AWS costs.
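The per-request arithmetic in the example above can be sanity-checked with a few lines (prices as quoted in the text; the traffic profile is the article's hypothetical):

```php
<?php
// The article's hypothetical traffic profile.
$requestsPerDay = 100_000;
$avgSeconds     = 0.2;
$memoryGb       = 0.5;   // 512MB

// Lambda bills execution time in GB-seconds.
$gbSecondsPerDay = $requestsPerDay * $avgSeconds * $memoryGb;   // 10,000

// Compute cost at $0.0000166667 per GB-second, over a 30-day month.
$computePerMonth = $gbSecondsPerDay * 30 * 0.0000166667;        // ~$5.00

// Invocation charges: $0.20 per million requests.
$invocationsPerMonth = $requestsPerDay * 30;
$invokePerMonth = $invocationsPerMonth / 1_000_000 * 0.20;      // ~$0.60

// API Gateway (REST): $3.50 per million requests.
$gatewayPerMonth = $invocationsPerMonth / 1_000_000 * 3.50;     // ~$10.50

printf("Compute %.2f, invocations %.2f, gateway %.2f\n",
    $computePerMonth, $invokePerMonth, $gatewayPerMonth);
```

The striking result is how small the per-request lines are next to the fixed monthly services around them.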
Compare this to a $24/month DigitalOcean droplet managed by Deploynix: 2 vCPUs, 4GB RAM, running Nginx, PHP-FPM (or Octane), MySQL, and Valkey. This single server handles 100,000 requests per day (an average of just over one request per second) without breaking a sweat. Add Deploynix's subscription and your total cost is a fraction of the serverless equivalent.
The crossover point where serverless becomes more expensive than a managed server is lower than most developers expect. Because the supporting AWS services carry fixed monthly costs regardless of traffic, many Laravel applications with real users are past it from day one, and per-request charges only widen the gap as traffic grows.
Stateful Limits: Laravel Was Built for Servers
Lambda functions are stateless by design. Each invocation starts fresh, with no shared memory between requests (unless you use Octane-style persistent workers, which Bref supports but with limitations). This fundamental constraint conflicts with several things Laravel does well:
File Storage
Lambda has a writable /tmp directory with 512MB of space (configurable up to 10GB). But this storage is ephemeral — it exists only for the lifetime of the execution environment. File uploads must be streamed directly to S3. File generation (PDFs, exports, images) must be written to S3 immediately. Any workflow that assumes local filesystem persistence breaks.
This is not insurmountable. Laravel's filesystem abstraction with S3 drivers handles it. But it adds latency to every file operation and eliminates the option of using local storage for temporary work.
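As a sketch of what that looks like in practice, an upload handler on Lambda sends the file straight to S3 through Laravel's filesystem abstraction (the route and the "s3" disk name are illustrative; the disk is assumed to be configured in config/filesystems.php):

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/invoices', function (Request $request) {
    // store() writes the upload to the S3 disk rather than the local
    // filesystem, which on Lambda would vanish with the instance.
    $path = $request->file('invoice')->store('invoices', 's3');

    return response()->json(['path' => $path]);
});
```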
WebSockets
Laravel Reverb, which provides real-time WebSocket connections, does not work on Lambda. WebSocket connections are long-lived and stateful — the opposite of Lambda's execution model. On serverless, you must use AWS API Gateway WebSockets (complex) or a third-party service like Pusher or Ably (additional cost and external dependency).
On a traditional server, Reverb runs alongside your application with zero configuration overhead. Real-time features work out of the box.
Sessions and Cache
While you can use external session and cache drivers (Redis via ElastiCache, database sessions via RDS), every cache hit and session read now involves a network round trip to another AWS service. On a traditional server, Valkey runs locally with sub-millisecond response times.
Long-Running Processes
Lambda functions have a maximum execution time of 15 minutes. Most web requests finish in under a second, so this limit rarely matters for HTTP. But queue jobs that process large files, generate complex reports, or interact with slow external APIs can exceed this limit. You need to architect around it, breaking large jobs into smaller chunks — which is good practice anyway but is now a hard requirement rather than an optimization choice.
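The chunking approach described above might look like this in Laravel, where a hypothetical ProcessExportChunk job handles a bounded slice of work per invocation, keeping each job comfortably under any execution cap:

```php
<?php

use App\Jobs\ProcessExportChunk; // hypothetical job class
use App\Models\Order;

// Instead of one job exporting every order (which could run for
// hours), dispatch one queued job per 1,000 rows.
Order::query()
    ->select('id')
    ->chunkById(1000, function ($orders) {
        ProcessExportChunk::dispatch($orders->pluck('id')->all());
    });
```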
Database Connection Limits: The Hidden Bottleneck
This is the problem that catches most teams by surprise.
Each Lambda instance opens its own database connection. When Lambda scales to handle traffic, the number of database connections scales proportionally. A traffic spike that creates 200 concurrent Lambda instances creates 200 database connections.
Most RDS instances have connection limits in the low hundreds. A sudden traffic spike can exhaust your database connection pool, causing failures not just for Lambda but for any other service that connects to the same database.
AWS offers RDS Proxy to mitigate this — a connection pooler that sits between Lambda and RDS. But RDS Proxy adds cost ($0.015 per vCPU hour), adds latency, and requires additional configuration.
On a traditional server, your application uses a fixed, predictable number of database connections. PHP-FPM with 20 worker processes uses 20 connections. Octane with 8 workers uses 8 connections. The connection count is stable, predictable, and well within any database's limits.
Deployment Complexity
Vapor and Bref abstract away much of the AWS complexity, but the abstraction leaks.
A Vapor deployment involves packaging your application, uploading it to S3, creating a new Lambda function version, updating API Gateway, configuring SQS queues, and managing environment variables across multiple AWS services. When something goes wrong — a deployment fails, a function times out, a queue stops processing — you need to debug across Lambda CloudWatch logs, API Gateway metrics, SQS dead-letter queues, and RDS monitoring.
This is a fundamentally different debugging experience from SSH-ing into a server and reading a log file. The observability tooling exists, but it is distributed across a dozen AWS services, each with its own interface and pricing.
On Deploynix, deployment is a Git push or a dashboard click. Logs are on the server. Processes are visible through the web terminal. The mental model is simpler because the infrastructure is simpler.
When Serverless Actually Makes Sense for Laravel
Despite all these caveats, serverless has legitimate use cases:
Extremely spiky traffic. If your application goes from zero to 10,000 concurrent requests and back to zero within minutes (think: ticket sales, flash sales, event registrations), Lambda's instant scaling is genuinely valuable. A traditional server cannot scale this fast without load balancing across pre-provisioned instances.
Event-driven processing. Lambda excels at processing events from other AWS services: S3 uploads triggering image processing, SNS notifications triggering webhook delivery, DynamoDB streams triggering data transformations. If your Laravel application is part of a larger AWS event-driven architecture, Lambda fits naturally.
Near-zero traffic applications. Internal tools, admin panels, and staging environments that receive a handful of requests per day genuinely benefit from Lambda's scale-to-zero pricing. Paying nothing when nobody is using the app is appealing.
Compliance requirements. Some enterprise environments mandate AWS-native deployments with specific security certifications that Lambda provides out of the box.
When Traditional Servers Win
For the majority of Laravel applications — SaaS products, e-commerce platforms, content management systems, APIs, and web applications with consistent traffic — traditional servers managed by a purpose-built platform like Deploynix offer a better experience:
- Predictable performance. No cold starts, no variable latency, no connection pooling headaches.
- Lower cost at scale. A single managed server outperforms Lambda for most workloads at a fraction of the cost.
- Full Laravel compatibility. Octane, Reverb, local filesystem, long-running processes, and every other Laravel feature works without modification.
- Simpler debugging. One server, one set of logs, one mental model.
- Infrastructure you understand. Nginx, PHP-FPM, MySQL, and Valkey are technologies the Laravel community has deep expertise with.
Deploynix adds the operational layer that makes traditional servers manageable: automated provisioning on DigitalOcean, Vultr, Hetzner, Linode, AWS, or custom providers; zero-downtime deployments; automated backups; SSL provisioning; real-time monitoring; and health alerts.
You get the reliability and performance of a dedicated server with the operational simplicity that serverless promises but does not always deliver.
Making the Right Choice
The serverless vs. servers debate is not about which technology is objectively better. It is about which technology is the right fit for your specific application, team, and constraints.
If your application has consistent traffic, uses Laravel's full feature set, and you want predictable costs and performance — a managed server is the better choice. If your application has extreme traffic spikes, runs in an AWS-native environment, and your team has deep AWS expertise — serverless might be the right fit.
The mistake is choosing serverless because it sounds modern and choosing servers because they sound safe. Choose based on your actual requirements, your actual traffic patterns, and your actual budget.