Deploynix

Posted on • Originally published at deploynix.io

Laravel Performance Optimization: 20 Quick Wins for Production Apps

Performance is not a feature you add at the end. It is the sum of a hundred small decisions made throughout your application's development. But if you have an existing Laravel application that feels sluggish in production, the good news is that most performance problems have well-known solutions, and many of them can be implemented in an afternoon.

These 20 optimizations are ordered roughly by impact-to-effort ratio. The first few items take minutes and can yield dramatic improvements. The later items require more work but address the deeper structural issues that limit your application's ceiling.

Every optimization here has been validated on production Laravel applications running on Deploynix. Where relevant, we will note how Deploynix's infrastructure features support each optimization.

Caching and Compilation

1. Cache Your Configuration

Every time your Laravel application boots, it reads dozens of configuration files from disk, merges them, and resolves environment variables. The php artisan config:cache command compiles all configuration into a single file that loads in a fraction of the time.

Add this to your Deploynix deploy script so it runs after every deployment. The improvement is immediate: configuration loading drops from tens of milliseconds to under one millisecond. The one caveat is that once the configuration is cached, the .env file is no longer read, so env() calls outside of configuration files return null. Always read values through config() instead.

2. Cache Your Routes

Route registration is surprisingly expensive in large applications. An application with 200 routes spends meaningful time on every request parsing route definitions, compiling regex patterns, and registering middleware. The php artisan route:cache command serializes the entire route table into a single file.

The impact scales with your route count. Applications with hundreds of routes see route registration drop from 50+ milliseconds to under 5 milliseconds. Include php artisan route:cache in your Deploynix deploy script, right after config:cache.

3. Cache Your Views

Blade templates are compiled to PHP on first access. The php artisan view:cache command pre-compiles every Blade template so the first request after deployment does not pay the compilation cost. The effect is most noticeable immediately after a deployment when the view cache is cold.

4. Cache Your Events

The php artisan event:cache command pre-discovers all event listeners so Laravel does not need to scan your application directories on every request. This is a small optimization but costs nothing to implement.

5. Optimize Composer Autoloading

Run composer install --optimize-autoloader --no-dev in production. The --optimize-autoloader flag converts PSR-4 autoloading to a classmap, which is faster because it avoids filesystem lookups. The --no-dev flag excludes development dependencies, reducing the number of classes the autoloader needs to know about.
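Put together, the five steps above make a compact post-deploy block. A sketch of the relevant lines in a Deploynix deploy script (order matters: install dependencies first, then rebuild the caches against the freshly deployed code):

```shell
# Dependencies: classmap autoloader, no dev packages
composer install --optimize-autoloader --no-dev

# Rebuild every framework cache for the new release
php artisan config:cache   # compile config/*.php into one file
php artisan route:cache    # serialize the route table
php artisan view:cache     # pre-compile all Blade templates
php artisan event:cache    # pre-discover event listeners
```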

Database Optimization

6. Fix N+1 Query Problems

N+1 queries are the single most common performance issue in Laravel applications. You load a collection of models, then access a relationship on each one, triggering a separate query per model. Loading 50 posts with their authors generates 51 queries instead of 2.

Use eager loading to solve this: Post::with('author')->get(). Laravel can also help you find these problems: add Model::preventLazyLoading() in your AppServiceProvider boot method during development. This throws an exception whenever a lazy-loaded relationship is accessed.
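A common way to wire this up so that lazy loading throws everywhere except production:

```php
<?php
// app/Providers/AppServiceProvider.php

namespace App\Providers;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Throws on any lazy-loaded relationship outside production,
        // so N+1 problems surface in development and CI instead of
        // silently piling up queries in your production logs.
        Model::preventLazyLoading(! $this->app->isProduction());
    }
}
```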

7. Add Missing Database Indexes

If your application queries a column in a WHERE, ORDER BY, or JOIN clause, that column should have an index. Missing indexes force the database to scan entire tables instead of using efficient index lookups.

Run EXPLAIN on your slow queries to identify missing indexes. Create migrations to add them. On Deploynix, you can run migrations as part of your deploy script, and they will execute against your Database server automatically.

Common candidates for indexes: foreign key columns (user_id, team_id), status columns that appear in filters, timestamp columns used in ordering, and any column used in a unique validation rule.
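A migration covering those candidates might look like this (table and column names are illustrative):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('posts', function (Blueprint $table) {
            $table->index('user_id');                 // foreign key lookups
            $table->index('status');                  // WHERE status = ?
            $table->index(['team_id', 'created_at']); // filter + ORDER BY
        });
    }

    public function down(): void
    {
        Schema::table('posts', function (Blueprint $table) {
            $table->dropIndex(['user_id']);
            $table->dropIndex(['status']);
            $table->dropIndex(['team_id', 'created_at']);
        });
    }
};
```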

8. Select Only the Columns You Need

User::all() selects every column from the users table, even if you only need the name and email. Use User::select(['id', 'name', 'email'])->get() to reduce the amount of data transferred from the database and the memory consumed by your models.

This is particularly impactful for tables with large text columns, JSON columns, or many columns. A table with 30 columns where you only need 5 is transferring 6 times more data than necessary.

9. Use Database Query Caching

For data that changes infrequently (settings, categories, feature flags, permissions), cache the query results instead of hitting the database on every request. Laravel's cache system integrates naturally:

$categories = Cache::remember('categories', 3600, function () {
    return Category::all();
});

On Deploynix, configure a dedicated Cache server running Valkey for fast, reliable caching. Valkey is Redis-compatible, so all Laravel Redis cache drivers work out of the box.
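The flip side of caching is invalidation. One lightweight pattern, sketched here on a hypothetical Category model, is to forget the key whenever the underlying rows change:

```php
<?php
// app/Models/Category.php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Cache;

class Category extends Model
{
    protected static function booted(): void
    {
        // Flush the cached list on create, update, or delete so the
        // next Cache::remember() call repopulates it from the database.
        static::saved(fn () => Cache::forget('categories'));
        static::deleted(fn () => Cache::forget('categories'));
    }
}
```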

10. Paginate Large Result Sets

Never return unbounded collections to the user. If a table has 10,000 rows, Model::all() loads all of them into memory. Use Model::paginate(25) or Model::cursorPaginate(25) to load only the rows needed for the current page.

Cursor pagination is more efficient for large datasets because it avoids both the COUNT(*) query and the increasingly expensive OFFSET scans of page-number pagination. Use it for API endpoints and infinite-scroll interfaces where the total count is not needed.
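A minimal sketch of a cursor-paginated API endpoint (the route path and model are illustrative; cursorPaginate needs a stable ordering, here by primary key):

```php
<?php
// routes/api.php

use App\Models\Post;
use Illuminate\Support\Facades\Route;

Route::get('/posts', function () {
    // The response carries next_cursor / prev_cursor tokens instead
    // of page numbers, so deep pages stay as cheap as page one.
    return Post::orderBy('id')->cursorPaginate(25);
});
```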

Queue and Background Processing

11. Offload Slow Operations to Queues

Any operation that takes more than 100 milliseconds should be a candidate for background processing. Email sending, PDF generation, image processing, third-party API calls, report generation, and webhook dispatches can all move to queued jobs.

On Deploynix, run queue workers as daemons on your App server, or better yet, provision dedicated Worker servers for queue processing. This isolates background work from web request handling, ensuring your users' response times are not affected by heavy background processing.
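For the most common case, email, the change is a one-word switch from send to queue (WelcomeMail is an illustrative mailable):

```php
<?php

use App\Mail\WelcomeMail;
use Illuminate\Support\Facades\Mail;

// Synchronous: the request waits for the full SMTP round trip.
Mail::to($user)->send(new WelcomeMail($user));

// Queued: a job is pushed to the queue and the request returns
// immediately; a worker on a Worker server delivers the mail.
Mail::to($user)->queue(new WelcomeMail($user));
```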

12. Batch Queue Jobs When Possible

If you need to send 1,000 notification emails, dispatching 1,000 individual jobs creates overhead in job serialization, queue management, and worker polling. Use Laravel's Bus::batch() to group related jobs. Batches provide built-in progress tracking, failure handling, and completion callbacks.
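A sketch of the notification scenario with Bus::batch (SendNotificationEmail is a hypothetical job; batched jobs must use the Illuminate\Bus\Batchable trait):

```php
<?php

use App\Jobs\SendNotificationEmail;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$jobs = $users->map(fn ($user) => new SendNotificationEmail($user));

Bus::batch($jobs)
    ->then(fn (Batch $batch) => logger('All notifications sent'))
    ->catch(fn (Batch $batch, Throwable $e) => logger()->error($e->getMessage()))
    ->allowFailures() // one bad address should not cancel the batch
    ->dispatch();
```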

13. Configure Queue Timeouts and Retries

Set appropriate --timeout and --tries values on your queue workers. A job that hangs indefinitely ties up a worker process. A job that retries infinitely on a permanent failure wastes resources. Configure these values based on the expected duration and failure modes of your jobs.
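These limits can be set on the worker (php artisan queue:work --timeout=120 --tries=3) or per job; job-level properties take precedence. A hypothetical report job:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class GenerateReport implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public int $tries = 3;            // give up after 3 attempts
    public int $timeout = 120;        // kill handle() after 120 seconds
    public array $backoff = [10, 60]; // wait 10 s, then 60 s, between retries

    public function handle(): void
    {
        // ... build the report ...
    }
}
```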

PHP and Server Tuning

14. Enable and Tune OPcache

OPcache caches compiled PHP bytecode in shared memory, eliminating the need to parse and compile PHP files on every request. This is the single most impactful PHP-level optimization.

Key settings for production: set opcache.enable=1, opcache.memory_consumption=256 (MB), opcache.max_accelerated_files=20000, and opcache.validate_timestamps=0. The last setting tells OPcache not to check if files have changed, which is safe in production because you redeploy when files change. Deploynix automatically reloads PHP-FPM after each deployment to invalidate the OPcache.
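As an ini fragment, those settings look like this:

```ini
; php.ini — production OPcache settings
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
; Never stat files for changes; the PHP-FPM reload after each
; deployment clears the cache instead.
opcache.validate_timestamps=0
```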

15. Tune PHP-FPM Worker Count

PHP-FPM's pm.max_children setting determines how many concurrent PHP requests your server can handle. The formula is: available memory divided by average memory per PHP process. A server with 4 GB of RAM, where 2 GB is available for PHP (after accounting for Nginx, database, and OS), with processes averaging 40 MB each, can handle 50 concurrent workers.

Deploynix configures pm = dynamic by default, which balances performance and memory efficiency. For dedicated App servers with consistent traffic, you can switch to pm = static for more consistent performance at the cost of higher idle memory usage.
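Using the numbers above (roughly 2 GB of headroom at roughly 40 MB per process), a pool sketch for a dedicated App server:

```ini
; PHP-FPM pool configuration (pool file path varies by PHP version)
pm = dynamic
pm.max_children = 50       ; ~2048 MB / ~40 MB per process
pm.start_servers = 12
pm.min_spare_servers = 8
pm.max_spare_servers = 16
; On a dedicated App server with steady traffic, "pm = static"
; keeps all 50 workers warm at the cost of idle memory.
```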

16. Consider Laravel Octane

Laravel Octane keeps your application in memory between requests, eliminating the bootstrap cost that happens on every traditional PHP-FPM request. Deploynix supports Octane with FrankenPHP, Swoole, and RoadRunner drivers.

Octane can reduce response times by 50-80% for applications with heavy bootstrap costs. However, it requires careful attention to memory leaks, static state, and service provider behavior. Test thoroughly before deploying to production.
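Getting started is a two-command affair; the worker count and server choice below are assumptions to adjust per machine:

```shell
composer require laravel/octane
php artisan octane:install   # pick FrankenPHP, Swoole, or RoadRunner
php artisan octane:start --server=frankenphp --workers=4
```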

Frontend and Asset Optimization

17. Use a CDN for Static Assets

Serve your CSS, JavaScript, images, and fonts from a CDN. This offloads traffic from your server and delivers assets from edge locations closer to your users. Set the ASSET_URL environment variable in your Deploynix environment to your CDN's URL.

Vite's built-in asset versioning ensures browsers always load the latest version of your assets while caching them aggressively.
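The only application change needed is a single environment variable (the CDN hostname here is a placeholder):

```ini
# .env on Deploynix
ASSET_URL=https://cdn.example.com
```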

18. Enable Response Compression

Configure Nginx to compress HTML, CSS, JavaScript, JSON, and XML responses with Gzip or Brotli. Compressed responses are 60-80% smaller, which reduces bandwidth usage and improves load times, especially for users on slower connections.

Deploynix's Nginx configuration includes compression by default, but verify that your response types are covered.
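For reference, the relevant Nginx directives look roughly like this; treat it as a checklist against your deployed configuration, not a drop-in file:

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;   # skip tiny responses not worth compressing
gzip_types text/css application/javascript
           application/json application/xml
           image/svg+xml;   # text/html is always compressed when gzip is on
```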

Application-Level Optimization

19. Use Lazy Collections for Large Datasets

When processing large datasets (CSV imports, batch operations, report generation), use Laravel's LazyCollection or cursor() to process records one at a time instead of loading the entire dataset into memory:

User::cursor()->each(function (User $user) {
    // Process one user at a time
});

This keeps memory usage constant regardless of the dataset size. A million-row export that would exhaust memory with all() runs smoothly with cursor().

20. Profile Before Optimizing

This should really be item zero, but it goes last because it is the principle that should govern all the others. Do not optimize based on intuition. Profile your application with tools like Laravel Telescope, Debugbar (in development only), or Clockwork to identify your actual bottlenecks.

You might spend hours optimizing a query that runs in 5 milliseconds while ignoring a middleware that adds 200 milliseconds to every request. Profiling tells you where to focus your effort for maximum impact.
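If you are starting from nothing, Laravel Telescope installs in a few minutes:

```shell
composer require laravel/telescope --dev
php artisan telescope:install
php artisan migrate   # Telescope stores its entries in your database
```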

Deploynix Infrastructure Tips

Beyond application-level optimizations, your infrastructure choices on Deploynix significantly affect performance.

Use a dedicated Database server. Network latency between your App server and Database server on the same provider is typically under 1 millisecond. The benefit of dedicated database resources far outweighs this tiny latency cost.

Use a dedicated Cache server. Valkey on its own server means cache operations do not compete with your application or database for memory and CPU.

Use Worker servers for queue processing. Background jobs should not compete with web requests for CPU and memory. Dedicated Worker servers ensure consistent performance for both.

Use a Load Balancer for horizontal scaling. When vertical scaling reaches its limit, add more Web servers behind a Deploynix Load Balancer. Choose Round Robin for stateless applications, Least Connections for variable request durations, or IP Hash for session affinity.

Right-size your servers. Deploynix supports provisioning across DigitalOcean, Vultr, Hetzner, Linode, AWS, and custom providers. Different providers offer different price-to-performance ratios for CPU-bound versus memory-bound workloads. Match your provider to your workload.

Measuring Success

After implementing these optimizations, measure the results. Track your application's response time percentiles (p50, p95, p99), your database query count per request, your memory usage per request, and your server's CPU and memory utilization through Deploynix's real-time monitoring.

Set up Deploynix health alerts to notify you when performance degrades. A sudden increase in response times or resource usage often indicates a performance regression in a recent deployment. Deploynix's rollback feature lets you quickly revert to a known-good deployment while you investigate.

Conclusion

Performance optimization is an ongoing practice, not a one-time project. Start with the items at the top of this list, as they offer the biggest impact for the least effort. Then work your way down as your application's needs demand.

The most impactful optimizations are often the simplest: cache your config, fix your N+1 queries, add missing indexes, and offload slow work to queues. These four items alone can transform a sluggish application into a responsive one.
