Queues are the backbone of any serious Laravel application. Sending emails, processing images, generating reports, syncing data with external APIs: these operations belong in the background, where they won't block your users' requests. Laravel's queue system is powerful, but as your application grows, managing queue workers becomes increasingly complex. How many workers do you need? Are jobs failing silently? Is your queue backing up?
Laravel Horizon answers these questions with a beautiful dashboard and intelligent worker management. This guide covers deploying Horizon on Deploynix, from initial setup through scaling across multiple worker servers.
Why Horizon Over Basic Queue Workers
Before Horizon, managing Laravel queues meant configuring Supervisor to run queue:work processes and hoping for the best. You had limited visibility into what was happening inside your queues.
Horizon provides:
- A real-time dashboard showing job throughput, runtime, failure rates, and queue depths.
- Auto-scaling workers that spin up and down based on queue pressure.
- Worker balancing that distributes workers across queues based on demand.
- Metrics and monitoring with historical data on job processing.
- Failed job management with retry capabilities from the dashboard.
- Code-driven configuration that lives in your repository, not on individual servers.
The key insight is that Horizon replaces Supervisor for queue worker management entirely. You run Horizon as a single process, and it manages all your queue workers internally.
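In practice, the handful of Supervisor-managed queue:work entries you might have maintained before collapse into a single command. The queue names below are illustrative:

```shell
# Before Horizon: one supervised entry per worker pool
php artisan queue:work redis --queue=default --tries=3
php artisan queue:work redis --queue=emails --tries=3

# With Horizon: one supervised process that spawns and scales workers itself
php artisan horizon
```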
Installing and Configuring Horizon
Installation
composer require laravel/horizon
php artisan horizon:install
This publishes the Horizon configuration file (config/horizon.php), assets, and service provider.
Configuration
The Horizon configuration file defines your worker environments and supervisor configurations. Here's a production-ready configuration:
// config/horizon.php

'environments' => [
    'production' => [
        'supervisor-default' => [
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
            'balance' => 'auto',
            'autoScalingStrategy' => 'time',
            'minProcesses' => 1,
            'tries' => 3,
            'timeout' => 300,
            'queue' => [
                'default',
                'emails',
                'notifications',
            ],
        ],

        'supervisor-long-running' => [
            'maxProcesses' => 3,
            'balance' => false, // no balancing: a fixed pool of processes
            'minProcesses' => 1,
            'tries' => 1,
            'timeout' => 1800,
            'queue' => [
                'reports',
                'exports',
            ],
        ],
    ],

    'local' => [
        'supervisor-default' => [
            'maxProcesses' => 3,
            'balance' => 'simple',
            'queue' => ['default', 'emails', 'notifications'],
        ],
    ],
],
Key configuration decisions:

- balance: 'auto': Horizon automatically distributes workers across queues based on workload. This is ideal for queues with variable load.
- autoScalingStrategy: 'time': Scales based on time-to-clear rather than queue size. This provides more consistent latency.
- maxProcesses: The maximum number of worker processes for this supervisor. Set this based on your server's CPU and memory capacity.
- timeout: The maximum time a job can run before being killed. Set this higher than your longest-running job.
- tries: How many times a job is attempted before being marked as failed.
Separate supervisors for different workloads:
Notice the two supervisor configurations above. supervisor-default handles fast jobs (emails, notifications) with auto-scaling. supervisor-long-running handles slow jobs (reports, exports) with fixed processes and a longer timeout. This prevents a long-running report from blocking email delivery.
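Beyond per-supervisor settings, Horizon can warn you when a queue's wait time drifts too high. The waits option in config/horizon.php sets per-queue thresholds in seconds; the values below are illustrative:

```php
// config/horizon.php
'waits' => [
    'redis:default' => 60,   // flag the default queue if jobs wait over a minute
    'redis:reports' => 300,  // slow queues can tolerate longer waits
],
```

To actually receive these alerts, route them in HorizonServiceProvider::boot() with Horizon::routeSlackNotificationsTo() or Horizon::routeMailNotificationsTo().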
Redis Configuration
Horizon requires Redis (or a Redis-compatible server like Valkey). On Deploynix, your server can be provisioned with Valkey, which is fully compatible with Horizon.
Ensure your .env has the correct Redis connection:
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
QUEUE_CONNECTION=redis
If your Valkey/Redis server is on a separate machine, update REDIS_HOST to point to its IP address and configure the Deploynix firewall to allow connections between servers.
Deploying Horizon on Deploynix
Setting Up the Daemon
Horizon runs as a long-lived process that needs to be managed by a process supervisor. On Deploynix, configure it as a daemon through the dashboard:
- Command: php artisan horizon
- Directory: your site's root directory (e.g., /home/deploynix/your-site)
- User: deploynix
Deploynix uses Supervisor behind the scenes to keep your Horizon process running. If it crashes, Supervisor restarts it automatically.
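Deploynix generates the underlying configuration for you, but for reference, a minimal Supervisor program definition for Horizon typically looks like this (the path and user are assumptions matching the daemon settings above):

```ini
[program:horizon]
process_name=%(program_name)s
command=php /home/deploynix/your-site/artisan horizon
autostart=true
autorestart=true
user=deploynix
redirect_stderr=true
stdout_logfile=/home/deploynix/your-site/storage/logs/horizon.log
stopwaitsecs=3600
```

Note stopwaitsecs: it should exceed your longest job's timeout so Supervisor doesn't force-kill Horizon while a job is still finishing.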
Restarting Horizon During Deployments
Horizon workers load your application code into memory. When you deploy new code, the running workers still execute the old code until they're restarted. This is a common source of bugs where queue jobs behave differently than expected after deployment.
Add a deployment hook in Deploynix to restart Horizon after each deployment:
php artisan horizon:terminate
This gracefully shuts down Horizon, allowing currently processing jobs to finish before stopping. Supervisor then automatically starts a fresh Horizon process with the new code.
Important: Use horizon:terminate, not horizon:pause. Terminate gracefully stops the process so Supervisor can restart it with the new code; pause merely suspends job processing without stopping the process.
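Putting it together, a deployment hook that ends with a graceful Horizon restart might look like the following sketch; the exact steps depend on your build process:

```shell
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan config:cache
php artisan route:cache

# Last step: gracefully stop Horizon so Supervisor boots it with the new code
php artisan horizon:terminate
```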
Securing the Horizon Dashboard
Horizon's dashboard is accessible at /horizon by default. In production, you must restrict access to authorized users only.
In your app/Providers/HorizonServiceProvider.php, configure the authorization gate:
protected function gate(): void
{
    Gate::define('viewHorizon', function ($user) {
        return $user->isSuperAdmin();
    });
}
This ensures only super admins can view the Horizon dashboard. Adjust the authorization logic to match your application's role structure. Deploynix applications with the Owner, Admin, Manager, Developer, and Viewer roles can use whichever check is appropriate for your security requirements.
Managing Failed Jobs
Jobs fail. Network timeouts, external API errors, out-of-memory conditions: these are inevitable in production. How you handle failed jobs determines whether failures are minor blips or data loss events.
Horizon's Failed Job Dashboard
Horizon shows failed jobs with full stack traces, job payloads, and metadata. From the dashboard, you can:
- View the exception that caused the failure.
- Inspect the job payload to understand what data was being processed.
- Retry individual failed jobs with a click.
- Delete failed jobs that aren't worth retrying.
Retry Strategies
Automatic retries: Configure tries in your Horizon supervisor to automatically retry failed jobs:
'tries' => 3,
Exponential backoff: On your job class, define a backoff method or property:
public function backoff(): array
{
    return [30, 60, 300]; // Wait 30s, 60s, then 5 minutes between retries
}
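As an alternative to a fixed number of tries, a job can declare a deadline after which Laravel stops retrying it, via retryUntil. The ten-minute window here is an example:

```php
public function retryUntil(): \DateTime
{
    // Keep retrying (respecting any backoff) for up to 10 minutes, then fail
    return now()->addMinutes(10);
}
```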
Manual retry via Artisan: For bulk retries:
php artisan horizon:forget {id} # Delete a specific failed job
php artisan queue:retry all # Retry all failed jobs
php artisan queue:retry {id} # Retry a specific failed job
Monitoring Failed Job Trends
Watch for patterns in failed jobs. If the same job type fails repeatedly, there's likely a systemic issue (an external API being down, a database connection problem, or a code bug). Horizon's metrics dashboard helps identify these patterns.
Consider integrating failed job notifications into your error tracking. You can listen for the JobFailed event and send alerts:
use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Event;

// e.g., in a service provider's boot() method
Event::listen(JobFailed::class, function (JobFailed $event) {
    // $event->connectionName, $event->job, and $event->exception are available
    // Send notification to Slack, email, etc.
});
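For job-specific handling, you can instead define a failed method on the job class itself; Laravel calls it once the job has exhausted all of its attempts. The logging call is purely illustrative:

```php
use Illuminate\Support\Facades\Log;

public function failed(\Throwable $exception): void
{
    // Runs after the final attempt fails
    Log::error('Job permanently failed', [
        'job' => static::class,
        'error' => $exception->getMessage(),
    ]);
}
```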
Scaling Across Worker Servers
As your queue workload grows beyond what a single server can handle, Deploynix makes it straightforward to scale horizontally.
Dedicated Worker Servers
Provision a Worker server type through Deploynix. Worker servers are optimized for background processing: they don't run Nginx or serve web traffic. All resources are dedicated to running Horizon and processing jobs.
Deployment setup:
- Provision a Worker server on Deploynix with your preferred cloud provider (DigitalOcean, Vultr, Hetzner, Linode, or AWS).
- Deploy your Laravel application to the worker server (it needs the full application code to process jobs).
- Configure environment variables, ensuring the worker server connects to the same database and Redis/Valkey instance as your app server.
- Set up the Horizon daemon on the worker server through the Deploynix dashboard.
- Do not set up a Horizon daemon on your app server. Only one server should run Horizon.
Important: Horizon should only run on a single server in your cluster. If you need multiple queue worker servers, run Horizon on one designated server and use php artisan queue:work processes on additional servers. Horizon's auto-scaling manages the workers on its host server, while additional servers run independent workers.
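On those additional servers, the daemon command is a plain queue:work pointed at the shared Redis connection. The options below mirror the supervisor settings shown earlier and are illustrative:

```shell
php artisan queue:work redis \
    --queue=critical,default,low \
    --tries=3 \
    --timeout=300 \
    --max-time=3600   # recycle the worker hourly to limit memory creep
```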
Resource Allocation
For worker servers, choose a server size based on your workload:
- CPU-bound jobs (image processing, PDF generation, data computation): Choose a CPU-optimized server with multiple cores.
- I/O-bound jobs (API calls, email sending, database operations): Standard servers work fine, but ensure you have enough memory for concurrent workers.
- Memory-intensive jobs (large CSV imports, data transformations): Choose a memory-optimized server and set appropriate memory limits on your PHP processes.
A common starting point is 2 vCPUs and 4GB RAM, scaling up based on observed resource usage through Deploynix's monitoring.
Queue Priority and Separation
Design your queue names to enable flexible worker allocation:
// High-priority jobs (payment processing, user notifications)
dispatch(new ProcessPayment($order))->onQueue('critical');
// Standard jobs (emails, data sync)
dispatch(new SendWelcomeEmail($user))->onQueue('default');
// Low-priority jobs (reports, analytics)
dispatch(new GenerateMonthlyReport($org))->onQueue('low');
Configure Horizon to process queues in priority order:
'queue' => ['critical', 'default', 'low'],
Horizon will drain the critical queue before processing default jobs, and default before low. This ensures time-sensitive operations aren't blocked by long-running reports.
Performance Tuning
Optimize Job Serialization
Jobs that serialize large Eloquent models or collections can consume significant memory and time. Use model IDs instead of full models in job constructors, and reload the data within the handle method:
public function __construct(
    public int $userId,
) {}

public function handle(): void
{
    $user = User::findOrFail($this->userId);
    // Process user...
}
Batch Processing
For operations that process many items, use Laravel's job batching to track progress and handle partial failures:
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
Bus::batch([
    new ProcessOrder($order1),
    new ProcessOrder($order2),
    new ProcessOrder($order3),
])->then(function (Batch $batch) {
    // All jobs completed successfully
})->catch(function (Batch $batch, \Throwable $e) {
    // First failure detected
})->finally(function (Batch $batch) {
    // Batch finished (with or without failures)
})->dispatch();
Horizon displays batch progress in its dashboard.
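Batches can also be inspected programmatically: dispatching returns a Batch object whose ID you can store and look up later. A sketch of that pattern:

```php
use Illuminate\Support\Facades\Bus;

$batch = Bus::batch([/* jobs... */])->dispatch();
$batchId = $batch->id; // persist this, e.g. on the parent record

// Later, in a status endpoint or scheduled check:
$batch = Bus::findBatch($batchId);

$batch->progress();  // 0-100: percentage of jobs completed
$batch->finished();  // true once every job has run
$batch->failedJobs;  // number of jobs that failed
```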
Rate Limiting Jobs
If your jobs interact with rate-limited APIs, use Laravel's rate limiting within jobs:
use Illuminate\Support\Facades\RateLimiter;
public function handle(): void
{
    $executed = RateLimiter::attempt('external-api', 30, function () {
        // Call external API
    }, 60);

    if (! $executed) {
        // Rate limit reached: release the job back onto the queue to retry later
        $this->release(30);
    }
}
Conclusion
Laravel Horizon transforms queue management from a black box into a transparent, manageable system. On Deploynix, deploying Horizon is straightforward: configure a daemon, add a deployment hook, and secure the dashboard.
Start with Horizon on your app server with auto-scaling supervisors. Monitor job throughput and failure rates through the dashboard. As your workload grows, provision dedicated worker servers through Deploynix and scale horizontally. Separate fast and slow jobs into different supervisors. Design your queues with priority levels.
The combination of Horizon's intelligent worker management and Deploynix's infrastructure automation gives you a production queue system that scales from handling a few dozen jobs per hour to processing thousands per minute, with full visibility at every step.