DEV Community

Deploynix

Posted on • Originally published at deploynix.io

Laravel Queues Deep Dive: Connections, Workers, and Retry Strategies on Deploynix

Laravel's queue system is one of the framework's most powerful features and one of its most misunderstood. The documentation covers the basics — dispatch a job, process it later — but production queue management involves decisions about drivers, concurrency, timeouts, retry behavior, and failure handling that significantly affect application reliability.

This deep dive covers the queue architecture decisions you need to make when running Laravel on Deploynix, from choosing the right driver to configuring workers for production workloads. We will go beyond the basics to cover the nuances that separate a queue setup that works in development from one that is reliable in production.

Queue Drivers: Choosing the Right One

Laravel supports several queue drivers. On Deploynix, two are practical for production: Redis (via Valkey) and the database driver. Understanding their tradeoffs is essential.

Redis/Valkey: The Production Default

Valkey is the Redis-compatible cache server that Deploynix installs on your servers. It is the recommended queue driver for production Laravel applications, and for good reason.

Performance. Valkey stores queue data in memory. Pushing a job to the queue and popping a job off the queue are both O(1) operations that complete in microseconds. A single Valkey instance handles tens of thousands of queue operations per second without breaking a sweat.

Reliability. Laravel's Redis queue driver uses atomic Lua scripts to ensure that jobs are not lost. When a worker pops a job from the queue, the job is moved to a "reserved" sorted set. If the worker crashes before completing the job, the job is pushed back onto the queue once the connection's retry_after period expires.

Atomicity. Queue operations in Valkey are atomic. Two workers cannot pop the same job. Rate limiting, unique jobs, and batch operations work correctly because Valkey guarantees atomic operations on its data structures.

Memory considerations. Valkey stores everything in memory. If your queue contains millions of pending jobs with large payloads, memory consumption becomes a concern. For most applications, this is not an issue — a million jobs with 1KB payloads consume about 1GB of memory, which is well within the capacity of even modest servers.
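You can sanity-check this in practice by inspecting queue length and memory usage directly with redis-cli (Valkey is protocol-compatible). The key name below assumes Laravel's default queue name and Redis key prefix; yours may differ:

```shell
# Number of pending jobs on the default queue
# (key prefix depends on your app's Redis prefix; list keys with: KEYS '*queues:*')
redis-cli -h 127.0.0.1 -p 6379 llen laravel_database_queues:default

# Total memory currently used by Valkey
redis-cli -h 127.0.0.1 -p 6379 info memory | grep used_memory_human
```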

Configure Redis/Valkey as your queue driver in your Deploynix environment variables:

QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379

Database Driver: When Valkey Is Not Available

The database queue driver stores jobs in a jobs table in your MySQL, MariaDB, or PostgreSQL database. It is a viable option when:

  • You are running a minimal setup without Valkey.
  • Your queue volume is low (fewer than 1,000 jobs per day).
  • You want queue data to be included in your database backups automatically.
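If you do go this route, the driver needs a jobs table in your database. The classic setup commands are:

```shell
php artisan queue:table   # generates the jobs table migration
                          # (make:queue-table on newer Laravel versions)
php artisan migrate
```

Then set QUEUE_CONNECTION=database in your environment variables.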

Performance. The database driver is significantly slower than Valkey. Each queue operation involves a database query with row locking. Under high concurrency, the jobs table becomes a bottleneck as workers compete for locks.

Reliability. The database driver is reliable in the sense that jobs are persisted to disk (assuming your database is durable). However, polling for new jobs adds latency — workers query the database every few seconds instead of receiving instant notifications.

Scaling limitations. The database driver does not scale well beyond a few hundred jobs per minute. At higher volumes, the polling overhead and lock contention degrade both queue throughput and your application's database performance.

For any application with meaningful queue usage on Deploynix, use Valkey. The database driver should be reserved for development, testing, or applications with trivial queue requirements.

The Sync Driver: Never in Production

The sync driver processes jobs immediately within the current request. It exists for development convenience. Never use it in production — it defeats the entire purpose of queuing and makes your HTTP requests as slow as your slowest job.

Configuring Queue Workers on Deploynix

Workers are the processes that pull jobs from the queue and execute them. Getting worker configuration right is critical for both performance and reliability.

Worker Processes: How Many?

Deploynix lets you configure queue worker daemons with a specific number of processes. The right number depends on your workload:

CPU-bound jobs (image processing, PDF generation, data transformation): One worker process per available CPU core. More workers than CPU cores causes context switching overhead without improving throughput.

I/O-bound jobs (API calls, email sending, file uploads to S3): You can run more workers than CPU cores because workers spend most of their time waiting for external responses. Two to four workers per CPU core is reasonable.

Mixed workloads: Start with one worker per CPU core and increase if your queue consistently has pending jobs. Monitor CPU usage — if workers are consuming 100% CPU, adding more workers will not help. If CPU usage is low but jobs are pending, adding workers will improve throughput.

Configure workers in the Deploynix dashboard:

php artisan queue:work redis --queue=high,default,low --tries=3 --timeout=90 --sleep=3

Let us break down each flag:

The --queue Flag: Priority Ordering

The --queue flag accepts a comma-separated list of queue names. Workers process queues left to right — all jobs in high are processed before any job in default, and all jobs in default before any in low.

This priority system lets you ensure critical jobs (payment processing, authentication tokens, webhook delivery) are processed before lower-priority work (report generation, analytics aggregation).

// Dispatching with priority
ProcessPayment::dispatch($order)->onQueue('high');
GenerateReport::dispatch($report)->onQueue('low');
SendWelcomeEmail::dispatch($user)->onQueue('default');

Warning: Strict priority ordering means that a flood of high priority jobs can starve default and low queues. If this is a concern, run separate workers for each queue:

# Worker 1: Only processes high-priority jobs
php artisan queue:work redis --queue=high --tries=3 --timeout=60

# Worker 2: Processes default and low-priority jobs
php artisan queue:work redis --queue=default,low --tries=3 --timeout=120

The --timeout Flag: Preventing Hung Workers

The --timeout flag kills a worker process if a single job takes longer than the specified number of seconds. This prevents a hung job from blocking a worker indefinitely.

Set the timeout to slightly longer than your longest-running job's expected execution time. If your longest job takes 60 seconds, set --timeout=90 to provide a buffer.

Important: The --timeout value must be several seconds shorter than the retry_after value defined for your connection in config/queue.php. If retry_after is 60 seconds and the worker --timeout is 90 seconds, the queue could hand the job to a second worker while the original execution is still running, causing duplicate processing.
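The retry_after value is set per connection in config/queue.php. A sketch of a consistent pairing for the worker command shown earlier (--timeout=90):

```php
// config/queue.php
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        // Longer than the worker's --timeout (90s here), so a job is
        // never handed to a second worker while the first is still on it.
        'retry_after' => 120,
        'block_for' => null,
    ],
],
```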

The --sleep Flag: Polling Frequency

When the queue is empty, workers sleep for --sleep seconds before checking again. The default is 3 seconds. Lower values reduce latency for new jobs but increase CPU usage during idle periods. Higher values save CPU but add up to --sleep seconds of latency before an idle worker picks up a new job.

For most applications, 3 seconds is fine. If you need sub-second job processing latency, reduce to 1 second. If your queue is usually empty and jobs are not time-sensitive, increase to 5 or 10 seconds.

The --tries Flag: Maximum Attempts

The --tries flag sets the maximum number of times a job will be attempted before being moved to the failed jobs table. The right value depends on the nature of your jobs:

  • Idempotent, retriable jobs (sending an email, calling an API): Set tries to 3 to 5. Transient failures (network timeouts, rate limits) often resolve on retry.
  • Non-idempotent jobs (charging a credit card, creating a record): Set tries to 1 or handle retries carefully within the job to prevent duplicate operations.
  • Jobs with their own retry logic: Set a high try count on the worker but implement backoff and retry logic within the job class itself.
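The worker-level --tries value can be overridden per job with a public $tries property, which is usually the safer place for it. A minimal sketch for a non-idempotent job (the class name is illustrative):

```php
class ChargeCreditCard implements ShouldQueue
{
    // One attempt only: automatically retrying a payment job risks a double charge
    public $tries = 1;

    public function handle(): void
    {
        // Charge the card, recording an idempotency key with the provider
    }
}
```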

Retry Strategies

How you handle job failures is as important as how you process them successfully. Laravel provides several mechanisms for retry behavior.

Exponential Backoff

Instead of retrying immediately after a failure, exponential backoff increases the delay between retries. This is critical for jobs that call external APIs — hammering a failing API with immediate retries makes the problem worse.

class CallExternalApi implements ShouldQueue
{
    public $tries = 5;

    public function backoff(): array
    {
        return [10, 30, 60, 120, 300]; // seconds
    }

    public function handle(): void
    {
        // Call external API
    }
}

This job waits 10 seconds after the first failure, 30 seconds after the second, and so on up to 5 minutes. This gives the external service time to recover without your workers sitting idle.

Retry Until

For jobs where the number of retries matters less than the total retry window, use retryUntil():

class ProcessWebhook implements ShouldQueue
{
    public function retryUntil(): DateTime
    {
        return now()->addHours(24);
    }

    public function handle(): void
    {
        // Process webhook
    }
}

This job retries for up to 24 hours, regardless of how many attempts it takes. Combined with backoff, this is useful for jobs that should eventually succeed but might take a long time due to external dependencies.
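The two mechanisms combine naturally. A sketch of a webhook job that waits longer between attempts but gives up entirely after 24 hours (the delay values are illustrative):

```php
class ProcessWebhook implements ShouldQueue
{
    public function backoff(): array
    {
        // 1 minute, 5 minutes, then 15 minutes between every later attempt
        return [60, 300, 900];
    }

    public function retryUntil(): DateTime
    {
        // Hard deadline: no retries after 24 hours, regardless of attempt count
        return now()->addHours(24);
    }

    public function handle(): void
    {
        // Process webhook
    }
}
```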

Rate Limiting

When jobs call rate-limited APIs, you need to throttle job processing to stay within limits:

use Illuminate\Support\Facades\RateLimiter;

class SyncToExternalService implements ShouldQueue
{
    public function handle(): void
    {
        $executed = RateLimiter::attempt(
            'external-api',
            maxAttempts: 60, // 60 calls per minute
            callback: function () {
                // Call the API
            },
            decaySeconds: 60,
        );

        if (! $executed) {
            // Limit reached: attempt() returned false without calling the API,
            // so release the job back onto the queue and retry in 30 seconds
            $this->release(30);
        }
    }
}

Unique Jobs

Prevent duplicate jobs from accumulating in the queue:

use Illuminate\Contracts\Queue\ShouldBeUnique;

class UpdateSearchIndex implements ShouldQueue, ShouldBeUnique
{
    public function __construct(public int $productId) {}

    public function uniqueId(): string
    {
        return (string) $this->productId;
    }

    public $uniqueFor = 300; // 5 minutes
}

This ensures that only one UpdateSearchIndex job per product exists in the queue at any time. If the same product is updated five times in rapid succession, only one index update job runs.

Handling Failed Jobs

Despite retries, some jobs will ultimately fail. How you handle these failures matters.

The Failed Jobs Table

Laravel stores failed jobs in the failed_jobs database table. Each entry includes the job payload, the exception that caused the failure, the queue name, and the timestamp.

On Deploynix, you can inspect failed jobs through the web terminal:

php artisan queue:failed

Retry Failed Jobs

Retry a specific failed job:

php artisan queue:retry {id}

Retry all failed jobs:

php artisan queue:retry all
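A few companion commands are worth knowing when triaging the failed jobs table:

```shell
php artisan queue:forget {id}              # delete a single failed job by ID
php artisan queue:flush                    # delete all failed jobs
php artisan queue:prune-failed --hours=48  # delete failed jobs older than 48 hours
```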

The failed() Method

Implement a failed() method on your job class to perform cleanup when a job exhausts all retries:

class ProcessPayment implements ShouldQueue
{
    public $tries = 3;

    public function handle(): void
    {
        // Process payment
    }

    public function failed(\Throwable $exception): void
    {
        // Notify the team
        // Update order status to "payment_failed"
        // Log the failure for investigation
    }
}

Failed Job Monitoring

Set up a scheduled task to monitor failed jobs and alert your team:

Schedule::command('queue:monitor redis:default,redis:high --max=100')
    ->everyFiveMinutes();

The queue:monitor command checks queue sizes and dispatches an event when a queue exceeds its threshold; turning that event into a notification is up to you. On Deploynix, this runs automatically through the Laravel scheduler.
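When a monitored queue crosses its threshold, Laravel dispatches an Illuminate\Queue\Events\QueueBusy event. A sketch of a listener registered in a service provider's boot() method (the log call is a stand-in for your real alert channel):

```php
use Illuminate\Queue\Events\QueueBusy;
use Illuminate\Support\Facades\Event;

Event::listen(function (QueueBusy $event) {
    // $event->connection, $event->queue, and $event->size describe the backlog
    logger()->warning(
        "Queue {$event->queue} on {$event->connection} has {$event->size} pending jobs"
    );
});
```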

Dedicated Worker Servers on Deploynix

As your queue volume grows, you may want to separate queue processing from web request handling. Deploynix supports dedicated Worker server types designed for this purpose.

A worker server runs queue workers without serving web traffic. This provides:

  • Resource isolation. CPU and memory used by job processing do not affect web request latency.
  • Independent scaling. Add worker servers without affecting your web infrastructure.
  • Different instance types. Worker servers might benefit from CPU-optimized instances, while web servers might benefit from memory-optimized instances.

When provisioning a worker server on Deploynix:

  1. Select the Worker server type.
  2. Connect it to the same database and cache server as your web servers.
  3. Deploy the same codebase.
  4. Configure worker daemons — no web server configuration needed.

The worker server runs your Laravel application but only processes queue jobs. It connects to the same Valkey instance as your web servers, pulling jobs from the same queues.

Queue Monitoring and Observability

Monitoring Queue Size

A growing queue means jobs are being dispatched faster than workers can process them. Monitor queue size to detect this:

Schedule::call(function () {
    $size = Queue::size('default');
    if ($size > 1000) {
        // Alert: queue is backing up
    }
})->everyMinute();

Monitoring Worker Health

Deploynix monitors your worker daemons and restarts them if they crash. However, a worker that is running but stuck (waiting on a deadlocked database query, for example) appears healthy but is not processing jobs.

Add queue health checks to your monitoring:

Schedule::command('queue:monitor redis:default --max=500')
    ->everyFiveMinutes();

Horizon: If You Need It

Laravel Horizon provides a beautiful dashboard for monitoring Redis queues. It is optional on Deploynix — you can configure workers directly through the Deploynix dashboard without Horizon. But if you want detailed job metrics, queue balancing, and a dedicated monitoring UI, Horizon works well alongside Deploynix.

Install Horizon and configure it as a daemon in Deploynix instead of individual queue workers. Horizon manages workers internally with features like auto-balancing across queues.
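In practice that means one daemon command replaces the queue:work commands above, plus a graceful restart during deploys:

```shell
# Daemon command in Deploynix (replaces individual queue:work daemons)
php artisan horizon

# In the deploy script: Horizon finishes in-flight jobs and exits,
# and the daemon supervisor restarts it on the new release
php artisan horizon:terminate
```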

Production Queue Checklist

Before considering your queue setup production-ready on Deploynix:

  • Queue driver is set to redis (not database or sync).
  • Worker processes are configured as daemons in Deploynix.
  • --timeout is set longer than your longest job but shorter than the connection's retry_after value.
  • --tries is appropriate for each job type (configured per-job or per-worker).
  • Failed job handling is implemented (failed() methods, monitoring, alerts).
  • Deploy script includes php artisan queue:restart.
  • Queue-specific environment variables are set (QUEUE_CONNECTION, REDIS_HOST).
  • Rate-limited and unique jobs are implemented where appropriate.
  • Database backups include the failed_jobs table.
  • Queue monitoring is in place.
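On the queue:restart item: workers keep the booted application in memory, so deploys must tell them to exit and reload the new code. The tail of a deploy script might look like this (the surrounding steps are typical, not required):

```shell
php artisan migrate --force
php artisan config:cache
# Workers finish their current job and exit; the daemon
# supervisor brings them back up on the new release
php artisan queue:restart
```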

Queues are the backbone of any production Laravel application. They handle the work that cannot happen during an HTTP request — payment processing, email delivery, data synchronization, report generation. Getting them right on Deploynix means configuring workers thoughtfully, implementing retry strategies that match your job characteristics, and monitoring for both failures and performance degradation.
