Matheus Lopes Santos

Unleashing the power of Laravel Horizon

I think background jobs are really handy, whether it's for importing a large CSV file, processing data from a webhook, or many other tasks that we can delegate to be processed without anyone watching.

In Laravel, it's no different. The framework provides us with a powerful out-of-the-box feature for queue processing. Let's recap how we can run our queue:

php artisan queue:work {connectionName} --queue={queueName} ...

Above, we have a very basic example of how to start our queue. There are several other options, and to learn about all of them, I suggest checking out the official documentation.
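To give a feel for those options, here is what a slightly more tuned invocation might look like (the connection and queue names are placeholders; the flags themselves are standard `queue:work` options):

```shell
# Process the "emails" queue on the "redis" connection,
# retrying each failed job up to 3 times, with a 90-second
# timeout per job and a 3-second sleep when the queue is empty.
php artisan queue:work redis --queue=emails --tries=3 --timeout=90 --sleep=3
```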

Enter Horizon

Horizon was released in 2017, and it's a package for running and monitoring the queue (remember, what isn't measured can't be managed). Its mission was not to replace but to complement Laravel's queues.

With Horizon, we have a much simpler way to manage everything needed to run our queue, as well as mechanisms to balance our queue execution.

A Common Issue...

One of the most common problems I've seen in applications I've maintained is the underutilization of Horizon. What do I mean by that? Let's see:

  • Projects that use only one worker for all queues.
  • Poor memory management.
  • An inadequate number of workers compared to the number of jobs the system processes.

"Does that mean I don't know how to configure my queue, MatheusΓ£o?"

It's not about that. Each project has its uniqueness, and all the configurations I'm going to show here will depend on whether the system uses queues extensively or not.

In this short article, I'll show you how to optimize your Horizon so it can handle many jobs and execute them smoothly.

Default Configuration

By default, Horizon comes with only one configured worker:

'defaults' => [
    'supervisor-1' => [
        'connection'          => 'redis',
        'queue'               => ['default'],
        'balance'             => 'auto',
        'autoScalingStrategy' => 'time',
        'maxProcesses'        => 3,
        'maxTime'             => 0,
        'maxJobs'             => 0,
        'memory'              => 128,
        'tries'               => 1,
        'timeout'             => 60,
        'nice'                => 0,
    ],
],

Great, but what does this configuration tell us?

  • Our worker will handle the default queue.
  • In this case, load balancing doesn't apply because we have only one queue.
  • We will have a maximum of three running processes.
  • Jobs will be executed only once, and no other attempts are allowed.
  • Jobs have a timeout of 60 seconds.

That's great, but what if our demand starts to grow?

Adding Queues to the Worker

Let's imagine I have one queue for processing emails and another for other system jobs. Cool, I could do it like this:

'defaults' => [
    'supervisor-1' => [
        'connection'          => 'redis',
        'queue'               => ['emails', 'default'],
        'balance'             => 'auto',
        'autoScalingStrategy' => 'time',
        'maxProcesses'        => 3,
        'maxTime'             => 0,
        'maxJobs'             => 0,
        'memory'              => 128,
        'tries'               => 1,
        'timeout'             => 60,
        'nice'                => 0,
    ],
],

Nice, but what do we have now?

  • Two queues being processed by the worker.
  • A maximum of 3 processes shared across both queues.

With this configuration, the queue with the greater workload gets more processes. Imagine the emails queue has 40 pending jobs and the default queue only has 4: Horizon will allocate 2 processes to the emails queue and 1 process to the default queue.
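To actually feed that emails queue, a job is routed to it at dispatch time (SendWelcomeEmail is a hypothetical job name; any queueable job works the same way):

```php
// Dispatch a job onto the "emails" queue instead of "default".
SendWelcomeEmail::dispatch($user)->onQueue('emails');
```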

Configuring Multiple Workers in Horizon

Now, let's get to the peak of our scenario. Let's imagine the following:

  • Notifications (e.g., via Slack)
  • Email sending
  • Receiving webhook data
  • CSV data import

"Isn't that a bit too much?"

Let's take it step by step; it will work out.

Prioritizing Our Queue

To start things off, let's categorize our queues:

  • High priority
  • Low priority
  • Default
  • Jobs that can take a long time to finish

Great, now that I have these categories, I'll use an enum to keep track of them:

<?php

declare(strict_types=1);

namespace App\Enums;

enum QueuePriority: string
{
    case Low         = 'low';
    case High        = 'high';
    case LongTimeout = 'long-timeout';
}

Note that I didn't add the default queue to this enum, because uncategorized jobs are dispatched to it automatically. But feel free to add it if you want 😃.
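If you do want the default queue represented in the enum, a Default case is all it takes (the naming here is just a suggestion):

```php
enum QueuePriority: string
{
    case Low         = 'low';
    case High        = 'high';
    case Default     = 'default';
    case LongTimeout = 'long-timeout';
}
```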

This way, we can dispatch our jobs like this:

InviteUser::dispatch($user)->onQueue(
    QueuePriority::High->value
);

Elegant, isn't it?
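Alternatively, if a job should always run on a given queue, you can pin it inside the job's constructor using the onQueue method from the Queueable trait, so callers don't need to remember it. A minimal sketch, assuming an InviteUser job like the one dispatched above:

```php
<?php

declare(strict_types=1);

namespace App\Jobs;

use App\Enums\QueuePriority;
use App\Models\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class InviteUser implements ShouldQueue
{
    use Queueable;

    public function __construct(public User $user)
    {
        // Every dispatch of this job lands on the high-priority queue.
        $this->onQueue(QueuePriority::High->value);
    }
}
```

With this in place, `InviteUser::dispatch($user)` alone is enough.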

Tuning Our Horizon

Now, we can create more workers and further separate and organize our queue:

'defaults' => [
    'supervisor-high-priority' => [
        'connection'          => 'redis',
        'queue'               => [QueuePriority::High->value],
        'balance'             => 'auto',
        'minProcesses'        => 1,
        'maxProcesses'        => 6,
        'balanceMaxShift'     => 3,
        'balanceCooldown'     => 2,
        'autoScalingStrategy' => 'size',
        'maxTime'             => 0,
        'maxJobs'             => 0,
        'memory'              => 128,
        'tries'               => 1,
        'timeout'             => 60,
        'nice'                => 0,
    ],
    'supervisor-low-priority' => [
        'connection'          => 'redis',
        'queue'               => [QueuePriority::Low->value, 'default'],
        'balance'             => 'auto',
        'minProcesses'        => 1,
        'maxProcesses'        => 3,
        'balanceMaxShift'     => 1,
        'balanceCooldown'     => 3,
        'autoScalingStrategy' => 'size',
        'maxTime'             => 0,
        'maxJobs'             => 0,
        'memory'              => 128,
        'tries'               => 1,
        'timeout'             => 60,
        'nice'                => 0,
    ],
    'supervisor-long-timeout' => [
        'connection'          => 'redis',
        'queue'               => [QueuePriority::LongTimeout->value],
        'balance'             => 'auto',
        'minProcesses'        => 1,
        'maxProcesses'        => 3,
        'balanceMaxShift'     => 1,
        'balanceCooldown'     => 3,
        'autoScalingStrategy' => 'size',
        'maxTime'             => 0,
        'maxJobs'             => 0,
        'memory'              => 128,
        'tries'               => 1,
        'timeout'             => 600,
        'nice'                => 0,
    ],
],

'environments' => [
    'production' => [
        'supervisor-high-priority' => [],
        'supervisor-low-priority'  => [],
        'supervisor-long-timeout'  => [],
    ],

    'staging' => [
        'supervisor-high-priority' => [],
        'supervisor-low-priority'  => [],
        'supervisor-long-timeout'  => [],
    ],

    'local' => [
        'supervisor-high-priority' => [],
        'supervisor-low-priority'  => [],
        'supervisor-long-timeout'  => [],
    ],
],
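One detail worth knowing about the environments section: the values there are merged over the defaults, so you only need to list the keys that differ. For example, you could scale the high-priority worker down locally without duplicating the whole supervisor definition (the values below are illustrative):

```php
'local' => [
    // Only overridden keys need to be listed; everything
    // else falls back to the matching 'defaults' entry.
    'supervisor-high-priority' => ['maxProcesses' => 2],
    'supervisor-low-priority'  => [],
    'supervisor-long-timeout'  => [],
],
```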

What do we have here, my friend? Now we've got 3 workers for different queues, with the following setups:

supervisor-high-priority

  • Takes care of the high priority queue.
  • Maximum of 6 processes.
  • It will start or terminate up to 3 processes (balanceMaxShift) every 2 seconds (balanceCooldown).
  • Timeout set to 1 minute.

supervisor-low-priority

  • Manages the low priority and default queues.
  • Maximum of 3 processes.
  • It will start or terminate at most 1 process (balanceMaxShift) every 3 seconds (balanceCooldown).
  • Timeout set to 1 minute.

supervisor-long-timeout

  • Handles the long-timeout queue.
  • Maximum of 3 processes.
  • It will start or terminate at most 1 process (balanceMaxShift) every 3 seconds (balanceCooldown).
  • Timeout set to 10 minutes.

It's a straightforward setup, but it gets the job done 😃.
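With the configuration in place, a single command starts every supervisor defined for the current environment, and a companion command is handy during deploys:

```shell
# Start all configured supervisors for the current environment
php artisan horizon

# Gracefully stop Horizon (e.g., at the end of a deploy);
# your process manager should then bring it back up
php artisan horizon:terminate
```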

To wrap it up...

Now our queue is configured and ready to handle a good number of jobs. But remember, before increasing the number of workers, make sure your server has enough resources available for your new workers.

The way I presented it, the queue itself won't cause issues for your server. If you're using Redis, however, what can take your application down is running out of memory, but that's a topic for another conversation.

Start small, increase a process here, another there, tweak a timeout as needed. Never go all-in with the maximum your server can handle. Gradually and with caution, you can find the perfect balance between background processing and the rest of your application.

Take care and until next time 😗🧀.
