Almost everyone has experienced this: you click a "Register / Submit" button, then stare at the screen waiting seconds — if not minutes — for a response, only to see a success message or, even worse, an error popping up.
It feels frustrating. It makes for a bad user experience, and honestly? Users will just go find a faster alternative.
In this article, we'll explain what's actually happening behind that loading spinner — and how you can better architect your solution to fix this once and for all.
Part 1 — The Problem (The Real One)
Let's make it concrete. Here's what a typical registration endpoint looks like in most Laravel apps:
public function register(Request $request)
{
    $user = User::create($request->validated());

    // Send welcome email
    Mail::to($user->email)->send(new WelcomeEmail($user));

    // Generate and store a welcome PDF
    $pdf = Pdf::loadView('pdfs.welcome', compact('user'));
    Storage::put("welcome/{$user->id}.pdf", $pdf->output());

    return response()->json(['message' => 'Registration successful!']);
}
Looks familiar, right?
The problem is that everything here runs sequentially, inside the same request. Laravel won't return that success response until every single line finishes executing — the email, the PDF, all of it.
So if your mail provider takes 2 seconds and the PDF takes another second, your user just waited 3+ seconds to see "Registration successful". And that's on a good day, with no timeouts or failures.
This pattern shows up everywhere:
- Sending emails or SMS after an action
- Calling third-party APIs (payment gateways, webhooks, analytics)
- Generating reports or exports
- Processing uploaded files or images
All of these have one thing in common — the user doesn't need to wait for them. They just need to know their action was received. The rest can happen in the background.
That's exactly what we're going to fix.
Part 2 — How Laravel Queues Work
Before jumping into code, let's build a quick mental model — I promise this is the only "theory" section.
When a user hits your endpoint, Laravel normally does everything in that same request lifecycle — sends the email, calls the API, generates the PDF — and only then returns a response. The user waits for all of it.
Queues flip that completely.
Instead of doing the heavy work immediately, you push a job onto a queue — think of it as a to-do list — and return the response right away. A separate process called a worker is running in the background, picking jobs off that list and executing them one by one, completely outside the user's request.
Three actors to keep in mind:
- Job — the class that contains the actual work (SendWelcomeEmail, GenerateInvoice...)
- Queue — the list where jobs wait (we'll use Redis, more on that in a second)
- Worker — the background process that pulls jobs from the queue and runs them (php artisan queue:work)
That's really it. The rest is just configuration and knowing which scenarios to apply this to — which is exactly what we'll cover next.
💡 Why Redis and not the database driver? Laravel supports multiple queue drivers — database, redis, sqs, and others. The database driver works fine for getting started, but Redis is faster and lighter on your DB, and it's what you'll realistically use in production. So we'll go with Redis from the start and skip the detour.
Part 3 — Setting Up Redis & Queue Config
Alright, enough theory — let's get our hands dirty.
Install Redis
On Ubuntu / your VPS:
sudo apt update
sudo apt install redis-server -y
sudo systemctl enable redis-server
sudo systemctl start redis-server
redis-cli ping # If you got PONG — you're good.
Find below the installation instructions for each platform:
https://redis.io/docs/latest/operate/oss_and_stack/install/archive/install-redis/
Install the Laravel Redis Package
Laravel needs one extra package to talk to Redis:
composer require predis/predis
Configure Your .env
Two changes in .env (the host settings below are just the defaults):
QUEUE_CONNECTION=redis   # switch to the redis driver
REDIS_CLIENT=predis      # use the predis package we installed
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
By default, Laravel uses sync as the queue driver — meaning jobs run immediately, synchronously, defeating the whole purpose. Switching to redis is what actually enables the background behavior.
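If you're curious what that driver switch points at, config/queue.php ships with a redis connection roughly like the following — a sketch of the stock defaults, which may vary slightly between Laravel versions. You normally don't need to touch it; the .env change above is enough.

```php
// config/queue.php — the stock redis connection (approximate defaults)
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,   // seconds before a stalled job is retried
    'block_for' => null,
],
```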
Quick Sanity Check
Before writing a single job, let's confirm everything is wired correctly. Run your worker:
php artisan queue:work
You should see something like:
INFO Processing jobs from the [default] queue.
No errors? Perfect. Leave that terminal open — that's your worker listening for jobs. Open a second terminal for the next steps.
💡 Heads up for production: Running queue:work manually is fine for local development, but on your server you need a process manager to keep it alive — if it crashes, your jobs just pile up with nobody processing them. We'll cover that with Supervisor in the Horizon section.
Part 4 — Your First Real Job
Remember that messy controller from Part 1? Let's start fixing it — one job at a time.
We'll tackle the welcome email first since it's the most common and the cleanest example to learn the pattern with.
php artisan make:job SendWelcomeEmail # Create the Job
This creates app/Jobs/SendWelcomeEmail.php. Open it up:
class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public User $user) {}

    public function handle(): void
    {
        Mail::to($this->user->email)->send(new WelcomeEmail($this->user));
    }
}
Two things to notice here:
- ShouldQueue — this interface is what tells Laravel "don't run this now, push it to the queue."
- SerializesModels — handles serializing and deserializing your Eloquent models automatically, so you can pass $user directly without worrying about it.
Dispatch It From Your Controller
Now go back to your controller and replace the Mail::to(...) line:
// Before
Mail::to($user->email)->send(new WelcomeEmail($user));
// After
SendWelcomeEmail::dispatch($user);
That's it — one line. Your controller doesn't care when or how the email gets sent anymore — it just says "handle this" and moves on.
Your registration endpoint now looks like this:
public function register(Request $request)
{
    $user = User::create($request->validated());

    SendWelcomeEmail::dispatch($user);

    return response()->json(['message' => 'Registration successful!']);
}
The response comes back instantly. The email is sent in the background by your worker.
💡 Want to delay a job? You can dispatch a job with a delay super easily:
SendWelcomeEmail::dispatch($user)->delay(now()->addMinutes(10));
Useful for things like "send a follow-up email 10 minutes after registration".
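One more dispatch variation worth knowing: if you create the user inside a database transaction, a fast worker can pick up the job before the transaction commits — and the user row won't exist yet. Laravel's standard API guards against this; a quick sketch:

```php
// Only push the job onto the queue once the surrounding
// database transaction has committed successfully:
SendWelcomeEmail::dispatch($user)->afterCommit();
```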
That's the core pattern — everything else you'll do with queues is just a variation of this. Create a job, move the logic into handle(), and dispatch it. Let's now apply this to three real-world scenarios you'll actually run into.
Part 5 — Real World Scenarios
Scenario A — Calling a Third-Party API (Slack, SMS, Webhooks)
Third-party APIs are the #1 culprit for slow responses. You have zero control over their response time — and they can fail.
php artisan make:job NotifyAdminOnSlack # create the job
class NotifyAdminOnSlack implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 10;

    public function __construct(public User $user) {}

    public function handle(): void
    {
        Http::post(config('services.slack.webhook'), [
            'text' => "New user registered: {$this->user->email}"
        ]);
    }
}
Notice $tries = 3 and $backoff = 10 — if Slack is down or slow, Laravel will automatically retry the job 3 times, waiting 10 seconds between each attempt. Your user never sees any of this.
Dispatch both jobs from your controller:
SendWelcomeEmail::dispatch($user);
NotifyAdminOnSlack::dispatch($user);
return response()->json(['message' => 'Registration successful!']);
Two background jobs, zero waiting.
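If you ever need those two jobs to run in order — say, the Slack ping should only fire after the welcome email actually went out — Laravel's job chaining covers it. A sketch using the jobs from this article:

```php
use Illuminate\Support\Facades\Bus;

// Runs sequentially: NotifyAdminOnSlack is only executed
// if SendWelcomeEmail completes without throwing.
Bus::chain([
    new SendWelcomeEmail($user),
    new NotifyAdminOnSlack($user),
])->dispatch();
```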
Scenario B — Generating a PDF or Report on Demand
This one is slightly different — the user actually needs the result, they just don't need to wait for it right now.
php artisan make:job GenerateWelcomePdf # create the job
class GenerateWelcomePdf implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public User $user) {}

    public function handle(): void
    {
        $pdf = Pdf::loadView('pdfs.welcome', [
            'user' => $this->user
        ]);

        Storage::put(
            "welcome/{$this->user->id}.pdf",
            $pdf->output()
        );

        // Notify the user it's ready
        $this->user->notify(new PdfReadyNotification());
    }
}
The pattern here is: generate → store → notify. The user gets an instant response on registration, and a notification (email, in-app, whatever you prefer) once their PDF is actually ready. Clean and professional.
Scenario C — Processing a Bulk CSV Import
This is where queues really shine. Importing 5,000 rows in a single request is a recipe for timeouts and misery.
Instead of one giant job, chunk your data into smaller jobs:
php artisan make:job ImportUserRow # create the job
class ImportUserRow implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public array $row) {}

    public function handle(): void
    {
        User::updateOrCreate(
            ['email' => $this->row['email']],
            [
                'name' => $this->row['name'],
                'phone' => $this->row['phone'],
            ]
        );
    }
}
Then, in your import controller, loop through the CSV and dispatch one job per row (or per chunk):
public function import(Request $request)
{
    // file() needs a path, not an UploadedFile instance
    $lines = array_map('str_getcsv', file($request->file('csv')->getRealPath()));

    // First line holds the column names: email,name,phone
    $headers = array_shift($lines);

    foreach ($lines as $line) {
        ImportUserRow::dispatch(array_combine($headers, $line));
    }

    return response()->json([
        'message' => 'Import started! We will notify you when it\'s done.'
    ]);
}
5,000 rows? 50,000 rows? Doesn't matter — your endpoint returns in milliseconds, and your workers chew through the data in the background at their own pace.
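One refinement before moving on: one job per row also means 50,000 queue entries for 50,000 rows. A common middle ground is one job per chunk of rows — sketched here with a hypothetical ImportUserChunk job that would loop over its rows inside handle(). The chunking itself is plain PHP:

```php
<?php
// Plain-PHP chunking: 5,000 rows at 500 rows per job = 10 jobs.
$rows = range(1, 5000);            // stand-in for parsed CSV rows
$chunks = array_chunk($rows, 500); // 500 rows per chunk

foreach ($chunks as $chunk) {
    // In the controller you would dispatch the hypothetical job:
    // ImportUserChunk::dispatch($chunk);
}

echo count($chunks); // 10
```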
💡 Need all jobs to finish before doing something? Laravel has Bus::batch() for exactly this — group jobs together, track their progress, and run a callback when they all complete. Worth a separate deep-dive on its own.
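As a quick taste, here's a minimal Bus::batch() sketch — assuming the jobs use Laravel's Batchable trait and you've created the batches table (php artisan queue:batches-table):

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

Bus::batch($jobs)
    ->then(function (Batch $batch) {
        // Every job finished successfully — notify the user here.
    })
    ->catch(function (Batch $batch, \Throwable $e) {
        // First job failure detected.
    })
    ->dispatch();
```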
Three scenarios, one pattern. Create the job, handle the work, dispatch and forget. Next up — what happens when things go wrong?
Part 6 — Handling Failures Like a Pro
Background jobs fail. Mail providers go down, APIs timeout, PDFs throw exceptions — it happens. The difference between a solid implementation and a fragile one is how you handle it when things go wrong.
Set Up the Failed Jobs Table
First, make sure you have the failed jobs table:
php artisan queue:failed-table # create the migration file
php artisan migrate
Laravel will now store any failed job in this table instead of silently dropping it — including the exception, the payload, and when it failed.
Retries, Timeouts & Backoff
You can control failure behavior directly on the job class:
class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;    // retry up to 3 times
    public int $timeout = 30; // kill the job if it runs longer than 30s
    public int $backoff = 15; // wait 15 seconds between retries

    // ...
}
Or if you want exponential backoff — waiting longer after each failed attempt:
public function backoff(): array
{
    return [10, 30, 60]; // 10s after 1st fail, 30s after 2nd, 60s after 3rd
}
This is great for flaky third-party APIs — instead of hammering them every 10 seconds, you give them room to recover.
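Related, and also standard Laravel API: instead of counting attempts, you can cap retries by time with retryUntil() — handy when "keep trying for 10 minutes" is easier to reason about than a fixed attempt count.

```php
// Keep retrying (respecting backoff) for up to 10 minutes
// after the job was first dispatched, then mark it failed:
public function retryUntil(): \DateTime
{
    return now()->addMinutes(10);
}
```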
The failed() Method
When a job exhausts all its retries, Laravel calls the failed() method on it — if you define one:
public function failed(\Throwable $exception): void
{
    // Notify yourself, log it, alert the user...
    Log::error("SendWelcomeEmail failed for user {$this->user->id}", [
        'error' => $exception->getMessage()
    ]);

    $this->user->notify(new EmailFailedNotification());
}
Never leave this empty on jobs that matter. Silently failing jobs are worse than crashing — at least a crash is loud.
Managing Failed Jobs via CLI
php artisan queue:failed # See all failed jobs
php artisan queue:retry 5 # Retry a specific failed job by its ID
php artisan queue:retry all # Retry all failed jobs at once
php artisan queue:forget 5 # Delete a specific failed job
php artisan queue:flush # Clear the entire failed jobs table
ShouldBeUnique — Prevent Duplicate Jobs
Sometimes the same job gets dispatched multiple times — user double-clicks, a webhook fires twice, whatever. For jobs where duplicates are a real problem, implement ShouldBeUnique:
class GenerateWelcomePdf implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $uniqueFor = 3600; // lock for 1 hour

    public function __construct(public User $user) {}

    public function uniqueId(): string
    {
        return (string) $this->user->id; // one job per user at a time
    }
}
Laravel will skip dispatching if a job with the same uniqueId() is already in the queue. Clean solution, zero extra code on the dispatch side.
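A related tool worth knowing: ShouldBeUnique stops duplicates from being queued at all, while Laravel's WithoutOverlapping job middleware lets duplicates queue up but prevents them from running at the same time — a sketch of how it's wired into a job class:

```php
use Illuminate\Queue\Middleware\WithoutOverlapping;

// Inside the job class: only one job with this key runs at a time;
// overlapping ones are released back onto the queue.
public function middleware(): array
{
    return [new WithoutOverlapping($this->user->id)];
}
```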
💡 Golden rule: always define $tries, $timeout, and failed() on any job that touches a third-party service or generates critical data. The 2 minutes it takes to add them will save you hours of debugging in production.
Part 7 — Laravel Horizon
queue:work gets the job done locally, but in production, you need visibility — which jobs are running, how long they're taking, what's failing, and whether your workers are keeping up. That's exactly what Horizon gives you.
Install Horizon
composer require laravel/horizon
php artisan horizon:install # install and create migration files
php artisan migrate
This publishes a config file at config/horizon.php and sets up the dashboard.
Basic Configuration
Open config/horizon.php. The part you care about most is environments:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
        ],
    ],

    'local' => [
        'supervisor-1' => [
            'maxProcesses' => 3,
        ],
    ],
],
Horizon uses supervisors to manage your workers internally — don't confuse these with the system-level Supervisor we'll set up in a moment. These are Horizon's own worker groups.
You can also assign specific jobs to specific queues and control priority:
'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['critical', 'default', 'low'],
    'balance' => 'auto',
    'maxProcesses' => 10,
],
Jobs on the critical queue get processed before default, which gets processed before low. Useful when you want invoice processing to always beat bulk imports.
Dispatch to a specific queue like this:
GenerateInvoice::dispatch($order)->onQueue('critical');
ImportUserRow::dispatch($row)->onQueue('low');
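If you're not running Horizon yet, a plain worker honors the same ordering via the --queue flag — queues listed first are drained first:

```shell
php artisan queue:work --queue=critical,default,low
```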
Protect the Dashboard
The Horizon dashboard runs at /horizon and exposes sensitive data — failed jobs, job payloads, throughput. Lock it down:
// app/Providers/HorizonServiceProvider.php
protected function gate(): void
{
    Gate::define('viewHorizon', function ($user) {
        return in_array($user->email, [
            'you@yourdomain.com',
        ]);
    });
}
Only whitelisted emails can access the dashboard in production. Simple, effective.
Reading the Dashboard
Once Horizon is running (php artisan horizon), head to /horizon:
- Throughput — how many jobs per minute your workers are processing
- Runtime — average execution time per job class — if something spikes here, that's your bottleneck
- Wait time — how long jobs sit in the queue before a worker picks them up — if this grows, you need more workers
- Failed jobs — everything that broke, with the full exception and payload right there in the UI — no more digging through logs
This alone is worth installing Horizon for.
Keep Horizon Running with Supervisor
On your server, you need Supervisor to keep Horizon alive. If it crashes or the server restarts, Supervisor brings it back automatically.
sudo apt install supervisor -y # Install Supervisor
sudo nano /etc/supervisor/conf.d/horizon.conf # Create a config file
# horizon.conf
[program:horizon]
process_name=%(program_name)s
command=php /var/www/yourapp/artisan horizon
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/yourapp/storage/logs/horizon.log
stopwaitsecs=3600
Apply and start:
sudo supervisorctl reread # reload the config
sudo supervisorctl update
sudo supervisorctl start horizon
One last thing — whenever you deploy new code, restart Horizon gracefully so it picks up the changes without dropping running jobs:
php artisan horizon:terminate # run manually or put in CI/CD pipeline
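In practice, a deploy script ends up looking something like this — a sketch, with paths and steps that will vary per project:

```shell
#!/usr/bin/env bash
set -e

cd /var/www/yourapp   # adjust to your app path
git pull origin main
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan config:cache

# Gracefully restart Horizon so workers pick up the new code
php artisan horizon:terminate
```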
Part 8 — Production Checklist
Before you ship, run through this quickly:
Redis
- [ ] Redis is installed and running on your server (redis-cli ping returns PONG)
- [ ] QUEUE_CONNECTION=redis in your production .env
- [ ] Redis password is set if your server is exposed
Jobs
- [ ] Every job that touches a third-party service has $tries, $timeout, and backoff() defined
- [ ] Critical jobs implement failed() and notify you when they break
- [ ] Duplicate-sensitive jobs implement ShouldBeUnique
- [ ] Failed jobs table migrated (queue:failed-table)
Horizon
- [ ] Dashboard protected with gate() — not open to the public
- [ ] Queue priorities configured (critical, default, low)
- [ ] maxProcesses tuned to your server's capacity
Supervisor
- [ ] Horizon running under Supervisor (supervisorctl status horizon shows RUNNING)
- [ ] horizon:terminate added to your deploy script
- [ ] Horizon logs are accessible at storage/logs/horizon.log
Sanity Check
- [ ] Trigger a job locally and confirm the worker picks it up
- [ ] Intentionally fail a job and confirm it shows up in /horizon and queue:failed
- [ ] Check wait times in Horizon after your first real traffic — scale workers if needed
Part 9 — Conclusion
Let's go back to where we started — a user clicking "Register" and staring at a spinner for several seconds.
With everything we've set up, here's what that same flow looks like now:
User clicks Register
→ Controller creates the user
→ Dispatches 3 jobs to Redis (takes ~2ms)
→ Returns "Registration successful" instantly ✓
Meanwhile, in the background:
→ Worker picks up SendWelcomeEmail → email sent
→ Worker picks up NotifyAdminOnSlack → Slack notified
→ Worker picks up GenerateWelcomePdf → PDF stored, user notified
The user is already on the dashboard while your workers are still doing the heavy lifting. That's the difference.
To recap what we covered:
- Why synchronous code kills user experience and where it hides in typical Laravel apps
- How the queue / job / worker model works under the hood
- Setting up Redis and wiring it to Laravel in minutes
- Converting slow controller logic into clean, dispatchable jobs
- Three real-world scenarios — API calls, PDF generation, bulk imports
- Handling failures properly with retries, backoff, and failed()
- Monitoring everything in production with Laravel Horizon and Supervisor
Queues aren't an advanced topic — they're a fundamental tool, and once you get comfortable with the pattern, you'll start seeing opportunities to use them everywhere.
What's Next?
If you want to go deeper, here's where to go from here:
- Job Batching with Bus::batch() — group related jobs, track progress, run callbacks on completion
- Broadcasting job progress — combine queues with Laravel Reverb to show a real-time progress bar in your UI
- Horizontal scaling — running multiple workers across multiple servers with Redis as the shared backbone
🔗 Stay Connected
Follow me for more Laravel tutorials, dev tips, deployment workflows and solving real-world production headaches.
- Follow me on LinkedIn
- Follow me here on Medium and join my mailing list
- Follow me here on Dev.to for more in-depth content and tutorials!
Found this article useful?
🙏 Show your support by clapping 👏, subscribing 🔔, sharing to social networks