- Book: PHP to TypeScript
- Also by me: The TypeScript Library — the 5-book collection
- My project: Hermes IDE | GitHub — an IDE for developers who ship with Claude Code and other AI coding tools
- Me: xgabriel.com | GitHub
php artisan queue:work boots a long-running PHP process that pulls jobs out of Redis. Open /horizon in your browser and there it all is: throughput, runtime per queue, failures with the payload, retry button. The whole queue story is one Laravel package the framework already knew how to find.
Open the TypeScript project next to it and the shape changes. There is no queue:work. There is a bullmq dependency, a worker.ts booted by node, and a producer.ts the HTTP server imports. There is no dashboard at /horizon because there is no framework owning a route called /horizon. Bull Board exists, but you mount it yourself, behind whatever auth you wire up.
The reflex is to ask which package is "the same as Horizon." There isn't one. BullMQ is closer to Ruby's Sidekiq. Redis is the queue. BullMQ is the SDK that talks to it. The framework is none of the above's concern.
## What Horizon Actually Is (and why BullMQ looks like Sidekiq)
Laravel's queue is a contract over a backend driver: Redis, SQS, database, Beanstalk. queue:work boots a long-running PHP process that reads jobs and dispatches them through the framework's container, so resolved dependencies, events, and middleware behave the way they would during an HTTP request.
Horizon adds a dashboard, a configuration file, and a process supervisor on top. Open config/horizon.php and you see something like:
```php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'emails', 'reports'],
            'balance' => 'auto',
            'autoScalingStrategy' => 'time',
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'tries' => 3,
            'timeout' => 60,
        ],
    ],
],
```
balance: auto earns its keep. The Horizon docs on balancing strategies spell it out: the auto strategy adjusts worker processes per queue based on current workload. If your notifications queue has 1,000 pending jobs while your render queue is empty, Horizon allocates more workers to notifications until it drains. The supervisor watches each queue's wait-time-to-complete (or job count, if you switch autoScalingStrategy to size) and shifts processes around inside the configured min/max bounds.
That config is doing a lot of work. The framework owns:
- The runtime. Horizon spawns and supervises the child PHP processes.
- The routing. Which queues get how many workers.
- The dashboard. `/horizon` ships with the package.
- The retry semantics. `tries`, `backoff`, `timeout` are read from the job class.
- The deploy story. `php artisan horizon:terminate` is the signal-and-restart loop.
Drop into a TypeScript project and those become five separate decisions.
The honest comparison: BullMQ is a producer/worker SDK that talks to Redis. Sidekiq in Ruby is the same shape. You start bin/sidekiq processes that pull from Redis, and your web app writes jobs into Redis when it needs work done. There is no framework-owned supervisor. There is no /sidekiq mounted by default. You mount the Web UI yourself behind your own auth.
Horizon collapsed that into one object because Laravel could afford to. The framework already owned bootstrapping, config, routing, and the supervisor. Node has none of those by default. Express, Fastify, Hono, Nest, the Bun HTTP server: they handle requests. None of them claim to be your queue runtime too.
So when you reach for "the same thing in TS" and find BullMQ, the surprise is the surface area. Everything Horizon was doing under one roof is now four or five separate, smaller decisions.
## A Welcome Email, Both Sides
Here is the same job in both worlds. Start with Laravel.
```php
// app/Jobs/SendWelcomeEmail.php
namespace App\Jobs;

use App\Mail\WelcomeMail;
use App\Models\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Mail;

class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 60;
    public int $timeout = 30;

    public function __construct(public User $user) {}

    public function handle(): void
    {
        Mail::to($this->user->email)->send(new WelcomeMail($this->user));
    }
}

// somewhere in your controller
SendWelcomeEmail::dispatch($user)->onQueue('emails');
```
The job is one class. tries and backoff are properties on it. The dispatcher is a static facade. The framework knows how to serialize the User, find the emails queue from config/horizon.php, and route the job to a worker process Horizon already started for that queue. Run php artisan horizon, refresh /horizon, see it land.
Now the same thing in TypeScript with BullMQ.
```typescript
// src/queues/emails.ts — the producer side
import { Queue } from 'bullmq'
import IORedis from 'ioredis'

export const connection = new IORedis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,
})

export type WelcomeEmailJob = {
  userId: string
  email: string
}

export const emailsQueue = new Queue<WelcomeEmailJob>('emails', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 60_000 },
    removeOnComplete: { age: 3600, count: 1000 },
    removeOnFail: { age: 24 * 3600 },
  },
})
```
```typescript
// src/queues/emails.worker.ts — the consumer side
import { Worker } from 'bullmq'
import { connection, type WelcomeEmailJob } from './emails'
import { sendWelcomeMail } from '../mail/welcome'

const worker = new Worker<WelcomeEmailJob>(
  'emails',
  async (job) => {
    await sendWelcomeMail(job.data.email, job.data.userId)
  },
  {
    connection,
    concurrency: 10,
    lockDuration: 30_000,
  },
)

worker.on('failed', (job, err) => {
  console.error(`[emails] job ${job?.id} failed`, err)
})
```
```typescript
// somewhere in your HTTP handler
import { emailsQueue } from '../queues/emails'

await emailsQueue.add('send-welcome', {
  userId: user.id,
  email: user.email,
})
```
One thing the BullMQ version makes explicit that Laravel hides: Laravel re-hydrates the User model on the worker via SerializesModels. BullMQ doesn't. The wire format is JSON, so you pass IDs and let the worker re-fetch. That is the trade for cross-runtime portability.
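The pass-IDs-and-re-fetch pattern can be sketched as a plain function, independent of BullMQ. Here `findUserById` and `sendWelcomeMail` are hypothetical stand-ins for your data layer and mailer, injected as parameters so the processor stays testable without Redis:

```typescript
// Hedged sketch of the pass-IDs-and-re-fetch pattern. findUserById and
// sendWelcomeMail are hypothetical stand-ins for your DB layer and mailer.
type WelcomeEmailJob = { userId: string; email: string }
type User = { id: string; email: string }

export async function processWelcomeEmail(
  data: WelcomeEmailJob,
  findUserById: (id: string) => Promise<User | null>,
  sendWelcomeMail: (email: string, userId: string) => Promise<void>,
): Promise<void> {
  // Re-hydrate from the database; the email in the payload may be stale.
  const user = await findUserById(data.userId)
  if (!user) return // user deleted between enqueue and processing: skip, don't throw
  await sendWelcomeMail(user.email, user.id)
}
```

Skipping silently when the row is gone is one policy among several; the point is that the decision now lives in your code, not in a `SerializesModels` trait.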
A few other shifts. The Redis connection is a value you import. The job's data type is a generic on the queue. The retry policy lives on defaultJobOptions (or the per-add call) instead of as a property on a class. The worker is a separate file booted by a separate process: node dist/queues/emails.worker.js in production. The HTTP server has the producer; the standalone process has the worker. Two artifacts.
Wordier than Laravel's three lines. Less framework intermediation. The job's name is a string the producer chose. The worker dispatches by queue name, so renaming a job class never breaks a worker already deployed.
## The Producer/Worker Model
The piece PHP devs trip on first is that the worker is not the same process as the web server. In Laravel that is also technically true (queue:work is its own process), but the framework hides the seam. You still write one codebase, one routes/web.php, one set of jobs under app/Jobs/. Horizon ties them together with shared config.
BullMQ does not hide the seam. The producer calls queue.add(...), usually inside the HTTP server. The worker calls new Worker(...) in its own Node process. That worker might live on a different machine, scale to ten replicas, or run from the same Docker image as the web server invoked with a different command.
What ties them together is the queue name in Redis. Both sides hit the same Redis URL, both sides agree on the string "emails", and that is the contract. You can rewrite the producer in Bun and the worker in Node and they will keep cooperating, because the wire format is what BullMQ writes to Redis, not anything specific to the runtime on either end.
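Since the queue name plus the JSON shape is the whole contract, one way to keep both sides honest is a shared module with a runtime guard at the worker boundary. A minimal sketch (the guard function and its name are my own, not a BullMQ API):

```typescript
// Hedged sketch: the producer/worker contract is a queue name and a JSON
// shape, so keep both in one shared module and verify the shape on arrival.
export const EMAILS_QUEUE = 'emails'

export type WelcomeEmailJob = { userId: string; email: string }

// Runtime guard the worker can run on job.data before processing,
// catching payload drift between independently deployed producer and worker.
export function isWelcomeEmailJob(data: unknown): data is WelcomeEmailJob {
  if (typeof data !== 'object' || data === null) return false
  const d = data as Record<string, unknown>
  return typeof d.userId === 'string' && typeof d.email === 'string'
}
```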
A typical project layout that makes this clean:
```
src/
  queues/
    connection.ts       // shared ioredis client
    emails.ts           // Queue + types + addJob helpers
    emails.worker.ts    // Worker, started as its own process
    reports.ts
    reports.worker.ts
  server.ts             // imports emails.ts, calls .add()
```
server.ts and the workers are different entrypoints in package.json:
```json
{
  "scripts": {
    "start:web": "node dist/server.js",
    "start:worker:emails": "node dist/queues/emails.worker.js",
    "start:worker:reports": "node dist/queues/reports.worker.js"
  }
}
```
In Kubernetes, that is two or three Deployments behind one image. In Fly.io, multiple [processes] entries in fly.toml. In Docker Compose, three services. You decide how many workers, in what shape, on what machine. Horizon decided that for you on a single host inside its supervisor.
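For the Compose case, a minimal sketch of the shape — the service names, image tag, and bundled Redis service are assumptions, not a prescribed layout:

```yaml
# Hypothetical docker-compose.yml: one image, a different command per service.
services:
  web:
    image: myapp:latest            # assumed image name
    command: node dist/server.js
  worker-emails:
    image: myapp:latest
    command: node dist/queues/emails.worker.js
  worker-reports:
    image: myapp:latest
    command: node dist/queues/reports.worker.js
  redis:
    image: redis:7
```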
## Retries, Scheduling, and Crons
Both sides ship retries. Horizon reads $tries and $backoff off the job class. If $backoff = [10, 30, 60], it retries after 10s, then 30s, then 60s. BullMQ's retry docs describe the built-in exponential strategy: backoff: { type: 'exponential', delay: 1000 } retries after (2 ** attemptsMade - 1) * delay milliseconds. With delay: 1000 and attemptsMade going 1, 2, 3, 4, that lands at 1s, 3s, 7s, 15s — not the 1s/2s/4s/8s a "doubling each step" reading would suggest. There is also a fixed strategy and a jitter option to randomize the delay so a thundering herd does not retry in lockstep.
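Tabled out, that formula (as described above) is a one-liner, which makes the non-obvious schedule easy to verify:

```typescript
// Sketch of the exponential backoff formula described above:
// (2 ** attemptsMade - 1) * delay milliseconds before the next attempt.
function exponentialBackoffMs(attemptsMade: number, delayMs: number): number {
  return (2 ** attemptsMade - 1) * delayMs
}

// With delay: 1000, the first four retries wait 1s, 3s, 7s, 15s.
const schedule = [1, 2, 3, 4].map((n) => exponentialBackoffMs(n, 1000))
```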
Per-job overrides:
```typescript
await emailsQueue.add(
  'send-welcome',
  { userId: user.id, email: user.email },
  {
    attempts: 5,
    backoff: { type: 'exponential', delay: 2_000, jitter: 0.5 },
  },
)
```
That call sets attempts and a jittered exponential backoff for this one job and overrides the queue's defaults. In Laravel, the closest equivalent is $tries and $backoff properties on the job class, plus per-dispatch overrides via the pending dispatch chain. (retryUntil() exists too, but it bounds retries by wall-clock time rather than swapping the per-attempt backoff.) Both shapes work. BullMQ's is a plain options object on the call site; configuration is data passed to functions, not state on classes.
When attempts run out, the job moves to the failed set. Bull Board shows it there with the error stack trace. The failed job stays in Redis according to your removeOnFail policy: { age: 24 * 3600 } keeps it for a day so an on-call engineer can inspect or retry. Horizon's dashboard does the same, with similar retention, except Laravel's defaults are a failed_jobs table in your DB unless you switch to Redis storage.
Scheduling is the other side of the same coin. In Laravel, you have two stories. dispatch()->delay(now()->addMinutes(15)) for one-shot delayed jobs, and app/Console/Kernel.php for cron — $schedule->job(new ProcessAnalytics)->dailyAt('02:00'). The cron path leans on php artisan schedule:work (or a system cron entry calling schedule:run). Horizon shows the resulting jobs but does not own the scheduling itself.
BullMQ has both, in one place. Delayed jobs are an option on add:
```typescript
await emailsQueue.add(
  'send-welcome',
  { userId: user.id, email: user.email },
  { delay: 15 * 60 * 1000 },
)
```
Cron-style repeating jobs use the Job Scheduler API, the supported successor to the older repeatable jobs API:
```typescript
await emailsQueue.upsertJobScheduler(
  'daily-digest',
  { pattern: '0 2 * * *', tz: 'Europe/Lisbon' },
  {
    name: 'send-digest',
    data: { kind: 'daily' },
  },
)
```
upsertJobScheduler registers a recurring job under a stable key. If it exists, the call updates it instead of duplicating. That fixes the failure mode the older repeatable-jobs API was known for: duplicate cron jobs after a deploy. The scheduler API is idempotent on the key, which is the right shape for migrations.
The catch the docs flag, and the one that produces 3 a.m. pages: the scheduler runs inside the Worker process. As long as one worker is running for a queue, that queue's scheduled jobs fire. As long as zero workers are running, the schedule pauses. If you scale your worker Deployment to zero overnight, your 02:00 cron does not run. The schedule lives inside Redis and inside whichever workers read from it, not in a server-host crontab.
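A common safeguard for that gotcha — a pattern, not a BullMQ API — is a dead-man's switch: the scheduled job writes a heartbeat timestamp, and something outside the worker fleet checks it. The function name and grace factor here are assumptions:

```typescript
// Hedged sketch of a dead-man's-switch check. lastRunMs is a heartbeat the
// scheduled job writes on each successful run; a monitor outside the worker
// fleet calls this, so a scaled-to-zero worker Deployment still gets noticed.
export function cronLooksStalled(
  lastRunMs: number,
  nowMs: number,
  intervalMs: number, // expected spacing, e.g. 24h for a daily job
  graceFactor = 0.25, // tolerate 25% slack before paging
): boolean {
  return nowMs - lastRunMs > intervalMs * (1 + graceFactor)
}
```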
## A Dashboard, Mounted by You — and the Process Model Underneath
Horizon ships its dashboard. BullMQ does not. The community's answer is Bull Board (on the v7 line at time of writing), and you mount it onto your HTTP server yourself.
```typescript
// src/server.ts (Express example)
import express from 'express'
import { createBullBoard } from '@bull-board/api'
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter'
import { ExpressAdapter } from '@bull-board/express'
import { emailsQueue } from './queues/emails'
import { reportsQueue } from './queues/reports'

const app = express()

const serverAdapter = new ExpressAdapter()
serverAdapter.setBasePath('/admin/queues')

createBullBoard({
  queues: [
    new BullMQAdapter(emailsQueue),
    new BullMQAdapter(reportsQueue),
  ],
  serverAdapter,
})

app.use(
  '/admin/queues',
  requireAdminAuth, // your middleware
  serverAdapter.getRouter(),
)
```
You pick the path. You pick the auth. You pick which queues are visible, which is useful when a queue has PII in the payload that you do not want surfaced to support engineers. Bull Board ships adapters for Fastify, Koa, Hapi, and Nest with the same shape. Pick the adapter for whichever HTTP server you run.
The first reaction from a Horizon dev is "this is more boilerplate." It is. The second reaction, after the first time someone in your org asks "can we restrict the dashboard to ops, hide the SSN field, and put it behind our SSO?", is "the dashboard is yours and it does what you tell it to."
For something closer to Horizon's metric-charts feel, the BullMQ team also ships a paid dashboard called Taskforce. Free option: Bull Board. Paid option with multi-org and richer analytics: Taskforce.
Below the dashboard sits the process model, and this is the section that catches Horizon devs out. Same words, different shapes.
Where workers run. Horizon supervises children on one host. Across two hosts, you run php artisan horizon on each. BullMQ has nothing to supervise. Boot N copies of worker.ts wherever you want and let Redis stitch them together. That makes BullMQ a more comfortable fit for container orchestration where the orchestrator already supervises processes.
Autoscaling. Horizon's balance: auto moves processes between queues based on workload. BullMQ has no equivalent because BullMQ is not a supervisor. The TS answer is the orchestrator's autoscaler: Kubernetes HPA on queue depth via a custom metric, KEDA's BullMQ scaler, or Fly's autoscaler hitting an endpoint that reads await emailsQueue.getWaiting(). You write the metric; the orchestrator scales the worker Deployment. Annoying to set up once, and exactly what you want at scale, because it is the same scaler that handles your web Deployment.
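The scaling decision itself is small. A sketch of the target-per-worker math an HPA or KEDA-style scaler effectively applies, with Horizon-like min/max bounds — the function and parameter names are my own:

```typescript
// Hedged sketch of the scaling math an autoscaler applies to queue depth.
// targetPerWorker is the backlog you want each worker to absorb; min/max
// mirror Horizon's minProcesses/maxProcesses bounds.
export function desiredWorkers(
  waiting: number, // e.g. from await emailsQueue.getWaiting() or a depth metric
  targetPerWorker: number,
  min: number,
  max: number,
): number {
  const raw = Math.ceil(waiting / targetPerWorker)
  return Math.min(max, Math.max(min, raw))
}
```

You expose `waiting` as a metric; the orchestrator runs the equivalent of this function on its scrape interval and resizes the worker Deployment.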
Tenant isolation. Horizon routes by queue name and supervisor. BullMQ does the same trick: one Queue per tenant, or one queue with a tenant ID in the data plus a worker that rate-limits per tenant. BullMQ's Worker accepts a limiter option, and the rate-limiting docs cover dynamic limits per group key. You build tenant isolation on the primitives.
Failure storage. Horizon writes failed-job records into Redis (when configured) or your DB. BullMQ writes them into Redis sets per queue. Both inspectable, both retryable, retention configurable. Same idea, different keys.
Deploy semantics. Horizon expects php artisan horizon:terminate on deploy so the running supervisor drains and exits. The TS equivalent is worker.close() on SIGTERM, which Kubernetes already sends during a rolling deploy. Wire the signal handler once:
```typescript
process.on('SIGTERM', async () => {
  await worker.close()
  await connection.quit()
  process.exit(0)
})
```
That handler is your graceful-drain story. Same idea as Horizon's terminate command, spelled out at the process boundary instead of behind an artisan call.
## Forward Motion
In Laravel, the framework runs the queue. Horizon owns the supervisor, the dashboard, the routing, the deploy command, the retry semantics. The queue is something Laravel operates.
In TypeScript with BullMQ, Redis is the queue, and BullMQ is the SDK that talks to it. The supervisor is your container orchestrator. The dashboard is something you mount on your HTTP server. The retry semantics are options on a queue. The deploy command is whatever your orchestrator already does. The framework is gone from the picture.
That is not a regression. It is the shape Sidekiq has had for a decade in the Ruby world, and the shape Resque had before it. Horizon was the unusual case, a framework integration deep enough that the queue felt like a Laravel feature. Most queue libraries look like BullMQ.
The trade, in both directions:
- You give up: the one-command dashboard, the auto-balance supervisor, the `dispatch()` ergonomics, framework introspection of job classes for retry config.
- You gain: producers and workers that scale independently in your orchestration story; a dashboard that is yours to lock down; a wire format any language can write into; per-call configuration that lives in code, not in a `config/horizon.php` file you forgot to redeploy.
So if you are migrating a Laravel + Horizon system to TypeScript, do not start with "find the BullMQ class that maps to my Laravel job class." That fights the runtime the whole way. Start with the wire shape: what jobs do I enqueue, with what payload, with what retry and timeout policy. That is the BullMQ contract. Workers and producers fall out of it.
Lay out the project as two entrypoints from day one, the HTTP server and at least one worker. Pick a Redis. Mount Bull Board behind your auth. Wire SIGTERM to worker.close(). Let your orchestrator be the supervisor.
Stop looking for Horizon. Start running Redis.
If this reframing landed, PHP to TypeScript is the book it came from. The async-and-background-work chapter walks the Horizon-to-BullMQ migration end to end, including the producer/worker split, the retry-and-backoff translation, scheduler-vs-cron semantics, and what the Kubernetes deployment looks like when the orchestrator is your supervisor. There are also chapters on the sync-to-async paradigm, generics, and discriminated unions for the bits Laravel did not prepare you for.
It is one of five books in The TypeScript Library:
- TypeScript Essentials — entry point. Types, narrowing, modules, async, daily-driver tooling.
- The TypeScript Type System — deep dive. Generics, mapped/conditional types, infer, template literals, branded types.
- Kotlin and Java to TypeScript — bridge for JVM developers. Variance, null safety, sealed→unions, coroutines→async/await.
- PHP to TypeScript — bridge for PHP 8+ developers. Sync→async paradigm, generics, discriminated unions.
- TypeScript in Production — production layer. tsconfig, build tools, monorepos, library authoring, dual ESM/CJS, JSR.
Books 1 and 2 are the core path. Books 3 and 4 substitute for them if you speak JVM or PHP. Book 5 is for anyone shipping TS at work.
All five books ship in ebook, paperback, and hardcover.
