Scheduled tasks are easy—until they aren’t. The first time an invoice isn’t sent, a sync silently stops, or a report runs twice and crashes your server, you realize scheduled work isn’t just "ops trivia." It’s a product risk.
For a long time, I treated scheduling as a server concern by adding lines to a crontab. But that led to major pain points:
- Tasks were configured outside the codebase (not versioned or reviewable).
- Differences between staging and production ("it works on my server").
- Overlapping jobs because a previous run didn't finish.
- Double executions when the app scaled to multiple instances.
Laravel’s Scheduler solves the real problem: governance. It turns scheduling into something you can read, review, deploy, and reason about.
The core idea: the server triggers, Laravel orchestrates
In production, you only need one cron entry on your server:
run `php artisan schedule:run` every minute.
That’s it.
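In a crontab, that single entry looks like this (the project path is whatever your deployment uses; `/var/www/app` below is a placeholder):

```shell
# Trigger Laravel's scheduler every minute; Laravel decides what is actually due.
* * * * * cd /var/www/app && php artisan schedule:run >> /dev/null 2>&1
```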
From this point, Laravel decides which tasks are due. Your schedule becomes part of your application code, shifting it from a "mystery ops config" to versioned application behavior.
Why I prefer Laravel Scheduler over “pure crontab”
I’m not anti-cron. Cron is great at one thing: triggering commands at a regular cadence.
The issues start when cron becomes the place where business-critical workflows live. Because then you’re maintaining system behavior in a place that:
- is not part of pull requests
- can’t be code-reviewed the same way
- varies between environments
- becomes messy over time (and nobody wants to touch it)
Laravel Scheduler fixes that by letting me express scheduling as intent, not cron syntax.
Instead of thinking “what is the crontab line for weekdays at 02:00?”, I can encode the intent directly:
- every day at 2 AM
- in the correct timezone
- never overlap
- only run once even with multiple servers
- keep output for auditing
My “production defaults”: make tasks safe by design
Almost every important scheduled task in my projects includes two safety guarantees:
1) No overlaps
If a job is still running, the next scheduled tick should not start another copy.
That’s what withoutOverlapping() gives you.
Overlaps cause the most annoying class of bugs: duplicates. Duplicate emails. Duplicate invoices. Duplicate exports. Duplicate API calls. Duplicate side effects.
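As a sketch (the `emails:send` command name is made up for illustration), the guard is a single chained call; the optional argument is the lock expiry in minutes, a fallback in case a run dies without releasing its lock:

```php
use Illuminate\Support\Facades\Schedule;

// Skip this tick entirely if the previous run is still going.
// The lock expires after 10 minutes in case a run crashes mid-flight.
Schedule::command('emails:send')
    ->everyMinute()
    ->withoutOverlapping(10);
```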
2) One execution across multiple servers
When an app scales horizontally, cron runs on every instance by default.
Without protection, the same scheduled task can run N times.
That’s what onOneServer() is for.
If you’ve ever scaled to two servers and suddenly saw doubled notifications… you only need that incident once to adopt `onOneServer()` forever.
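One caveat from the Laravel docs: `onOneServer()` coordinates through a cache lock, so all servers must share the same central cache store (`redis`, `memcached`, `database`, or `dynamodb`). A minimal sketch, with a hypothetical `notifications:digest` command:

```php
use Illuminate\Support\Facades\Schedule;

// Only the first server to obtain the cache lock runs the task this hour.
Schedule::command('notifications:digest')
    ->hourly()
    ->onOneServer();
```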
A concrete example (Carbon-friendly snippet)
Let’s say you generate a daily report at 2 AM:
```php
<?php

use Illuminate\Support\Facades\Schedule;

Schedule::command('reports:daily')
    ->dailyAt('02:00')
    ->timezone('Europe/Paris')
    ->onOneServer()
    ->withoutOverlapping()
    ->sendOutputTo(storage_path('logs/schedule-reports.log'));
```
What I like about this code is that it reads like a checklist of business intent:
- Daily at 02:00
- Correct timezone
- Single run across instances
- No overlap
- Output preserved for auditing
Observability: don’t “trust” schedules—prove they run
A scheduled task that fails silently is worse than one that fails loudly.
So I treat scheduled work like I treat anything business-critical: it needs traceability.
At minimum, I want:
- output persisted somewhere (`sendOutputTo`, `appendOutputTo`, etc.)
- a way to audit the configured schedule (`php artisan schedule:list`)
- visibility in logs/monitoring when something breaks
If the task is critical (payments, invoices, notifications), I go a step further and connect failures to alerts.
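The scheduler has hooks for exactly this. As a sketch (the `invoices:send` command, the log channel, and the message are illustrative), you can react to a failed run directly on the task definition:

```php
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Schedule;

Schedule::command('invoices:send')
    ->dailyAt('06:00')
    ->onFailure(function () {
        // Wire this up to your alerting channel (Slack, PagerDuty, ...).
        Log::critical('Scheduled task invoices:send failed');
    });
```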
The point is not “more tooling”. The point is shorter time to detect.
The scheduler is an orchestrator, not a worker
Here’s a rule that saved me multiple times:
The scheduler should trigger work, not be the work.
If something can be slow, fragile, or dependent on external services, I don’t want it to run as one long synchronous command inside schedule:run.
Instead, I schedule a command that dispatches a job:
- the schedule stays quick and predictable
- the heavy work runs in the queue
- retries and failures are handled properly
- monitoring becomes easier
This is also how you avoid minute-based drift when tasks take longer than expected.
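In practice this can be as small as scheduling a queued job directly; `GenerateDailyReport` below is a hypothetical job class, and `'reports'` an assumed queue name:

```php
use App\Jobs\GenerateDailyReport;
use Illuminate\Support\Facades\Schedule;

// schedule:run only dispatches; the queue worker does the heavy lifting,
// with retries and failure handling managed by the queue.
Schedule::job(new GenerateDailyReport, 'reports')->dailyAt('02:00');
```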
The “SQL vs PHP” equivalent in scheduling
I use a similar separation of concerns as with data transformations:
- cron/Laravel schedule: when
- job/command: what
- queue workers: how it executes reliably
When those responsibilities are mixed, maintenance becomes painful.
Common pitfalls (that look fine until production)
I’ve seen these issues repeatedly:
“It works locally but not in prod”
Often the schedule is correct, but the server isn’t actually triggering it (missing cron entry, wrong PHP path, wrong user).
Overlaps
The task runs every minute, but takes 2 minutes. Now you have two copies running. Then three.
Multi-instance duplicates
Scaling from 1 to 2 servers doubles everything—emails, webhooks, cleanup jobs.
No logs, no audit trail
You’re guessing whether it ran. Guessing is not a strategy.
A quick production checklist
If I’m shipping scheduled tasks, I want to answer these questions:
- Is there exactly one trigger cron on the server?
- Can this task overlap? If yes, how do I prevent it?
- Can this task run on multiple servers? If yes, how do I enforce single execution?
- Where do I see output?
- If it fails, how do I find out quickly?
- Should this be a queued job instead?
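Two artisan commands help answer the audit questions without reading code (both ship with Laravel):

```shell
php artisan schedule:list   # show every scheduled task and when it runs next
php artisan schedule:test   # pick one task and run it immediately, in isolation
```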
Final thought
Laravel Scheduler doesn’t just “schedule tasks”.
It gives you a way to treat scheduled work like real application behavior:
- versioned
- reviewable
- predictable
- safer in production
- easier to audit and troubleshoot
And once you’ve dealt with a silent failure or duplicated side effects in production, that shift is worth a lot.