Everyone is building AI agents. Almost nobody is making them run without a human pressing "go."
Scroll through any developer community right now and you will find posts about agents that work night shifts, manage 18-tool workflows, and write code while their creators sleep. But here is the thing nobody talks about: the hard part is not what these agents do. It is getting them to decide when to act.
If your agent only runs when you type a message, you have built a chatbot with extra steps. The real unlock is scheduling -- making your agent show up at the right time, for the right reason, without you in the loop.
There are exactly three patterns for this. Most developers only know the first one. Here is how all three work, when to use each, and how to combine them.
The Problem with Manual Agents
Most AI agents today are reactive. They sit idle until someone sends a prompt. That is fine for answering questions, but it falls apart the moment you need an agent to do real, recurring work.
Consider a daily briefing agent. It pulls your emails, calendar, and open PRs, then gives you a summary every morning. Useful -- except you have to remember to run it every morning. At that point, a cron job piping curl output to your terminal is more reliable.
The value of an AI agent is not just intelligence. It is intelligence applied at the right time, automatically. The shift is from "agent as tool" to "agent as employee." Employees do not wait for you to poke them every morning. They show up on schedule, respond to events, and use judgment about when to act.
Three patterns make this possible: time-based scheduling, event-based triggers, and condition-based smart triggers.
Pattern 1: Time-Based Scheduling (Cron)
The simplest and most familiar pattern. Your agent runs at fixed intervals -- every hour, every morning at 9am, every Friday afternoon.
When to use it: Any task with a predictable cadence. Daily standups, weekly reports, morning email triage, nightly data syncs.
How it works: A cron expression defines the schedule. When the clock hits the specified time, the system injects a prompt into the agent and kicks off execution. The agent does its work and goes back to sleep.
Here is what common cron schedules look like:
# Every weekday at 8:30am Eastern
30 8 * * 1-5
# Every hour on the hour
0 * * * *
# Every Monday at 9am
0 9 * * 1
# First day of each month at 6am
0 6 1 * *
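The matching behind these expressions is simple enough to sketch. Here is a minimal, illustrative matcher in Python -- it handles `*`, plain numbers, ranges, and comma lists, but not step values like `*/5`; a real scheduler would lean on a battle-tested library rather than this:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field (supports '*', numbers, ranges, comma lists)."""
    for part in field.split(","):
        if part == "*":
            return True
        if "-" in part:
            lo, hi = part.split("-")
            if int(lo) <= value <= int(hi):
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, now: datetime) -> bool:
    """Return True if `now` matches a 5-field cron expression."""
    minute, hour, day, month, weekday = expr.split()
    return (field_matches(minute, now.minute)
            and field_matches(hour, now.hour)
            and field_matches(day, now.day)
            and field_matches(month, now.month)
            # cron weekdays: 0 = Sunday; Python's weekday(): 0 = Monday
            and field_matches(weekday, (now.weekday() + 1) % 7))

# "Every weekday at 8:30am" matches a Tuesday at 08:30
print(cron_matches("30 8 * * 1-5", datetime(2024, 6, 4, 8, 30)))  # True
```

The scheduler's job is then just a loop: once a minute, evaluate each agent's expression against the current time and fire the ones that match.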
A typical setup in JSON configuration:
{
  "agent": "morning-briefing",
  "schedule": {
    "cron": "30 8 * * 1-5",
    "timezone": "America/New_York"
  },
  "prompt": "Summarize my unread emails, today's calendar, and any open PRs that need review. Post the summary to #daily-standup in Slack."
}
Real-world examples:
- Morning briefing agent: Runs at 8:30am ET every weekday. Pulls unread emails, today's calendar events, and open GitHub PRs. Posts a summary to Slack.
- Weekly metrics reporter: Runs Friday at 5pm. Queries your analytics platform, generates a formatted report, and emails it to the team.
- Nightly cleanup agent: Runs at midnight. Archives stale Jira tickets, closes PRs older than 30 days, and flags anything that needs human review.
The gotcha: Over-scheduling burns money. If your agent runs every 5 minutes but the underlying data only changes once a day, you are paying for 287 wasted executions. Match the frequency to how often the data actually changes.
In Nebula, setting up a cron trigger is a one-line expression in the agent config -- timezone-aware, with built-in execution logging so you always know when it ran and what it did.
Pattern 2: Event-Based Triggers (Webhooks)
Cron is great for predictable work. But some tasks cannot wait for a schedule. When a production server throws a critical error at 2am, you do not want your agent to find out about it at the next morning's scheduled run.
Event-based triggers solve this. Your agent runs in response to something happening in an external system -- a new error in Sentry, a payment in Stripe, a PR opened on GitHub, a message in a Slack channel.
When to use it: Real-time responses where timing matters. Error triage, lead processing, deployment notifications, customer support escalation.
How it works: You create a webhook URL for your agent. You point your external system's webhook settings at that URL. When the external system fires an event, the event payload lands in your agent as context, and the agent processes it immediately.
[External System] --webhook POST--> [Agent Platform] --> [Agent Execution]
                                           |
                           event payload becomes agent context
Here is a practical webhook handler configuration:
{
  "agent": "error-triage",
  "trigger": {
    "type": "webhook",
    "source": "sentry",
    "url": "https://your-platform.com/webhooks/agent/error-triage"
  },
  "prompt": "Analyze this error. Check recent deployments. If a deployment happened in the last 2 hours, identify the likely cause and post a summary to #incidents in Slack with a suggested fix."
}
Real-world examples:
- Error triage agent: Sentry fires a critical error. The agent analyzes the stack trace, cross-references recent deployments, and posts a root cause analysis to Slack within seconds.
- Lead processing agent: A new lead arrives in HubSpot. The agent researches the company, scores the lead based on your ICP criteria, and drafts a personalized outreach email.
- PR review agent: GitHub webhook fires on a new pull request. The agent reviews code changes, checks for common issues, and posts review comments before a human even looks at it.
The key advantage over cron: Zero wasted executions. The agent only runs when there is actual work to do. No polling, no empty runs, no burned tokens on "nothing happened" checks.
Architecture tip: Do not pass raw event dumps to your agent. Parse the webhook payload and extract structured, relevant data. A Sentry error event can be 50KB of JSON -- your agent only needs the error message, stack trace, and affected users count.
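A sketch of that extraction step in Python -- the field names here are illustrative, not Sentry's actual payload schema:

```python
def extract_error_context(payload: dict) -> dict:
    """Reduce a large error-event payload to the fields the agent needs.
    Field names are illustrative; adapt them to your event source's schema."""
    return {
        "message": payload.get("message", ""),
        "severity": payload.get("severity", "unknown"),
        "stack_trace": payload.get("stack_trace", "")[:2000],  # cap long traces
        "affected_users": payload.get("affected_users", 0),
    }

raw_event = {
    "message": "TypeError: cannot read property 'id' of undefined",
    "severity": "critical",
    "stack_trace": "at getUser (src/api/users.js:42) ...",
    "affected_users": 312,
    "breadcrumbs": ["...50KB of navigation history..."],   # dropped before the agent sees it
    "sdk": {"name": "sentry.javascript", "version": "7.0.0"},  # dropped
}
print(extract_error_context(raw_event))
```

Everything the agent does not need -- breadcrumbs, SDK metadata, request headers -- never enters the context window, which keeps both cost and noise down.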
Pattern 3: Condition-Based Smart Triggers
Here is where most developers stop. They set up cron or webhooks, and they are done. But there is a third pattern that separates noisy agents from useful ones.
Condition-based triggers are a hybrid. The system evaluates incoming events or periodic checks against defined conditions before deciding whether to wake the agent. If the conditions are not met, the event is dropped silently. No agent execution, no LLM tokens burned.
When to use it: When you want event responsiveness but only for events that actually matter. This is the noise-to-signal filter.
How it works:
[Event arrives] --> [Condition evaluation] --pass--> [Agent executes]
                            |
                           fail
                            v
                [Event dropped, zero cost]
The condition evaluation is deterministic -- it uses simple rule matching, not LLM inference. This means you pay exactly zero tokens for events that do not meet your criteria.
Here is what condition configuration looks like:
{
  "agent": "smart-error-monitor",
  "trigger": {
    "type": "conditional",
    "source": "sentry",
    "conditions": [
      { "field": "severity", "operator": "eq", "value": "critical" },
      { "field": "affected_users", "operator": "gt", "value": 100 }
    ],
    "match": "all"
  },
  "prompt": "Critical error affecting 100+ users detected. Analyze the error, check recent deployments, and draft an incident report."
}
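That deterministic evaluation is straightforward to sketch. Here is an illustrative Python version of the rule matcher -- not any platform's actual implementation -- which makes it obvious why filtered events cost nothing: no LLM is ever consulted:

```python
# Plain rule matching, no LLM call -- filtered events cost zero tokens.
OPERATORS = {
    "eq": lambda a, b: a == b,
    "gt": lambda a, b: a > b,
    "lt": lambda a, b: a < b,
}

def check_condition(condition: dict, event: dict) -> bool:
    """A condition on a field the event lacks never matches."""
    if condition["field"] not in event:
        return False
    return OPERATORS[condition["operator"]](event[condition["field"]], condition["value"])

def should_trigger(event: dict, conditions: list, match: str = "all") -> bool:
    """Apply every condition; 'all' = AND semantics, anything else = OR."""
    results = [check_condition(c, event) for c in conditions]
    return all(results) if match == "all" else any(results)

conditions = [
    {"field": "severity", "operator": "eq", "value": "critical"},
    {"field": "affected_users", "operator": "gt", "value": 100},
]

print(should_trigger({"severity": "critical", "affected_users": 312}, conditions))  # True
print(should_trigger({"severity": "warning", "affected_users": 312}, conditions))   # False
```

The agent only spins up on the first event; the second is dropped before any model is loaded.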
Real-world examples:
- Smart error monitoring: Only trigger the agent for errors with severity=critical AND affected_users > 100. Your Sentry project fires 500 events per day, but only 3 actually need agent attention.
- Budget-aware lead processing: Only process leads with deal_size > $10K. Do not waste agent time and API costs on leads your sales team handles manually anyway.
- Selective PR review: Only review PRs that touch files in /src/core/ or modify more than 50 lines. Skip trivial README updates and dependency bumps.
Why this pattern matters: Without conditions, a webhook-triggered agent on a busy Sentry project fires hundreds of times per day. Most of those are low-severity noise. Condition triggers turn your agent from a firehose into a filter -- it only acts on signals, never on noise.
Nebula's condition triggers evaluate rules before spinning up the agent, so you only pay for executions that matter -- zero LLM tokens wasted on filtered events.
Combining Patterns: The Hybrid Approach
Real-world agents do not use just one pattern. The most effective setups combine all three.
Consider a DevOps monitoring agent:
- Cron: Runs an infrastructure health check every morning at 9am. Summarizes CPU, memory, and disk usage trends, then posts to Slack.
- Event trigger: Fires on every PagerDuty alert, 24/7. The agent gets immediate context about what went wrong.
- Condition filter: Only escalates to the agent if the PagerDuty alert has not been acknowledged within 5 minutes. If a human already grabbed it, the agent stays quiet.
[Morning 9am cron] ----------------------------------------> [Agent: daily health check]

[PagerDuty webhook] --> [Condition: unacked > 5 min?] --yes--> [Agent: incident response]
                                   |
                                   no (human already acknowledged)
                                   v
                            [Drop, no action]
The key insight: the scheduling layer is completely separate from the agent's logic. The agent does not care how it was triggered -- it receives context and acts. This separation of concerns means you can add a new trigger, adjust a cron frequency, or tighten a condition filter without touching the agent's prompts or tools.
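A sketch of that separation in Python -- the function names are hypothetical, but the shape is the point: every trigger, however it fired, is normalized into the same context before the agent ever sees it:

```python
from datetime import datetime, timezone
from typing import Optional

def build_agent_context(trigger_type: str, prompt: str,
                        payload: Optional[dict] = None) -> dict:
    """Normalize any trigger (cron, webhook, conditional) into one agent input.
    The agent only ever receives this shape -- it never knows how it was invoked."""
    return {
        "prompt": prompt,
        "triggered_at": datetime.now(timezone.utc).isoformat(),
        "trigger_type": trigger_type,  # kept for logging, not for agent logic
        "event": payload or {},
    }

def run_agent(context: dict) -> str:
    """Stand-in for the real agent call -- one entry point for every trigger."""
    return f"[agent] handling '{context['prompt']}' ({context['trigger_type']})"

# Same agent, two different triggers
print(run_agent(build_agent_context("cron", "Run the daily health check")))
print(run_agent(build_agent_context("webhook", "Triage this PagerDuty alert",
                                    {"alert_id": "PD-123"})))
```

Because the trigger layer owns scheduling and the agent owns reasoning, you can swap a cron for a webhook without changing a single prompt.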
5 Rules for Agent Scheduling in Production
After running scheduled agents in production, we have found these rules save the most headaches:
1. Match frequency to value.
A daily summary is fine for metrics. A critical error needs real-time response. Do not cron what should be event-driven. Ask yourself: if this data changed right now, how soon does my agent need to know? If the answer is "immediately" -- use a webhook, not a cron.
2. Set budget limits per agent.
A bug in an hourly agent can burn through your entire API budget overnight -- we are talking $500+ if the agent loops on expensive model calls. Always cap spending per agent per day. If an agent hits its ceiling, it stops executing and alerts you instead of silently draining your account.
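A minimal sketch of such a cap in Python -- the class and its `daily_limit_usd` knob are illustrative, not a real platform setting:

```python
class AgentBudget:
    """Per-agent daily spending ceiling. Reset spent_today at midnight
    in your scheduler; shown here as a plain in-memory counter."""
    def __init__(self, daily_limit_usd: float):
        self.daily_limit_usd = daily_limit_usd
        self.spent_today = 0.0

    def try_spend(self, estimated_cost_usd: float) -> bool:
        """Reserve budget before an execution; refuse and alert at the ceiling."""
        if self.spent_today + estimated_cost_usd > self.daily_limit_usd:
            self.alert()
            return False
        self.spent_today += estimated_cost_usd
        return True

    def alert(self):
        # In production: page a human instead of printing
        print(f"budget ceiling hit: ${self.spent_today:.2f} "
              f"of ${self.daily_limit_usd:.2f} spent")

budget = AgentBudget(daily_limit_usd=5.00)
runs = sum(budget.try_spend(0.75) for _ in range(10))
print(f"{runs} of 10 runs allowed")  # 6 of 10 -- the rest were refused
```

The crucial property is that the check happens before the model call, so a looping agent stops at the ceiling rather than discovering it on your invoice.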
3. Log every execution.
You need to know when agents ran, what they processed, and what they decided. If an agent makes a bad call at 3am, you need the full audit trail to understand why. Observability is not optional for autonomous agents -- it is the first thing you need when something goes wrong.
4. Start conservative, then increase frequency.
Launch with daily runs. If you need faster response, move to hourly, then to event-based. Do not start with every-5-minutes and work backward. It is much easier to speed up a working agent than to debug a runaway one that has already sent 200 Slack messages.
5. Test with dry runs first.
Before enabling a production trigger, run the agent manually with sample event data. Verify the output makes sense. A wrong action executed automatically at scale is much worse than a wrong action you catch in a dry run.
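A sketch of what a dry-run switch can look like in Python -- `triage_error` and `post_to_slack` are hypothetical stand-ins for your real agent action:

```python
def post_to_slack(channel: str, text: str):
    raise NotImplementedError("wire up your real Slack client here")

def triage_error(event: dict, dry_run: bool = False) -> str:
    """Hypothetical agent action: posts an incident summary to Slack.
    With dry_run=True it only reports what it *would* do -- no side effects."""
    summary = (f"Critical error '{event['message']}' "
               f"affecting {event['affected_users']} users")
    if dry_run:
        return f"[DRY RUN] would post to #incidents: {summary}"
    post_to_slack("#incidents", summary)  # real side effect, only when live
    return f"posted to #incidents: {summary}"

sample = {"message": "DB connection pool exhausted", "affected_users": 450}
print(triage_error(sample, dry_run=True))
```

Run the dry run against a handful of saved real payloads, read the output, and only then flip the trigger live.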
Getting Started
The shift from manual to scheduled agents is the difference between a demo and a product. A chatbot that answers questions when you ask is interesting. An agent that triages your errors at 2am, briefs you at 8am, and processes leads in real-time is genuinely useful.
The three patterns cover every scheduling need:
- Predictable, recurring work? Cron.
- Real-time response to external events? Event triggers.
- Event response with noise filtering? Condition triggers.
- Complex operational workflows? Combine all three.
Most agent frameworks focus on making agents smarter. That matters. But the real unlock is making them show up reliably, at the right time, for the right reason -- without you in the loop.
If you want to try these patterns without building the scheduling infrastructure yourself, Nebula has all three built in -- cron schedules, webhook triggers, and condition-based execution with budget limits and full execution logging out of the box.