Most founders trigger their AI agent manually. They open the chat, type the prompt, wait, review. That works for one-off tasks, but it's not automation. It's a fancier way to type.
The moment your agent starts doing the same thing every day at the same time without you asking, something shifts. The business starts running on its own rhythm. You stop being the engine and start being the person who checks the dashboard.
This guide covers how to actually schedule AI agent tasks. Not theoretically. Practically, with the specific tools and patterns I use to keep Evo running 14 hours a day while I'm at work.
Why scheduling changes everything
An agent that only fires when you prompt it has the same problem as a to-do list: it depends on your attention. The point of an AI operating system isn't to help you work faster. It's to remove you from the loop entirely on the tasks that don't need your judgment.
Scheduled tasks are how you get there.
Evo currently runs six recurring tasks on a daily or weekly schedule:
- Morning briefing at 7 AM: overnight system status, priority for the day
- Daily blog post at 2 PM: SEO post written, published, cross-posted
- Twitter queue at 9 AM: five tweets drafted in my voice and queued
- Reddit growth scan at 11 AM: finds threads where Xero Scout fits naturally
- Nightly recap at 9 PM: what shipped, what broke, what to fix tomorrow
- Weekly Sunday CEO briefing: revenue, analytics, automation health
None of those require me to type a single thing. They fire, they run, they report back. That's what scheduling buys you.
What are the two main approaches to scheduling AI agent tasks?
You have two core options: cron-based scheduling, which fires at a specific time regardless of what else is happening, and workflow triggers, which fire in response to an event. Most solo founders running a real operating system need both. Cron for the predictable daily tasks, triggers for anything that reacts to external input.
There are two ways to put an AI agent on a schedule.
Cron-based scheduling means a system-level timer fires at a specific time and kicks off the agent with a predefined prompt. Unix cron, Windows Task Scheduler, cloud equivalents like GitHub Actions or Render cron jobs. The agent doesn't know or care how it got triggered. It just receives a message and runs.
Workflow triggers are event-based. Something happens (a new email lands, a form gets submitted, a Stripe event fires) and that event kicks off the agent. Tools like Make, Zapier, and n8n live in this space.
For most solo founders running an AI co-founder setup, you want both. Cron for predictable daily and weekly tasks. Event triggers for anything that reacts to external input.
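Cron examples follow in the next section; for the event side, here's a minimal sketch of a webhook trigger in Python with Flask. The endpoint path, payload shape, and AGENT_API_URL are assumptions, not any specific product's API, so swap in whatever your runtime actually exposes:

import os

import requests
from flask import Flask, request

app = Flask(__name__)
AGENT_API_URL = os.environ["AGENT_API_URL"]  # hypothetical agent endpoint
AGENT_API_KEY = os.environ["AGENT_API_KEY"]

@app.route("/webhook/new-lead", methods=["POST"])
def new_lead():
    # An external service (Stripe, a form tool, Make) POSTs an event here.
    lead = request.get_json(force=True)
    # Turn the event payload into a self-contained prompt for the agent.
    prompt = f"A new lead just came in: {lead}. Qualify it and draft a reply."
    requests.post(
        AGENT_API_URL,
        headers={"Authorization": f"Bearer {AGENT_API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=8080)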
How to set up cron-based scheduling in OpenClaw
If you're running OpenClaw as your agent runtime, cron scheduling is built in. There's a cron config block where you define the schedule, the prompt, and optionally which agent persona handles the task.
A basic entry looks like this:
{
  "id": "daily-blog-post",
  "schedule": "0 20 * * *",
  "prompt": "Publish one new organic-search-focused blog post to xeroaiagency.com. [detailed instructions...]",
  "channel": "telegram"
}
The schedule field is a standard cron expression: 0 20 * * * means 8 PM UTC, which maps to 2 PM Mountain time during daylight saving (cron runs on the server clock, so a UTC schedule won't shift when your local clock does). The prompt is the exact instruction the agent receives. The channel field tells it where to send status updates.
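If cron syntax is new to you, here's how the task list from earlier might map to expressions, assuming the server clock is UTC and Mountain time at UTC-6; the Sunday briefing time isn't stated above, so 9 AM is an assumption:

# minute hour day-of-month month day-of-week  (all times UTC)
0 13 * * *   # 7 AM Mountain: morning briefing
0 15 * * *   # 9 AM Mountain: Twitter queue
0 17 * * *   # 11 AM Mountain: Reddit growth scan
0 20 * * *   # 2 PM Mountain: daily blog post
0 3 * * *    # 9 PM Mountain: nightly recap (fires at 3 AM UTC the next calendar day)
0 15 * * 0   # Sunday 9 AM Mountain (assumed): weekly CEO briefing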
Three things make this reliable:
- The prompt must be self-contained. The agent wakes up with no memory of what it did yesterday. All the context it needs has to be in the prompt or in a file the prompt references.
- Include explicit stopping conditions. A prompt like "publish one blog post" needs to define what one means and when done means done; otherwise the agent can spiral. (There's a sketch of both after this list.)
- Send results somewhere you'll see them. Telegram works well for this. Set the channel so every scheduled run ends with a brief report you can scan in 10 seconds.
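Here's what the first two rules can look like in practice: a self-contained prompt with explicit stopping conditions, written as a Python constant you could feed into the prompt field. The file paths, word counts, and rules are illustrative, not the actual setup:

# A sketch of a self-contained prompt with explicit stopping conditions.
DAILY_POST_PROMPT = """
Task: publish exactly ONE new blog post to xeroaiagency.com.

Context to load first (you have no memory of previous runs):
- Read content/topics.md for the approved topic backlog.
- Read the published-slugs list and never reuse a slug.

Definition of done (stop when ALL of these are true):
1. One post of 800-1,200 words is published and its URL returns 200.
2. The new slug is appended to the published-slugs list.
3. A one-line status report has been sent to the Telegram channel.

If any step fails twice in a row, stop and report the error instead of retrying.
"""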
How do you schedule AI agent tasks if you're not using OpenClaw?
You don't need a specialized runtime to get this working. GitHub Actions gives you free cron scheduling, Make and Zapier handle event-based triggers, and Render offers lightweight cron workers for anything that needs to run as a proper background process. The agent only needs an API endpoint.
If you're not using OpenClaw, the same pattern works with any agent that accepts API calls.
GitHub Actions is free for public repos and the simplest option for a lot of founders. Create a workflow file with a schedule trigger, and use curl or a Python script to hit your agent's API with the prompt.
on:
  schedule:
    - cron: '0 14 * * *'

jobs:
  run-agent:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger agent task
        env:
          AGENT_API_URL: ${{ secrets.AGENT_API_URL }}
          AGENT_API_KEY: ${{ secrets.AGENT_API_KEY }}
        run: |
          curl -X POST "$AGENT_API_URL" \
            -H "Authorization: Bearer $AGENT_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"prompt": "Run the daily blog post task"}'
Store your API URL and key as GitHub Secrets and map them in with the env block above; without that mapping, the variables are empty on the runner. The workflow picks them up, fires the curl, done.
Make or Zapier is better when the trigger is time-based but the task involves chaining multiple services. Make is cheaper at scale and more flexible. Zapier has better out-of-the-box integrations. Both let you schedule a "run every day at X" step that calls a webhook or API.
Render cron jobs are a good middle ground if you want something more durable than GitHub Actions and lighter than a full server. Create a cron job service, point it at a small script, and set the schedule in the Render dashboard.
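The "small script" can be as simple as this Python sketch, again assuming a generic agent API behind AGENT_API_URL:

import os
import sys

import requests

def main() -> None:
    resp = requests.post(
        os.environ["AGENT_API_URL"],  # hypothetical agent endpoint
        headers={"Authorization": f"Bearer {os.environ['AGENT_API_KEY']}"},
        json={"prompt": "Run the daily blog post task"},
        timeout=600,  # agent runs can be slow; don't let the job hang forever
    )
    if resp.status_code != 200:
        print(f"Agent call failed: {resp.status_code} {resp.text}", file=sys.stderr)
        sys.exit(1)  # non-zero exit so Render marks the cron run as failed
    print("Agent task triggered:", resp.status_code)

if __name__ == "__main__":
    main()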
What tasks are worth scheduling vs. what to keep manual
Not everything should be automated on a timer. A useful filter: if a task is the same every day with no meaningful variation based on current context, it's a scheduling candidate. If it requires judgment about something new, it probably shouldn't run unsupervised yet.
Schedule these:
- Daily content (blog, social posts, newsletters)
- Morning and evening briefings
- Monitoring and alerting scans
- Regular reports (revenue summary, analytics pull)
- Scheduled lead prospecting
Keep manual or trigger-based:
- Customer replies and support
- Anything involving money moving
- Anything that touches live production data for the first time
- Tasks where the output needs your approval before anything happens
The goal is to automate the predictable, high-repetition tasks and keep human judgment on the variable ones. This is the operating model I covered in how to track what your AI agent is doing, where I break down the three-layer monitoring system that makes unsupervised runs safe.
Why do scheduled agents repeat work or lose track of what they already did?
Because a cron trigger always starts a fresh session. The agent has no automatic memory of yesterday's run unless you build that layer in. Without it, the agent might re-publish content it already published or restart tasks it already finished. The fix is a persistent state source the agent reads at the start of every run.
Here's a thing most guides skip: when a cron job fires, the agent starts fresh. No memory of yesterday's run. No awareness of what it already did.
This creates two problems.
First, the agent might repeat work. It doesn't know it already published a post on that topic. You have to build that awareness into the prompt by pointing it at a data source it can check. For blog posts, the agent checks the Supabase database at the start of every run to see which slugs already exist, and it reads that list before picking a topic.
Second, there's no continuity unless you build it. If a task runs in five steps and fails at step three, the next day's run starts from scratch. You need a state file or a database row that the agent writes to after each step so it can resume.
Both of these are solvable with a few lines of logic, but they're not automatic. Build the memory layer into your prompt design before you assume the schedule will just work.
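As a sketch of both fixes using supabase-py (the table and column names here are illustrative, not an actual schema):

import os

from supabase import create_client

db = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def existing_slugs() -> set[str]:
    # Fix 1: read what was already published before picking a topic.
    rows = db.table("blog_posts").select("slug").execute().data
    return {row["slug"] for row in rows}

def save_checkpoint(task_id: str, step: int) -> None:
    # Fix 2: record the last completed step so a failed run can resume
    # tomorrow instead of starting from scratch.
    db.table("task_state").upsert(
        {"task_id": task_id, "last_completed_step": step}
    ).execute()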
For a deeper look at the memory side, how to give an AI agent persistent memory covers the actual architecture I use, including the MEMORY.md file and Supabase state tracking.
How do you know if your scheduled tasks are actually working?
You need a lightweight reporting layer, not a full observability stack. The minimum viable setup is a single message to Telegram at the end of every run: task name, status, and one relevant stat. If the run fails, a second message with the error and last completed step. Check it once a day. That is enough.
A scheduled task that runs silently is a problem. You need to know it fired, what it did, and whether anything went wrong.
Minimum viable monitoring for a cron setup:
- Every task sends a Telegram message when it completes. One line: task name, status, any relevant stat (word count, posts published, leads found).
- If it fails, it sends a different message. Include the error and the last step it reached.
- Check those messages once a day. Not obsessively, just a quick scan. If three days go by and you haven't seen a success message for a task, something broke.
This is lightweight but it works. The goal isn't a full observability stack. It's a 10-second morning check that confirms the systems are running.
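A minimal version of that reporting function, using the real Telegram Bot API (the bot token comes from @BotFather; the task names in the commented usage lines are examples):

import os

import requests

def report(task: str, ok: bool, detail: str) -> None:
    # One line per run: task name, status, one relevant stat or error.
    status = "OK" if ok else "FAILED"
    text = f"[{task}] {status} - {detail}"
    requests.post(
        f"https://api.telegram.org/bot{os.environ['BOT_TOKEN']}/sendMessage",
        json={"chat_id": os.environ["CHAT_ID"], "text": text},
        timeout=10,
    )

# report("daily-blog-post", True, "published 1 post, 1,100 words")
# report("reddit-scan", False, "failed at step 2: search API timeout")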
What should you schedule first if you're starting from zero?
Build a morning briefing. It has no external side effects, it is immediately useful, and it forces you to work out the scheduling plumbing before anything consequential depends on it. Once it runs reliably for a week, add one content task. Methodical beats ambitious here because a broken scheduled task that fails silently is worse than no automation at all.
Don't try to schedule six tasks on day one. Pick one, make it bulletproof, then add the next.
The best first scheduled task for most founders is a morning briefing. It's low-stakes (no external side effects), immediately useful (you start every day knowing your system status), and forces you to figure out the scheduling plumbing before anything high-consequence runs on it.
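On OpenClaw, the whole briefing can be one cron entry in the same shape as the earlier config. The prompt below is illustrative, and the schedule assumes Mountain time at UTC-6:

{
  "id": "morning-briefing",
  "schedule": "0 13 * * *",
  "prompt": "Summarize overnight system status: check each scheduled task's last report, list anything that failed, and name the single top priority for today. Keep it under 10 lines.",
  "channel": "telegram"
}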
Once that fires reliably for a week, add a second task. Something with a real output: a blog post, a social post, a lead list. Run it a few times manually first to confirm the prompt is solid, then put it on the schedule.
Within three or four weeks you'll have a set of tasks running daily without your involvement. That's when the operating system feeling starts to click into place.
If you want help designing the right task schedule for your business, the Build Lab is where I work directly with founders on this exact setup.
For further reading on cron syntax and options, the crontab.guru editor is the fastest way to validate any schedule expression. GitHub's Actions documentation on scheduled workflows covers the hosted runner approach in detail. If you're evaluating Make as your trigger layer, their scenario scheduling documentation is worth a read before you commit to it.
The shift from "using AI" to "running on AI" doesn't come from a better model or a smarter prompt. It comes from the scheduler. The moment your agent stops waiting for you to show up is the moment it becomes infrastructure instead of a tool.
Build the briefing. Then build the stack.
Want to build your own AI co-founder?
I'm building Xero in public — an AI system that runs distribution, content, and ops while I work a full-time job.
- Start here: Your First AI Agent — $7 guide, instant download
- Go deeper: Build an AI Co-Founder — the full architecture ($19)
- Newsletter: AI for the Rest of Us — practical AI 3x/week for people with day jobs
- Site: xeroaiagency.com
Originally published at xeroaiagency.com