Clamper ai

Posted on • Originally published at clamper.tech
Event-Driven AI Agent Workflows with OpenClaw

Most AI agent tutorials show you how to build a chatbot that responds when you type something. That is fine for a demo. But production agents need to work without you standing over them. They need to react to events: a new email arrives, a deploy finishes, a scheduled time hits, a health check fails. This guide covers the three core event-driven patterns in OpenClaw and how to wire them into real workflows.

Why Event-Driven Matters for AI Agents

A chatbot waits for input. An agent acts on triggers. The difference between these two approaches determines whether your AI setup is a toy or a tool.

Consider what a production agent actually needs to do:

  • Check email every 30 minutes and flag urgent messages
  • Run a deployment pipeline when code is pushed to main
  • Monitor API uptime and alert you when something breaks
  • Publish scheduled social media posts at specific times
  • Process incoming webhook data from third-party services

None of these start with "the user typed a message." They start with an event. OpenClaw gives you three primitives to handle this: cron jobs, heartbeats, and webhook triggers. Each serves a different use case, and understanding when to use which one is the key to building agents that actually run your workflows.

Pattern 1: Cron Jobs for Precise Scheduling

Cron is your go-to when timing matters. "Every Monday at 9 AM." "Every 6 hours." "Once at 3 PM tomorrow." OpenClaw's cron system runs tasks in isolated sessions, which means your main agent session stays clean and responsive.

When to Use Cron

  • Exact timing required: Reports that need to go out at 9:00 AM sharp
  • Isolated execution: Tasks that should not interfere with your main session
  • Different model per task: Use a cheaper model for routine checks, a powerful one for analysis
  • One-shot reminders: "Remind me about the meeting in 20 minutes"

Setting Up a Cron Job

OpenClaw uses standard cron expressions. Here is a practical example that generates a daily standup summary from your Git commits:

```yaml
# Daily standup generator - runs at 9:00 AM every weekday
schedule: "0 9 * * 1-5"
model: "anthropic/claude-sonnet-4-20250514"
task: |
  Generate a standup summary:
  1. Run `git log --since="yesterday" --oneline` in ~/projects/
  2. Summarize what was accomplished
  3. Check the Trello board for in-progress items
  4. Send the summary to the #standup channel

channel: discord
target: "#standup"
```

The key detail: cron jobs spin up their own session. Your main agent keeps running normally. The cron session does its work, delivers the result, and shuts down. No context pollution.

Cron for Content Pipelines

One of the most powerful uses of cron is automating content pipelines. At Clamper, we use a cron-triggered pipeline that:

  1. Selects a topic from a rotating list
  2. Generates a featured image matching brand guidelines
  3. Writes a 1500-2500 word technical blog post
  4. Deploys to the website via git push
  5. Cross-posts to Dev.to with canonical URLs
  6. Schedules social shares across 5 platforms

```yaml
# Content pipeline - runs daily at 6 AM
schedule: "0 6 * * *"
model: "anthropic/claude-sonnet-4-20250514"
task: |
  Run the blog pipeline:
  - Read pipelines/clamper-blog.md for full instructions
  - Select topic, generate image, write post
  - Deploy and schedule social shares

channel: telegram
target: "@wissem"
```

This single cron entry produces a complete blog post every day. The agent handles topic selection, image generation, writing, deployment, cross-posting, and social scheduling. All without human intervention.

Pattern 2: Heartbeats for Periodic Awareness

Heartbeats are different from cron. Where cron says "do this specific thing at this specific time," heartbeats say "wake up, check if anything needs attention, and act accordingly."

Think of heartbeats as your agent's pulse. Every 15-30 minutes, OpenClaw sends a heartbeat to your agent. The agent reads its HEARTBEAT.md file, checks the items listed there, and either takes action or responds with HEARTBEAT_OK (nothing to do).

When to Use Heartbeats

  • Batch multiple checks: Email + calendar + notifications in one pass
  • Context-aware decisions: The agent has access to recent conversation history
  • Flexible timing: Exact minute does not matter, "roughly every 30 min" is fine
  • Proactive outreach: Agent notices something and reaches out to you

Configuring HEARTBEAT.md

Your HEARTBEAT.md file is the agent's checklist. Keep it focused to minimize token usage:

```markdown
# HEARTBEAT.md

## Checks (rotate through these)
- [ ] Unread emails? Check inbox for urgent items
- [ ] Calendar events in next 2 hours?
- [ ] GitHub PRs waiting for review?
- [ ] Any running deployments to check?

## State Tracking
Track last check times in memory/heartbeat-state.json
Do not re-check items checked in the last 30 minutes.

## Quiet Hours
23:00 - 08:00: HEARTBEAT_OK unless truly urgent
```

The beauty of this pattern is that the agent uses judgment. It does not blindly run every check every time. It reads the state file, sees that email was checked 10 minutes ago, and skips it. It notices a calendar event in 45 minutes and proactively reminds you. This is the difference between automation and intelligence.

Heartbeat vs Cron: Decision Matrix

| Factor | Use Cron | Use Heartbeat |
| --- | --- | --- |
| Timing precision | Exact time matters | Approximate is fine |
| Task isolation | Separate session | Main session context |
| Multiple checks | One task per job | Batch many checks |
| Model selection | Per-job model | Main session model |
| Context needed | Standalone task | Needs recent history |

Pattern 3: Webhook Triggers for External Events

Webhooks let external services push events to your agent. A GitHub push, a Stripe payment, a form submission, a monitoring alert. Instead of polling for changes, your agent reacts instantly when something happens.

Architecture Overview

The pattern works like this:

  1. External service sends HTTP POST to your webhook endpoint
  2. OpenClaw gateway receives the payload
  3. Agent session processes the event data
  4. Agent takes action based on the payload content

Example GitHub webhook payload:

```json
{
  "action": "opened",
  "pull_request": {
    "number": 42,
    "title": "Add retry logic to API client",
    "head": { "ref": "feature/retry-logic" },
    "base": { "ref": "main" }
  }
}
```
```yaml
# Agent task triggered by webhook:
task: |
  A new PR was opened: #42 - Add retry logic to API client
  1. Clone the repo and checkout the PR branch
  2. Review the diff for code quality issues
  3. Check for test coverage
  4. Post a review comment on the PR
  5. If critical issues found, request changes
```

Real-World Webhook Workflows

Deployment notifications: When Vercel finishes a deploy, it hits our webhook. The agent verifies the deployment is healthy (checks the URL, runs a quick smoke test), then posts a confirmation to our Telegram channel.

```yaml
# Deploy verification webhook handler
on_event: "vercel.deployment.success"
task: |
  New deployment detected: ${deployment_url}
  1. Fetch the URL and verify 200 response
  2. Check that key pages render correctly
  3. Run lighthouse audit (performance > 90?)
  4. Report results to Telegram

  If any check fails:
  - Alert immediately
  - Include error details
  - Suggest rollback if critical
```

Payment processing: Stripe webhook fires on new subscriptions. The agent welcomes the customer, provisions their account, and updates the CRM.

Monitoring alerts: Uptime service detects an outage. Webhook triggers the agent to check logs, identify the issue, attempt automatic recovery, and notify the team with a root cause analysis.

Combining Patterns: The Reactive Agent

The real power comes from combining all three patterns. Here is what a production OpenClaw setup looks like:

```yaml
# Production agent event architecture

# CRON: Scheduled tasks with exact timing
cron:
  - schedule: "0 9 * * 1-5"     # Weekday standup
    task: "Generate standup from git log + Trello"
  - schedule: "0 6 * * *"       # Daily blog pipeline
    task: "Run content pipeline"
  - schedule: "0 */4 * * *"     # Every 4 hours
    task: "Check API health + uptime metrics"

# HEARTBEAT: Periodic awareness (every ~30 min)
heartbeat:
  checks:
    - email_inbox
    - calendar_upcoming
    - github_notifications
    - deploy_status
  quiet_hours: "23:00-08:00"

# WEBHOOKS: Real-time external triggers
webhooks:
  - github.pull_request.opened: "Review PR code"
  - stripe.checkout.completed: "Onboard new customer"
  - vercel.deployment.ready: "Verify deploy health"
  - uptime.alert.triggered: "Investigate outage"
```

With this setup, your agent is never idle and never blocking. Cron handles the predictable work. Heartbeats handle the ambient awareness. Webhooks handle the real-time reactions. Together, they create an agent that genuinely runs your operations.

Implementation Tips

1. Start with Heartbeats

If you are new to event-driven agents, start with heartbeats. They are the simplest to set up (just create a HEARTBEAT.md file) and they give you immediate value. Your agent starts checking email, calendar, and notifications without any infrastructure changes.

2. Use Cron for Heavy Tasks

Do not put heavy tasks in heartbeats. If a task takes more than 30 seconds, it should be a cron job or a subagent. Heartbeats should be quick checks that occasionally trigger deeper work.

```markdown
# Bad: Heavy task in heartbeat
- [ ] Generate weekly report (takes 5 minutes)

# Good: Heartbeat triggers cron/subagent
- [ ] Is it Monday 9 AM? If weekly report not yet generated,
      spawn subagent to handle it
```

3. Track State Properly

Event-driven agents need state management. Without it, your heartbeat checks the same email 47 times. Use a simple JSON state file:

```json
{
  "lastChecks": {
    "email": "2026-03-19T10:30:00Z",
    "calendar": "2026-03-19T10:00:00Z",
    "github": "2026-03-19T09:30:00Z"
  },
  "lastEmailId": "msg_abc123",
  "pendingTasks": [],
  "quietMode": false
}
```

4. Implement Quiet Hours

Nobody wants their agent pinging them at 3 AM about a non-urgent email. Define quiet hours and enforce them. During quiet hours, the agent should only act on genuinely critical events (downtime, security alerts) and batch everything else for morning delivery.
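The subtle part of quiet hours is that the window usually crosses midnight, so a naive `start <= now <= end` comparison fails. A minimal sketch, assuming the 23:00-08:00 window from the HEARTBEAT.md example (the function names are mine):

```python
from datetime import time

QUIET_START = time(23, 0)  # 23:00
QUIET_END = time(8, 0)     # 08:00

def in_quiet_hours(now: time, start: time = QUIET_START, end: time = QUIET_END) -> bool:
    """Handle both overnight windows (start > end) and same-day windows."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end

def should_notify(now: time, urgent: bool) -> bool:
    """Critical events always go through; everything else waits for morning."""
    return urgent or not in_quiet_hours(now)
```

Non-urgent events suppressed here would go onto a queue (like the `pendingTasks` list in the state file) for batch delivery after 08:00.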

5. Log Everything

Event-driven systems are harder to debug than request-response systems. Log every trigger, every decision, every action. When something goes wrong at 3 AM, you want to know exactly what happened.

```markdown
## Agent Activity Log - 2026-03-19

### 06:00 - Cron: Blog Pipeline
- Topic selected: Event-driven workflows
- Image generated: event-driven-hero.png
- Blog deployed to clamper.tech
- Social shares scheduled for 11:00 AM

### 09:00 - Cron: Standup
- 3 commits found since yesterday
- Summary posted to #standup

### 09:30 - Heartbeat
- Email: 2 new (1 urgent from client)
- Calendar: Team sync in 1.5 hours
- GitHub: 1 PR awaiting review
- Action: Flagged urgent email, set reminder
```

Getting Started with Clamper

Clamper makes setting up these patterns straightforward. Our skill system includes pre-built templates for common event-driven workflows: email monitoring, deployment pipelines, content publishing, and more. Install a skill, configure your triggers, and your agent starts working.

The quick-reminders skill handles one-shot cron jobs. The github skill processes webhook events from your repos. The marketing skill manages scheduled social posts. Each one follows the event-driven patterns described in this guide.

Check out the getting started guide to set up your first event-driven workflow, or browse the skills directory for pre-built automation templates.

Conclusion

Event-driven architecture transforms AI agents from interactive toys into autonomous operators. The three patterns (cron for precision, heartbeats for awareness, webhooks for reactivity) cover every trigger scenario you will encounter in production.

Start simple. Add a HEARTBEAT.md with three checks. Set up one cron job for your most repetitive task. Wire up a webhook for your most common external event. Then expand from there. Within a week, your agent will be handling dozens of tasks that used to require your manual attention.

The goal is not to automate everything. It is to automate the right things, with the right trigger patterns, so you can focus on work that actually requires a human brain.

FAQ

Can I use multiple trigger types for the same workflow?

Yes. A common pattern is using a cron job as the primary trigger with a webhook as an override. For example, your daily report runs at 9 AM via cron, but a webhook can trigger an immediate report if a critical metric crosses a threshold.

How do I prevent duplicate actions from overlapping triggers?

Use the state file pattern described above. Before taking action, check if the same action was already performed recently. Idempotency keys work well here: store the last processed event ID and skip duplicates.
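The idempotency-key idea can be sketched as a tiny in-memory deduper. Production code would persist the seen IDs to the state file (like the `lastEmailId` field above) so dedup survives restarts; this class is illustrative:

```python
class EventDeduper:
    """Run an action at most once per event ID."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def process(self, event_id: str, action) -> bool:
        """Invoke `action` if this event ID is new; return True if it ran."""
        if event_id in self.seen:
            return False  # duplicate delivery, skip silently
        self.seen.add(event_id)
        action()
        return True
```

Most webhook providers retry deliveries on timeouts, so a second POST with the same event ID is normal; recording the ID before acting keeps the retry harmless.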

What happens if my cron job fails?

OpenClaw cron sessions are independent. A failed cron job does not affect your main agent or other cron jobs. The failure is logged, and you can configure retry logic or alerting in the task definition itself.
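OpenClaw's own retry configuration isn't shown here, but the retry-with-backoff idea a task definition would express looks roughly like this generic wrapper (names are illustrative):

```python
import time

def run_with_retries(task, attempts: int = 3, base_delay: float = 1.0):
    """Call `task` up to `attempts` times with exponential backoff.

    Re-raises the last exception if every attempt fails, so the failure
    still surfaces in the logs for alerting.
    """
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

A transient failure (network blip, rate limit) gets absorbed by the retries; a persistent one propagates after the final attempt, which is the behavior you want for alerting.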

How many heartbeat checks should I include?

Keep HEARTBEAT.md lean. 3-5 checks is ideal. Each check burns tokens, and heartbeats run frequently. If you need more checks, rotate through them using the state file to track what was last checked.
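Rotation can be as simple as picking the least-recently-checked item from the state file. A sketch, relying on the fact that ISO-8601 timestamps sort lexicographically and that a never-run check (empty string) sorts first:

```python
def next_check(checks: list[str], last_checks: dict[str, str]) -> str:
    """Pick the check that has gone longest without running.

    Checks absent from `last_checks` have never run and win immediately.
    """
    return min(checks, key=lambda c: last_checks.get(c, ""))
```

Each heartbeat then runs one or two checks from the front of this ordering instead of the whole list, keeping per-heartbeat token cost flat no matter how many checks you define.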

Can webhooks trigger subagents instead of the main agent?

Absolutely. For heavy webhook processing (like PR reviews), spawning a subagent is the recommended pattern. The webhook handler in the main session receives the event, spawns a subagent with the task, and returns immediately. The subagent does the heavy lifting and reports back when done.
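The spawn mechanism itself is OpenClaw-specific, but the acknowledge-then-delegate shape is generic. A sketch with a background thread standing in for the subagent (all names are illustrative):

```python
import threading

def handle_webhook(payload: dict, heavy_work, on_done) -> str:
    """Acknowledge the webhook immediately; delegate the real work.

    `heavy_work` stands in for the subagent's task; `on_done` is how the
    result gets reported back to the main session.
    """
    def worker():
        on_done(heavy_work(payload))

    threading.Thread(target=worker, daemon=True).start()
    return "accepted"  # the webhook sender gets this response right away
```

The key property is that `handle_webhook` returns before `heavy_work` finishes, so the webhook sender never times out waiting on a five-minute PR review.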
