Three days ago, Anthropic shipped Routines for Claude Code. If you missed it: a routine packages a prompt, repos, and connectors into a configuration that runs on a schedule, responds to API calls, or triggers on GitHub events. Runs execute on Anthropic's cloud, so your laptop can stay closed.
I've been building similar automation workflows for months using different tooling (Nimbalyst automations, custom scripts). Routines makes this pattern accessible to every Claude Code user. Here's what I've learned about which automations actually work and where the limits are.
The Three Triggers
Routines support scheduled (hourly/daily/weekly), API (HTTP POST with bearer token), and GitHub event triggers. Each run creates a full Claude Code cloud session with shell access, skills, and connectors.
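For the API trigger, the mechanics are a plain authenticated POST. A minimal sketch of building one with the standard library, where the endpoint URL, routine ID, token, and payload shape are all placeholder assumptions (check your routine's settings for the real values), might look like:

```python
# Hypothetical sketch of kicking off a routine via its HTTP trigger.
# The URL, routine ID, token, and payload fields below are illustrative
# placeholders, not documented API values.
import json
import urllib.request

def build_trigger_request(routine_id: str, token: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a routine's API trigger."""
    url = f"https://api.example.com/v1/routines/{routine_id}/runs"  # placeholder URL
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",       # bearer token auth, per the trigger description
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request("doc-drift-weekly", "example-token", {"reason": "manual run"})
# urllib.request.urlopen(req)  # uncomment once you have a real endpoint and token
```

The useful part is the shape: any CI job, cron box, or internal tool that can send a bearer-authenticated POST can start a routine run.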
Workflows That Deliver Real Value
Issue triage (daily/nightly): Scan new GitHub issues, cross-reference with your codebase to identify affected modules, apply labels, estimate priority, post a summary to Slack. The AI doesn't just categorize text; it reads the code and makes informed severity assessments.
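A triage routine's prompt, in the same spirit as the doc-drift example later in this post, might read (wording is my own, not a shipped template):

    Review GitHub issues opened in the past 24 hours. For each, locate
    the affected modules in the codebase, apply component and severity
    labels, and flag likely duplicates. Post a one-paragraph summary of
    the batch to the #triage Slack channel.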
Documentation drift (weekly): Scan merged PRs, identify docs referencing changed APIs, open update PRs. This catches staleness before it becomes a support burden. Nobody has time to do this manually, which is exactly why it should be automated.
Deploy verification (event-triggered): After deploys, run smoke checks, scan error logs, post a go/no-go assessment. Not replacing your test suite, but adding an AI review layer that reads logs with more context than threshold checks.
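For contrast, here is the kind of simple threshold check that the AI review layer sits on top of; the log format and error patterns are assumptions for illustration, and a routine would go further by actually reading the matched lines in context:

```python
# Minimal sketch of a threshold-style post-deploy log check.
# Log line format and error markers are illustrative assumptions.
import re

ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Traceback)\b")

def assess_deploy(log_lines, max_errors=0):
    """Return a go/no-go verdict plus the offending lines from post-deploy logs."""
    errors = [line for line in log_lines if ERROR_PATTERN.search(line)]
    verdict = "go" if len(errors) <= max_errors else "no-go"
    return verdict, errors

verdict, errors = assess_deploy([
    "INFO  service started",
    "ERROR db connection refused",
])
# A routine would post the verdict (and its reading of the errors) to Slack.
```

A threshold check can only count matches; the value of the routine is that the model can tell a known-benign error from a regression before flipping the verdict.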
Constraints to Know
- Usage caps: Pro gets 5 runs/day, Max gets 15, Team/Enterprise gets 25. Plan your cadences accordingly.
- No mid-run interaction: Routines run fully autonomously. Best for tasks producing reports, PRs, or messages. Not for work requiring human judgment during execution.
- Cloud-only: Routines clone your repo to Anthropic's infrastructure; they can't reach your local tooling or services on your private network.
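The caps are worth sanity-checking before you commit to cadences. A back-of-envelope budget, where the per-week run counts are my own scheduling assumptions rather than product behavior (and note the caps are per-day, so this weekly view hides daily spikes):

```python
# Rough weekly budget check against the per-plan daily run caps from this post.
DAILY_CAP = {"pro": 5, "max": 15, "team": 25}

planned = {
    "nightly issue triage": 7,    # one run per night
    "weekly doc drift": 1,
    "deploy verification": 10,    # assumed deploys per week
}

weekly_budget = DAILY_CAP["pro"] * 7          # 35 runs/week on Pro
weekly_usage = sum(planned.values())          # 18 runs/week planned
print(f"{weekly_usage}/{weekly_budget} weekly runs used")
```

On these assumptions a Pro plan fits all three workflows comfortably, but a busy deploy day could still bump against the 5-run daily cap.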
Getting Started
Visit claude.ai/code/routines or type /schedule in the CLI.
Start with weekly documentation drift detection:
Scan PRs merged in the past 7 days. For each, identify docs
referencing modified functions or APIs. If outdated, open a PR
with suggested updates.
Review the first few outputs, calibrate the prompt, then let it run.
The Layered Approach
The most productive setup uses multiple layers:
- Event-driven routines for immediate responses (PR triggers)
- Scheduled routines for periodic maintenance (nightly triage, weekly doc checks)
- Local automations for environment-specific work (custom tooling, workspace context)
- Interactive sessions for complex, judgment-heavy work
Trying to push everything through one approach leaves gaps. Routines handle the cloud-native, repetitive layer well. For local environment work, you need complementary tooling.
The Bigger Picture
In the first two weeks of April, Cursor shipped Cursor 3 (agent-first workspace), Windsurf launched 2.0 (Agent Command Center + Devin), and Anthropic redesigned Claude Code with Routines. All three converge on the same idea: the developer's role is shifting toward orchestrating agents rather than writing every line.
Routines are the automation edge of this shift. Start with one, run it for two weeks, calibrate, then add more. Build your automation layer incrementally.
I'm Karl, building Nimbalyst, a visual workspace on top of Claude Code. I write about AI-native development workflows and what actually works in practice.
Original article on nimbalyst.com/blog.