Phil Rentier Digital

Posted on • Originally published at rentierdigital.xyz

Claude Routines Aren't a Reasoning Cron. They're a Repo-Centric Subset of One.

A week after Anthropic shipped Routines, three of my cron jobs are running in production. Took thirty minutes.

The PR auto-review that was polling GitHub every half hour? Dead. The weekly doc drift script that parsed commits in crusty bash? Dead. The SEO data refresh I kept putting off for six months (no time, you know how it is)? Live in five.

Three jobs, thirty minutes, no friction. Convinced. That evening, I went for the fourth.

And then, nothing. Not an error, not a quota, not a malformed YAML. The job runs, the binary responds, but the service it queries lives on an IP Routines doesn't see and never will, by construction. The forty-seven other jobs on my server are in the same situation. Not one fits. And it's not a problem you can fix (it's mechanical).

Since Routines dropped, I've mostly seen demos. Everyone shows the three jobs that work. Nobody names the boundary. What follows is how to wire this into production infra, including the parts where Routines stops and you take over.

TLDR. Routines is not a universal reasoning cron. It's a reasoning cron with a perimeter, and most automation lives outside that perimeter for reasons you can't configure your way out of. The question isn't whether Routines is good. It's where it stops, what runs there instead, and how to keep the DIY half alive without re-logging every morning.

The three jobs I migrated work better in Anthropic's cloud than on my server. PR review fires on pull_request.opened instead of polling. The doc drift script has full repo context now, output went from skimmable to useful. The SEO refresh just runs, every morning, no setup tax. That part is settled.

The article is about everything else. The forty-seven jobs I tried to migrate next, why not one of them works, and the DIY pattern that survives in 2026 once you accept which side of the line your job is on.

What "Reasoning Cron" Actually Means

Let me name what we're talking about.

A reasoning cron is a scheduler that calls an LLM to think, not just to execute deterministic code. It reads context, makes decisions, generates output that depends on what it just saw. n8n and Make and Zapier route data (they don't think). A Python cron is rigid; it breaks when the input format shifts. A reasoning cron adapts.
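At its smallest, the category is one crontab line wrapping a model call. Everything here (schedule, paths, prompt) is a placeholder, not a job from my server:

```shell
# A minimal reasoning cron: the output depends on what the model read,
# not on a hardcoded branch. Paths and prompt are illustrative.
0 7 * * * cd /srv/jobs && claude -p "Triage this log: flag anomalies, ignore known noise" < overnight.log > triage.md
```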

That's the category. Routines belongs to it. So does my DIY pattern. So does any wrapper around claude -p or gpt or gemini. The category is real, the demand is real, and Anthropic shipping a managed product for it is the right move.

What's misleading is calling Routines the reasoning cron. It's a reasoning cron with a fixed perimeter.

The perimeter has three walls. One, Routines runs in Anthropic's cloud, not on your machine. It clones a Git repo at the start of each run and that's the entire filesystem it sees. Two, it talks to the outside world through managed connectors (Slack, Linear, Jira, GitHub, GDrive) and an HTTP allowlist (default Trusted, which blocks most external APIs). Three, it has a clean slate every run. No state, no cookies, no persistent session.

Nimbalyst's practical guide arrived at the same line independently: reach for Routines when the work is repo-centric, runs on a schedule, and doesn't need your local environment. Same perimeter, different words.

Once you have those three walls in your head, what follows isn't opinion. It's consequence.

Where Routines Wins (Three Jobs I'm Not Touching Again)

Concession first, because honesty is faster than rhetoric.

These three jobs are better in Routines than they were in my cron. I'm not migrating them back.

PR auto-review. Before: a Python cron that polled GitHub every thirty minutes, ran a claude -p review on any new PR, posted a comment via the API. The polling cron lagged behind every push. After: a Routine triggered on pull_request.opened, runs the moment a PR opens, posts via the GitHub connector. Same prompt, same output quality. Less YAML, no polling waste.
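For flavor, here's the shape of that Before, reduced to the one piece worth keeping, the dedup step. The `gh`/`claude` pipeline in the comment is illustrative: repo name, state directory, and prompt are all placeholders, not my actual script.

```shell
#!/usr/bin/env bash
# mark_reviewed STATE_DIR PR_NUMBER: succeed only the first time a PR is seen,
# so the half-hourly poll never reviews the same PR twice.
mark_reviewed() {
  local dir=$1 pr=$2
  [ -e "$dir/reviewed-$pr" ] && return 1
  mkdir -p "$dir" && touch "$dir/reviewed-$pr"
}

# The polling loop looked roughly like:
#   gh pr list --repo owner/repo --state open --json number --jq '.[].number' |
#   while read -r pr; do
#     mark_reviewed /var/tmp/pr-review "$pr" || continue
#     gh pr diff "$pr" --repo owner/repo | claude -p "Review this diff" |
#       gh pr comment "$pr" --repo owner/repo --body-file -
#   done
```

The event trigger makes all of this bookkeeping disappear, which is the whole point of the After.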

Doc drift weekly. Before: a bash script that listed commits since last Monday, diffed them against the docs folder, fed both into claude -p, emailed me a summary. The summary was always slightly off because the script didn't carry repo-wide context. The model only saw what bash sed-piped into it. After: a Routine that gets the full repo cloned, reads CLAUDE.md, the docs folder, and the commit log natively, and writes a "what changed, what's stale" doc that actually reflects the codebase. The output went from skimmable to useful.
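The gathering step of that old script looked roughly like this. The repo path and the downstream `claude`/mail calls are placeholders, and this sketch is exactly the limitation described above: the model only ever saw what these two commands printed.

```shell
#!/usr/bin/env bash
# collect_drift_context REPO_DIR: commits since last Monday plus the
# docs-folder diffstat, concatenated for the model.
collect_drift_context() {
  local repo=$1
  git -C "$repo" log --since='last monday' --oneline
  # Reflog-based diff; skipped quietly if the reflog doesn't reach back that far
  git -C "$repo" diff --stat 'HEAD@{last monday}' -- docs/ 2>/dev/null || true
}

# Downstream (the part Routines now does with full repo context):
#   collect_drift_context /srv/repos/mysite | claude -p "Which docs look stale?"
```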

SEO data refresh. No Before. I'd been wanting to set up a weekly pull from my analytics provider, run a model over the deltas, and post a summary to Slack. Every time I sat down to wire it up, something else came in and the YAML never got written. After: a Routine, fifteen minutes of setup, runs every morning. The job that I never built in six months exists now. That's the strongest case for a managed product (the work you'd never get around to doing yourself).

These three share four traits. Repo-centric. Output goes to Slack or GitHub. Frequency is at least one hour. Nothing in the chain touches my local network.

That's exactly the Routines perimeter. My other forty-seven jobs miss at least one of those four. Not one out of forty-seven hits all four.

Six Reasons Routines Can't Replace My Cron Jobs

Six mechanical reasons. Not preferences, not edge cases. Each one closes the door before you finish typing the YAML.

1. Local MCP servers. Routines uses Anthropic's managed connectors. That's it. The MCP server I wrote myself, the one that lives on my machine and exposes my own data to my Claude Code sessions, is not available. Routines can't see it, can't talk to it, can't authenticate against it. Any workflow where the model needs to query something I built locally is out.

2. Services on private IPs. Tailscale mesh. NAS at home. Postgres on the server. Internal monitoring dashboards. Anything sitting on a 100.x address or a 192.168 LAN. Routines runs in Anthropic's cloud. It doesn't know my mesh exists. The fourth job from the opening lives here, and so do nineteen others on my list.

3. Sub-hourly frequency. Routines' minimum interval is one hour. My status poller runs every fifteen minutes because that's what the alert window requires. Any job that needs to fire faster than once an hour, mechanically, can't move.

4. Daily quota. Pro is 5 runs per day. Max is 15. Team is 25. I have forty-seven jobs that need to run nightly, plus dailies, plus the sub-hourly ones. Even if every other constraint vanished, I'd hit the quota before midnight on a Max plan. The quota isn't a soft limit you can negotiate (it's the contract).

5. Persistent browser session. Routines spins up a clean environment every run. No cookies, no localStorage, no session carryover. If your job needs to log into a site once and reuse the session (say, Playwright automation against a service that requires auth), you can't. Nate Herk documented this on Skool when he tried to run a community automation in Routines. The login dies between runs. The job is structurally impossible.

6. Local persistent state. A job that writes to a local SQLite between runs, or maintains a file-based queue, or appends to a long-lived log. Routines starts fresh every time. Whatever your job wrote last run is gone. You can use the connector outputs as state (Linear tickets, GitHub issues), but if your state lives on disk where the cron lives, that's not portable.

A community comment under Anthropic's launch post on Threads put it bluntly: "once again github centric features." That's the read from the outside, and it's right, but it's also incomplete. Routines isn't only GitHub; the connectors do more than that. The honest framing is: Routines is repo-centric and managed-connector-centric, and if your work happens outside that perimeter, the tool has nothing to offer you.

Six cases. Not six opinions.

The DIY Pattern That Survives in 2026

If your job lives outside the Routines perimeter, you build it yourself. What follows is the pattern that survives, the parts nobody documents in the dozen "Routines just dropped" tutorials saturating the feed last week.

Use shell redirect, not spawn-with-stdin.

```shell
# Reliable for large input: shell redirect from a file
claude -p "Summarize the input as JSON" < input.txt

# Shell pipe variant; fine at the shell layer, unlike programmatic spawn-with-stdin
echo "$LARGE_INPUT" | claude -p "Summarize"
```

The pipe deadlock is the silent killer. No error, no timeout, just a process hanging on a buffered stdin that never closes. I lost a weekend on this before I traced it. Shell redirect from a file is the only reliable way to feed large input to the binary in non-interactive mode.

Unset ANTHROPIC_API_KEY in your cron environment.

```shell
unset ANTHROPIC_API_KEY
claude -p "..."
```

If ANTHROPIC_API_KEY is set when you call claude -p, the binary uses it and bills your API account. Silently. The auth precedence is documented but easy to miss. You think you're running on your subscription, and actually every cron run is going through pay-per-token. Unset it explicitly. Your wallet will thank you.
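A defensive variant I'd sketch at the top of every Claude cron script. The error message is mine, not the binary's; the binary itself stays silent when it picks up the key.

```shell
#!/usr/bin/env bash
# Refuse to run if the API key would silently switch billing to pay-per-token.
billing_guard() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "ANTHROPIC_API_KEY is set; this run would bill the API account" >&2
    return 1
  fi
}

# Usage at the top of a cron script:
#   billing_guard || exit 1
```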

Constrain JSON output via prompt, not flag.

Don't trust --output-format json to do the heavy lifting. Tell the model what schema you want, in the prompt, then validate downstream:

```shell
claude -p 'Respond ONLY with valid JSON matching: {"status": "ok|fail", "items": [...]}. No prose, no fences.' < input.txt | jq -e .status
```

If jq -e fails, retry once. If retry fails, alert. The prompt-level contract holds better than the flag in my experience, and you get clean failure modes when the model drifts.
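One way to sketch that retry-then-alert shape, with the model call passed in as a command so the contract check is reusable. The alert line is a placeholder for whatever pager or webhook you use:

```shell
#!/usr/bin/env bash
# run_with_retry CMD...: run the model command, enforce the JSON contract
# with jq, retry once on a contract miss, fail loudly on the second.
run_with_retry() {
  local attempt out
  for attempt in 1 2; do
    if out=$("$@") && printf '%s' "$out" | jq -e '.status' > /dev/null 2>&1; then
      printf '%s\n' "$out"
      return 0
    fi
  done
  echo "model output failed the JSON contract twice; alerting" >&2
  return 1
}

# Usage (wrap in sh -c so a retry reopens the input file):
#   run_with_retry sh -c 'claude -p "$(cat prompt.txt)" < input.txt'
```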

This is also why the DIY pattern doesn't go away when MCP gets richer. The CLI binary stays predictable, deterministic at the shell layer, and it composes with everything else you have. I made the longer argument for CLIs over MCP in agent stacks a few weeks back, and Routines doesn't change the conclusion. CLIs compose. Managed schedulers don't, by design.

Generate a long-lived OAuth token with claude setup-token.

This is the part missing from every cron-with-Claude tutorial I've read.

The claude-code GitHub repo is full of the same complaint. OAuth tokens expire in 8 to 24 hours in --print mode, refresh fails silently, automation dies. The DEV community post "Building Claudio: My Always-On Claude Code Box" walks through exactly this pain. V1 lasted two weeks before tokens expired. V2 abandoned cron entirely and pivoted to a desktop tool.

There's a built-in command that solves this. Run it once, interactively, on the machine where you originally logged in:

```shell
claude setup-token
```

It generates a long-lived OAuth token (one year, inference scope only) designed for CI and unattended scripts. You put it in CLAUDE_CODE_OAUTH_TOKEN. The binary respects it, no refresh dance, no daily re-login.

I don't paste the token into my cron environment. I store it in a secrets manager (I use Infisical, Vault and Doppler and AWS Secrets Manager all do the same job), and the cron pulls it at run time via a machine identity scoped to that one server:

```shell
#!/usr/bin/env bash
set -euo pipefail

TOKEN=$(infisical secrets get CLAUDE_CODE_OAUTH_TOKEN --plain)
export CLAUDE_CODE_OAUTH_TOKEN="$TOKEN"
unset ANTHROPIC_API_KEY

claude -p "$(cat prompt.txt)" < input.json | jq -e . > output.json

unset CLAUDE_CODE_OAUTH_TOKEN
```

The token sits in memory for the duration of the run, then disappears. If the server is compromised, I revoke the machine identity and rotate that one. The Claude OAuth token itself doesn't have to move. That's what keeps a DIY cron stack running without daily re-logins or silent breakage.

A Quick Word on Terms of Service

Anthropic clarified the policy in February 2026: using the Claude Code CLI on your own machine, as a daemon or a cron job, is all good, as long as you're calling the official binary.

What got cut in April 2026 was different. Third-party tools spoofing the Claude Code client and using subscription auth to power external products got their access revoked. I lived through that one and rebuilt my setup the week after. The DIY pattern in this article doesn't sit on the wrong side of that line. It's the official binary, on my machine, doing what the binary is designed to do.

Routines is a research preview. The current ToS reading might shift, the quotas might shift, the connector list might shift. Check the docs every couple of months if you're building anything that depends on it. That applies to my pattern too. Local cron with the official binary has been allowed for two years and the position got reaffirmed three months ago, but "currently allowed" is not "permanent."

Three Questions That Decide Where Your Job Goes

Three binary questions. Honest answers. The decision falls out.

1. Does the job live in a Git repo you push to GitHub?
Not "could it." Does it, today, naturally. If no: DIY.

2. Does it need anything outside Anthropic's connectors?
A local MCP, a private IP, a personal database, a persistent browser session, your own filesystem state between runs. If yes: DIY.

3. Does it run more than once an hour, or more than your daily quota?
Sub-hourly polls, dozens of nightly jobs, anything past the plan ceiling. If yes: DIY.

Yes, no, no: Routines, no hesitation, no guilt. It will run that job better than your DIY cron, and you'll save the maintenance.

Any other combination: stay local. Use the DIY pattern. The DIY half doesn't go away.
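The three questions collapse to a tiny predicate. A toy sketch, answers as "yes"/"no" strings:

```shell
#!/usr/bin/env bash
# route_job IN_GITHUB_REPO NEEDS_OUTSIDE_PERIMETER OVER_FREQUENCY_OR_QUOTA
# Prints "routines" only when all three answers point inside the perimeter.
route_job() {
  local in_github_repo=$1 needs_outside_perimeter=$2 over_frequency_or_quota=$3
  if [ "$in_github_repo" = yes ] &&
     [ "$needs_outside_perimeter" = no ] &&
     [ "$over_frequency_or_quota" = no ]; then
    echo routines
  else
    echo diy
  fi
}
```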

Actually, no, let me put it differently. Routines isn't Make, Zapier, or n8n (they're not the same tool). Routines is a scheduler with a perimeter: a Git repo, Anthropic's connectors, a one-hour minimum between runs. What lives outside that perimeter isn't a worse Routines job, it's a different tool's job 😅

The devs who ship know the difference. You don't put a fifteen-minute poll in a scheduler with a one-hour minimum. You don't put a job that talks to your private mesh in a cloud that doesn't see your private mesh. You don't put a stateful job in a clean-slate environment. That's not taste. That's arithmetic.

Match the tool to the job. Routines is excellent inside its perimeter. Useful, but not the ultimate answer.
