Claude Code Task Queues: How to Run Background Jobs Without Losing Context
Every Claude Code agent eventually hits the same wall.
You have a long-running task — regenerate all product listings, send weekly reports, check for new data every 10 minutes. But Claude Code is interactive. It runs when you run it. The session ends. The context dies. You get a sophisticated system that can only work when you're watching it.
This post is about how to escape that constraint — and build a task queue that lets Claude work autonomously, on a schedule, without a babysitter.
The Problem: Sessions Die, Context Doesn't Persist
Claude Code's memory model is session-based. Each time you start it, it builds up context from scratch. Run a prompt, get a result, session ends, context gone. This is fine for interactive work. For autonomous background jobs, it's a fundamental architecture problem.
There are three patterns people use to try to solve this:
Pattern 1: Long sessions — Keep one session running forever. Works until the process dies, the machine reboots, or context hits its limit (which it will). Not reliable for autonomous systems.
Pattern 2: Context reinjection — Start each session with a massive context block that reconstructs state. Works until your context block becomes so big it's eating half your useful window.
Pattern 3: File-based task queue — Background tasks are written as files. Each run reads its task file, executes, writes output to a known location, and exits cleanly. State is the filesystem, not memory. This is the right answer.
The Architecture: Inbox → Execute → Output
Here's the pattern, stripped to its essentials:
```
~/project/
├── inbox/           ← task files land here
├── outputs/         ← results written here
├── logs/
│   ├── heartbeat.log
│   └── processed/   ← completed task files moved here
└── run_task.sh      ← headless runner
```
The inbox is the queue. Each file in it is one task. A system cron fires every 10 minutes, picks up any files present, executes each one, writes results to outputs, moves the task file to logs/processed. The session exits. Nothing persists across sessions except files.
This is radically simple — and that's exactly why it's robust.
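Bootstrapping the layout is a few lines of shell. A minimal sketch, assuming the `~/project` root from the tree above (override `PROJECT` for your own path):

```shell
#!/usr/bin/env bash
# Create the queue layout: inbox, outputs, logs/processed, heartbeat log.
# PROJECT defaults to the article's example path.
PROJECT="${PROJECT:-$HOME/project}"

mkdir -p "$PROJECT/inbox" "$PROJECT/outputs" "$PROJECT/logs/processed"
touch "$PROJECT/logs/heartbeat.log"

echo "queue layout ready at $PROJECT"
```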
The Headless Runner
Claude Code supports a headless mode via `-p` (print/prompt mode). This is the key to making the whole thing work:
```shell
claude -p "task prompt here" \
  --allowedTools "Bash(*),Read(*),Write(*),Edit(*)" \
  --output-format text
```
In `-p` mode:

- `CLAUDE.md` is auto-discovered from the working directory — no extra flags needed
- Trust dialogs are skipped after initial setup
- Claude reads the prompt, executes, writes output to stdout, exits
- Exit code 0 = success, anything else = failure
Here's a minimal wrapper script (run_task.sh):
```shell
#!/usr/bin/env bash
set -uo pipefail

# Ensure Claude binary is findable from cron's minimal PATH
export PATH="$HOME/.local/bin:/usr/local/bin:/usr/bin:/bin:$PATH"

INTUITEK="$HOME/project"
TASK_PROMPT="${1:-}"

if [[ -z "$TASK_PROMPT" ]]; then
  echo "Usage: bash run_task.sh \"task prompt\"" >&2
  exit 1
fi

TIMESTAMP="$(date -Iseconds)"
TASK_ID="task_$(date +%Y%m%d_%H%M%S)_$$"

# Source credentials so they're available inside the session
source "$INTUITEK/.env"

# Execute headless
cd "$INTUITEK"
claude -p "$TASK_PROMPT" \
  --allowedTools "Bash(*),Read(*),Write(*),Edit(*)" \
  --output-format text \
  | tee "$INTUITEK/outputs/${TASK_ID}.md"

EXIT_CODE=${PIPESTATUS[0]}

if [[ $EXIT_CODE -ne 0 ]]; then
  echo "$TIMESTAMP | ERROR | $TASK_ID | exit=$EXIT_CODE" >> "$INTUITEK/logs/errors.log"
  exit $EXIT_CODE
fi

echo "$TIMESTAMP | DONE | $TASK_ID | exit=0" >> "$INTUITEK/logs/exec.log"
```
Two things to notice:

1. **PATH must be explicitly set.** Cron's minimal environment doesn't include `~/.local/bin`, which is where Claude Code typically installs. If you skip this, every scheduled run silently fails.
2. **Credentials are sourced from `.env`.** The headless session inherits environment variables from the calling shell. Source your `.env` in the wrapper, not in CLAUDE.md.
The Inbox Processor
The task file format is what makes this composable. Each file in the inbox contains a directive Claude can understand — a TASK: line, followed by context:
```
TASK: Generate weekly ClawMart performance report
PRIORITY: normal
EXPECTED_OUTPUT: outputs/clawmart_report_20260410.md
CONTEXT: Check listing views, engagement metrics, identify top performers
```
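Producing task files can be scripted too. One subtlety worth encoding: write to a temp file in the inbox directory and `mv` it into place, so the processor never reads a half-written task. A sketch, where the `enqueue` helper and `.txt` suffix are my additions:

```shell
#!/usr/bin/env bash
# enqueue: atomically drop a task file into the inbox.
# Writing to a hidden temp file in the same directory and renaming it
# means a rename on one filesystem, so the cron processor never sees
# a partially written task.
enqueue() {
  local inbox="$1" task="$2" priority="${3:-normal}"
  local name="task_$(date +%Y%m%d_%H%M%S)_$$.txt"
  local tmp
  tmp="$(mktemp "$inbox/.tmp.XXXXXX")"
  printf 'TASK: %s\nPRIORITY: %s\n' "$task" "$priority" > "$tmp"
  mv "$tmp" "$inbox/$name"
  echo "$inbox/$name"    # print the queued path for logging
}
```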
The inbox processor runs on a schedule:
```
# crontab entry — fires every 10 minutes, 7am to 11pm
*/10 7-23 * * * bash ~/project/run_task.sh "Check ~/project/inbox/ for task files. For each: read it, execute the TASK directive, write result to ~/project/outputs/, move file to ~/project/logs/processed/, log to ~/project/logs/heartbeat.log. If empty, write timestamp and exit."
```
Note what the cron prompt contains: the exact loop logic. Claude doesn't "know" to check the inbox — you tell it in the prompt. The agent identity in CLAUDE.md defines how it behaves within that prompt. The prompt defines what it's doing this run.
The Output Pattern
Results go to outputs/ with a predictable naming convention. You don't watch the agent work — you check outputs later:
```
outputs/
├── clawmart_report_20260407.md
├── clawmart_report_20260414.md
├── devto_ban_check_20260408.md
└── task_20260410_071001_1367482.md
```
Anything important goes there. The agent writes a full document, not just a status code. "Report complete" is useless. A Markdown file with actual data is useful.
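Since you read outputs after the fact, it helps to verify the newest file is actually recent: a report that silently stopped appearing looks identical to one you haven't read yet. A small staleness check, where the `fresh` helper and the 8-day window are my assumptions rather than part of the system above:

```shell
#!/usr/bin/env bash
# fresh: succeed if the newest file in a directory was modified within
# the last N days; fail if the directory is empty or everything is stale.
fresh() {
  local dir="$1" max_age_days="$2" newest
  newest="$(ls -t "$dir" 2>/dev/null | head -n 1)"
  [[ -n "$newest" ]] || return 1
  # find prints the path only if its mtime is under the threshold
  find "$dir/$newest" -mtime "-$max_age_days" | grep -q .
}

# Example: warn if no weekly report has landed in the past 8 days
fresh "$HOME/project/outputs" 8 || echo "outputs look stale, check the queue"
```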
The Three Cron Types
For autonomous Claude Code systems, I run three categories of scheduled work:
1. Heartbeat / queue processor
Fires frequently (every 10 minutes during active hours). Picks up anything in the inbox. If empty, writes a timestamp and exits immediately. This is the main event loop.
2. Scheduled reports
Fire at specific times. 0 9 * * 1 (Monday 9am) runs the weekly ClawMart report. These are standalone prompts — no inbox involved. Just run_task.sh "generate weekly report...".
3. Monitoring checks
Fire on a schedule to check external system state. Dev.to ban status, Railway deployment health, API endpoint uptime. Results go to a dedicated log file. Alerts fire via notify.sh if something changes.
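Put together, a crontab covering all three categories might look like this. The heartbeat and Monday-9am schedules come from above; the hourly monitoring cadence and the shortened prompts are illustrative:

```
# 1. Heartbeat / queue processor — every 10 minutes, 7am to 11pm
*/10 7-23 * * * bash ~/project/run_task.sh "Process ~/project/inbox/ per the standard loop"

# 2. Scheduled report — Monday 9am
0 9 * * 1 bash ~/project/run_task.sh "Generate the weekly ClawMart performance report"

# 3. Monitoring check — hourly (cadence is an assumption)
0 * * * * bash ~/project/run_task.sh "Check Dev.to ban status and API uptime; log results"
```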
What Goes Wrong (And How to Prevent It)
Race conditions between cron and interactive sessions. If both fire simultaneously and read/write the same inbox file, you get duplicate execution. Fix: write a lock file before touching shared resources.
```shell
# Note: use $HOME here; a tilde inside double quotes does not expand
LOCK="$HOME/project/coordination/in_progress/reddit-post-$(date +%s).lock"

# Check for existing locks first
if ls "$HOME/project/coordination/in_progress/"*.lock 2>/dev/null | grep -q "reddit"; then
  echo "Reddit lock exists — skipping to avoid duplicate"
  exit 0
fi

touch "$LOCK"
# ... do the work ...
rm "$LOCK"
```
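The check-then-`touch` pattern still has a small window where two processes can both pass the `ls` test before either creates its lock. On Linux, `flock(1)` closes that window by making acquisition atomic. A sketch, reusing the example lock path (macOS doesn't ship `flock` by default):

```shell
#!/usr/bin/env bash
# Acquire an exclusive, non-blocking lock on fd 9. If another process
# already holds it, flock -n fails immediately and this run skips.
LOCKDIR="${LOCKDIR:-$HOME/project/coordination}"
mkdir -p "$LOCKDIR"

exec 9>"$LOCKDIR/reddit-post.lock"
if ! flock -n 9; then
  echo "Reddit lock held, skipping to avoid duplicate"
  exit 0
fi

# ... do the work ...
# The lock releases automatically when the process exits and fd 9 closes,
# so a crashed run can never leave a stale lock behind.
```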
Silent cron failures. Without explicit error logging, cron failures disappear. Two things prevent this: log every START and exit code in exec.log, and have notify.sh fire on non-zero exits. You don't see the agent work — you need failures to surface somewhere.
Context accumulation. If you give the agent a complex state-reconstruction prompt on every run, you're burning context window on overhead before it does any real work. Keep task prompts short. State is in files. The agent reads files when it needs context, not from the prompt.
PATH in cron. Say it again for the people in the back: cron's PATH is /usr/bin:/bin. If your tool lives in ~/.local/bin, ~/bin, or anywhere else, it won't be found unless you export PATH explicitly in your wrapper script.
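A cheap way to make that failure loud instead of silent is to guard the top of the wrapper with an explicit check (the `require_bin` helper is my addition):

```shell
#!/usr/bin/env bash
# require_bin: fail fast, with a logged error, when a binary the cron
# job needs is not on PATH, instead of letting the run silently no-op.
require_bin() {
  local bin="$1" log="$2"
  if ! command -v "$bin" >/dev/null 2>&1; then
    echo "$(date -Iseconds) | ERROR | $bin not on PATH ($PATH)" >> "$log"
    return 127
  fi
}
```

Call `require_bin claude "$HOME/project/logs/errors.log"` before the `claude -p` invocation; a missing binary then shows up in the same error log the rest of the system uses.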
What This Enables
When the queue pattern is working:
- Weekly reports appear in outputs/ every Monday without a human touching anything
- ClawMart listing updates fire on a schedule based on what's selling
- Dev.to articles publish automatically when ban conditions clear
- New task files dropped into inbox are picked up within 10 minutes
- The agent handles everything — you check results, not the agent
The inbox becomes a universal interface. Any system (another agent, a Telegram bot, a webhook receiver) can drop a task file and trust that it will be picked up and executed. The receiving agent doesn't need to be online when the task arrives. It just needs to run next.
That's the actual shape of autonomous operation — not a persistent daemon running forever, but a system where work gets done reliably regardless of who's watching.
Building autonomous Claude Code agents is a specific skillset. If you're past the basics and need production patterns, I've been packaging the techniques that work on ClawMart. This article is part of a series on running Claude Code in production.
W. Kyle Million (~K¹) | IntuiTek¹