I've had Claude Code running autonomously on a Mac Mini M4 for three weeks. It publishes content, manages files, and executes scheduled tasks without me touching it. Here's the exact setup.
The Hardware
Mac Mini M4 (base model, $699). Plugged in, always on. That's the whole hardware story. The M4 chip handles Claude Code's local operations without breaking a sweat. I'm not using the GPU for inference — the compute runs in the cloud — so the local hardware is just a reliable, always-on host.
Running Claude Code on a laptop works until the lid closes. A Mac Mini solves that problem permanently.
The Software Stack
Claude Code is the agent layer. It reads files, writes files, runs bash commands, and calls APIs. The CLAUDE.md file in your project root is where you define what it actually does.
My CLAUDE.md tells it:
- What the project is and what the goal is
- What to do on startup (read specific files, check the memory log)
- What to do on session end (update the memory log, sync task status)
- Voice and style rules for anything it writes or publishes
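To make those four points concrete, here's a hypothetical CLAUDE.md skeleton following that structure. The file names and section wording are illustrative, not the exact file from this setup:

```markdown
# Project
Autonomous publishing agent. Goal: ship scheduled content with zero manual steps.

# On startup
1. Read MEMORY.md and every file it indexes.
2. Check open tasks in memory/tasks.md before doing anything else.

# On session end
1. Update MEMORY.md entries with what changed this session.
2. Append a summary line to memory/session-log.md.

# Voice
- Plain sentences. No filler. First person.
- Never publish without running the style checklist below.
```

The headings themselves don't matter to Claude Code; what matters is that startup and shutdown behavior is written as explicit, ordered steps it can follow every session.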
Claude Code isn't magic. It follows instructions. Write precise instructions and it runs precisely. The quality of your CLAUDE.md is the quality of your agent.
The Automation Layer
Claude Code alone doesn't schedule itself. For that I use Make.com — it triggers webhooks on a cron schedule, and Claude Code picks up the task and runs it.
My current Make scenarios:
- Friday: run the Dev.to article task
- Monday: run the revenue report and post it to Substack
- Saturday: write and publish the weekly build log
Each scenario is a simple HTTP call that passes a task instruction. Claude Code executes it and logs what happened. No polling, no babysitting.
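The receiving side of that HTTP call can be very small. Here's a minimal Python sketch of turning a Make webhook body into a headless Claude Code invocation. The payload shape (`task`, `project_dir`) is an assumption for illustration; `claude -p` is Claude Code's non-interactive prompt flag:

```python
import json
import shlex

def build_claude_command(payload: str) -> str:
    """Turn a Make webhook JSON body into a headless Claude Code shell command."""
    data = json.loads(payload)
    task = data["task"]                      # the instruction string from the scenario
    cwd = data.get("project_dir", ".")       # where CLAUDE.md lives
    # `claude -p` runs Claude Code once with a single prompt, then exits.
    return f"cd {shlex.quote(cwd)} && claude -p {shlex.quote(task)}"

# Example: the Friday scenario's payload.
example = json.dumps({"task": "Run the Dev.to article task",
                      "project_dir": "/Users/me/agent"})
print(build_claude_command(example))
```

In practice this sits behind whatever is listening for the webhook (a tiny HTTP server, a launchd job polling a queue file, etc.); the point is that each scenario reduces to one shell command.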
Make has a free tier that covers the basics. The paid tier starts around $10/month. For automating an AI agent pipeline, it's a cheap backbone.
The Memory System
Claude Code doesn't persist state between sessions by default. I solved this with flat files in an Obsidian vault.
The agent reads a MEMORY.md index at session start. Each entry points to a specific file: user context, project status, open tasks, feedback from previous runs. At session end, it updates those files with what happened.
No database. No vector store. No complexity. Just files the agent can read and write.
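The whole pattern fits in a few lines. This is a sketch, not the actual vault code, and it assumes MEMORY.md lists one relative path per bullet line (e.g. `- memory/project-status.md`):

```python
from pathlib import Path

def load_memory(vault: Path) -> dict[str, str]:
    """Session start: read every file the MEMORY.md index points to."""
    context = {}
    for line in (vault / "MEMORY.md").read_text().splitlines():
        if line.startswith("- "):                 # each bullet is a relative path
            rel = line[2:].strip()
            context[rel] = (vault / rel).read_text()
    return context

def log_session(vault: Path, note: str) -> None:
    """Session end: append what happened to a running log file."""
    with (vault / "memory" / "session-log.md").open("a") as f:
        f.write(f"- {note}\n")
```

Because everything is plain markdown in an Obsidian vault, you can read, diff, and edit the agent's memory by hand whenever it drifts.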
This is the part most people skip. The agent isn't smart on its own — it's smart because it has access to the right context at the right time. If your agent keeps forgetting things or repeating mistakes, the problem is usually the memory architecture, not the model.
The Real Constraint: Tokens
Claude Code has usage limits, and autonomous operation burns through them faster than interactive use. I run on a 60,000-token-per-day budget.
Roughly 10% of that is actual useful output — articles, code, logs. The rest is context loading, tool calls, and overhead. That's not a complaint, it's just the reality. Track your token usage from day one. It shapes every decision about what tasks to automate and how long those tasks should be.
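Tracking can be as simple as a per-day ledger. A minimal sketch, assuming you can pull token counts per task from Claude Code's output or logs (the task names and numbers below are illustrative):

```python
DAILY_BUDGET = 60_000  # tokens/day, from the budget above

class TokenLedger:
    """Records (task, total tokens, useful-output tokens) per run."""
    def __init__(self, budget: int = DAILY_BUDGET):
        self.budget = budget
        self.entries: list[tuple[str, int, int]] = []

    def record(self, task: str, total_tokens: int, useful_tokens: int) -> None:
        self.entries.append((task, total_tokens, useful_tokens))

    def remaining(self) -> int:
        return self.budget - sum(t for _, t, _ in self.entries)

    def useful_ratio(self) -> float:
        total = sum(t for _, t, _ in self.entries)
        return sum(u for _, _, u in self.entries) / total if total else 0.0

ledger = TokenLedger()
ledger.record("dev.to article", 20_000, 2_500)
ledger.record("revenue report", 10_000, 800)
print(ledger.remaining(), round(ledger.useful_ratio(), 2))
```

A ledger like this is what tells you whether a task is worth automating: if the useful ratio on a task keeps dropping, it's loading too much context for what it produces.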
What It Actually Does (This Week)
- Published this article
- Wrote a Substack revenue report
- Ran three Make scenarios
- Updated the revenue and session logs
Cost this week: approximately £12–15 in Claude API usage.
Revenue so far: £1 (Week 1 self-purchase to unlock Gumroad Discover). Building toward breakeven.
I'm documenting the full build at Claw Labs on Substack — what it costs, what it earns, what breaks. Real numbers, no polish.
Want the setup files? The Autonomous Agent Starter Kit includes the CLAUDE.md templates, Make scenario JSON files, and the memory system I use. Free.