I have been running an AI agent autonomously for 16 days. It writes articles, optimizes product descriptions, manages GitHub repos, pitches paid publications, and reports back daily via Telegram.
Here is what I learned about building AI agents that actually DO things — not just chat.
The Problem With Most AI Agents
Most AI agent tutorials show you how to make a chatbot that calls a few tools. That is not an agent. That is a chat wrapper.
A real agent:
- Decides what to do next without being asked
- Chains 50+ actions in a single session
- Recovers from failures automatically
- Manages its own context (saves state, clears memory, continues)
- Produces measurable output (articles published, code deployed, emails sent)
My Setup
Claude Code (CLI) + MCP Servers + Bash + APIs
That is it. No LangChain. No CrewAI. No framework. Just:
- Claude Code as the reasoning engine
- MCP servers for Playwright (browser), Telegram (reporting), GitHub
- Direct API calls for Dev.to, Apify, GitHub REST API
- Bash for everything else
The agent runs in a terminal. It reads its CLAUDE.md file (instructions), checks its context-snapshot.md (memory), and starts working.
The Architecture That Works
CLAUDE.md (permanent instructions)
├── Mission & rules
├── Platform credentials
├── Task queue (infinite loop)
└── Anti-patterns to avoid
context-snapshot.md (volatile memory)
├── What was done last session
├── What to do next
└── Current metrics
journal.md (persistent log)
├── Actions taken
├── Results
└── Decisions made
Key insight: The agent does not need a database. Three markdown files give it everything: instructions (CLAUDE.md), short-term memory (context-snapshot.md), and long-term memory (journal.md).
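The three-file pattern above can be sketched in a few lines. This is a hedged illustration, not the author's actual code: the file names come from the article, but the function names and the "missing file = empty" behavior are my assumptions.

```python
from pathlib import Path

def load_agent_state(workdir: str = ".") -> dict:
    """Read the three state files described above.

    CLAUDE.md           -> permanent instructions
    context-snapshot.md -> short-term memory (last session, next steps)
    journal.md          -> long-term append-only log
    """
    base = Path(workdir)
    state = {}
    for name in ("CLAUDE.md", "context-snapshot.md", "journal.md"):
        path = base / name
        # Treat a missing file as empty so a fresh agent can still boot.
        state[name] = path.read_text() if path.exists() else ""
    return state

def append_journal(entry: str, workdir: str = ".") -> None:
    # journal.md is append-only: actions taken, results, decisions made.
    with open(Path(workdir) / "journal.md", "a") as f:
        f.write(entry.rstrip() + "\n")
```

The point of the sketch: "state management" here is nothing more than reading three text files at startup and appending to one of them as you go.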
What the Agent Actually Does (Daily)
- Reads its state files
- Checks metrics (Dev.to views, GitHub stars, Apify runs)
- Publishes 2-3 articles via Dev.to API
- Updates product descriptions on Apify Store
- Cross-links content (articles reference each other)
- Optimizes GitHub READMEs
- Sends a daily Telegram report
- Saves state and continues
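The daily routine above is essentially one fixed pass over a set of tools. A minimal sketch, where every method name is a hypothetical stand-in for the real MCP/API call:

```python
def run_daily_cycle(agent) -> None:
    """One pass of the daily routine. `agent` is any object exposing
    these (hypothetical) methods; the order mirrors the list above."""
    agent.read_state_files()
    metrics = agent.check_metrics()       # Dev.to views, GitHub stars, Apify runs
    agent.publish_articles(count=2)       # via the Dev.to API
    agent.update_product_descriptions()   # Apify Store
    agent.cross_link_content()            # articles reference each other
    agent.optimize_readmes()              # GitHub READMEs
    agent.send_telegram_report(metrics)
    agent.save_state()                    # so the next session can continue
```

In practice each of these steps is itself a loop with failure handling, but the top level really is this simple: read state, work, report, save.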
In 16 days, it has:
- Published 650+ technical articles
- Updated 78 product descriptions
- Cross-linked 231 articles
- Managed 300+ GitHub repositories
- Sent 16 daily reports
The 3 Things That Make It Work
1. Infinite Task Queue
The agent never runs out of things to do. The CLAUDE.md file contains a prioritized loop:
SELL → OPTIMIZE → CREATE → repeat
SELL actions (pitching paid publications) run first. OPTIMIZE (adding CTAs, improving descriptions) runs second. CREATE (new articles) runs last and is capped at 2 per week.
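The priority rule is mechanical enough to write down. A sketch of the selection logic, assuming simple in-memory queues (the queue representation and function name are mine; the SELL → OPTIMIZE → CREATE order and the 2-per-week CREATE cap are from the article):

```python
def pick_next_task(sell_queue: list, optimize_queue: list,
                   create_queue: list, creates_this_week: int):
    """Return the next (category, task) pair, or None if idle.

    SELL always wins, then OPTIMIZE; CREATE runs last and is
    capped at 2 new articles per week.
    """
    if sell_queue:
        return ("SELL", sell_queue.pop(0))
    if optimize_queue:
        return ("OPTIMIZE", optimize_queue.pop(0))
    if create_queue and creates_this_week < 2:
        return ("CREATE", create_queue.pop(0))
    return None  # nothing eligible; wait for the next cycle
```

Because SELL and OPTIMIZE refill continuously (new metrics, new pitch targets), the agent effectively never idles, which is what makes the queue "infinite."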
2. Context Management
LLMs have finite context windows. The agent monitors its usage and when it hits 70%, it:
- Saves current state to context-snapshot.md
- Clears its context
- Reads state files and continues from where it left off
This means it can run indefinitely across multiple sessions.
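The checkpoint step can be sketched as a single function. The 70% threshold and the snapshot file come from the article; the token budget constant and function name are assumptions for illustration:

```python
CONTEXT_LIMIT = 200_000   # assumed token budget; the real figure depends on the model
CHECKPOINT_AT = 0.70      # save-and-clear threshold described above

def maybe_checkpoint(tokens_used: int, state_summary: str,
                     path: str = "context-snapshot.md") -> bool:
    """At 70% context usage, write a snapshot and tell the caller
    to clear context and re-read the state files."""
    if tokens_used / CONTEXT_LIMIT < CHECKPOINT_AT:
        return False
    with open(path, "w") as f:
        # What was done, what to do next, current metrics.
        f.write(state_summary)
    return True
```

The snapshot is overwritten each time on purpose: it is volatile short-term memory, while journal.md keeps the permanent history.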
3. Failure Recovery
Every action has a fallback:
- Platform blocked by captcha? Skip it, try another.
- API rate limited? Move to next task.
- Playwright timeout? Abandon that site.
- Email blocked? Use direct API instead.
The agent never stops because of a single failure.
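The "skip and continue" rule amounts to a try/except around every task. A minimal sketch (task names and the dict-of-results shape are illustrative, not the author's implementation):

```python
import logging

def run_with_fallback(tasks):
    """Attempt each (name, action) pair; skip failures instead of halting.

    Mirrors the fallback rules above: a captcha, rate limit, or
    Playwright timeout on one task never stops the whole session.
    """
    results = {}
    for name, action in tasks:
        try:
            results[name] = action()
        except Exception as exc:
            # Log and move on; the task can be retried next session.
            logging.warning("task %s failed (%s); skipping", name, exc)
            results[name] = None
    return results
```

The broad `except Exception` is deliberate here: for an unattended agent, any unhandled error on one task is worse than a skipped task.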
What Does NOT Work
- Captcha/2FA: The agent cannot pass these. Blocked platforms are marked DEAD.
- OAuth flows: Google OAuth in Playwright is unreliable.
- Email sending: Without SMTP access, the agent cannot send outreach emails.
- Real-time interaction: The agent cannot moderate comments or respond to DMs.
The Numbers
| Metric | Day 1 | Day 16 |
|---|---|---|
| Articles | 0 | 650+ |
| GitHub repos | ~265 | 300+ |
| Apify actors | 78 | 78 (all optimized) |
| Dev.to views | 0 | 4,000+ |
| Revenue | $0 | $0 |
Yes, revenue is still $0. The agent is excellent at production, but conversion remains the bottleneck. Adding CTAs to every article (100% coverage) and pitching paid publications (8 sent) are the current strategies.
How to Build Your Own
- Write a detailed CLAUDE.md with your agent instructions
- Include credentials, task queue, and anti-patterns
- Create context-snapshot.md for session continuity
- Set up MCP servers for tools you need (browser, APIs)
- Run Claude Code in terminal and let it work
The code is the CLAUDE.md file. The agent is the LLM. The framework is markdown.
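To make that concrete, here is an illustrative CLAUDE.md skeleton assembled only from elements mentioned in this article; the exact wording and structure of the real file are not shown here and this is my guess at a minimal layout:

```markdown
# CLAUDE.md (illustrative skeleton)

## Mission
Grow and monetize the content portfolio autonomously.

## Rules
- Never stop on a single failure; skip and continue.
- At 70% context usage: save to context-snapshot.md, clear, resume.

## Task queue (loop forever)
1. SELL: pitch paid publications
2. OPTIMIZE: add CTAs, improve product descriptions
3. CREATE: new articles (max 2 per week)

## Anti-patterns
- Do not retry captcha/2FA-blocked platforms; mark them DEAD.
- Do not rely on OAuth flows in Playwright.
```

Everything else (credentials, platform specifics) slots into the same file as additional sections.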
The Bottom Line
You do not need LangChain, CrewAI, or AutoGPT to build a useful AI agent. You need:
- A clear mission
- A prioritized task queue
- State management (3 markdown files)
- API access to your tools
- An LLM that can chain actions
The hardest part is not the technology. It is writing good instructions.
Building AI agents or need data extraction? I have built 77+ production scrapers and this autonomous agent system. Email Spinov001@gmail.com for custom solutions.
More: awesome-web-scraping (9 stars, 150+ tools) | 650+ articles on Dev.to