I recently built something that would have sounded like science fiction a year ago: an AI system that helps manage parts of my content workflow—from discovery to publishing—across LinkedIn, BlueSky, and DEV.to.
Not a chatbot. Not a simple automation script. A multi-agent orchestration system where different AI agents coordinate to handle different parts of the content pipeline.
Here's what I learned building it.
The Problem: Context Switching is Killing Productivity
Before this system, my content workflow looked like this:
- Manually browse Hacker News, Reddit, and tech blogs for topics
- Take notes in a doc somewhere
- Write drafts when I find time
- Edit and format for each platform separately
- Remember to actually post them
- Track what performed well
Each step required context switching. Each context switch killed momentum. And worst of all, the coordination overhead meant I was spending more time managing content than creating it.
Sound familiar?
The Solution: Agents That Actually Coordinate
Instead of building one monolithic AI assistant, I built a team of specialized agents:
- Discovery Agent: Scans web sources, identifies trending topics relevant to my audience
- Content Writer: Takes approved topics and drafts initial content (which I then refine and personalize)
- Content Manager: Orchestrates publishing schedules, handles cross-platform posting
- Brand Strategist: Reviews content for brand voice and decides what's worth publishing
Each agent has a specific job. Each agent can create tasks, update status, escalate when blocked, and hand off work to other agents.
The key insight: agents don't just execute tasks—they coordinate around tasks.
Important note: The AI agents handle the boring coordination work—scheduling, formatting, API calls. The creative work, strategic decisions, and final writing? That's still me. Think of this as automation for the parts of content creation that don't require human creativity.
Architecture: Tasks as the Coordination Primitive
The core of the system is a task queue, but not like you're thinking.
┌─────────────────────────────────────────────────┐
│              Task Management Layer              │
│   (Issues, Status, Assignments, Dependencies)   │
└─────────────────────────────────────────────────┘
      ↑                ↑                ↑
      │                │                │
┌─────┴────┐     ┌─────┴────┐     ┌─────┴────┐
│Discovery │     │ Content  │     │ Content  │
│  Agent   │────>│  Writer  │────>│ Manager  │
└──────────┘     └──────────┘     └──────────┘
      │                │                │
      ↓                ↓                ↓
┌─────────────────────────────────────────────────┐
│        External APIs (Web, LLM, Social)         │
└─────────────────────────────────────────────────┘
Every piece of work is an issue (think GitHub issue, JIRA ticket, Linear task). Issues have:
- Status: todo, in_progress, in_review, done, blocked
- Assignee: which agent owns this work right now
- Parent/child relationships: for breaking down complex work
- Dependencies: explicit "blocked by" relationships between tasks
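A minimal sketch of what such an issue record might look like (the field names here are illustrative, not the system's actual schema):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Status(Enum):
    TODO = "todo"
    IN_PROGRESS = "in_progress"
    IN_REVIEW = "in_review"
    DONE = "done"
    BLOCKED = "blocked"

@dataclass
class Issue:
    id: str
    title: str
    status: Status = Status.TODO
    assignee: Optional[str] = None   # which agent owns this right now
    parent_id: Optional[str] = None  # parent/child for breaking down work
    blocked_by: list[str] = field(default_factory=list)  # explicit dependencies

    def is_ready(self, done_ids: set[str]) -> bool:
        """An issue is workable once all its dependencies are done."""
        return all(dep in done_ids for dep in self.blocked_by)

# Example: a draft task blocked on topic approval
draft = Issue("42", "Draft BlueSky post", blocked_by=["41"])
print(draft.is_ready(set()))   # False: issue 41 is still open
print(draft.is_ready({"41"}))  # True: unblocked once 41 is done
```

The explicit `blocked_by` list is what lets agents (rather than humans) decide what is safe to pick up next.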
When an agent picks up work:
- It checks out the issue (atomic lock, prevents conflicts)
- Does the work
- Updates status and adds a comment explaining what changed
- Either marks it done or reassigns it to another agent
This isn't novel in human project management. What's novel is that AI agents can now participate in the same workflow.
Real Code: Publishing to BlueSky
Here's what the Content Manager agent actually does when asked to publish a post:
# 1. Authenticate with BlueSky
AUTH=$(curl -s -X POST "https://bsky.social/xrpc/com.atproto.server.createSession" \
-H "Content-Type: application/json" \
-d "{\"identifier\": \"$BSKY_IDENTIFIER\", \"password\": \"$BSKY_APP_PASSWORD\"}")
ACCESS_JWT=$(echo "$AUTH" | python3 -c "import sys,json; print(json.load(sys.stdin)['accessJwt'])")
DID=$(echo "$AUTH" | python3 -c "import sys,json; print(json.load(sys.stdin)['did'])")
# 2. Create the post
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")
curl -s -X POST "https://bsky.social/xrpc/com.atproto.repo.createRecord" \
-H "Authorization: Bearer $ACCESS_JWT" \
-H "Content-Type: application/json" \
-d "{
\"repo\": \"$DID\",
\"collection\": \"app.bsky.feed.post\",
\"record\": {
\"\$type\": \"app.bsky.feed.post\",
\"text\": \"$POST_TEXT\",
\"createdAt\": \"$TIMESTAMP\"
}
}"
Nothing fancy. Standard REST API calls. The magic isn't in the API integration—it's in the orchestration layer that knows:
- When to post
- What content to use
- How to handle failures
- Who to notify when done
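That orchestration knowledge can be sketched as a thin publish wrapper around the raw API call (the function and field names below are placeholders, not the real system's):

```python
import datetime

def publish(post, now, notify, send):
    """Decide when/what to publish, handle failure, notify when done."""
    if now < post["scheduled_at"]:
        return "waiting"                      # when to post
    try:
        send(post["text"])                    # what content to use
    except Exception as exc:
        notify(f"publish failed: {exc}")      # how to handle failures
        return "blocked"
    notify(f"published: {post['title']}")     # who to notify when done
    return "done"

events = []
post = {
    "title": "Hello",
    "text": "hi",
    "scheduled_at": datetime.datetime(2025, 1, 1, 8, 30),
}
now = datetime.datetime(2025, 1, 1, 9, 0)
print(publish(post, now, events.append, send=lambda text: None))  # done
```

The API call itself stays dumb; all the judgment lives in the wrapper, which is exactly why swapping BlueSky for another platform is cheap.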
Real Code: The Heartbeat Pattern
Agents don't run continuously. They run in heartbeats—short execution windows triggered by events.
Each heartbeat follows this pattern:
# 1. Check what's assigned to me
INBOX=$(curl -s -H "Authorization: Bearer $API_KEY" \
"$API_URL/api/agents/me/inbox-lite")
# 2. Pick highest priority work (in_progress → todo → blocked if I can unblock)
ISSUE_ID=$(echo "$INBOX" | jq -r '.[0].id')
# 3. Checkout (atomic lock)
curl -s -X POST -H "Authorization: Bearer $API_KEY" \
-H "X-Run-Id: $RUN_ID" \
"$API_URL/api/issues/$ISSUE_ID/checkout"
# 4. Do the work
# ... agent-specific logic here ...
# 5. Update status
curl -s -X PATCH -H "Authorization: Bearer $API_KEY" \
-H "X-Run-Id: $RUN_ID" \
-d '{"status": "done", "comment": "Published to BlueSky: https://..."}' \
"$API_URL/api/issues/$ISSUE_ID"
This heartbeat pattern means:
- Agents don't fight over the same work (checkout is atomic)
- Every action is traced (run ID links actions to heartbeat executions)
- Failures are isolated (one agent crashing doesn't affect others)
- The system is naturally resumable (agents pick up where they left off)
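The atomic checkout boils down to a single compare-and-set UPDATE. Here is a sketch with SQLite standing in for the article's Postgres (where `SELECT ... FOR UPDATE SKIP LOCKED` serves the same purpose); the schema is illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE issues (id TEXT PRIMARY KEY, status TEXT, assignee TEXT)")
db.execute("INSERT INTO issues VALUES ('42', 'todo', NULL)")
db.commit()

def checkout(conn, issue_id, agent):
    """Claim an issue only if nobody else holds it. The single UPDATE is
    atomic, so two agents racing on the same issue can't both win."""
    cur = conn.execute(
        "UPDATE issues SET status = 'in_progress', assignee = ? "
        "WHERE id = ? AND assignee IS NULL",
        (agent, issue_id),
    )
    conn.commit()
    return cur.rowcount == 1  # True only for the agent that won the claim

print(checkout(db, "42", "content-manager"))  # True: first claim succeeds
print(checkout(db, "42", "content-writer"))   # False: already claimed
```

The losing agent simply moves on to the next inbox item, which is what makes heartbeats safe to run concurrently.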
The Hard Parts Nobody Talks About
1. Error Recovery
LLMs fail. APIs rate-limit. Networks drop. Your agents will encounter errors.
The first version of my system would just retry infinitely or silently fail. Neither is acceptable.
Now, agents follow this escalation pattern:
- Transient errors (network, rate limit): retry with exponential backoff
- Validation errors (bad input): mark the issue blocked, comment with the specific error, reassign to the creator
- Permanent errors (auth expired): mark blocked, escalate to me
if [ "$HTTP_STATUS" -eq 401 ]; then
  curl -s -X PATCH -H "Authorization: Bearer $API_KEY" \
    -d '{
      "status": "blocked",
      "comment": "Authentication expired. Please refresh access token"
    }' "$API_URL/api/issues/$ISSUE_ID"
  exit 0
fi
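The transient-error branch can be sketched as a retry wrapper with exponential backoff and jitter (the parameter values are illustrative defaults, not tuned ones):

```python
import random
import time

class TransientError(Exception):
    """Rate limits, network drops—anything worth retrying."""

def with_backoff(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry fn() on transient errors, doubling the delay each attempt.
    Jitter avoids thundering-herd retries when many agents fail at once."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of retries: let the caller mark the issue blocked
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            sleep(delay)

# Example: an API call that fails twice, then succeeds
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise TransientError("rate limited")
    return "published"

print(with_backoff(flaky, sleep=lambda s: None))  # published
```

Injecting `sleep` as a parameter keeps the wrapper testable without real waiting.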
2. Content Length Limits
BlueSky has a 300 grapheme limit (not characters, not bytes—graphemes). An emoji counts as one grapheme even when it spans several code points, and é is a single grapheme whether it's stored precomposed or as e plus a combining accent—so a naive len() can overcount relative to the real grapheme count, depending on normalization.
I learned this the hard way when a post with 280 characters got rejected for being 340 graphemes.
Solution: validate before posting.
# len(text) counts code points, not graphemes; the third-party
# "grapheme" package (pip install grapheme) counts real grapheme clusters
import grapheme

text = "your post text"
if grapheme.length(text) > 300:
    # Truncate (or split into a thread)
    text = grapheme.slice(text, 0, 297) + "..."
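Once you can count graphemes, splitting an over-long post into a thread is a small step. A sketch that greedily packs whole words into chunks (using plain len() as the counter here so it runs standalone; for real use, pass a grapheme-aware counter instead):

```python
def split_into_thread(text, limit=300, count=len):
    """Greedily pack whole words into chunks of at most `limit` units,
    where `count` measures a candidate chunk's length."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if count(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word
        # caveat: a single word longer than `limit` still needs hard truncation
    if current:
        chunks.append(current)
    return chunks

print(split_into_thread("one two three four", limit=9))
# ['one two', 'three', 'four']
```

Making the counter a parameter means the same splitter works for BlueSky graphemes, LinkedIn characters, or any other platform limit.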
3. Daily Routines That Don't Spam
I wanted daily publishing reminders, but I didn't want 7 separate tasks clogging my inbox.
Solution: routines with concurrency policies.
{
"title": "Daily content reminder",
"cronExpression": "30 8 * * *",
"concurrencyPolicy": "skip_if_active"
}
skip_if_active means: if yesterday's reminder is still open, don't create today's. This prevents reminder spam while ensuring I don't lose track.
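The skip_if_active check itself is only a few lines: before materializing a routine's scheduled run into a new issue, look for a still-open instance. A sketch (the field names mirror the JSON above but are otherwise illustrative):

```python
OPEN_STATUSES = {"todo", "in_progress", "in_review", "blocked"}

def should_create_run(routine, issues):
    """Apply the routine's concurrency policy before creating today's issue."""
    if routine["concurrencyPolicy"] != "skip_if_active":
        return True
    # skip_if_active: don't create a run while a previous one is still open
    return not any(
        issue["routine"] == routine["title"] and issue["status"] in OPEN_STATUSES
        for issue in issues
    )

routine = {"title": "Daily content reminder", "concurrencyPolicy": "skip_if_active"}
yesterday = [{"routine": "Daily content reminder", "status": "todo"}]
print(should_create_run(routine, yesterday))  # False: yesterday's is still open
print(should_create_run(routine, []))         # True: nothing open, create it
```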
What This Enables
With this system running:
- Discovery Agent scans Hacker News daily, creates issues for trending topics
- Brand Strategist reviews those issues, approves the ones worth writing about
- Content Writer drafts initial content based on approved topics (I refine and add my personal insights)
- Content Manager schedules and publishes across BlueSky, LinkedIn, DEV.to
My involvement: strategic decisions, creative refinement, and final approval. The boring coordination work runs autonomously.
Time saved per week: ~10 hours of coordination overhead.
Key Lessons
1. Agents Are APIs for Work
Just like you wouldn't build an API that accepts ambiguous parameters, don't give agents ambiguous tasks.
Bad: "Handle the deployment"
Good: "Deploy branch release/v2.3 to production, run smoke tests, rollback if health checks fail after 5 min"
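One way to make the "good" version machine-checkable is to express tasks as structured payloads with every parameter explicit, so an agent can refuse ambiguity outright (all field names here are illustrative):

```python
deploy_task = {
    "action": "deploy",
    "branch": "release/v2.3",
    "environment": "production",
    "post_deploy": ["run_smoke_tests"],
    "rollback_if": {"health_check_fails_for_minutes": 5},
}

def validate(task, required=("action", "branch", "environment")):
    """Reject tasks that leave required parameters unstated."""
    missing = [key for key in required if key not in task]
    if missing:
        raise ValueError(f"ambiguous task, missing: {missing}")
    return task

validate(deploy_task)  # passes: every parameter is explicit
```

"Handle the deployment" would fail this check immediately, which is the point: ambiguity surfaces before any work starts, not halfway through it.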
2. Visibility > Autonomy
Early on, I optimized for "how much can agents do without me?"
Wrong question.
Better question: "how quickly can I understand what agents did and why?"
Now every agent action includes:
- What changed
- Why it changed
- What happens next
- Links to relevant resources
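That visibility checklist is easy to enforce by making every status update a structured record rather than free text. A sketch (field names are mine, not the system's):

```python
import json

def action_record(what, why, next_step, links):
    """Every agent action answers the same four questions."""
    return {
        "what_changed": what,
        "why": why,
        "what_happens_next": next_step,
        "links": links,
    }

comment = action_record(
    what="Published draft #42 to BlueSky",
    why="Scheduled slot reached and brand review approved",
    next_step="Content Manager collects engagement stats tomorrow",
    links=["https://bsky.app/..."],
)
print(json.dumps(comment, indent=2))
```

Because the shape is fixed, an agent physically can't post an update that skips the "why"—which is what makes the audit trail readable at a glance.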
3. Start With Boring Tech
The orchestration layer is PostgreSQL + REST APIs. No vector DB, no RAG, no fancy AI frameworks.
Why? Because the hard part isn't the AI—it's the coordination. And coordination is a solved problem in distributed systems.
Use LLMs for what they're good at (writing, reasoning, API calls). Use boring databases for what they're good at (consistency, transactions, queries).
Try It Yourself
You don't need to build the whole system at once. Start small:
- Pick one recurring task (e.g., posting to social media)
- Create a simple task queue (even a JSON file works)
- Build one agent that reads from the queue, does the work, updates status
- Add a second agent that creates tasks for the first agent
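The four starter steps above can be sketched end to end with nothing but a JSON file as the queue—a toy version to make the pattern concrete, not the production system:

```python
import json
from pathlib import Path

QUEUE = Path("queue.json")  # the entire "task management layer"

def producer():
    """Agent 2: creates tasks for agent 1."""
    tasks = json.loads(QUEUE.read_text()) if QUEUE.exists() else []
    tasks.append({"id": len(tasks) + 1, "kind": "post",
                  "text": "hello", "status": "todo"})
    QUEUE.write_text(json.dumps(tasks))

def worker():
    """Agent 1: reads the queue, does the work, updates status."""
    tasks = json.loads(QUEUE.read_text())
    for task in tasks:
        if task["status"] == "todo":
            # a real version would call the social API here
            task["status"] = "done"
    QUEUE.write_text(json.dumps(tasks))

producer()
worker()
print(json.loads(QUEUE.read_text()))  # every task now has status "done"
```

From here, the upgrades map one-to-one onto the article: swap the file for a database to get atomic checkout, add a status enum to get blocked/in_review, add comments to get visibility.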
Once you see two agents coordinating, you'll understand why this pattern scales.
What's Next
I'm working on:
- Approval workflows: some tasks need human review before publishing
- Budget controls: LLM costs add up fast; agents need spending limits
- Analytics integration: track what content performs, feed that back to the Discovery Agent
The goal isn't to remove humans from the loop—it's to remove humans from the boring parts of the loop. The creative thinking, strategic decisions, and personal touch? That's still all human.
Questions? Drop them in the comments. I'm especially curious if you've built similar multi-agent systems and what patterns worked (or didn't) for you.