Tim Zinin

How I Built a 6-Agent System That Publishes to 19 Platforms Autonomously

Publishing content across 19 social platforms manually is a full-time job. I automated it with a system of 6 AI agents, and now it runs on cron every 30 minutes. Here's exactly how.

The Architecture

The system consists of:

  1. Content Generator — creates platform-specific text from a content bank
  2. Image Sourcer — fetches relevant photos from Pexels API or generates data charts
  3. Gatekeeper — validates every post before publishing (image exists? UTM present? URLs valid?)
  4. Publisher — dispatches to platform APIs (Threads, Bluesky, Mastodon, VK, Telegram, etc.)
  5. Analytics Collector — pulls engagement metrics back from each platform
  6. Dashboard Renderer — visualizes everything in a real-time web dashboard

The Content Calendar

Everything starts with content_calendar.json — a single source of truth:

{
  "entries": [
    {
      "id": "cp_threads_01",
      "platform": "threads",
      "date": "2026-03-06",
      "time": "11:00",
      "text": "Your actual post text...",
      "image_url": "https://images.pexels.com/...",
      "status": "planned"
    }
  ]
}

Each entry has platform-specific text, scheduled time, and image URL. The publisher reads this file, finds due posts, and dispatches them.
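The "find due posts" step can be a few lines against that JSON schema. A sketch, assuming the `date`/`time`/`status` fields shown above (the function name is mine):

```python
import json
from datetime import datetime

def find_due_posts(path="content_calendar.json", now=None):
    """Return calendar entries whose scheduled time has passed
    and that are still marked 'planned'."""
    now = now or datetime.now()
    with open(path) as f:
        calendar = json.load(f)
    due = []
    for entry in calendar["entries"]:
        scheduled = datetime.strptime(
            f'{entry["date"]} {entry["time"]}', "%Y-%m-%d %H:%M"
        )
        if entry["status"] == "planned" and scheduled <= now:
            due.append(entry)
    return due
```

Flipping `status` to something like `"published"` after dispatch makes the whole run idempotent, so an overlapping cron invocation can't double-post.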

The Gatekeeper Agent

This is the most critical piece. Before any post goes live, the gatekeeper checks:

  • Image accessible? HTTP HEAD request to verify the URL returns 200
  • Text length OK? Each platform has a minimum character count (Threads: 10, Telegram: 20, long-form articles: 300+)
  • UTM link present? Every post must link back with proper attribution
  • No broken URLs? Every URL in the text gets validated
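The image check from that list can be a single HEAD request. A stdlib-only sketch (the name matches the `check_image_accessible` call in the validator below):

```python
import urllib.request

def check_image_accessible(url, timeout=5):
    """HEAD the URL and treat any 2xx response as accessible.
    Any network error, timeout, or malformed URL counts as a failure."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False
```

HEAD is deliberately cheap: it confirms the image exists without downloading it, which matters when you're validating hundreds of scheduled posts per run.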

If a check fails, the gatekeeper tries to auto-fix the entry: it sources a replacement Pexels image or appends a default UTM link, making up to 3 attempts before blocking the post entirely.

def validate_and_fix(entry, attempt=1):
    """Validate one calendar entry, repairing what it can.

    Returns (ok, possibly-fixed entry, list of unresolved issues).
    """
    issues = []
    # Image check: if the URL is dead, try to source a replacement
    if not check_image_accessible(entry["image_url"]):
        new_url = generate_image_for_entry(entry)
        if new_url:
            entry["image_url"] = new_url
        else:
            issues.append("CRITICAL: image generation failed")
    # ... more checks (text length, UTM link, URL validation)
    return len(issues) == 0, entry, issues
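The three-attempt retry described above might wrap `validate_and_fix` like this — a sketch, since the post shows only the validator itself (the `gate` name is mine):

```python
MAX_ATTEMPTS = 3

def gate(entry, validator):
    """Run the validator up to MAX_ATTEMPTS times.

    Returns the (possibly fixed) entry on success, or None after
    marking the entry blocked when it never comes back clean.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        ok, entry, issues = validator(entry, attempt)
        if ok:
            return entry
    entry["status"] = "blocked"
    return None
```

Passing the validator in as an argument keeps the retry policy separate from the checks, so each can be unit-tested on its own.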

Platform Adapters

Each platform gets its own adapter module. The publisher dynamically loads them:

PLATFORM_ADAPTERS = {
    "threads": "threads",
    "bluesky": "bluesky",
    "mastodon": "mastodon",
    "telegram": "telegram",
    "vk": "vk",
    "facebook": "facebook",
    # ... 13 more
}

module = importlib.import_module(f"adapters.{adapter_name}")
result = module.publish(text=text, image_url=image_url)

Each adapter handles authentication, rate limits, and API quirks. Threads needs a two-step publish (create a media container, then publish it). Bluesky speaks the AT Protocol. Mastodon is a straightforward REST API.

Results After 2 Weeks

  • 192 posts scheduled across 19 platforms
  • Zero manual intervention after initial setup
  • Gatekeeper blocked 12 posts that would have gone out broken (missing images, dead links)
  • Average publish time: 3 seconds per post

Key Lessons

  1. Never trust your own output. The gatekeeper catches things I'd miss manually.
  2. Platform APIs are wildly inconsistent. Budget 2x time for adapter development.
  3. Cron + JSON > fancy orchestration. Simple tools, reliable results.
  4. Images matter more than text. Posts with images get 2-3x more engagement.

The full system runs on a single VPS (4 cores, 8GB RAM) and costs about $15/month. The code is Python, no frameworks, minimal dependencies.
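The 30-minute cadence mentioned at the top needs nothing more than a crontab line. Paths and the script name here are illustrative:

```shell
# Run the publisher every 30 minutes; append output to a log
*/30 * * * * cd /opt/publisher && /usr/bin/python3 publisher.py >> logs/cron.log 2>&1
```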


Building AI systems that actually ship? Let's connect: sborka.work
