
Ahmed Mahmoud

How I Used AI Agents to Automate My Marketing (With Code)

Running a solo app means wearing every hat: product, engineering, support, and marketing. Marketing is the one that most developers neglect, because it doesn't feel like "real work" and the results are invisible until they aren't.

I spent about three weeks building a multi-platform marketing automation system using AI agents, Python daemons, and the APIs for every major social platform. Here's the technical architecture and what actually worked.

The Problem: Marketing Is a Repeating Task Queue

Social media marketing for an indie app is roughly this loop:

  1. Produce content (posts, threads, articles)
  2. Post content on a schedule
  3. Engage with responses (replies, DMs, comments)
  4. Analyse what worked
  5. Repeat

Every part of this is automatable at some level. The creative/strategy layer still needs a human, but execution is a perfect target for automation.

Architecture: Daemons + Cron + AI Generation

The system I built has four layers:

┌────────────────────────────────────────────┐
│  Content Generation (AI, CrewAI pipeline)  │
└────────────────────────────────────────────┘
                        │
┌────────────────────────────────────────────┐
│  Content Queue (JSON state files + lock)   │
└────────────────────────────────────────────┘
                        │
┌────────────────────────────────────────────┐
│  Platform Posters (Python daemons, one per │
│  platform: Twitter, Threads, Facebook,    │
│  Dev.to)                                   │
└────────────────────────────────────────────┘
                        │
┌────────────────────────────────────────────┐
│  Scheduler (macOS LaunchAgents / cron)     │
└────────────────────────────────────────────┘

Each platform poster is an independent Python script with its own state file. They share no runtime state — if one fails, the others continue. The state files are JSON, stored in ~/.susan-*.state.json per platform.

The State File Pattern

Every poster follows the same state file schema:

import copy
import json
import os

# Default state structure
DEFAULT_STATE = {
    "posted_indices": [],     # indices of already-posted items
    "last_post_ts": 0,        # Unix timestamp of last post
    "post_count": 0,          # total posts made
}

def load_state(state_file: str) -> dict:
    if not os.path.exists(state_file):
        # Deep copy: DEFAULT_STATE holds a mutable list, so a shallow
        # dict() copy would let callers mutate the shared template.
        return copy.deepcopy(DEFAULT_STATE)
    try:
        with open(state_file) as f:
            data = json.load(f)
        for key, val in DEFAULT_STATE.items():
            data.setdefault(key, copy.deepcopy(val))
        return data
    except (json.JSONDecodeError, OSError):
        return copy.deepcopy(DEFAULT_STATE)

def save_state(state: dict, state_file: str) -> None:
    """Atomic write via temp file + rename."""
    tmp = state_file + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, state_file)  # atomic on POSIX

The atomic write pattern (write tmp → rename) prevents state corruption if the process is interrupted mid-write.
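A quick round trip shows why the defensive load matters. The helpers are repeated here (in condensed form, with a deep copy of the defaults, since the template holds a mutable list) so the snippet runs standalone against a throwaway temp directory:

```python
import copy
import json
import os
import tempfile

DEFAULT_STATE = {"posted_indices": [], "last_post_ts": 0, "post_count": 0}

def load_state(state_file):
    if not os.path.exists(state_file):
        return copy.deepcopy(DEFAULT_STATE)
    try:
        with open(state_file) as f:
            data = json.load(f)
        for key, val in DEFAULT_STATE.items():
            data.setdefault(key, copy.deepcopy(val))
        return data
    except (json.JSONDecodeError, OSError):
        return copy.deepcopy(DEFAULT_STATE)

def save_state(state, state_file):
    tmp = state_file + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, state_file)  # atomic on POSIX

# Typical poster lifecycle: load, record one post, save, reload.
state_file = os.path.join(tempfile.mkdtemp(), "demo.state.json")
state = load_state(state_file)            # first run: returns defaults
state["posted_indices"].append(3)
state["post_count"] += 1
save_state(state, state_file)
reloaded = load_state(state_file)

# Simulate a corrupted file: load_state falls back to defaults
# instead of crashing the poster.
with open(state_file, "w") as f:
    f.write("{not valid json")
recovered = load_state(state_file)
```

If the process dies between the temp write and the rename, the previous state file is untouched and the next run picks up where it left off.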

Twitter Thread Poster

Twitter threads (8–12 tweets chained together) consistently outperform single tweets for educational content. I built a rotation system for 8 pre-written threads, each covering a different topic. A cooldown of 30 days prevents repeating a thread too soon.

import time

# STATE_FILE, load_state, and save_state are the shared state helpers above
THREADS = [
    {
        "id": "spaced_repetition",
        "title": "How spaced repetition works",
        "cooldown_days": 30,
        "tweets": [
            "A thread on why spaced repetition is the most evidence-backed study technique...",
            "The core idea: your brain forgets in a predictable curve (Ebbinghaus, 1885)...",
            # ...
        ]
    },
    # ...
]

def cmd_next():
    state = load_state(STATE_FILE)
    now = time.time()

    available = [
        t for t in THREADS
        if now - state.get("last_posted", {}).get(t["id"], 0)
           > t["cooldown_days"] * 86400
    ]

    if not available:
        return {"success": False, "error": "No threads available (all in cooldown)"}

    thread = available[0]
    result = post_thread(thread["tweets"])

    if result["success"]:
        state.setdefault("last_posted", {})[thread["id"]] = now
        save_state(state, STATE_FILE)

    return result
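The post_thread helper isn't shown above; here is a minimal sketch of the chaining logic, with the API client injected so it can be tested with a stub. The tweepy-style create_tweet(text=..., in_reply_to_tweet_id=...) interface is an assumption on my part, and in production the client would be built from the usual API credentials:

```python
def post_thread(tweets, client):
    """Post a chain of tweets, each one replying to the previous.

    `client` is any object with a tweepy.Client-style create_tweet
    method (an assumption for this sketch).
    """
    prev_id = None
    posted = []
    for text in tweets:
        try:
            resp = client.create_tweet(text=text, in_reply_to_tweet_id=prev_id)
        except Exception as exc:
            # A partial thread stays up; the caller decides whether to retry.
            return {"success": False, "error": str(exc), "posted_ids": posted}
        prev_id = resp.data["id"]
        posted.append(prev_id)
    return {"success": True, "posted_ids": posted}
```

The first tweet goes out with in_reply_to_tweet_id=None; every subsequent tweet replies to the id returned for the one before it, which is what makes Twitter render them as a thread.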

CrewAI for Weekly Content Generation

The manually written content is a fixed rotation, which works fine for Twitter threads and tips but can't produce fresh ideas. For weekly content generation, I built a CrewAI pipeline with four agents:

from datetime import datetime

from crewai import Agent, Task, Crew

strategist = Agent(
    role="Content Strategist",
    goal="Identify the highest-value content topics for this week",
    backstory="Expert in language learning content marketing...",
    llm="anthropic/claude-sonnet-4-5-20250929",
)

writer = Agent(
    role="Technical Writer",
    goal="Write genuinely valuable technical content",
    backstory="Developer and language learning enthusiast...",
    llm="anthropic/claude-sonnet-4-5-20250929",
)

editor = Agent(
    role="Editor",
    goal="Polish content for platform-specific best practices",
    backstory="Senior editor with deep knowledge of dev community...",
    llm="anthropic/claude-sonnet-4-5-20250929",
)

distribution = Agent(
    role="Distribution Manager",
    goal="Format content for each target platform",
    backstory="Social media specialist who knows platform nuances...",
    llm="anthropic/claude-sonnet-4-5-20250929",
)

crew = Crew(
    agents=[strategist, writer, editor, distribution],
    tasks=[strategy_task, writing_task, editing_task, distribution_task],  # Task definitions omitted
    verbose=True,
)

result = crew.kickoff(inputs={
    "week": datetime.now().strftime("%Y-W%W"),
    "app_focus": "Pocket Linguist language learning app",
    "platforms": ["twitter", "devto", "linkedin"],
})

The pipeline runs weekly (Monday 8 AM via LaunchAgent) and produces a content bundle that the platform-specific posters can pull from.
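The bundle format is up to you; here's a hypothetical shape (the field names are illustrative, not the pipeline's actual output) along with the pull logic a platform poster would use:

```python
# Hypothetical weekly bundle: one file, items tagged by target platform.
bundle = {
    "week": "2025-W40",
    "items": [
        {"platform": "twitter", "type": "thread",
         "tweets": ["Hook tweet...", "Body tweet..."]},
        {"platform": "devto", "type": "article",
         "title": "Example article title", "body_markdown": "..."},
    ],
}

def items_for(bundle, platform):
    """What a platform poster pulls from the weekly bundle."""
    return [item for item in bundle["items"] if item["platform"] == platform]
```

Keeping the bundle a plain JSON-serializable dict means the posters stay decoupled from CrewAI entirely; they only ever read this file.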

LaunchAgent Scheduling

On macOS, LaunchAgents are the cron replacement for user-level scheduled tasks:

<!-- ~/Library/LaunchAgents/com.pocketlinguist.twitter.threads.plist -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.pocketlinguist.twitter.threads</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/python3</string>
    <string>/path/to/twitter_thread_poster.py</string>
    <string>next</string>
  </array>
  <key>StartCalendarInterval</key>
  <array>
    <dict>
      <key>Weekday</key><integer>2</integer>  <!-- Tuesday -->
      <key>Hour</key><integer>9</integer>
      <key>Minute</key><integer>0</integer>
    </dict>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>TWITTER_ACCESS_TOKEN</key>
    <string>YOUR_TOKEN</string>
  </dict>
  <key>StandardOutPath</key>
  <string>/Users/you/logs/twitter-threads.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/you/logs/twitter-threads-error.log</string>
</dict>
</plist>

Load it with launchctl load ~/Library/LaunchAgents/com.pocketlinguist.twitter.threads.plist. Unload to pause.

Engagement Bot: Automated Replies

The engagement bot is the part that requires the most care. Automated replies that look spammy get accounts flagged. The rules I follow:

  1. Only reply to posts that match a keyword from a curated list (language learning topics)
  2. Use 36 different reply templates, selected semi-randomly to avoid pattern detection
  3. Rate limit to 20 replies per hour, with random delays between actions (jitter)
  4. Never DM unsolicited
  5. Log every action for review

import random
import time

REPLY_TEMPLATES = [
    "That's a great point about {keyword}. In my experience...",
    "This is exactly what motivated me to build {app}...",
    # 34 more variants
]

def engage(tweet_id: str, keyword: str) -> dict:
    template = random.choice(REPLY_TEMPLATES)
    text = template.format(keyword=keyword, app="Pocket Linguist")

    # Jitter: random delay 30–120 seconds
    delay = random.uniform(30, 120)
    time.sleep(delay)

    return post_reply(tweet_id, text)
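The 20-replies-per-hour cap from rule 3 can be enforced with a small sliding-window limiter. This is a sketch rather than my exact implementation; in practice you'd also persist the window timestamps to the state file so a restart doesn't reset the budget:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` actions in any trailing `window` seconds."""

    def __init__(self, limit=20, window=3600.0):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of recent actions

    def allow(self, now=None):
        """Record and permit an action, or refuse it if the window is full."""
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False
```

engage() would call limiter.allow() before posting and simply skip the reply when it returns False, letting the next scan pick the conversation up again once the window clears.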

What Actually Moved the Needle

Honest assessment after three months:

  • Twitter threads: Measurable engagement increase. Educational threads on language learning consistently reached 2–5x the impressions of single posts.
  • Dev.to articles: Slow build but compounding. Articles rank in search and bring organic traffic weeks after publication.
  • Threads (Meta): Highest organic reach of any platform, but the Threads API has reliability issues.
  • Engagement bot: Modest follower growth. The quality of followers from bot engagement is lower than organic.
  • CrewAI content generation: Saves 2–3 hours per week. Quality requires human review but the first draft is usually solid.

The biggest lesson: automation is best at distribution (consistent posting, scheduling), not at community building. The posts that drove real downloads were ones where I personally engaged with a large account's audience. Automation can prepare the content; the human moments convert.


I'm building Pocket Linguist, an AI-powered language tutor for iOS. It uses spaced repetition, camera translation, and conversational AI to help you reach conversational fluency faster. Try it free.
