DEV Community

Leo Wu

How to Automate Your Content Workflow with AI Agents (No Coding Required)

Content creation at scale is exhausting. Not the writing itself — most of us got into this because we enjoy writing — but everything around it. The research, the SEO keyword juggling, editing passes, reformatting the same article for three different platforms, scheduling posts, tracking what performed well last month so you can plan next month. It's a pipeline problem masquerading as a creativity problem.

Many teams have tried to solve this with automation tools like Zapier or Make.com, stitching together ChatGPT API calls with triggers and webhooks. That works for simple tasks. But content workflows have a problem that traditional automation handles poorly: context. Your Monday brainstorm session needs to inform Wednesday's draft, which needs to reflect feedback from Thursday's review. Stateless API calls don't cut it.

This is where AI agents — actual agents, not just chatbots — change the game.

AI Agents Are Not Just "ChatGPT With Extra Steps"

The term "AI agent" gets thrown around loosely, so here's a concrete definition: an AI agent is a language model with persistent memory, access to external tools, and the ability to execute tasks on a schedule without human prompting.

Compare that to how most people use ChatGPT today:

| Feature | ChatGPT / Claude Chat | AI Agent |
| --- | --- | --- |
| Memory across sessions | Limited or none | Persistent, configurable |
| Tool access | Browse, code interpreter | Files, APIs, databases, messaging |
| Scheduled execution | Manual only | Cron-based, event-triggered |
| Personality/role config | System prompt (per chat) | SOUL.md (permanent, version-controlled) |
| Multi-step workflows | Copy-paste between chats | Agents hand off to each other |

The difference matters. An agent can wake up every Monday morning, check trending topics in your niche, draft three article outlines based on your brand voice, and drop them into a Slack channel for your review — all before you've finished your coffee. A chatbot waits for you to type something.
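That "persistent memory" row is the crux, so here is a minimal sketch of what it means in practice. Everything here is illustrative — the file name `strategist_state.json` and the topic strings are hypothetical, not part of any real agent platform's API — but it shows the core idea: each scheduled run restores state from the last one instead of starting cold.

```python
import json
from pathlib import Path

STATE_FILE = Path("strategist_state.json")  # hypothetical on-disk memory

def load_state() -> dict:
    """Restore memory from the previous run, or start fresh."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"covered_topics": []}

def run_weekly(trending: list[str]) -> list[str]:
    """One scheduled run: skip topics already covered, remember new picks."""
    state = load_state()
    fresh = [t for t in trending if t not in state["covered_topics"]]
    state["covered_topics"].extend(fresh)
    STATE_FILE.write_text(json.dumps(state))
    return fresh

# First run sees everything; the second run remembers the first.
print(run_weekly(["ai agents", "rust tooling"]))  # ['ai agents', 'rust tooling']
print(run_weekly(["ai agents", "wasm"]))          # ['wasm']
```

A stateless API call would return both topics both times; the stateful version deduplicates against its own history, which is exactly what a chatbot can't do between sessions.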

The Content Pipeline: Four Agents, One Workflow

A common approach for content automation uses a multi-agent setup where each agent owns one stage of the pipeline. Here's a real-world architecture that several teams have built using OpenClaw, an open-source (MIT) agent orchestration platform:

Content Strategist → Writer → Editor → Publisher
     (Plan)          (Draft)   (Review)   (Ship)

Each agent has its own personality file (called SOUL.md), its own tools, and its own responsibilities. They communicate through shared workspaces and messaging channels.

Agent 1: The Content Strategist

This agent handles ideation and planning. It monitors industry trends, analyzes past content performance, and generates topic briefs.

SOUL.md snippet:

# Content Strategist Agent

## Role
You are a content strategist for a developer tools company.
Your job is to identify high-value topics and create detailed
content briefs.

## Responsibilities
- Monitor trending topics in dev tools, AI, and open source
- Analyze which past articles performed well and why
- Generate weekly content calendars with 3-5 topic briefs
- Each brief includes: title options, target keywords, angle,
  target audience, estimated word count

## Tools Available
- Web search (trend monitoring)
- Analytics access (past performance data)
- Shared workspace (drop briefs for the Writer agent)

## Constraints
- Focus on topics with search volume > 500/month
- Prioritize topics where we have genuine expertise
- Never suggest clickbait or misleading titles

What it actually does: Runs on a weekly cron schedule, searches for trending topics, cross-references them with the team's expertise areas, and produces a ranked list of content briefs saved to a shared directory. The Writer agent picks up from there.
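The filtering and ranking step the Strategist performs is easy to picture in code. This is a sketch under stated assumptions — the `volume` and `area` fields and the sample topics are invented for illustration — but it applies the same constraints the SOUL.md above spells out: a 500/month search-volume floor and an expertise match.

```python
def rank_briefs(candidates: list[dict], expertise: set[str],
                min_volume: int = 500) -> list[dict]:
    """Apply the strategist's constraints, then rank by estimated
    monthly search volume (all field names are hypothetical)."""
    eligible = [
        c for c in candidates
        if c["volume"] >= min_volume and c["area"] in expertise
    ]
    return sorted(eligible, key=lambda c: c["volume"], reverse=True)

topics = [
    {"title": "Agent orchestration 101", "area": "ai", "volume": 1200},
    {"title": "Obscure build flag deep dive", "area": "ai", "volume": 90},
    {"title": "Kubernetes cost tuning", "area": "k8s", "volume": 2400},
]

# Only the high-volume topic inside our expertise survives.
for brief in rank_briefs(topics, expertise={"ai", "open source"}):
    print(brief["title"])
```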

Agent 2: The Writer

The Writer consumes briefs and produces first drafts. This is where most of the "voice" configuration lives.

SOUL.md snippet:

# Writer Agent

## Role
You are a technical writer who produces clear, opinionated
articles for a developer audience.

## Writing Style
- Conversational but professional
- Use concrete examples over abstract explanations
- Have opinions: "This approach works better than X because..."
- No filler phrases: avoid "It's worth noting", "Let's dive in"
- Short paragraphs, generous use of code blocks and tables
- Target reading level: experienced developer, new to the topic

## Process
1. Read the content brief from the strategist
2. Research the topic using web search and documentation
3. Write a complete first draft (1500-3000 words)
4. Save to the shared workspace for the Editor agent

## Constraints
- Follow the SEO keywords from the brief
- Include at least one practical example or walkthrough
- Never fabricate quotes, statistics, or benchmarks

This is the agent doing the heavy lifting. A good SOUL.md here saves dozens of hours of "write it more like this" feedback loops. The style guide is baked into the agent's permanent configuration, not lost in a chat history.
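Constraints like the 1500–3000 word target are also the easiest part of the Writer's SOUL.md to verify mechanically. A hypothetical pre-handoff check might look like this — the heuristics are deliberately crude (a real check would be smarter about detecting examples), but they catch drafts that obviously violate the brief:

```python
def validate_draft(text: str, lo: int = 1500, hi: int = 3000) -> list[str]:
    """Rough checks mirroring the Writer SOUL.md constraints above.
    Returns a list of problems; empty list means the draft passes."""
    problems = []
    words = len(text.split())
    if not lo <= words <= hi:
        problems.append(f"word count {words} outside {lo}-{hi}")
    # Crude proxy for "at least one practical example": look for a
    # fenced or indented code block.
    if "```" not in text and "    " not in text:
        problems.append("no code block or walkthrough detected")
    return problems

draft = "word " * 100  # deliberately short stub
print(validate_draft(draft))
```

Running a check like this before the Editor ever sees the draft keeps the expensive review pass focused on substance rather than mechanics.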

Agent 3: The Editor

The Editor reviews drafts for quality, accuracy, and readability. This is a critical role because — let's be honest — AI-generated text has tells. Repetitive sentence structures, hedging language, that weird tendency to start paragraphs with "Additionally" or "Furthermore."

SOUL.md snippet:

# Editor Agent

## Role
You are a senior editor who reviews technical content for
clarity, accuracy, and natural voice.

## Review Checklist
- Remove AI-isms: filler words, hedge phrases, generic transitions
- Verify technical claims against documentation
- Check that examples actually work
- Ensure the article has a clear narrative arc
- Flag sections that need human review (opinion pieces,
  controversial claims, anything requiring lived experience)

## Output
- Edited draft with inline comments
- Summary of changes made
- List of items requiring human review (if any)

The Editor agent is honest about its limitations. It flags content that needs a human eye — opinion sections, personal anecdotes, anything where authenticity matters. This is a feature, not a bug.
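The "remove AI-isms" checklist item lends itself to a simple first-pass scan before the model does its deeper review. This phrase list is illustrative, not exhaustive — the point is that the mechanical tells can be flagged deterministically, leaving the agent to handle the subtler rewrites:

```python
import re

# A few of the tells the Editor agent looks for (illustrative sample).
AI_ISMS = ["it's worth noting", "let's dive in", "furthermore", "additionally"]

def flag_ai_isms(text: str) -> list[str]:
    """Return each flagged phrase with a character offset, suitable
    for turning into inline comments on the draft."""
    flags = []
    lowered = text.lower()
    for phrase in AI_ISMS:
        for match in re.finditer(re.escape(phrase), lowered):
            flags.append(f"'{phrase}' at char {match.start()}")
    return flags

sample = "Additionally, it's worth noting that agents need memory."
print(flag_ai_isms(sample))
```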

Agent 4: The Publisher

The Publisher handles the last mile: reformatting content for different platforms and scheduling publication.

SOUL.md snippet:

# Publisher Agent

## Role
You are a content publisher who formats and distributes
articles across multiple platforms.

## Platforms
- Dev.to (Markdown, specific frontmatter format)
- Medium (import-friendly formatting)
- Company blog (Hugo static site, specific template)
- Newsletter (condensed version with CTA)

## Process
1. Read the edited draft from the Editor
2. Generate platform-specific versions
3. Schedule publication times (staggered across platforms)
4. Post or queue each version

## Constraints
- Dev.to gets the canonical URL
- Medium version publishes 48 hours after Dev.to
- Newsletter version is a summary with link to full article

This agent eliminates the most tedious part of content distribution: manually reformatting the same article four different ways and publishing them on different schedules.
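The staggering logic is plain date arithmetic. In this sketch, the 48-hour Medium delay comes straight from the SOUL.md above; the blog and newsletter offsets are assumptions I've added for illustration, not rules from the article's config.

```python
from datetime import datetime, timedelta

# Dev.to-first and the 48h Medium delay match the SOUL.md above;
# the blog and newsletter offsets are assumed for this sketch.
OFFSETS = {
    "devto": timedelta(hours=0),       # canonical, publishes first
    "medium": timedelta(hours=48),     # 48 hours after Dev.to
    "blog": timedelta(hours=2),        # assumed small offset
    "newsletter": timedelta(hours=72), # assumed: after both articles are live
}

def schedule(publish_at: datetime) -> dict[str, datetime]:
    """Map each platform to its staggered publication time."""
    return {platform: publish_at + delta for platform, delta in OFFSETS.items()}

for platform, when in schedule(datetime(2025, 6, 2, 9, 0)).items():
    print(f"{platform}: {when:%Y-%m-%d %H:%M}")
```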

The Orchestration Layer

These four agents need something to coordinate them. OpenClaw serves as that orchestration layer — it's a self-hosted platform that connects AI agents to 20+ messaging channels (Slack, Discord, Telegram, WhatsApp, and others) and provides the infrastructure for agent-to-agent communication, shared workspaces, and scheduled execution.

The key features that make this work:

  • SOUL.md configuration: Each agent's personality, tools, and constraints are defined in version-controlled files, not ephemeral system prompts
  • Shared workspaces: Agents read from and write to common directories, creating a natural handoff mechanism
  • Cron scheduling: Agents can run on schedules without human triggering
  • Multi-channel output: The Publisher agent can post to Slack, Discord, or any supported channel directly
  • MIT open source: You can audit exactly what your agents are doing — important when they're publishing under your name
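For the scheduling piece, the Strategist's weekly run described earlier maps onto a standard cron expression. The exact configuration syntax depends on your platform; this is plain crontab notation, and the command name is a placeholder:

```cron
# Every Monday at 07:00 — run the strategist before the workday starts
0 7 * * 1  run-strategist-agent
```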

The full setup documentation is at docs.openclaw.ai, and the source is on GitHub.

How This Compares to Alternatives

Zapier + ChatGPT API

This is the most common "no-code" approach. You set up Zaps that call the ChatGPT API at each stage.

Where it works: Simple, linear workflows. "When a new row appears in this spreadsheet, generate a social media post."

Where it breaks down: There's no persistent memory between Zaps. Each API call is stateless. Your strategist Zap can't remember what topics performed well last month unless you build increasingly complex spreadsheet lookups. You also pay per Zap execution, which adds up fast for multi-step workflows.

Make.com (formerly Integromat)

Similar to Zapier but with more complex branching logic. Better for visual thinkers who want to see the workflow as a flowchart.

Where it works: Multi-step automations with conditional logic. More affordable than Zapier for high-volume scenarios.

Where it breaks down: Same statefulness problem. And the visual builder becomes unwieldy once your workflow exceeds 15-20 nodes. Debugging is painful.

Custom Python Scripts

The "just write code" approach. Maximum flexibility.

Where it works: Everything, eventually. If you can code it, you can build it.

Where it breaks down: Maintenance. Someone has to update the scripts when APIs change, when your content strategy shifts, when you add a new platform. For a solo creator or small team, maintaining custom automation code is a second job. The initial build is the easy part.

Multi-Agent Platform (OpenClaw, etc.)

Where it works: Complex, stateful workflows where agents need memory, personality, and tool access. Content pipelines, customer support, research workflows.

Where it breaks down: It's newer technology. The ecosystem is still maturing. You need to invest time in writing good SOUL.md configurations (though that's a one-time cost that pays dividends).

The honest tradeoff: Multi-agent setups require more upfront configuration than Zapier, but dramatically less ongoing maintenance than custom scripts. And unlike both alternatives, the agents can get better at their jobs over time because they retain context across runs.

What Still Needs a Human

Here's where I'll be direct: AI agents are not a replacement for human judgment in content creation. They're force multipliers. Here's what still needs a person:

Final approval on published content. Always. Even the best Editor agent misses things. Brand voice is nuanced. Factual accuracy on cutting-edge topics requires domain expertise that models don't reliably have.

Strategic direction. Agents can identify trending topics, but deciding which trends align with your business goals is a human decision. The Strategist agent proposes; a human disposes.

Personal stories and opinions. Agents can structure an argument, but authentic perspective comes from experience. The best content combines AI efficiency with human insight — the agent handles the research and structuring, the human adds the "I tried this and here's what actually happened" parts.

Sensitive topics. Anything involving security advisories, legal implications, or community controversies needs human review. Full stop.

A realistic expectation: a well-configured content pipeline with AI agents cuts the time from ideation to publication by about 60-70%. The remaining 30-40% is human review and refinement — and that's the part that makes content worth reading.

Getting Started

If you want to try this approach, here's a minimal starting point:

  1. Pick one stage to automate first. Don't build the whole pipeline at once. The Publisher agent (reformatting content for different platforms) is the easiest win with the lowest risk.

  2. Write your SOUL.md files carefully. This is the single highest-leverage activity. A good SOUL.md eliminates 80% of the "the AI didn't do what I wanted" problems. Be specific about tone, constraints, and output format.

  3. Set up review checkpoints. Never let agents publish without human approval until you've built enough trust in the system. Start with agents that draft and queue; you click "publish."

  4. Iterate on configuration, not prompts. The advantage of SOUL.md files over chat-based prompting is that improvements are permanent and version-controlled. When you fix a style issue, it stays fixed.
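Putting steps 1 and 2 together, a starter SOUL.md for the Publisher might look like this — a minimal sketch following the format shown earlier in this article, with the "always queue, never publish" checkpoint from step 3 baked in:

```markdown
# Publisher Agent (starter)

## Role
You format finished articles for Dev.to and queue them for human approval.

## Process
1. Read the final draft from the shared workspace
2. Add Dev.to frontmatter (title, tags, canonical_url)
3. Queue the post as a draft for human review

## Constraints
- Always queue, never auto-publish
- Flag anything you could not format cleanly
```

Start this small, let it earn trust, then expand the platform list.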


If you found this useful, I write about practical AI automation for dev teams — the stuff that actually works in production, not just in demos. Follow along on Substack for a weekly breakdown of real-world agent workflows.
