
Sebastian Chedal

Posted on • Originally published at fountaincity.tech

Inside Our Autonomous AI Pipeline: 4 Agents, Zero Human Writers

Why We Built an Autonomous Content Pipeline

Fountain City is a 27-year-old technology studio. We build autonomous AI systems for clients, and we need a steady stream of research-backed content to support that work. Blog posts, service pages, landing pages, SEO optimization, social distribution. The kind of output that would normally require a content strategist, a researcher, a writer, an analyst, and a social media manager.

We used to have those roles. Now our team focuses on other work (client relationships, quality management, strategic direction), and we built AI agents to fill the roles instead.

This post walks through the actual system: the agents, the pipeline, the handoffs, the quality gates, what it costs, and what we’ve learned running it in production. We’re publishing this because the real operational detail doesn’t exist yet. The content out there on autonomous AI pipelines is theoretical framing about the gap between conversational AI and agentic systems, anonymous Reddit posts, or academic papers. We run this pipeline every day. This post is the operational detail.

Meet the Agent Team

Four core agents and two support agents handle the full content lifecycle. Each has a defined role, specific tools, scheduled work hours, and a mailbox for communicating with the others. They run on OpenClaw, an open-source multi-agent orchestration platform, on a single AWS server.

Scott — SEO/GEO and Content Research

Scott is our autonomous SEO/GEO research agent. He monitors search rankings, tracks keywords through the Google Search Console API and Keywords Everywhere, runs competitive analysis, does full AI search citation analysis across Perplexity and other platforms, and writes detailed content briefs. He produces 40+ briefs per month across 9 scheduled weekly workflows.

Scott’s week starts Monday morning. At 8 AM, he runs a standup check on active work. By 10 AM, he’s pulling fresh data: keyword rankings, Reddit and Substack scans for industry trends, Perplexity citation sweeps to see where Fountain City does and doesn’t get mentioned in AI search results, and a full GSC performance snapshot. At noon, he synthesizes everything. He scores new topics against a five-factor weighted rubric: search volume (25%), alignment with our services (25%), competitive content gap (20%), AI search citation opportunity (15%), and timeliness (15%). By 2 PM, he’s re-ranked the content backlog and starts writing briefs for the top unbriefed topics.
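
That rubric is easy to express in code. Here's a minimal sketch: the five factors and their weights come from the description above, but the 0-to-10 scoring scale and the function itself are assumptions for illustration.

```python
# Weights are from the post; the 0-10 factor scale is an assumption.
WEIGHTS = {
    "search_volume": 0.25,
    "service_alignment": 0.25,
    "content_gap": 0.20,
    "ai_citation_opportunity": 0.15,
    "timeliness": 0.15,
}

def score_topic(factors: dict) -> float:
    """Weighted sum of 0-10 factor scores; the result is also 0-10."""
    return round(sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 2)

topic = {
    "search_volume": 7,
    "service_alignment": 9,
    "content_gap": 6,
    "ai_citation_opportunity": 8,
    "timeliness": 5,
}
print(score_topic(topic))  # 7.15
```

Re-ranking the backlog is then just sorting topics by this score, descending.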

Tuesday through Thursday, Scott writes two more briefs per day. By the end of the week, the backlog has 10+ fresh briefs ranked by priority, each one mapped to a specific content cluster and service page.

Scott also reads everything published by the top minds in SEO and GEO on a weekly basis. He takes notes, identifies strategic shifts, discovers new tools, and develops recommendations to improve his own capabilities. He tracks competitors too, reading everything they publish, documenting their strategies, and developing approaches to match or exceed their ranking performance.

Research learnings that Scott discovers from expert SEO and GEO blogs to self-improve his autonomous content capabilities

Each brief Scott writes is 160+ lines. It includes a recommended outline, keyword targets, internal linking tables, SERP analysis, AI search citation gaps, competitive positioning, and specific guidance for the writing agent. The briefs are the foundation. If the brief is thin, everything downstream suffers.

Redacted competitor tracking view showing how Scott monitors and analyzes competing content strategies

Aria — Content Writer and Publisher

Aria takes approved briefs and writes full blog posts, service pages, and landing pages in Fountain City’s brand voice. She loads company context files, tone of voice rules, and the communication strategy before writing a single word. She generates images, configures SEO metadata, sets internal links, and publishes directly to WordPress. Every draft goes through a self-review pass against the voice guide before it reaches a human.

Aria runs on a cron schedule with two full pipeline cycles per day. Research kicks off at 7 AM and 11 AM, writing at 8 AM and noon, self-review at 9 AM and 1 PM, and final publication at 10 AM and 2 PM. A brief that enters the pipeline in the morning can be a complete WordPress draft by the afternoon. Her most recent output: a 2,000-word buyer’s guide on AI consulting in Portland, from approved brief to WordPress draft in four hours.
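
That schedule reads naturally as cron entries. The sketch below is hypothetical: the times come from the post, but the `openclaw` command and its flags are assumptions, not the platform's actual CLI.

```cron
# Hypothetical crontab for Aria's two daily pipeline cycles.
# Times are from the post; the openclaw invocation is an assumption.
0 7,11  * * *  openclaw run aria --stage research
0 8,12  * * *  openclaw run aria --stage write
0 9,13  * * *  openclaw run aria --stage self-review
0 10,14 * * *  openclaw run aria --stage publish
```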

Kai — CRO and Analytics

Kai analyzes GA4 and Google Search Console data, produces monthly performance reports, and identifies conversion optimization opportunities. He runs a full analytics cycle on the 1st and 15th of each month, with weekly spot checks on Mondays. When Kai spots a page with high traffic but low engagement, or a blog post missing internal CTAs, he writes a work order. That work order enters the same pipeline as Scott’s briefs. Kai’s focus is conversion: are people finding the content, and does the content move them toward a next step?

Kai’s work orders are surgical. A recent example: he identified that the AI prioritization blog post had steady traffic but zero calls-to-action. His work order specified two CTA insertion points at 40% and 70% scroll depth, linking to the contact page and the AI readiness assessment. Aria executed the edit in one pass without interpretation.

Daisy — Social Media Distribution

Daisy takes published blog posts and creates LinkedIn announcements. She runs distribution passes on weekday mornings at 9 AM, picking up anything Aria published the previous day. Her job is amplification, taking what’s already written and getting it in front of the right audience on the right platform, in our tone of voice.

The Pipeline: From Topic Discovery to Published Post

The pipeline runs in four stages, each on its own schedule. An item moves through research, writing, self-review, and publication.

Flow diagram showing how Scott briefs feed into Aria for content creation in our autonomous AI pipeline

Stage 1: Research

Scott identifies a topic through his weekly analysis cycle and writes a content brief. The brief lands in a review folder. Sebastian reviews it, approves it, requests changes, or skips it. This is the first human gate. Sebastian decides what gets written and what doesn’t.

Once approved, Aria picks up the brief and runs a research pass. She searches the company’s internal knowledge base using QMD (a local semantic search tool), reads relevant pages on the live website, pulls external sources through web search, and appends all findings directly to the brief. A typical research pass produces 1,500 to 3,500 words of organized reference material, sourced and attributed.

Stage 2: Write

Aria reads the enriched brief, loads company context (brand identity, tone of voice rules, service descriptions, target market profiles), and writes the full draft in one pass. For blog posts, that’s standard HTML. For service pages, it’s Kadence block markup that matches the site’s existing design system.

Every draft includes SEO metadata, internal links to related pages, and image placeholders. Aria doesn’t self-edit during writing. She gets the content down. The next stage handles quality.

Stage 3: Self-Review

Aria runs a structured review against the voice guide. She checks for banned patterns (guru framing, dramatic setups, bolded definition lead-ins, teacher-to-student positioning), verifies every stat has a source, confirms all required internal links are placed, and writes a review report with a specific improvement plan. A typical review catches 3 to 8 issues per draft.
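
A review pass like this can be approximated with a small rule table. The regexes below are illustrative stand-ins, not Fountain City's actual voice guide, but they show the shape of the check: named rules, one pattern each, and a list of hits per draft.

```python
import re

# Illustrative voice-guide rules; the real guide's patterns are not public,
# so these regexes are stand-ins for the banned patterns named above.
BANNED_PATTERNS = {
    "bolded definition lead-in": re.compile(r"^\*\*[^*]+\*\*\s+is\b", re.MULTILINE),
    "dramatic setup": re.compile(r"\b(game.?chang\w+|revolutioniz\w+)\b", re.IGNORECASE),
    "unsourced stat": re.compile(r"\d+(\.\d+)?%(?![^.]*\[source)", re.IGNORECASE),
}

def self_review(draft: str) -> list[str]:
    """Return the name of every banned pattern found in the draft."""
    return [name for name, rx in BANNED_PATTERNS.items() if rx.search(draft)]

issues = self_review("**AI** is a game-changer. 73% of pilots fail.")
# Flags all three rules for this draft.
```

The review report is then the list of rule names plus an improvement plan, which the next stage applies fix by fix.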

Stage 4: Improve and Publish

Aria applies every fix from the review report, generates and uploads images, and creates a WordPress draft. For new content, it goes up as a draft for Sebastian to review. For edits to existing live pages that are classified as low risk, the changes go directly to the live site.

Flow diagram showing the process from Aria content writing to live publication in our autonomous AI pipeline

Sebastian gets a notification on Discord (similar to Slack) with a summary of what was done and a link to preview. He approves, requests changes, or flags issues. This is the second human gate.

How Kai’s Work Enters the Pipeline

Kai’s work orders follow the same four stages. When Kai identifies a conversion opportunity, like a high-traffic blog post with zero CTAs, or a service page missing internal links, he writes a work order with specific instructions. That work order enters Aria’s queue alongside Scott’s briefs. The pipeline alternates between Scott and Kai sources to keep both SEO-driven content and conversion-driven optimization moving.
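
The post only says the pipeline alternates sources; a simple interleave is one way to sketch that. The function below is an assumption about the mechanism, not the actual scheduler.

```python
from itertools import chain, zip_longest

# Hypothetical sketch: alternate Scott's briefs with Kai's work orders so
# neither source starves the other. The interleaving strategy is assumed.
def interleave(briefs: list, work_orders: list) -> list:
    merged = chain.from_iterable(zip_longest(briefs, work_orders))
    return [item for item in merged if item is not None]

queue = interleave(["brief-1", "brief-2", "brief-3"], ["wo-1"])
# ['brief-1', 'wo-1', 'brief-2', 'brief-3']
```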

Flow diagram showing how Kai work orders feed into Aria for conversion optimization in our autonomous AI pipeline

Quality Gates and Human Oversight

Autonomous doesn’t mean uncontrolled. The pipeline has two explicit human gates and one AI self-check.

Gate 1: Brief approval. Sebastian reviews every content brief before it enters the writing pipeline. He approves, revises, or skips. Nothing gets written without a human deciding it should.

Gate 2: AI self-review. Every draft goes through a structured review pass against the company’s voice guide, checking for tone issues, unsourced claims, missing links, and formatting problems. This catches the majority of quality issues before a human ever sees the content.

Gate 3: Publication approval. New content goes up as a WordPress draft. Sebastian reviews the preview on the actual site, with images, formatting, and links in place. He approves or sends it back. Edits to existing pages have a risk classification: low-risk changes (adding a CTA, inserting an internal link) can go live directly. Medium and high-risk changes (new pages, structural rewrites) always require approval.
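
The routing logic behind Gate 3 is small enough to show in full. The risk levels and their outcomes come from the description above; the enum and function names are assumptions.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # adding a CTA, inserting an internal link
    MEDIUM = "medium"  # new pages
    HIGH = "high"      # structural rewrites

def route(change_risk: Risk) -> str:
    """Low-risk edits publish directly; everything else waits for a human."""
    return "publish_live" if change_risk is Risk.LOW else "queue_for_approval"

print(route(Risk.LOW))   # publish_live
print(route(Risk.HIGH))  # queue_for_approval
```

As trust grows, shifting the threshold means changing one condition, not rebuilding the pipeline.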

The system also tracks every action in execution logs. Every research query, every API call, every draft decision is recorded. If something goes wrong, we can trace exactly what happened and why.

The Numbers: What This Pipeline Actually Produces

Real metrics from production, as of March 2026:

  • Content briefs per month: 40+ (Scott’s output across 9 weekly workflows)
  • Published pieces through the pipeline: 15 completed briefs as of mid-March 2026, including blog posts, service pages, landing pages, and optimization edits
  • Average time from approved brief to WordPress draft: Same day when the pipeline has capacity; 1 to 2 days with queue backlog
  • Cost per piece: $2 to $5 in direct AI API costs per published article
  • Full monthly stack cost: Approximately $225/month ($50/week) for the entire agent team, including AI API costs, server infrastructure, and tooling
  • Human equivalent cost: A content researcher, writer, analyst, and social media manager would run $15,000 to $25,000/month in salary costs. Managed autonomous agents typically cost $500 to $3,000/month, per agent or bundled.

Content briefs generated by our autonomous AI content pipeline showing position in the creation workflow

The bottleneck is human review, not agent speed. The agents can produce a research-backed, self-reviewed blog post in under four hours across the pipeline stages. Human approval takes one to three days depending on Sebastian’s schedule. That’s by design. The human gate exists because brand voice and factual accuracy are worth the wait.

SEO and GEO rank positions for tracked AI content pipeline keywords

What We Learned Running This for Three Months

This pipeline has been running in production since early 2026. Six things stand out.

Context and input quality is everything. A great brief with good research produces a good draft. A vague brief produces content that needs heavy editing. We invested most of our development time in making Scott’s briefs detailed and structured, because that’s where quality starts. The agents downstream can only work with what they’re given.

Agent-to-agent communication needs structure. The agents communicate through a file-based mailbox system where every message follows a standard format: sender, date, message type, and structured content. When agent communication is ad-hoc, things get lost. When it follows a protocol, you can trace every handoff and debug every failure. The protocol is simple. That’s the point.
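
To make the protocol concrete, here is an illustrative sketch of one mailbox message. The four fields (sender, date, message type, structured content) come from the description above; the JSON schema, filename convention, and `send()` helper are assumptions.

```python
import datetime
import json
import pathlib

# Hypothetical file-based mailbox message. Field names mirror the protocol
# described in the post; everything else about the schema is assumed.
def send(mailbox: pathlib.Path, sender: str, msg_type: str, content: dict) -> pathlib.Path:
    """Drop a structured message file into another agent's mailbox directory."""
    msg = {
        "sender": sender,
        "date": datetime.date.today().isoformat(),
        "type": msg_type,
        "content": content,
    }
    mailbox.mkdir(parents=True, exist_ok=True)
    path = mailbox / f"{msg['date']}-{sender}-{msg_type}.json"
    path.write_text(json.dumps(msg, indent=2))
    return path
```

Because every message carries the same fields, a handoff can be replayed or audited just by reading the files in date order.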

Human oversight gates are non-negotiable today, and that will change. Right now, Sebastian reviews every brief and every new piece of content. Over time, as the agents build track records and the self-review system catches more edge cases, the training wheels will come off further. The risk classification system is already handling this: low-risk edits go live without approval. Medium-risk changes still require human review. The threshold will shift as trust is earned.

The pipeline improves every week. Each piece of content teaches the system something. Scott’s briefs get more detailed because he’s learning from what performs well. Aria’s self-review catches more voice issues because the pattern library grows. Kai’s work orders get more targeted because he has more performance data. The feedback loops are real.

Failure modes are manageable. Things go wrong. A research pass comes back thin. A draft uses a stat without a source. A formatting issue slips through. The pipeline handles these through multiple passes: Aria flags areas where content or insights are thin, the self-review catches factual errors and hallucinations, and the human gate catches everything else. The multi-pass approach means no single failure mode kills a piece of content. It gets caught and fixed.

Agent specialization beats general-purpose agents. We tried the “one smart agent does everything” approach early on. It doesn’t scale. An agent optimized for research makes different tradeoffs than an agent optimized for writing in brand voice. Scott uses cost-efficient models for data gathering and analysis. Aria uses more capable models for writing and self-review. Kai runs analytics on structured data where precision matters more than creativity. Matching the model and tooling to the job produces better results at lower cost than running everything through one expensive, general-purpose agent.

Multi-agent dashboard showing Scott, Aria, Kai, and Daisy working together in our autonomous AI content pipeline ecosystem

Could This Work for Your Business?

An autonomous content pipeline makes sense if you can say yes to three of these four questions:

  • Do you publish content regularly, or want to?
  • Is content a growth lever for your business, whether that’s SEO, thought leadership, or lead generation?
  • Do you have someone who can review and approve outputs? The system needs a human gate.
  • Are you currently spending $3,000+ per month on content creation through agencies, freelancers, or staff?

If that sounds like your situation, managed autonomous agents can replace or augment your content operation at a fraction of the cost. We know because we did it for ourselves first.

The approach also applies beyond content. The same multi-agent architecture (specialized agents with defined roles, structured handoffs, quality gates, and human oversight) works for any business operation where AI agent teams can coordinate on complex workflows. Research operations, sales enablement, lead generation, customer onboarding, data analysis. The pipeline is a pattern, not just a content tool.

If you’re evaluating whether your organization is ready for this kind of system, an AI readiness assessment is a practical starting point. And if you’ve already tried AI tools and found them underwhelming, consider whether the issue was the tools themselves or the way the pilot was structured. Most AI content experiments fail because they skip the infrastructure: the research pipeline, the quality gates, the voice rules, the feedback loops. The AI is the easy part. The system around it is what makes it work.

Frequently Asked Questions

How much does an autonomous content pipeline cost to run?

Our full agent team runs on approximately $225/month, covering AI API costs, server infrastructure, and tooling. That’s roughly $50/week for a system that replaces what would cost $15,000 to $25,000/month in human salaries. For clients, we offer managed autonomous agent services where total costs typically range from $500 to $3,000/month depending on complexity and volume.

Can AI agents write content as well as human writers?

Yes, but quality takes work. The first draft from an AI agent is a starting point, not a finished product. Quality depends on three things: excellent context (the agent needs to know your brand, your audience, and your positioning), a well-defined tone of voice (specific rules, not vague guidance), and a self-review process that catches and fixes issues before a human ever sees the output. We built all three into our pipeline, and the result is content that reads like it was written by the person whose voice it represents.

What platform do you use to run AI agents?

We use OpenClaw, an open-source multi-agent orchestration platform that runs on any Linux server. Our full stack runs on a single AWS instance behind tightly secured infrastructure. OpenClaw handles scheduling, agent communication, tool access, and session management. Other flavors exist for different use cases: ZeroClaw for lightweight Rust-based deployments, and Molt Worker for the Cloudflare edge. As of yesterday, there is also Nemo Claw from NVIDIA, a wrapper around OpenClaw, which we are now considering as our new standard.

How do you prevent AI hallucination in published content?

Multiple layers. During research, every data point is sourced and attributed. During writing, the agent is instructed to use placeholder tags for anything it can’t confirm rather than fabricating content. During self-review, the agent checks every stat and claim against its research sources and flags unsupported statements. The human review gate catches anything that slips through. In practice, the multi-pass approach catches hallucinations before they reach the live site.
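
The placeholder-tag convention is the key mechanism here, and it is simple to enforce mechanically. The tag syntax below is an assumption (the post doesn't specify it); the idea is that the writer emits a visible marker instead of fabricating a fact, and review blocks publication while any marker remains.

```python
import re

# Hypothetical tag syntax: the writer inserts [NEEDS-SOURCE: ...] for any
# claim it cannot confirm, and this gate refuses to publish while tags remain.
PLACEHOLDER = re.compile(r"\[NEEDS-SOURCE:[^\]]*\]")

def ready_to_publish(draft: str) -> bool:
    return not PLACEHOLDER.search(draft)

draft = "Adoption grew [NEEDS-SOURCE: 2025 adoption stat] last year."
print(ready_to_publish(draft))  # False
```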

Can I see examples of content produced by this pipeline?

You’re reading one right now. Every post on this blog, every service page, every landing page on this site is written and maintained by our AI agent team. The analysis of why AI pilots fail, the strategic framework for prioritizing AI projects, the service pages describing our offerings, all of it. This site is the proof of concept, running in production.
