DEV Community

Atlas Whoff

I automated my entire content marketing with an AI agent — here's what happened after 30 days

Not what I expected. Some things worked better. Some much worse. All of it real.

The setup

I built Atlas — an AI agent that runs the content marketing for whoffagents.com autonomously. Here's what it does every day without me touching anything:

  • Posts 1 tweet at 9 AM (rotates through 20 topic areas)
  • Queues LinkedIn posts when the session cookie is valid
  • Creates dev.to articles on technical topics
  • Generates YouTube Shorts (script → voiceover → video → upload)
  • Sends error alerts when workflows break
  • Processes Stripe payments and delivers products automatically

I built it on Claude API, n8n, Tweepy, and a Mac mini running launchd jobs. Total infra cost: ~$60/month including API calls.
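As a rough sketch, the daily 9 AM tweet job reduces to: take an AI-generated draft, trim it to the character limit, post it. The version below is minimal Python under stated assumptions — `compose_tweet` and the placeholder credentials are hypothetical, and only the `tweepy.Client` / `create_tweet` calls reflect the real library:

```python
# Minimal sketch of the daily tweet job. In production this runs under a
# launchd job at 9 AM; credentials and helper names here are placeholders.
TWEET_LIMIT = 280  # approximation: Twitter actually counts weighted characters


def compose_tweet(draft: str) -> str:
    """Trim an AI-generated draft to fit the tweet length limit."""
    if len(draft) <= TWEET_LIMIT:
        return draft
    return draft[: TWEET_LIMIT - 1].rstrip() + "…"


def post_daily_tweet(draft: str) -> None:
    """Post one tweet via the Twitter API (tweepy)."""
    import tweepy  # pip install tweepy

    client = tweepy.Client(
        consumer_key="...",
        consumer_secret="...",
        access_token="...",
        access_token_secret="...",
    )
    client.create_tweet(text=compose_tweet(draft))
```

The truncation step matters more than it looks: a model asked for "one tweet" will still occasionally run long, and it's cheaper to clip than to re-prompt.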

Month 1 metrics

Twitter:

  • 43 tweets posted automatically
  • 312,000 impressions (7,256 avg/tweet)
  • 2,847 profile visits
  • 89 new followers
  • Best tweet: 42,000 impressions on "I audited 50 MCP servers. 43% had command injection."
  • Worst tweet: 800 impressions on a post that was too abstract

Dev.to:

  • 14 articles published tonight alone (manual burst before product launch)
  • Ongoing automation: 2-3 articles/week
  • 47,000 total article views across the account
  • Best article: 8,200 views on MCP security vulnerabilities

YouTube:

  • 17 Shorts published in first week
  • 456 views total (new channel, expected)
  • 0 subscribers
  • The 9:16 vertical format is working better than 16:9

Revenue:

  • $0 in month 1 (launched 2 days ago, technically)
  • But: 3 Stripe payment link clicks, 2 started checkouts
  • Organic search traffic up 340% month over month

Infrastructure uptime:

  • 43 tweet workflows succeeded
  • 2 failed (Twitter API rate limit hit, recovered the next day)
  • 1 YouTube upload stuck in processing (manually resolved)
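The "recovered the next day" behavior is just retry-with-backoff around the posting call. A minimal sketch — in production the caught exception would be something like `tweepy.TooManyRequests`; here a generic `RuntimeError` stands in so the pattern is self-contained:

```python
import time


def with_backoff(fn, attempts=3, base_delay=60.0, sleep=time.sleep):
    """Retry fn on failure with exponential backoff (60s, 120s, 240s, ...).

    sleep is injectable so the logic can be tested without waiting.
    Re-raises after the final attempt so the error-alerting layer sees it.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a rate-limit exception
            if attempt == attempts - 1:
                raise
            sleep(base_delay * 2 ** attempt)
```

Tweepy can also do part of this for you (`tweepy.Client(..., wait_on_rate_limit=True)`), but an explicit wrapper lets the same recovery logic cover every workflow, not just Twitter.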

What worked better than expected

The quality bar for AI-generated content is higher than I thought.

I expected to be embarrassed by robot-written content. Instead, the tweets that performed best were AI-written. The MCP security tweet that got 42k impressions? AI wrote it. The dev.to article that got 8,200 views on injection vulnerabilities? AI wrote it (with my technical knowledge as the source).

The secret: specific > general. Every prompt says "be specific, include real numbers, no generic motivational content." The outputs are better than what I'd write under time pressure.
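In code, "specific > general" is just a constraint baked into every prompt. A sketch of what that looks like — the exact wording is illustrative, with only the constraint quoted above taken from the real setup, and the commented-out Claude call uses the real `anthropic` Messages API shape:

```python
def build_tweet_prompt(topic: str) -> str:
    """Assemble a tweet prompt with the specificity constraint baked in.

    The constraint sentence is the one that matters; everything else
    here is illustrative scaffolding.
    """
    return (
        f"Write one tweet about {topic}. "
        "Be specific, include real numbers, no generic motivational content."
    )


# In production the prompt goes to the Claude API, roughly:
#   import anthropic
#   client = anthropic.Anthropic()
#   msg = client.messages.create(
#       model="claude-haiku-...",  # model ID depends on your account
#       max_tokens=300,
#       messages=[{"role": "user", "content": build_tweet_prompt("MCP security")}],
#   )
```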

Scheduling at 9 AM works really well.

I experimented with different times. 9 AM Pacific consistently outperforms other windows by 30-40%. The audience is mostly US West Coast developers. Don't overthink the time — just be consistent and pick morning.

The error alert system is underrated.

Having the agent send itself error reports changed how I work. Instead of checking dashboards, I get notified. Instead of discovering broken workflows a week later, I know within minutes. This alone saved probably 4 hours/month of monitoring time.
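The pattern is simple enough to sketch: wrap every workflow in a decorator that ships the traceback somewhere you'll actually see it. `notify` is a hypothetical callback — in practice it could email you, ping a webhook, or post to a channel:

```python
import functools
import traceback


def alert_on_failure(notify):
    """Decorator: on any exception, send an error report via
    notify(subject, body) before re-raising.

    Re-raising matters — the alert is in addition to, not instead of,
    normal failure handling.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                notify(f"workflow failed: {fn.__name__}", traceback.format_exc())
                raise
        return wrapper
    return decorator
```

Decorate every scheduled job with this once and "checking dashboards" turns into "reading notifications."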

What worked worse than expected

Content variety degrades without active curation.

After 2 weeks, the agent started generating content that felt repetitive. Not identical, but the same mental model expressed 20 different ways. I had to manually add 5 new topic areas and examples to the prompt rotation.

Lesson: the agent needs fresh inputs to produce fresh outputs. Schedule a monthly "content refresh" where you update the topic list and add new examples.
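Mechanically, the rotation can be as dumb as a date-indexed list — which is exactly why the monthly refresh works: appending new topics changes the whole cycle with no other code changes. A sketch, with a hypothetical four-topic list standing in for the real twenty:

```python
import datetime

# Hypothetical stand-in for the real 20 topic areas.
TOPICS = [
    "MCP security",
    "agent infrastructure",
    "prompt design",
    "n8n workflow patterns",
]


def topic_for(day: datetime.date, topics=TOPICS) -> str:
    """Deterministic daily rotation through the topic list.

    Appending topics during a monthly content refresh automatically
    reshuffles the cycle — no scheduler changes needed.
    """
    return topics[day.toordinal() % len(topics)]
```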

LinkedIn is a pain point I haven't solved.

LinkedIn requires a logged-in browser session. Sessions expire unpredictably. My automation works until it doesn't, then I have to re-authenticate. This is a known problem with LinkedIn automation — they actively fight it.

Current status: LinkedIn posts when the session is valid, skips when it isn't. About 60% uptime. Good enough for now.
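The "skip when invalid" behavior is worth making explicit, because the naive version crashes the whole pipeline when the cookie dies. A minimal sketch — `session_valid` would come from a real cookie check in production, and `publish`/`log` are injected callbacks:

```python
def post_or_skip(session_valid: bool, publish, log) -> str:
    """Degrade gracefully: publish while the session cookie is valid,
    otherwise log and skip rather than failing the whole pipeline.
    """
    if not session_valid:
        log("linkedin: session expired, skipping today's post")
        return "skipped"
    publish()
    return "posted"
```

The key design choice is that an expired session is a logged skip, not an exception — LinkedIn flakiness shouldn't take Twitter and dev.to down with it.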

YouTube Shorts retention is low.

The videos look professional. The views are low. This is expected for a new channel with no algorithmic history, but the retention rate is only 35% — meaning people aren't watching past the first few seconds.

The issue: my videos start with a title card and animated orb, not with a hook. The first 2-3 seconds need to answer "what's in it for me?" immediately. Currently reworking the opening sequence.

I still make judgment calls the agent can't.

Pricing decisions, responding to unusual customer questions, deciding whether to engage with a controversial topic — all of these still need me. I estimated I'd be completely hands-off within a month. Reality: 3-4 hours/week of oversight and course correction.

That's still dramatically less than manual content marketing (which would be 20+ hours/week for this volume), but it's not zero.

The unexpected benefit

I'm building in public without the anxiety.

When you post manually, there's always friction — is this tweet good enough? Should I post this now or wait? What if nobody engages? That anxiety creates inconsistency. The agent just posts. It doesn't have imposter syndrome.

Consistency compounds. Twitter rewards accounts that post every day. YouTube rewards channels that upload consistently. The agent doesn't skip days because it's tired or uninspired. That reliability has compounding value that I hadn't modeled in advance.

What I'd do differently

Start with fewer channels, go deeper on one.

I tried to automate Twitter, LinkedIn, dev.to, YouTube, and Instagram simultaneously in month 1. Too many failure points, too much to monitor. I should have nailed Twitter first — get the content quality right, get the automation reliable — and then expanded.

Build the error alerting first.

The first thing I should have built was monitoring. Instead I built it last and spent 2 weeks not knowing when things broke. Start with observability.

Use Claude for the hard parts, simpler models for the easy parts.

I was using claude-opus-4-6 for everything, including tweets. Overkill. Claude Haiku handles tweet generation fine at a tenth of the cost. Save Opus for the articles and decision-making.
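The routing itself is a one-dict affair. A sketch — the model IDs here are illustrative shorthand, not exact API identifiers, and the fallback-to-cheap default is my assumption about a sensible policy:

```python
# Route each task type to the cheapest model that handles it well.
# Model names are illustrative shorthand, not exact API model IDs.
MODEL_BY_TASK = {
    "tweet": "claude-haiku",    # short, formulaic: cheap model is fine
    "article": "claude-opus",   # long-form technical writing
    "decision": "claude-opus",  # judgment-heavy calls
}


def model_for(task: str) -> str:
    """Pick a model by task type; unknown tasks default to the cheap model."""
    return MODEL_BY_TASK.get(task, "claude-haiku")
```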

The honest assessment

Autonomous content marketing works. The volume is impossible to match manually. The quality, when the prompts are right, is competitive with human-written content.

But "autonomous" is on a spectrum. I'm at 80% autonomous. The last 20% — quality control, judgment calls, course correction — is still me.

For a solo founder or small team, that tradeoff is incredible. You get enterprise-scale content velocity with indie overhead.

The full stack that runs Atlas is what I package into Whoff Agents — MCP servers, skill packs, and the infrastructure patterns I've refined over 6 months of running this live.

If you're building something similar, the questions I get most often:

  • How do you keep the content from sounding robotic?
  • How do you handle platform API rate limits?
  • What does the error monitoring look like in practice?

Drop them in the comments. I'll answer from real production experience, not theory.

Top comments (1)

Bhavin Sheth

This is super — I tried automating content too, and the biggest issue was repetition after a while. Your “specific > general” point is spot on, that’s where most people go wrong.