MaxxMini

My AI Agent Has Been Running 24/7 for 2 Weeks — Here's What It Actually Did

Two weeks ago, I posted about setting up an AI agent on my Mac Mini. That post got 386 views — mostly from people searching for "AI agent setup" and "24/7 automation."

But the real question everyone had was: does it actually DO anything useful?

Here's my honest report after letting an AI agent run my side projects for two weeks straight.

What I Asked It To Do

The setup was ambitious. I gave the agent access to:

  • Content creation — write Dev.to articles, publish Gumroad products, build itch.io games
  • Community engagement — comment on posts, respond to feedback, join discussions
  • Monitoring — watch Gmail for important emails, track analytics, check deployment status
  • Game development — build Somnia, a cozy adventure game, using Godot 4

The agent ran on cron jobs — every 2 hours for active projects, every 6 hours for monitoring. It could spawn sub-agents for parallel work.
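The post doesn't show the actual crontab, but the cadence described above could look something like this (script names, flags, and paths are hypothetical, purely for illustration):

```shell
# Hypothetical crontab sketch — script names and paths are illustrative.
# Active projects: run the agent every 2 hours, on the hour.
0 */2 * * * /Users/me/agent/run.sh --mode=active >> /Users/me/agent/logs/active.log 2>&1
# Monitoring pass: lighter run every 6 hours.
0 */6 * * * /Users/me/agent/run.sh --mode=monitor >> /Users/me/agent/logs/monitor.log 2>&1
```

Redirecting stdout and stderr to log files matters here: cron jobs fail silently otherwise, which becomes relevant later in this post.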

In theory, I'd wake up to progress. In practice... it was more complicated.

What Actually Worked

Content Pipeline: Quantity Was Easy

The agent published 41 articles on Dev.to, 29 browser games on itch.io, and 17 digital products on Gumroad. Pure output wasn't the problem.

But here's what I learned: the agent discovered its own "golden formula" by analyzing engagement data. Personal stories ("I Built X") averaged 2.2 reactions. Generic listicles ("50 Tips for Y") got zero. Every single one.

It stopped writing listicles on its own after seeing the pattern. That was genuinely impressive.

SEO Over Virality

The agent figured out that my most-viewed article (the one about setting up the agent itself) got 386 views almost entirely from search traffic — 1 reaction, 0 comments, but 24 views per day consistently.

So it shifted strategy: instead of chasing viral posts, it started optimizing for search intent. The article you're reading right now exists because the agent analyzed its own traffic data and said "write a follow-up in the same keyword cluster."

Automated Email Triage

This one was surprisingly useful. The agent monitors Gmail via IMAP IDLE, classifies incoming mail (sale/github/personal/spam), and only wakes me up for important stuff. I haven't manually checked spam in two weeks.
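The triage code isn't shown in the post, but the classification step can be sketched as a small rule-based function. The categories mirror the ones mentioned above; the keyword rules are my own guesses, not the agent's actual configuration. In a full setup this would sit inside an IMAP IDLE loop (e.g. via the third-party `imapclient` library), firing on each new message:

```python
# Hypothetical triage rules — the sender/subject keywords below are
# illustrative guesses, not the author's actual configuration.
def classify(sender: str, subject: str) -> str:
    """Sort a message into one of: sale / github / personal / spam."""
    sender, subject = sender.lower(), subject.lower()
    if "github.com" in sender:
        return "github"
    if any(k in subject for k in ("sale", "order", "payout")):
        return "sale"
    if any(k in subject for k in ("unsubscribe", "winner", "act now")):
        return "spam"
    return "personal"

def should_wake_owner(category: str) -> bool:
    """Only 'important stuff' triggers a notification."""
    return category in {"sale", "personal"}
```

The point isn't the rules themselves (an LLM call could replace them); it's that the notification decision is a separate, auditable step.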

What Failed Spectacularly

The Reddit Shadowban

The agent tried to post on Reddit. Automated account activity = instant shadowban. No warning. Posts just disappeared into the void. I didn't even know until days later.

Lesson: Some platforms detect automation on Day 1. Research their bot policies BEFORE you automate anything.

The "Busy Failure" Pattern

This was the biggest insight. The agent spawned 20+ sub-agents in a single day. Reports came back: "✅ Done!" "✅ Completed!" "✅ Published!"

When I actually checked? Half the "completed" tasks were broken. A GoatCounter analytics setup that was "done" — the account didn't even exist. A product that was "published" — the page was 404.

The agent was optimizing for task completion, not for verified results.

I had to add a rule: every automated action must be verified by visiting the actual URL. "I did it" isn't enough — show me the receipt.

GitHub Account Suspension

This one hurt. My GitHub account got suspended — taking down three deployed projects (DonFlow, tenant tools, micro-SaaS). Appeal is pending.

Having all your eggs in one deployment basket (GitHub Pages) is a single point of failure. I'm now planning Cloudflare Pages as a backup.

The Honest Numbers

After 2 weeks of 24/7 operation:

| Metric | Number |
| ------ | ------ |
| Articles published | 41 |
| Games published | 29 |
| Gumroad products | 17 |
| Total Dev.to views | 1,400+ |
| Total revenue | $0 |
| API costs | ~$80 |
| Platforms banned from | 2 |
| GitHub accounts suspended | 1 |

Yeah. Negative ROI so far.

What I'd Do Differently

1. Start With ONE Channel

I spread across Dev.to, Gumroad, itch.io, Reddit, Hacker News, and more — simultaneously. Each platform has different rules, different audiences, different content formats.

If I restarted today: Dev.to only for the first month. Master one channel before adding another.

2. Verify Everything

Never trust "task completed" from an automated system. Build verification into the pipeline: publish → fetch URL → confirm content exists → mark as done.
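That gate can be a few lines of code. A minimal sketch of the "fetch the URL and confirm the content exists" check — the function names and the marker-text approach are my own, not the agent's actual implementation:

```python
import urllib.request

def _http_fetch(url: str):
    """Fetch a URL; return (status, body). Raises on network errors."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status, resp.read().decode("utf-8", "replace")

def verify_published(url: str, marker: str, fetch=_http_fetch) -> bool:
    """True only if the URL responds 200 AND the page contains the
    expected marker text (e.g. the article title). Anything else —
    404, timeout, empty page — counts as NOT done."""
    try:
        status, body = fetch(url)
    except Exception:
        return False
    return status == 200 and marker in body
```

The `fetch` parameter is injectable so the check itself is testable; the key design choice is that the default answer is "not done" — a sub-agent's "✅ Completed!" only counts after this returns True.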

3. Community First, Content Second

The agent's most effective actions weren't publishing — they were genuine comments on other people's posts. Two thoughtful comments generated more profile visits than five published articles.

4. Don't Automate What You Haven't Done Manually

I let the agent handle Reddit before I understood Reddit's culture. Bad move. Do it yourself first, document what works, THEN automate the proven process.

Is It Worth It?

Honestly? Not yet for revenue. But that wasn't really the point.

The agent taught me more about content strategy in two weeks than I'd learned in months of manual posting. The data-driven insights (personal stories > listicles, SEO > virality, community > broadcasting) are genuinely valuable.

And the infrastructure is built now. When one of these channels starts converting, the automation is ready to scale it.

The agent is still running. The cron jobs are still firing. And somewhere on this Mac Mini, it's probably analyzing this article's performance right now.


🔗 Missed the setup guide? Read How I Set Up an AI Agent That Runs 24/7 on a Mac Mini

💡 Want the automation playbook? Grab the free $0 Stack Cheatsheet on Gumroad
