Hopkins Jesse
I Let AI Agents Run My Side Hustle for 30 Days — Here's the Brutal Truth


It was 2:47 AM on a Tuesday when I watched the notification roll in: "PR #1842 — Closed." Then another. Then six more. In the span of four minutes, Expensify's maintainers had shut down every single pull request my AI agent had submitted over the past week. Eight PRs. Eight "thanks but no thanks" messages. I took a sip of cold coffee and stared at the terminal like it owed me money.

Here's the headline number: I spent $47 letting AI agents loose on the internet to make money for me. After 30 days, my total revenue was $0.00. Effective hourly wage: zero dollars and zero cents. Before you close this tab thinking it's another "AI is overhyped" rant — hold on. This story has a twist. Several, actually. And if you're thinking about doing what I did, you need to hear all of them.

Because buried inside that $0 return are some genuinely useful discoveries about what AI agents can and can't do in the wild. One PR did get merged. One platform turned out to be a scam (okay, three platforms turned out to be scams). And one sub-agent wrote a 2,400-word article in 68 seconds that made me question my entire career. Let's get into it.

The Setup: Why I Did This

Every week, my Twitter feed fills up with threads like "I made $5,000 this month with AI agents on autopilot" and "The AI side hustle blueprint that changed my life." The pattern is always the same: a screenshot of a Stripe dashboard, a link to a paid course, and zero verifiable data.

I wanted to know what actually happens when you stop theorizing and just... do it. Not with a marketing funnel and a landing page, but with real tools, real platforms, and real (tiny) amounts of money at stake.

So I built what I internally called "The Hunter Stack" — a set of AI agents, each assigned to a different money-making strategy:

  • Bounty Hunter: Scanned GitHub for open bounties, forked repos, wrote code, submitted PRs
  • Content Creator: Identified trending topics, wrote articles, optimized for SEO
  • Airdrop Scout: Tracked web3 testnet opportunities, evaluated airdrop potential

The tech stack was straightforward: Claude API for reasoning and writing, GitHub CLI for repo operations, DuckDuckGo for research, and OKX API for crypto wallet checks. I used OpenClaw as the orchestration framework to coordinate everything. The agents ran on a $5/month VPS, working around the clock while I slept, worked my actual job, and occasionally questioned my life choices.
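The article doesn't show the actual orchestration code, but the shape of "The Hunter Stack" can be sketched as a simple dispatcher that routes queued tasks to the agent whose strategy matches. Everything below is illustrative: the `Agent` class, the strategy names, and the `dispatch` function are my own stand-ins, not the real OpenClaw configuration.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    strategy: str          # e.g. "bounty", "content", "airdrop"
    results: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # A real agent would call the Claude API / GitHub CLI here;
        # this stub just records that the task was handled.
        outcome = f"{self.name} handled: {task}"
        self.results.append(outcome)
        return outcome

def dispatch(agents: list[Agent], tasks: dict[str, list[str]]) -> int:
    """Route each queued task to the agent whose strategy matches."""
    handled = 0
    for agent in agents:
        for task in tasks.get(agent.strategy, []):
            agent.run(task)
            handled += 1
    return handled

agents = [
    Agent("Bounty Hunter", "bounty"),
    Agent("Content Creator", "content"),
    Agent("Airdrop Scout", "airdrop"),
]
tasks = {
    "bounty": ["scan GitHub for bounty-tagged issues"],
    "content": ["draft article on bounty-platform red flags"],
    "airdrop": ["rank testnet opportunities"],
}
print(dispatch(agents, tasks))  # 3 tasks routed
```

The point of the structure, not the stub: each strategy is isolated, so one agent's failure (say, the bounty hunter burning credits) doesn't take down the others.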

The theory was simple: if AI can write code, research markets, and interact with APIs, surely it can make some money. Right?

The Numbers Don't Lie

Let's start with the spreadsheet everyone actually wants to see.

Expense Breakdown

| Item | Cost |
| --- | --- |
| OpenAI API | $12.40 |
| Claude API | $8.60 |
| VPS hosting | $5.00 |
| GitHub Copilot | $10.00 |
| Coffee (consumed during frustration) | ~$11.00 |
| **Total** | **$47.00** |

That coffee line is only half a joke. There's something uniquely painful about watching an AI burn through your API credits to submit a PR that gets auto-closed by a bot.

PR Pipeline: The Full Picture

Over 30 days, my bounty-hunting agent submitted 40+ pull requests across multiple platforms and repositories. Here's what happened:

| Platform | PRs Submitted | Merged | Paid |
| --- | --- | --- | --- |
| Claude-Builders | 30 | 0 | $0 |
| RustChain (Scottcjn/rustchain-bounties) | 1 | 1 | $0 |
| Expensify | 8 | 0 | $0 |
| La-Tanda | Account restricted | — | $0 |
| Other repos | ~5 | 0 | $0 |
| **Total** | **40+** | **1** | **$0** |

One merged PR out of 40-plus attempts. And here's the kicker — the one that did merge (Scottcjn/rustchain-bounties #2759) never paid out. I checked my RustChain wallet balance after the merge: 0.0 RTC. The code was accepted. The payment wasn't sent. More on this later.

Time Allocation

The agents weren't just submitting PRs — they were spending time on each task. Here's the rough breakdown of where the compute cycles went:

| Activity | % of Time |
| --- | --- |
| Research & scanning | 27% |
| Writing code | 40% |
| Communication (PR descriptions, comments) | 17% |
| Debugging & retries | 17% |

Forty percent of the time was spent writing code that, with one exception, nobody merged. That's not a productivity metric. That's a cautionary tale.

What Actually Worked

Okay, so bounty hunting was a disaster. But not everything failed. Two things surprised me.

Content Creation: The Unexpected MVP

While the bounty hunter was getting rejected left and right, the content agent was quietly doing something remarkable.

It wrote two articles totaling roughly 4,600 words. The first one — a 2,400-word deep dive — was produced in 1 minute and 8 seconds. Not "outlined in 68 seconds." Fully written. Coherent. With proper structure, transitions, and data citations. I read it twice and genuinely couldn't have written a noticeably better version myself in under two hours.

This is where AI's real strength in side hustles lives: not in competing with humans for existing bounties, but in producing content at a pace and quality level that makes traditional freelancing economics look quaint.

The content the agent chose to write was also telling. It didn't churn out generic "10 Tips for Productivity" listicles. It identified gaps — "red flag" guides for bounty platforms, honest cost breakdowns, platform comparisons. These are the kinds of posts that do well on Dev.to and Hashnode because they're useful and specific.

One finding worth noting: Dev.to and Hashnode are the most agent-friendly publishing platforms. They have open APIs, straightforward auth, and no content gatekeeping. Medium, on the other hand, has closed its API, meaning any AI content pipeline hits a manual step at the finish line. If you're building an automated content system, plan around this.
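For the curious, publishing to Dev.to programmatically really is this simple. The `POST https://dev.to/api/articles` endpoint and the `api-key` header are real (Forem API v1); the draft content, tag choices, and helper function below are illustrative. Note that Dev.to caps articles at four tags.

```python
import json

def build_article_payload(title: str, body_markdown: str,
                          tags: list[str], published: bool = False) -> dict:
    """Shape a draft in the format the Forem articles endpoint expects."""
    return {
        "article": {
            "title": title,
            "body_markdown": body_markdown,
            "published": published,   # keep False to land in drafts first
            "tags": tags[:4],          # Dev.to allows at most 4 tags
        }
    }

payload = build_article_payload(
    "Bounty Platform Red Flags",
    "## What I learned\n...",
    ["ai", "sidehustle", "experiment", "automation", "extra"],
)
print(json.dumps(payload["article"]["tags"]))

# Actual publish step (needs a real API key, so it's left commented out):
# import requests
# resp = requests.post("https://dev.to/api/articles",
#                      headers={"api-key": API_KEY},
#                      json=payload)
```

Defaulting `published` to `False` is the safety rail: the agent can push drafts all day, but nothing goes live without a human click.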

Airdrop Scouting: The Long Game

The web3 airdrop agent identified Pharos Network as the most promising opportunity. Pharos has $8M in confirmed funding, which is a meaningful signal in the airdrop world. The daily check-in task takes about 5 minutes, and the historical precedent is solid — participants in Monad and Scroll testnets reported earning $500 to $5,000+ when tokens launched.

The catch: airdrops require real wallet connections and on-chain activity that can't be fully automated. My agent could research and rank opportunities, but the actual participation still needed a human in the loop. This isn't a failure of the agent — it's a feature of how crypto works. The systems are literally designed to filter out bots.

That said, the research alone saved me hours. Instead of scrolling through airdrop Twitter and trying to separate signal from noise, I had a ranked list delivered to me daily. Time spent: zero minutes of my own.
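The ranking itself is the automatable part. A toy version of the scout's scoring logic might look like this; the weights are made up for illustration, but the inputs (confirmed funding, daily time cost, historical payout precedent) mirror the signals described above.

```python
def airdrop_score(funding_usd: float, daily_minutes: int,
                  has_precedent: bool) -> float:
    """Higher funding, lower time cost, and a historical payout
    precedent (Monad/Scroll-style) all push the score up."""
    score = funding_usd / 1_000_000       # one point per $1M raised
    score -= daily_minutes * 0.1          # penalize time-heavy tasks
    if has_precedent:
        score += 5                        # proven-payout bonus
    return round(score, 2)

opportunities = {
    "Pharos Network": airdrop_score(8_000_000, 5, True),
    "Unknown Testnet": airdrop_score(0, 30, False),
}
best = max(opportunities, key=opportunities.get)
print(best)  # Pharos Network ranks first under these weights
```

The human stays in the loop for the wallet interactions; the agent just keeps this list sorted.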

Lessons Nobody Tells You

This is the section I wish someone had written before I started. These aren't theoretical observations — they're things I learned by watching 40 PRs die in real time.

1. Bounty Platform Scam Rate Is Roughly 17%

Out of approximately 23 platforms and projects my agent evaluated, 4 had serious problems. The three worst offenders:

  • Claude-Builders: 30 PRs submitted, 0 merged. Repository had 1 star. This is a classic ghost bounty farm — they attract free code contributions with the promise of payment, then never review or merge anything.
  • RustChain: The one PR that did merge resulted in a wallet balance of 0.0 RTC. Merged ≠ paid. This is the single most important thing I learned.
  • La-Tanda: Account got restricted before any meaningful work could be submitted. Red flag behavior from the platform side.

If you're manually bounty hunting, this might not burn you because you'd notice the red flags early. But an AI agent doesn't have instincts. It sees an open issue with a bounty tag and goes for it. Without explicit fraud-detection logic, it'll happily submit code into the void.
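What would "explicit fraud-detection logic" look like? Here's a hedged sketch. The thresholds are my own judgment calls, and the metadata dict is illustrative: `stargazers_count` is a real field from the GitHub repo API, while `open_bounty_prs`, `merged_prs`, and `verified_payouts` are hypothetical fields you'd have to compute yourself.

```python
def bounty_red_flags(repo: dict) -> list[str]:
    """Return human-readable red flags for a bounty repo's metadata."""
    flags = []
    if repo.get("stargazers_count", 0) < 10:
        flags.append("almost no stars (Claude-Builders had 1)")
    if repo.get("open_bounty_prs", 0) > 20 and repo.get("merged_prs", 0) == 0:
        flags.append("many bounty PRs submitted, zero ever merged")
    if not repo.get("verified_payouts", False):
        flags.append("no verifiable payout history")
    return flags

# A Claude-Builders-shaped ghost farm trips every check:
ghost_farm = {"stargazers_count": 1, "open_bounty_prs": 30,
              "merged_prs": 0, "verified_payouts": False}
print(len(bounty_red_flags(ghost_farm)))  # 3
```

Even a crude filter like this, run before the agent forks anything, would have spared me 30 wasted PRs.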

2. Merged ≠ Paid (The Biggest Misconception)

I cannot stress this enough. In the bounty hunting world, getting your PR merged feels like winning. It's not. It's step one of a two-step process, and step two — the actual payment — is where most of the "scam" dynamics hide.

RustChain merged my PR. The code is in their repo right now. My wallet still shows 0.0 RTC. There was no error message, no "payment pending" status, no notification. Just... nothing. If I hadn't manually checked, I might have assumed I'd been paid.

Any AI bounty-hunting system that treats "merged" as "success" is fundamentally broken. The metric that matters is dollars in your account, not green checkmarks on GitHub.
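The fix is to encode that distinction directly in the agent's success criterion. A minimal sketch, assuming you can read your own wallet delta:

```python
from enum import Enum, auto

class BountyState(Enum):
    SUBMITTED = auto()
    MERGED = auto()
    PAID = auto()

def is_success(state: BountyState, wallet_delta: float) -> bool:
    """Green checkmarks don't count; only money received does."""
    return state is BountyState.PAID and wallet_delta > 0

# The RustChain case: PR merged, wallet still at 0.0 RTC.
print(is_success(BountyState.MERGED, 0.0))  # False -- merged is not paid
```

Until something moves the state to `PAID` *and* the balance actually changes, the bounty stays in the pipeline as unfinished work, not revenue.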

3. AI Can Execute, But It Can't Build Relationships

Here's something nobody talks about in the "AI agents will replace freelancers" discourse: a huge part of getting paid in open source isn't code quality. It's relationships. It's knowing which maintainers are active, which projects actually have budgets, and which bounty programs have a track record of paying out.

My agents had none of that context. They treated every repo with a "bounty" label as equally legitimate. They couldn't read the social dynamics of a project — whether it was actively maintained, whether other contributors had been paid, whether the whole thing was a one-person operation that would ghost you after merging your code.

This is a real limitation, not a temporary one. Relationship intelligence requires longitudinal observation of human behavior. Current AI agents are stateless task-executors. They're excellent at the "do the work" part and terrible at the "should I do this work?" part.

4. "Passive Income" Is a Marketing Term

Every AI side hustle guide I've read uses the phrase "passive income." Let me tell you what's passive about this experiment: nothing.

I spent time configuring agents, debugging API integrations, reviewing outputs, checking wallet balances, investigating scam platforms, and writing monitoring scripts. The agents were automated. The income generation was not. There is no version of "AI makes money while you sleep" that doesn't involve significant upfront setup, ongoing monitoring, and occasional firefighting.

The passive income framing isn't just inaccurate — it's actively harmful. It sets expectations that lead to disappointment and bad decisions. If someone tells you their system is "passive," ask them how many hours they spent building it.

5. AI Content > AI Code Bounties (At Least For Now)

The asymmetry was stark. My content agent produced publishable, valuable articles in minutes. My code agent produced PRs that mostly got ignored or rejected.

Why? Because content is judged on output quality. Code bounties are judged on output quality + trust + relationships + platform dynamics + maintainer availability. Content platforms are designed to be low-friction: you write, you publish, people read. Bounty platforms are designed to be high-friction: you fork, you code, you submit, you wait, you negotiate, you maybe get paid.

AI is good at low-friction tasks. It's bad at high-friction, relationship-dependent tasks. This isn't a bug — it's a fundamental architectural mismatch between how current AI agents work and how bounty ecosystems operate.

What I'd Do Differently

If I were starting this experiment again tomorrow — and honestly, I might — here's what I'd change:

Pick One Lane. Just One.

Running three strategies simultaneously was a mistake. The bounty hunter needed debugging. The content agent needed topic guidance. The airdrop scout needed manual wallet interactions. I was spread thin, and none of the three got the attention they deserved.

If I had to choose one, it would be content creation. The ROI potential is the highest, the failure modes are the least expensive, and the output (articles) has value even if it doesn't immediately monetize.

Verify Payment History Before Writing a Single Line of Code

Before my agent submits PR #1 to any bounty platform, I want to see proof that other contributors have been paid. Not testimonials on the platform's website — actual on-chain transactions or payment screenshots from real people. If a platform can't produce this, it's not a platform. It's a content farm.

Start Content on Day One

I launched all three agents simultaneously, but content should have been the first priority. Articles compound. Code PRs don't. A blog post published today can drive traffic for years. A PR merged today is done — and if it doesn't pay, it's worthless.

Use "Anti-Guru" Content as the Real Product

Here's a meta-lesson: the most valuable thing I produced in this experiment isn't any single article or PR. It's the data. Real numbers. Real failures. Real scam reports. The internet is drowning in "how I made $X with AI" content written by people who made $0. Content based on actual experiments — even failed ones — has a massive differentiation advantage.

The irony isn't lost on me: the best "AI side hustle" content strategy might be to document your AI side hustle failing.

The Bottom Line

This isn't an "AI赚钱指南" — sorry, force of habit — this isn't an AI money-making guide. It's an experiment report. The kind I wish existed before I started, written by someone who actually ran the experiment instead of theorizing about it.

$47 spent. $0 earned. 40+ PRs submitted. 1 merged. 0 paid. Three scam platforms identified. Two articles written. One genuine surprise (content creation speed).

Is $0 the end of the story? No. The articles haven't been published yet — they're sitting in drafts, ready to go live on Dev.to and Hashnode. The Pharos airdrop is still in its testnet phase with real potential. And I now have a blacklist of bounty platforms that I can share to save other people time and money.

If you're thinking about replicating this experiment, here's my honest advice:

  1. Start with content. It's where AI provides the most immediate, least risky value.
  2. Don't start with bounty hunting. The scam rate is too high, the relationship requirements are too complex, and the ROI is negative for automated agents.
  3. Track your real costs. API credits, hosting, coffee — all of it. You can't calculate ROI if you don't know the I.
  4. Treat "merged" as the beginning, not the end. Always verify payment before counting anything as revenue.
  5. Write about what happens. Win or lose, the documentation is valuable.

I'll be publishing follow-up articles as the content goes live and the airdrop plays out. If the Pharos airdrop hits — and historical precedent suggests it could be $500 to $5,000+ — I'll update this series with the actual numbers.

For now, the honest truth is: AI agents are incredible tools for doing work. They're not yet reliable tools for making money. The gap between those two things is where all the interesting lessons live.


If this was useful, follow along for updates. I'll be posting Part 2 once the content is live and the airdrop data is in. And if you've run a similar experiment — especially if your results were different — I'd genuinely love to hear about it.

#AI #SideHustle #MakeMoneyOnline #Experiment #Automation


💡 Further Reading: I experiment with self-hosting, privacy stacks, and open-source alternatives. Find more guides at Pi Stack.
