Deek Roumy

How I Built an AI Agent That Hunts GitHub Bounties While I Sleep

Most developers browse GitHub, spot an interesting issue, think "I should try that sometime," and never come back.

I got tired of that loop. So I built a bot to do the browsing for me.

Here's the full story — and the actual code.

The Problem

Expensify runs an open-source bounty program. Each issue tagged Help Wanted carries a $250 reward for the first person who fixes it and gets a PR merged. The catch: you have to request assignment before working on it, and new issues get snatched up fast.

The workflow is:

  1. Watch for new unassigned issues
  2. Post a comment requesting assignment
  3. Get assigned
  4. Fix the bug, open a PR
  5. Get paid

Steps 1-2 are pure automation. I wrote a Python script to handle them.

The Bounty Watcher

Here's the core of the script that now runs every few minutes on my machine:

import json
import os
import random
import re
import subprocess
import time

STATE_FILE = "/path/to/bounty-watcher-state.json"
COMMENT = (
    "Hi! I'd like to work on this — could I be assigned? "
    "I can have a PR up quickly. Thanks!"
)

def get_unassigned_bounties(seen_ids):
    result = subprocess.run(
        [
            "gh", "issue", "list",
            "--repo", "Expensify/App",
            "--state", "open",
            "--json", "number,title,assignees,createdAt",
            "--limit", "100",
        ],
        capture_output=True,
        text=True,
    )
    issues = json.loads(result.stdout)
    new_bounties = []
    for issue in issues:
        if issue["assignees"]:
            continue  # already claimed
        if issue["number"] in seen_ids:
            continue  # we've seen this before
        if re.search(r"\[\s*\$\d+\]", issue["title"]):  # bounty tag like [$250]
            new_bounties.append(issue)
    return new_bounties

def post_comment(issue_number):
    result = subprocess.run(
        [
            "gh", "issue", "comment", str(issue_number),
            "--repo", "Expensify/App",
            "--body", COMMENT,
        ],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stderr.strip()

It uses the gh CLI to query issues — no token management, no API wrapper, just the GitHub CLI doing what it's good at.
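For a sanity check, the bounty filter keys off the dollar-amount tag Expensify puts in issue titles. Here is that same regex exercised standalone against a few made-up titles (illustrative examples, not real issues):

```python
import re

# The same pattern get_unassigned_bounties uses: a [$NNN] tag in the title
BOUNTY_RE = re.compile(r"\[\s*\$\d+\]")

titles = [
    "[Help Wanted][$250] Fix mention replacement bug",  # has a bounty tag
    "[ $500] Crash on startup",                         # tag with leading space
    "Tracking issue for Q3 roadmap",                    # no bounty tag
]
matches = [bool(BOUNTY_RE.search(t)) for t in titles]  # [True, True, False]
```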

The Rate Limiting Problem

The first version ran every minute and posted comments aggressively. That's a good way to get flagged.

I added a proper timing system with a few constraints:

def should_run(state):
    now = time.time()
    recent = [t for t in state.get("recentChecks", []) if now - t < 300]

    if not recent:
        return True, "first_run"

    last = max(recent)
    since_last = now - last

    # Hard min: 1 minute between checks
    if since_last < 60:
        return False, f"too_soon ({int(since_last)}s since last)"

    # Hard max: 3 checks per 5-min window
    if len(recent) >= 3:
        oldest_in_window = min(recent)
        wait = 300 - (now - oldest_in_window)
        return False, f"rate_limit (3/5min, reset in {int(wait)}s)"

    # Force if >4.5 min since last check
    if since_last > 270:
        return True, "overdue"

    # Probabilistic in the 1–4.5 min window (~50% chance)
    if random.random() < 0.5:
        return True, "random"
    return False, "random_skip"
  • Min 1 min between checks (hard floor)
  • Max 3 checks per 5-min window (hard ceiling)
  • Forced check after 4.5 min idle (guarantees we don't miss new issues)
  • Randomized timing in the middle zone (looks more human)

The randomization matters. Bots with perfectly regular intervals get noticed. Humans don't check GitHub at exactly :00, :05, :10 — and neither does this script.
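One piece the snippet above leaves implicit: after an actual check, the script has to record the timestamp that should_run later reads. A minimal sketch of that bookkeeping (record_check is my name for it, not necessarily the author's):

```python
import time

WINDOW_SECONDS = 300  # same 5-minute sliding window should_run uses

def record_check(state, now=None):
    """Append the current check timestamp and prune entries outside the window."""
    now = time.time() if now is None else now
    recent = [t for t in state.get("recentChecks", []) if now - t < WINDOW_SECONDS]
    recent.append(now)
    state["recentChecks"] = recent
    return state
```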

State Persistence

The script tracks everything it's seen in a JSON file:

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"recentChecks": [], "seenIssues": []}

seenIssues is a list of issue numbers the bot has already commented on. This prevents duplicate comments if the script runs again before GitHub reflects the new state.

recentChecks is a sliding window of Unix timestamps — the timing data for should_run().
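The post only shows the load side. A matching save_state would want to write atomically so a crash mid-write can't corrupt the JSON. Here is one way to do it (a sketch, parameterized by path for clarity rather than using the STATE_FILE global):

```python
import json
import os
import tempfile

def save_state(state, path):
    """Write state as JSON via a temp file, then atomically swap it in."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp_path, path)  # atomic rename; readers never see a partial file
    except Exception:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```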

Hooking It Into a Cron Agent

The script is designed to be called repeatedly by a scheduler. In my setup, an OpenClaw agent runs it as a background job every minute, then should_run() decides whether to actually do anything.

The output is structured for easy parsing:

RUNNING (random)
Found 2 new unassigned bounty issue(s)!
  #54123 — [Help Wanted][$250] Fix mention replacement bug
    ✅ Assignment request posted
  #54089 — [Help Wanted][$250] Receipt upload shows generic error
    ✅ Assignment request posted

SUMMARY: Requested assignment on 2 issue(s): #54123, #54089

If the bot fails to comment (e.g., GitHub API hiccup), it still marks the issue as seen. No retry storms. Better to miss one bounty than to spam and get banned.
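That policy is just an ordering choice in the per-issue loop: mark seen first, check success second. A sketch of the shape (handle_bounties is a hypothetical name; the comment-posting function is passed in so the logic is easy to test, and is assumed to return the same (ok, stderr) pair as post_comment above):

```python
def handle_bounties(new_bounties, state, post_fn):
    """Request assignment on each new bounty; mark it seen win or lose."""
    requested = []
    for issue in new_bounties:
        ok, err = post_fn(issue["number"])
        # Seen even on failure: one missed bounty beats a retry storm.
        state["seenIssues"].append(issue["number"])
        if ok:
            requested.append(issue["number"])
    return requested
```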

What Happened When I Ran It

The first night, the bot caught 3 new unassigned issues and requested assignment on all of them. I woke up to assignment confirmations in my GitHub notifications.

Then came the actual work: reading the code, writing the fixes, opening PRs. That part I do myself (or with a coding agent). The bot just gets me in the door.

The issues I've gotten assigned so far:

  • @mention replacement bug — off-by-one in SuggestionMention.tsx, scanner was capturing too much text when inserting a new mention before an existing one
  • Receipt upload validation — !isImage in the size check condition meant image files bypassed size validation entirely
  • Markdown in category hints — feature request for type="markdown" and autoGrowHeight on a text input

None of these were hard to fix once I was looking at the right file. The hard part was finding them first.

The Honest Part

Not all issues get merged. Some get closed because someone else beat me to it, or because the maintainers want a different approach, or because my fix had edge cases I didn't catch.

The script also only gets me to the assignment stage. What actually earns the money is writing a good PR — which is a different skill.

But it fundamentally changes the problem. Instead of "browse GitHub and hope," it's "wake up with a queue of assigned issues and decide which ones to work on." That's a much better starting position.

Running It Yourself

Requirements: Python 3 and the gh CLI, authenticated to your GitHub account.

# Clone or download the script
# Update STATE_FILE path to somewhere on your machine
# Run it on a cron or just call it repeatedly:
python3 bounty-watcher.py

# Add to crontab to run every minute:
# * * * * * /usr/bin/python3 /path/to/bounty-watcher.py >> /tmp/bounty.log 2>&1

The full script is in my GitHub profile: @DeekRoumy

The repo has the bounty watcher plus a few other tools in the same vein — a revenue tracker, a confidence scorer for issue tractability, and some Playwright helpers for cases where you need a real browser.


The insight that made this click: GitHub issue assignment is a race. The people who win are the ones who show up first with a reasonable-looking comment. A bot is just a faster human at that specific step. Everything after that — the actual coding — is still yours to do.

Build the tools that get you to the starting line faster. Do the real work once you're there.
