Building an Automated GitHub Bounty Hunter with AI Agents
Open-source bounty platforms like Algora, Gitcoin, and IssueHunt have created a market where developers earn real money by fixing bugs and building features for open-source projects. But finding the right bounties, assessing feasibility, and crafting winning proposals is time-consuming manual work. What if an AI agent could do the prospecting for you?
In this article, you will build an automated bounty hunting system that uses AI agents to discover GitHub bounties, analyze their feasibility, draft proposals, and manage submissions — turning what used to be hours of manual browsing into a continuous, automated pipeline.
This is not theoretical. The system described here is in production, and the architectural patterns apply to any scenario where you need AI agents to interact with external platforms, make decisions, and take actions on your behalf.
The Economics of Open-Source Bounties
Before writing code, it helps to understand the market. Open-source bounties typically range from $50 for documentation fixes to $2,000+ for complex feature implementations. The sweet spot for an automated system is bounties in the $150-$500 range: substantial enough to be worth pursuing, but scoped enough that a competent developer can complete them in a few hours.
The bottleneck is not the coding — it is the discovery and qualification. On any given day, hundreds of new bounties appear across platforms. Most are poorly specified, already claimed, or require deep domain knowledge in a niche framework. A human developer might spend 30 minutes reading through issues before finding one worth pursuing. An AI agent can evaluate 50 bounties in the same time.
The math works out: if your system surfaces 3-5 high-quality bounties per day and you successfully complete even one per week at an average of $300, that is $1,200/month in supplemental income — from work that scales independently of your time.
Architecture Overview
The system has four components:
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Bounty Scanner │────►│   Feasibility    │────►│    Proposal     │
│   (Discovery)   │     │  Analyzer (AI)   │     │    Generator    │
└─────────────────┘     └──────────────────┘     └────────┬────────┘
                                                          │
                                                 ┌────────▼────────┐
                                                 │   Submission    │
                                                 │     Manager     │
                                                 └─────────────────┘
```
**Bounty Scanner** crawls GitHub issues with bounty labels, queries the Algora API, and aggregates bounties from multiple sources into a normalized format.
**Feasibility Analyzer** uses an LLM to assess each bounty: Is the issue well-defined? What skills does it require? How many hours would it take? Is someone already working on it?
**Proposal Generator** drafts a tailored proposal for each qualified bounty, incorporating context from the repository's codebase, contribution guidelines, and the specific issue requirements.
**Submission Manager** handles the actual posting of proposals as GitHub issue comments, tracks statuses, and manages follow-ups.
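Before diving into each component, the whole flow can be sketched as a typed pipeline. The interface below is illustrative, not the exact code from the later sections (those use a flatter function-per-phase style), and the placeholder types are stand-ins for the real ones defined in Steps 1-3:

```typescript
// Placeholder shapes; the real Bounty, FeasibilityReport, and Proposal
// interfaces are defined in the sections that follow.
type Bounty = { id: string };
type FeasibilityReport = { bountyId: string; score: number };
type Proposal = { bountyId: string; issueComment: string };

// The four components as a typed pipeline contract.
interface BountyPipeline {
  scan(): Promise<Bounty[]>;                                   // Discovery
  analyze(b: Bounty): Promise<FeasibilityReport>;              // Qualification
  propose(b: Bounty, r: FeasibilityReport): Promise<Proposal>; // Drafting
  submit(p: Proposal, dryRun: boolean): Promise<void>;         // Posting
}

// The runner threads each bounty through the stages in order,
// dropping anything that fails qualification.
async function run(p: BountyPipeline, minScore: number, dryRun = true) {
  for (const bounty of await p.scan()) {
    const report = await p.analyze(bounty);
    if (report.score < minScore) continue;
    await p.submit(await p.propose(bounty, report), dryRun);
  }
}
```

The value of stating the contract up front is that each stage can be developed and tested in isolation, then swapped (a different LLM, a different bounty source) without touching the runner.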
Step 1: Building the Bounty Scanner
The scanner needs to find bounties across multiple sources. GitHub's issue search API is the primary discovery mechanism, since most bounty platforms use GitHub labels to tag funded issues.
```typescript
import { Octokit } from "@octokit/rest";

interface Bounty {
  id: string;
  repo: string;
  issueNumber: number;
  title: string;
  body: string;
  labels: string[];
  bountyAmount: number | null;
  url: string;
  createdAt: string;
  commentCount: number;
}

// Labels commonly used to mark funded issues. The last two cast a
// wider net and often carry no payout, so expect more false positives.
const BOUNTY_LABELS = [
  "bounty",
  "💎 Bounty",
  "reward",
  "paid",
  "💰",
  "help wanted",
  "good first issue",
];

export async function scanGitHubBounties(
  octokit: Octokit,
  options: { maxResults?: number } = {}
): Promise<Bounty[]> {
  const { maxResults = 50 } = options;
  const bounties: Bounty[] = [];

  for (const label of BOUNTY_LABELS) {
    const query = `label:"${label}" is:issue is:open sort:created-desc`;
    const { data } = await octokit.rest.search.issuesAndPullRequests({
      q: query,
      per_page: Math.min(maxResults, 30),
      sort: "created",
      order: "desc",
    });

    for (const issue of data.items) {
      const repoFullName = issue.repository_url.replace(
        "https://api.github.com/repos/",
        ""
      );
      const amount = extractBountyAmount(issue.body ?? "", issue.labels);
      bounties.push({
        id: `${repoFullName}#${issue.number}`,
        repo: repoFullName,
        issueNumber: issue.number,
        title: issue.title,
        body: issue.body ?? "",
        labels: issue.labels.map((l) =>
          typeof l === "string" ? l : l.name ?? ""
        ),
        bountyAmount: amount,
        url: issue.html_url,
        createdAt: issue.created_at,
        commentCount: issue.comments,
      });
    }
  }

  // Deduplicate by issue URL: the same issue can match several labels
  const seen = new Set<string>();
  return bounties.filter((b) => {
    if (seen.has(b.url)) return false;
    seen.add(b.url);
    return true;
  });
}

function extractBountyAmount(
  body: string,
  labels: Array<string | { name?: string }>
): number | null {
  // Check for Algora-style bounty amounts in labels (e.g. "💎 $500")
  for (const label of labels) {
    const name = typeof label === "string" ? label : label.name ?? "";
    const match = name.match(/\$(\d+)/);
    if (match) return parseInt(match[1], 10);
  }

  // Check body for dollar amounts near bounty keywords
  const patterns = [
    /bounty[:\s]*\$(\d+)/i,
    /reward[:\s]*\$(\d+)/i,
    /\$(\d+)\s*bounty/i,
    /💎\s*\$(\d+)/,
  ];
  for (const pattern of patterns) {
    const match = body.match(pattern);
    if (match) return parseInt(match[1], 10);
  }
  return null;
}
```
The scanner queries GitHub's search API for issues with bounty-related labels, then normalizes the results. The extractBountyAmount function parses dollar amounts from both labels (common on Algora) and issue bodies.
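If you want to experiment with the extraction heuristics in isolation, here is the body-parsing half as a self-contained function (the label-parsing half applies the same idea to label names):

```typescript
// Trimmed, standalone restatement of the body-parsing heuristics
// from extractBountyAmount, for testing against sample issue text.
function extractAmountFromBody(body: string): number | null {
  const patterns = [
    /bounty[:\s]*\$(\d+)/i, // "Bounty: $250"
    /reward[:\s]*\$(\d+)/i, // "Reward $75"
    /\$(\d+)\s*bounty/i,    // "$100 bounty"
    /💎\s*\$(\d+)/,         // Algora-style "💎 $500"
  ];
  for (const pattern of patterns) {
    const match = body.match(pattern);
    if (match) return parseInt(match[1], 10);
  }
  return null; // no recognizable amount
}
```

Note the deliberate conservatism: an issue that merely mentions a dollar figure somewhere ("this bug cost us $5000") will not match unless the amount sits next to a bounty keyword.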
A production system would also integrate directly with the Algora API:
```typescript
export async function scanAlgoraBounties(): Promise<Bounty[]> {
  const response = await fetch("https://console.algora.io/api/bounties", {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Algora API returned ${response.status}`);
  }
  const data = await response.json();
  return data.bounties.map((b: any) => ({
    id: b.id,
    repo: b.repo_full_name,
    issueNumber: b.issue_number,
    title: b.title,
    body: b.description,
    labels: b.labels ?? [],
    bountyAmount: b.reward_amount,
    url: b.issue_url,
    createdAt: b.created_at,
    commentCount: b.comment_count,
  }));
}
```
Step 2: AI-Powered Feasibility Analysis
Raw bounty lists are useless without qualification. The feasibility analyzer uses an LLM to evaluate each bounty against multiple criteria, producing a structured assessment.
```typescript
import Anthropic from "@anthropic-ai/sdk";

interface FeasibilityReport {
  bountyId: string;
  score: number; // 0-100
  estimatedHours: number;
  requiredSkills: string[];
  complexity: "low" | "medium" | "high";
  isWellDefined: boolean;
  hasExistingClaimant: boolean;
  reasoning: string;
  recommendation: "pursue" | "skip" | "monitor";
}

export async function analyzeFeasibility(
  client: Anthropic,
  bounty: Bounty,
  repoContext: string
): Promise<FeasibilityReport> {
  const prompt = `Analyze this GitHub bounty for feasibility. Return a JSON object.

## Bounty Details
- **Repository:** ${bounty.repo}
- **Title:** ${bounty.title}
- **Amount:** ${bounty.bountyAmount ? `$${bounty.bountyAmount}` : "Unknown"}
- **Comments:** ${bounty.commentCount}
- **Issue Body:**
${bounty.body.slice(0, 3000)}

## Repository Context
${repoContext.slice(0, 2000)}

## Evaluation Criteria
1. Is the issue well-defined with clear acceptance criteria?
2. What skills and technologies are required?
3. How many hours would a competent developer need?
4. Is someone already working on this (check for "I'll take this" comments)?
5. Is the bounty amount reasonable for the work involved?
6. What is the risk of scope creep?

Return ONLY a JSON object with these fields:
- score (0-100, higher = more worth pursuing)
- estimatedHours (number)
- requiredSkills (string array)
- complexity ("low" | "medium" | "high")
- isWellDefined (boolean)
- hasExistingClaimant (boolean)
- reasoning (1-2 sentences explaining your assessment)
- recommendation ("pursue" | "skip" | "monitor")`;

  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });

  const text =
    response.content[0].type === "text" ? response.content[0].text : "";
  // Tolerate prose around the JSON: grab the outermost brace pair
  const jsonMatch = text.match(/\{[\s\S]*\}/);
  if (!jsonMatch) {
    throw new Error("Failed to parse feasibility report from LLM response");
  }
  const report = JSON.parse(jsonMatch[0]) as Omit<FeasibilityReport, "bountyId">;
  return { ...report, bountyId: bounty.id };
}
```
The key design decision here is using structured JSON output rather than free-form text. This makes downstream processing deterministic — your pipeline can filter on score > 70 and recommendation === "pursue" without any additional parsing.
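As a concrete sketch of that downstream filtering, assuming the FeasibilityReport shape above, qualification reduces to a plain filter-and-sort over typed fields with no further text parsing:

```typescript
// Minimal report shape for this sketch; the full interface has more fields.
interface FeasibilityReport {
  bountyId: string;
  score: number;
  recommendation: "pursue" | "skip" | "monitor";
}

// Pick the best candidates: threshold, recommendation, then rank by score.
function selectQualified(
  reports: FeasibilityReport[],
  minScore = 70,
  limit = 3
): FeasibilityReport[] {
  return reports
    .filter((r) => r.score >= minScore && r.recommendation === "pursue")
    .sort((a, b) => b.score - a.score) // highest-scoring first
    .slice(0, limit);                  // cap per-run submissions
}
```

This is exactly the shape of the Phase 3 logic in the pipeline of Step 4; had the analyzer returned free-form prose instead, every one of these predicates would need its own fragile string matching.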
For the repository context, fetch the README and relevant source files:
```typescript
async function getRepoContext(
  octokit: Octokit,
  repo: string
): Promise<string> {
  const [owner, name] = repo.split("/");
  try {
    const { data: readme } = await octokit.rest.repos.getContent({
      owner,
      repo: name,
      path: "README.md",
    });
    if ("content" in readme) {
      return Buffer.from(readme.content, "base64").toString("utf-8");
    }
  } catch {
    // README not available
  }

  // Fallback: repo description and language stats
  const { data: repoData } = await octokit.rest.repos.get({
    owner,
    repo: name,
  });
  return `${repoData.description ?? ""}\nPrimary language: ${repoData.language}\nStars: ${repoData.stargazers_count}`;
}
```
Step 3: Generating Winning Proposals
A good bounty proposal does three things: demonstrates understanding of the problem, outlines a concrete approach, and establishes credibility. The proposal generator creates tailored responses for each bounty.
```typescript
interface Proposal {
  bountyId: string;
  issueComment: string;
  estimatedDelivery: string;
}

export async function generateProposal(
  client: Anthropic,
  bounty: Bounty,
  feasibility: FeasibilityReport,
  authorProfile: { github: string; relevantExperience: string[] }
): Promise<Proposal> {
  const prompt = `Write a GitHub issue comment proposing to work on this bounty.
The comment should be professional, concise, and demonstrate understanding.

## Bounty
- **Title:** ${bounty.title}
- **Repository:** ${bounty.repo}
- **Description:** ${bounty.body.slice(0, 2000)}

## My Assessment
- **Complexity:** ${feasibility.complexity}
- **Estimated Hours:** ${feasibility.estimatedHours}
- **Required Skills:** ${feasibility.requiredSkills.join(", ")}

## Author Profile
- **GitHub:** ${authorProfile.github}
- **Relevant Experience:** ${authorProfile.relevantExperience.join("; ")}

## Requirements for the comment
1. Start with a brief statement of interest (1 sentence)
2. Show you understand the problem by restating it in your own words (2-3 sentences)
3. Outline your proposed approach with specific technical steps (3-5 bullet points)
4. Mention relevant experience briefly (1 sentence)
5. Give an estimated delivery timeline
6. Keep the total length under 200 words — maintainers skim, they don't read essays

Do NOT use phrases like "I'd love to" or "I'm excited to" — be direct and professional.
Return ONLY the comment text, no wrapping or explanation.`;

  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });

  const comment =
    response.content[0].type === "text" ? response.content[0].text : "";
  const deliveryDays = Math.ceil(feasibility.estimatedHours / 4); // ~4 productive hours/day

  return {
    bountyId: bounty.id,
    issueComment: comment.trim(),
    estimatedDelivery: `${deliveryDays} days`,
  };
}
```
The prompt engineering here is deliberate. Many bounty proposals fail because they are too long, too generic, or too enthusiastic. The constraints in the prompt — under 200 words, no filler phrases, specific technical steps — produce proposals that stand out by being substantive and respectful of the maintainer's time.
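Because LLMs do not always honor such constraints, it is worth enforcing them mechanically before anything is posted. The following is a hypothetical lint pass, not part of the generator above; the word limit and banned phrases mirror the prompt:

```typescript
// Hypothetical post-generation check: verify the constraints the
// prompt asked for, rather than trusting the model to follow them.
function lintProposal(comment: string): string[] {
  const problems: string[] = [];

  // Mirror of the "under 200 words" instruction
  const words = comment.trim().split(/\s+/).length;
  if (words > 200) problems.push(`too long: ${words} words (limit 200)`);

  // Mirror of the banned filler phrases
  for (const phrase of ["i'd love to", "i'm excited to"]) {
    if (comment.toLowerCase().includes(phrase)) {
      problems.push(`filler phrase: "${phrase}"`);
    }
  }
  return problems; // empty array means the proposal passes
}
```

A failing proposal can simply be regenerated; retrying with a deterministic check in the loop is cheaper than posting something that embarrasses you on a public issue.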
Step 4: The Submission Pipeline
The submission manager ties everything together into an automated pipeline with safety checks.
```typescript
import { writeFileSync, readFileSync, existsSync } from "fs";

interface PipelineState {
  scannedAt: string;
  bounties: Bounty[];
  reports: FeasibilityReport[];
  proposals: Proposal[];
  submitted: Array<{ bountyId: string; submittedAt: string; url: string }>;
}

export async function runPipeline(config: {
  octokit: Octokit;
  anthropic: Anthropic;
  stateFile: string;
  dryRun: boolean;
  maxSubmissionsPerRun: number;
  minScore: number;
}): Promise<PipelineState> {
  const {
    octokit,
    anthropic,
    stateFile,
    dryRun,
    maxSubmissionsPerRun,
    minScore,
  } = config;

  // Load existing state to avoid re-processing
  const state: PipelineState = existsSync(stateFile)
    ? JSON.parse(readFileSync(stateFile, "utf-8"))
    : { scannedAt: "", bounties: [], reports: [], proposals: [], submitted: [] };

  const alreadyProcessed = new Set(state.reports.map((r) => r.bountyId));
  const alreadySubmitted = new Set(state.submitted.map((s) => s.bountyId));

  // Phase 1: Discover
  console.log("Scanning for bounties...");
  const newBounties = await scanGitHubBounties(octokit);
  const unprocessed = newBounties.filter((b) => !alreadyProcessed.has(b.id));
  console.log(
    `Found ${newBounties.length} bounties, ${unprocessed.length} new`
  );

  // Phase 2: Analyze
  console.log("Analyzing feasibility...");
  for (const bounty of unprocessed.slice(0, 20)) {
    try {
      const context = await getRepoContext(octokit, bounty.repo);
      const report = await analyzeFeasibility(anthropic, bounty, context);
      state.reports.push(report);
      state.bounties.push(bounty);
      console.log(
        `  ${bounty.id}: score=${report.score} rec=${report.recommendation}`
      );
    } catch (err) {
      console.error(`  Failed to analyze ${bounty.id}:`, err);
    }
  }

  // Phase 3: Generate proposals for qualified bounties
  const qualified = state.reports
    .filter(
      (r) =>
        r.score >= minScore &&
        r.recommendation === "pursue" &&
        !alreadySubmitted.has(r.bountyId)
    )
    .sort((a, b) => b.score - a.score)
    .slice(0, maxSubmissionsPerRun);

  console.log(`Generating proposals for ${qualified.length} bounties...`);
  for (const report of qualified) {
    const bounty = state.bounties.find((b) => b.id === report.bountyId);
    if (!bounty) continue;

    const proposal = await generateProposal(anthropic, bounty, report, {
      github: "chengyixu",
      relevantExperience: [
        "Published 5 npm packages for CLI tooling",
        "Experience with TypeScript, Node.js, browser automation",
        "Active open-source contributor",
      ],
    });
    state.proposals.push(proposal);

    // Phase 4: Submit (with dry-run safety)
    if (dryRun) {
      console.log(`  [DRY RUN] Would submit to ${bounty.url}`);
      console.log(`  Proposal preview: ${proposal.issueComment.slice(0, 100)}...`);
    } else {
      const [owner, repo] = bounty.repo.split("/");
      await octokit.rest.issues.createComment({
        owner,
        repo,
        issue_number: bounty.issueNumber,
        body: proposal.issueComment,
      });
      state.submitted.push({
        bountyId: bounty.id,
        submittedAt: new Date().toISOString(),
        url: bounty.url,
      });
      console.log(`  Submitted proposal to ${bounty.url}`);
    }
  }

  // Persist state
  state.scannedAt = new Date().toISOString();
  writeFileSync(stateFile, JSON.stringify(state, null, 2));
  return state;
}
```
The dryRun flag is critical. Always run the pipeline in dry-run mode first to review proposals before they are posted. Automated comments on GitHub issues represent you publicly — one bad proposal can damage your reputation in a community.
Step 5: Scheduling and Monitoring
Wrap the pipeline in a CLI entry point and schedule it with cron:
```typescript
import { readFileSync } from "fs";
import { program } from "commander";
import { Octokit } from "@octokit/rest";
import Anthropic from "@anthropic-ai/sdk";
// Adjust this path to wherever runPipeline lives in your project
import { runPipeline } from "./pipeline";

program
  .command("scan")
  .description("Scan for bounties and generate proposals")
  .option("--dry-run", "Preview proposals without submitting (default)", true)
  .option("--no-dry-run", "Actually post proposals to GitHub")
  .option("--min-score <n>", "Minimum feasibility score", "70")
  .option("--max-submissions <n>", "Max proposals per run", "3")
  .action(async (opts) => {
    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
    const anthropic = new Anthropic();
    const state = await runPipeline({
      octokit,
      anthropic,
      stateFile: "./bounty-state.json",
      dryRun: opts.dryRun,
      maxSubmissionsPerRun: parseInt(opts.maxSubmissions, 10),
      minScore: parseInt(opts.minScore, 10),
    });
    console.log("\n--- Summary ---");
    console.log(`Bounties scanned: ${state.bounties.length}`);
    console.log(
      `Qualified: ${state.reports.filter((r) => r.recommendation === "pursue").length}`
    );
    console.log(`Proposals generated: ${state.proposals.length}`);
    console.log(`Submitted: ${state.submitted.length}`);
  });

program
  .command("status")
  .description("Check status of submitted proposals")
  .action(async () => {
    const state = JSON.parse(readFileSync("./bounty-state.json", "utf-8"));
    for (const sub of state.submitted) {
      console.log(`${sub.bountyId}`);
      console.log(`  Submitted: ${sub.submittedAt}`);
      console.log(`  URL: ${sub.url}`);
    }
  });

program.parse();
```

Note the paired `--dry-run`/`--no-dry-run` options: a lone boolean option with a default of `true` could never be switched off, so the negated form is what actually enables live submission.
Schedule with cron to run twice daily:
```
# Run bounty scanner at 9am and 3pm
0 9,15 * * * cd /path/to/bounty-hunter && node dist/cli.js scan --dry-run >> bounty.log 2>&1
```
Start with --dry-run always on. Review the proposals in bounty-state.json, and when you are confident in the quality, submit manually or flip the flag.
Practical Tips from Production Use
**Filter aggressively.** A minimum score of 70 and a "pursue" recommendation will still surface some low-quality bounties. Add your own filters: skip repos with fewer than 100 stars (often abandoned), skip bounties older than 14 days (likely already claimed), and skip any bounty requiring languages you are not proficient in.
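Those heuristics are easy to encode. Here is a sketch that assumes the bounty record has been enriched with star count and primary language, fields not present in the Bounty interface from Step 1; you would populate them from the same repos.get call used for repository context:

```typescript
// Enrichment fields (assumed): stargazers_count and primary language
// fetched per-repo alongside the README context.
interface EnrichedBounty {
  createdAt: string;    // ISO timestamp from the issue
  repoStars: number;    // enrichment: stargazers_count
  repoLanguage: string; // enrichment: primary repo language
}

function passesHardFilters(
  b: EnrichedBounty,
  myLanguages: string[],
  now: Date = new Date()
): boolean {
  const ageDays =
    (now.getTime() - new Date(b.createdAt).getTime()) / 86_400_000;
  if (b.repoStars < 100) return false;         // likely abandoned repo
  if (ageDays > 14) return false;              // likely already claimed
  return myLanguages.includes(b.repoLanguage); // only languages you know
}
```

Running these cheap checks before the LLM analysis also saves tokens: there is no point paying for a feasibility report on a bounty you would never accept.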
**Customize proposals per repository.** The generic proposal generator works, but proposals that reference specific files, functions, or architectural patterns in the repository convert at a much higher rate. Consider adding a step that clones the repo and feeds relevant source files to the LLM.
**Track your conversion rate.** After a month, you will have data on which types of bounties and which proposal styles lead to accepted claims. Feed this back into your scoring model. A simple adjustment — weighting repos where you have prior contributions higher — can double your acceptance rate.
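One way to sketch that feedback loop; the +15 bonus here is an arbitrary starting point, something to tune against your own conversion data rather than a recommended value:

```typescript
// Illustrative scoring adjustment: boost bounties in repos where you
// already have merged contributions. The bonus size is an assumption.
function adjustScore(
  baseScore: number,
  repo: string,
  priorContributionRepos: Set<string>
): number {
  const bonus = priorContributionRepos.has(repo) ? 15 : 0;
  return Math.min(100, baseScore + bonus); // stay within the 0-100 scale
}
```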
**Respect rate limits.** GitHub's search API allows 30 requests per minute for authenticated users. The Anthropic API has its own rate limits. Add delays between requests and implement exponential backoff.
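A generic retry wrapper covers both APIs. The attempt count and base delay below are illustrative defaults, not values taken from either SDK:

```typescript
// Retry an async operation with exponential backoff: wait 1s, 2s, 4s,
// ... between attempts, rethrowing once the retries are exhausted.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of retries
      const delay = baseDelayMs * 2 ** attempt;  // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping each search and analysis call, for example `withBackoff(() => scanGitHubBounties(octokit))`, keeps the retry policy in one place instead of scattering sleep calls through the pipeline.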
**Start with documentation bounties.** They are lower-paying ($50-150) but have dramatically higher acceptance rates. Use them to build a contribution history, then graduate to feature bounties where your profile gives you credibility.
The Ethical Dimension
Automated bounty hunting raises legitimate questions. If an AI generates your proposal, should you disclose that? If your system can submit proposals faster than any human, is that fair?
My position: the AI assists with discovery and drafting, but you do the actual work. The proposal is a commitment to deliver — and you must deliver quality code. Automated prospecting is no different from using job alerts on LinkedIn; it is the execution that matters.
Disclose AI assistance if asked. Never submit AI-generated code without thorough review and testing. And never claim bounties you cannot complete — the reputation damage is not worth the payout.
Conclusion
The system described here turns bounty hunting from a manual, ad-hoc activity into a structured pipeline. The AI handles the parts that do not require human judgment — scanning, initial filtering, proposal drafting — while you focus on the parts that do: evaluating technical feasibility, reviewing proposals before submission, and writing the actual code.
The same architectural pattern — scanner, analyzer, generator, submitter — applies beyond bounties. Freelance job boards, contract RFPs, grant applications: anywhere there is a high-volume stream of opportunities that require qualification and a tailored response, this pipeline architecture works.
Start with the scanner. Get comfortable with the volume of bounties available. Add the AI analysis. Then, when you trust the system's judgment, let it draft proposals for your review. The goal is not full automation — it is leveraging AI to multiply the opportunities you can evaluate and pursue.
Key takeaways:
- Open-source bounties are a viable income stream, but discovery and qualification are the bottleneck.
- GitHub's search API and bounty platform APIs provide the raw data; an LLM provides the judgment.
- Structured JSON output from the LLM makes downstream filtering and pipeline logic deterministic.
- Always use dry-run mode and review proposals before submission — your reputation is the asset.
- Track conversion rates and feed them back into your scoring model to improve over time.
- The scanner-analyzer-generator-submitter pattern generalizes to any opportunity pipeline.