The first time I tried to contribute to a popular open source repo, I spent six hours reading code, two hours setting up the dev environment, and another four hours figuring out the test infrastructure before I could even reproduce the bug I wanted to fix. By the time I shipped the PR, I had burned an entire weekend, and the maintainer still asked for two more rounds of changes. Most aspiring contributors quit at exactly this point. The cost of the first contribution is so high that they never make a second one.
Claude Code changed the math for me. Last month I shipped 14 PRs across 9 different open source projects, none of which I had touched before. Three of them were merged within 48 hours. Two of them shipped in the next release. The total time I spent across all 14 PRs was roughly the same as my first lonely weekend trying to fix one bug. This is the workflow.
The Problem with Open Source Onboarding
Every open source project has the same hidden cost. To contribute, you need to absorb a stack of context that the maintainers built up over years. The architecture, the conventions, the test patterns, the unwritten rules about what gets merged and what gets rejected. Reading a CONTRIBUTING.md file gets you maybe 10% of what you need.
The remaining 90% is in the code itself, and humans read code slowly. A senior engineer can absorb maybe 500 lines an hour with full comprehension. A 50,000-line repo would take 100 hours just to read once. Nobody does this. We pattern-match instead. We find a similar fix in the git history, copy its shape, and hope.
Claude Code reads code at machine speed. When I clone a repo I've never seen, my first move is to point Claude Code at the directory and ask for an architectural summary. Five minutes later I have a mental model that would have taken me a full day to build by reading manually. From there, the rest of the contribution flow is mostly automation.
The barrier to open source contribution was never the code. It was the context. AI flattens the context curve so newcomers can ship from day one.
The Five Stage Workflow
I run every open source contribution through five stages, in order. Each stage has a specific Claude Code skill that handles it.
Stage 1: Repository Reconnaissance
Before I touch any code, I need a map. I run a recon skill that produces a one-page summary of the repository structure, the main abstractions, the testing approach, and the contribution conventions.
The skill reads CONTRIBUTING.md, the README, the top-level directory structure, the package manifest, and a sample of the test files. It outputs a single markdown summary that becomes my reference for the rest of the contribution. The summary takes about two minutes to generate and saves me hours of manual exploration.
The most important thing the recon stage produces is a list of unwritten rules. Conventions that are not in any docs but are obvious from the code patterns. Things like "all error messages start with the module name" or "private helpers use a leading underscore but public functions don't." Following these conventions is what separates PRs that get merged from PRs that get rejected with a polite request to "match the project style."
Stage 2: Issue Triage
Most repos have hundreds of open issues. Picking the right one to work on is its own skill. I run a triage skill that pulls the open issues, scores each one for difficulty and impact, and recommends three to start with.
The scoring takes into account how recent the issue is, how many comments it has, whether a maintainer has labeled it "good first issue", and whether the relevant code area is stable or in active flux. The skill prefers issues that are well-defined, have clear acceptance criteria, and touch code that hasn't changed in the last month. These are the issues most likely to result in a merged PR with minimal back-and-forth.
I never work on issues that are already assigned. I never work on issues that have been open for more than a year without comment. Both signals indicate the issue is harder than it looks or the maintainer has already decided not to fix it.
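The exact prompt lives in the triage skill, but the heuristic is easy to sketch. Here is a rough Python version; every field name, weight, and threshold below is my illustration, not the skill's literal scoring:

```python
# Rough sketch of the triage heuristic. Every field name, weight, and
# threshold here is illustrative, not the skill's literal scoring.
from datetime import datetime, timezone

def score_issue(issue: dict) -> float:
    """Higher score = more likely to end in a merged PR."""
    # `created_at` is assumed to be a timezone-aware datetime.
    age_days = (datetime.now(timezone.utc) - issue["created_at"]).days

    # Hard filters, mirroring the two rules above.
    if issue["assignee"] is not None:
        return 0.0
    if age_days > 365 and issue["comment_count"] == 0:
        return 0.0

    score = 0.0
    if issue["has_good_first_issue_label"]:
        score += 3.0
    if issue["has_acceptance_criteria"]:
        score += 2.0
    # Fresh issues with some maintainer engagement score higher.
    score += max(0.0, 2.0 - age_days / 90)
    score += min(issue["comment_count"], 3) * 0.5
    # Penalize issues touching code that is in active flux.
    if issue["code_area_changed_last_30_days"]:
        score -= 2.0
    return score
```

The hard filters mirror the two rules above: assigned issues and year-old silent issues score zero no matter what else they have going for them.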
Stage 3: Reproduction
Before writing any code, I reproduce the bug. The reproduction skill takes the issue description, the relevant code paths, and the project's test setup, and writes a failing test that demonstrates the bug. The failing test becomes the contract for the fix. If the fix passes the test, the bug is fixed. If the test was wrong, the maintainer will catch it in review and I learn for next time.
Reproducing the bug before fixing it sounds obvious, but most contributors skip it. They read the issue, hack at the code until the symptom goes away, and ship. This is how you end up with PRs that fix the visible symptom but leave the underlying bug intact, or fix one variant of the bug while creating a new one.
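To make the contract concrete, here is the shape of a reproduction test. The project, module, function, and bug below are all invented for illustration; only the pattern matters:

```python
# Hypothetical reproduction test. `myproject.text.slugify` and the
# doubled-hyphen bug are invented; the shape is what the skill produces.
from myproject.text import slugify

def test_slugify_collapses_consecutive_separators():
    # The (invented) issue: "hello  world" currently slugifies to
    # "hello--world" because each space becomes its own hyphen.
    assert slugify("hello  world") == "hello-world"
```

It fails on the current code, points at the issue it reproduces, and asserts the behavior the issue says should happen. That is the whole contract.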
Stage 4: Fix Implementation
With a failing test in hand, the fix becomes a constrained problem. I describe the bug, the failing test, and the relevant code paths to Claude Code, and ask for a fix that makes the test pass without changing any other behavior.
The output is rarely the final fix. It's a starting point that I review, refine, and adjust to match the project's conventions. The recon document from stage one is critical here. I cross-reference every change against the conventions list to make sure the fix looks like the rest of the codebase.
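Continuing the invented slugify example from stage three, the constrained version of the fix is deliberately small: just enough to make the reproduction test pass and nothing else.

```python
# Hypothetical minimal fix for the invented bug above: collapse runs of
# whitespace into one separator instead of replacing each character.
import re

def slugify(text: str) -> str:
    return re.sub(r"\s+", "-", text.strip().lower())
```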
Stage 5: PR Authoring
The final stage is the PR description. A good PR description does three things. It explains what the bug was. It explains why the fix works. It links to the failing test that proves the fix is correct. Maintainers can merge a PR with a good description in under a minute. A PR with a bad description sits in the review queue for weeks.
I run a PR authoring skill that takes the issue, the failing test, the fix diff, and the recon document, and produces a structured PR description that follows the project's conventions. The skill knows that some projects want short descriptions and some want long ones. It knows that some projects use specific commit message formats. It produces output that fits.
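The default shape is simple enough to sketch. Here is a hypothetical rendering of the template, with section names of my own choosing; real projects override all of it:

```python
# Hypothetical PR body template. Section names and order are
# illustrative; the authoring skill adapts them per project.
def render_pr_body(what: str, why: str, test_path: str, issue_number: int) -> str:
    return (
        f"## What\n{what}\n\n"
        f"## Why\n{why}\n\n"
        f"## Test\nThe new test in `{test_path}` failed before this "
        f"change and passes after it.\n\n"
        f"Fixes #{issue_number}\n"
    )
```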
The Recon Skill in Detail
The recon skill is the foundation of the whole workflow. Everything downstream depends on its output. Here's what it actually does.
```markdown
---
name: oss-recon
description: Builds an architectural summary of an unfamiliar open source repo for contribution prep.
---

# OSS Repository Recon

You are doing a one-time architectural recon of an open source repository. The goal is to produce a contribution-ready summary in under five minutes.

## Inputs

- Repository root path
- (Optional) specific subdirectory to focus on

## Output Format

Produce a markdown file at `recon-{repo-name}.md` with these sections:

1. **One-paragraph project description** based on README + package manifest
2. **Architecture summary** with main directories and what lives in each
3. **Core abstractions** - the 3-7 key classes/functions that everything depends on
4. **Testing approach** - test framework, where tests live, naming conventions
5. **Conventions list** - 10-20 patterns observed in the code that aren't documented
6. **Contribution rules** - extracted from CONTRIBUTING.md
7. **Recent activity hotspots** - which areas have changed in the last 30 days
8. **Stable areas** - which areas haven't changed in 90+ days

## Process

1. Read README, CONTRIBUTING.md, and package manifest first
2. Sample 5-10 source files from main directories
3. Sample 3-5 test files
4. Run `git log --since='30 days ago' --name-only` to identify hotspots
5. Synthesize, do not just list

## Anti-patterns

- Do not just dump file lists
- Do not invent conventions you did not observe in 3+ files
- Do not skip the contribution rules section
```
The skill takes about three minutes to run on a medium-sized repo. The output becomes the prompt context for every subsequent skill in the workflow.
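Step four of the process is the only part that leans on tooling rather than reading. A minimal sketch of the hotspot counting, assuming the skill's `git log` invocation with output parsed in Python:

```python
# Minimal sketch of step 4: count how often each file changed in the
# last 30 days. An empty --pretty format leaves only the file names.
import subprocess
from collections import Counter

def change_hotspots(repo_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    out = subprocess.run(
        ["git", "log", "--since=30 days ago", "--name-only", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    changed = [line for line in out.splitlines() if line.strip()]
    return Counter(changed).most_common(top_n)
```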
The Single Most Important Lesson
After 14 PRs, the single most important lesson I learned is this: maintainers do not want you to be impressive. They want you to be predictable.
A predictable PR has a clear scope, follows the project conventions, includes a test, and has a description that explains what and why. A predictable PR can be reviewed in five minutes. An impressive PR rewrites three modules, refactors the test infrastructure, and introduces a new abstraction the maintainer never asked for. An impressive PR sits in the review queue for months and eventually gets closed without merge.
Claude Code is excellent at producing predictable PRs. It naturally follows conventions when you give it the recon document. It writes minimal fixes when you ask for minimal fixes. It produces structured PR descriptions when you give it a template. The combination is exactly what maintainers want.
The fastest way to get a PR merged is to make it boring. AI helps you produce boring PRs at scale.
What I Would Do Differently
If I were starting from scratch today, I would do three things differently.
First, I would build the recon skill before doing anything else. Half my early failures came from missing context that was obvious in retrospect. The recon skill catches 90% of these.
Second, I would track every PR in a simple spreadsheet. Repo, issue, time spent, merge outcome, lessons learned. After 30 PRs you have enough data to identify which types of issues are worth your time and which are traps. A minimal tracking sketch follows the third point below.
Third, I would publish my workflow earlier. Open source maintainers love contributors who explain how they work. A short blog post about your workflow is the single best way to build relationships with maintainers, because it shows you take contribution seriously.
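On the second point: a spreadsheet works, but even a CSV appended from a script does the job. A minimal sketch, with columns of my own choosing:

```python
# Minimal sketch of a PR tracking log. The columns are illustrative.
import csv
from pathlib import Path

def log_pr(repo: str, issue: str, hours: float, outcome: str,
           lesson: str, path: str = "pr-log.csv") -> None:
    is_new = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["repo", "issue", "hours", "outcome", "lesson"])
        writer.writerow([repo, issue, hours, outcome, lesson])
```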
FAQ
How do I avoid submitting AI-generated slop to maintainers?
Read every line of every diff before you submit. Run the tests locally. If you cannot defend a change in plain English, do not submit it. The workflow uses AI to remove busywork, not to replace your judgment.
What if the repo has no good first issues?
Look at recent closed PRs and find issues that look similar. Patterns repeat in every repo. If you see five recent PRs about typo fixes in error messages, there are probably more typos waiting.
Does this work for huge monorepos?
The recon skill scales but takes longer. For a 500K-line monorepo, point the skill at one subdirectory rather than the whole repo. Architectural recon at the package level works well.
How do I handle PR review feedback?
Use the same workflow in reverse. Run a feedback summary skill on the review comments, then ask Claude Code to apply the requested changes while preserving the original fix. Always read the diff before pushing.
What This Workflow Unlocks
Open source contribution used to be a luxury for engineers with weekends to burn. The cost of the first contribution was so high that most people never started. The cost of the second one was almost as high, because every new repo meant another full onboarding cycle.
Claude Code collapses the onboarding cost from days to minutes. I now treat every open source repo as a quick scan, not a long study. If I see an interesting issue in a repo I have never used, I can have a working PR in under two hours. The total time across 14 PRs last month was about 25 hours. That same volume would have taken me 200 hours without this workflow.
The economics have changed. Open source contribution is no longer a luxury. It is a high leverage skill that anyone with Claude Code can practice from day one. Start with the recon skill, build out from there, and ship boring PRs.
If you want the full set of skills I use for this workflow, including the recon skill, the triage skill, and the PR authoring skill, they are all in my Claude Code setup at nextools.hashnode.dev. Read the linked posts. Steal what works. Ship more PRs.