<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prakhar</title>
    <description>The latest articles on DEV Community by Prakhar (@prakhar_577d8cdbd5abd7da3).</description>
    <link>https://dev.to/prakhar_577d8cdbd5abd7da3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3897715%2Fbe2f0fcb-26a1-4b7b-8b05-3d5824d6a5d6.png</url>
      <title>DEV Community: Prakhar</title>
      <link>https://dev.to/prakhar_577d8cdbd5abd7da3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prakhar_577d8cdbd5abd7da3"/>
    <language>en</language>
    <item>
      <title>I built an MCP server that finds you mergeable open source issues in 30 seconds</title>
      <dc:creator>Prakhar</dc:creator>
      <pubDate>Sat, 25 Apr 2026 15:38:03 +0000</pubDate>
      <link>https://dev.to/prakhar_577d8cdbd5abd7da3/i-built-an-mcp-server-that-finds-you-mergeable-open-source-issues-in-30-seconds-3g3o</link>
      <guid>https://dev.to/prakhar_577d8cdbd5abd7da3/i-built-an-mcp-server-that-finds-you-mergeable-open-source-issues-in-30-seconds-3g3o</guid>
      <description>&lt;h1&gt;I built an MCP server that finds you mergeable open source issues in 30 seconds&lt;/h1&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;I'm a CS student at IIT Guwahati. A few months ago I decided I wanted to contribute to open source. The advice was always the same: "look for the &lt;code&gt;good first issue&lt;/code&gt; label."&lt;/p&gt;

&lt;p&gt;So I did. And every single time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The issue was already assigned&lt;/li&gt;
&lt;li&gt;Someone had opened a PR yesterday&lt;/li&gt;
&lt;li&gt;The repo hadn't been touched in 8 months&lt;/li&gt;
&lt;li&gt;It needed a language I knew on paper but not in practice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After three weekends of this loop, I gave up and went back to building side projects.&lt;/p&gt;

&lt;p&gt;Then MCP (Model Context Protocol) launched and I realized: this is exactly the kind of problem an AI agent should solve. Not by &lt;em&gt;generating&lt;/em&gt; anything — just by &lt;em&gt;filtering&lt;/em&gt; GitHub data better than I can scroll through it.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;OpenCollab MCP&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;p&gt;You ask Claude (or Cursor, or any MCP client):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Find me a Python good-first-issue I can finish this weekend. Make sure nobody's working on it."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OpenCollab exposes 22 tools to the AI. The model picks the right ones and chains them together:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;match_me&lt;/code&gt; — reads my GitHub, picks my strongest language&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;find_issues&lt;/code&gt; — searches good-first-issues in that language
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;check_issue_availability&lt;/code&gt; — for each candidate, verifies no one's working on it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;issue_complexity&lt;/code&gt; — rates difficulty 1-10&lt;/li&gt;
&lt;/ol&gt;
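
&lt;p&gt;Conceptually, the chain reads like this (a hypothetical Python sketch with stub data; the real tools run against live GitHub data and have richer signatures):&lt;/p&gt;

```python
# Hypothetical sketch of the chain an MCP client assembles.
# The stubs below only illustrate the data flow between tools.
def match_me(username):
    return "python"  # strongest language inferred from the user's repos

def find_issues(language):
    return [{"repo": "a/b", "number": 1}, {"repo": "c/d", "number": 2}]

def check_issue_availability(issue):
    return issue["number"] != 2  # pretend issue #2 is already claimed

def issue_complexity(issue):
    return 3  # 1-10 difficulty rating

language = match_me("prakhar")
candidates = [i for i in find_issues(language) if check_issue_availability(i)]
ranked = sorted(candidates, key=issue_complexity)
```

The model does this orchestration itself; the server only answers each call.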

&lt;p&gt;I get back 5 actually-mergeable issues in under 30 seconds.&lt;/p&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Plan a PR for issue #456 in owner/repo."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It calls &lt;code&gt;generate_pr_plan&lt;/code&gt; which fetches the issue body, comments, CONTRIBUTING.md, repo structure, and default branch — handing the AI everything needed to draft real code.&lt;/p&gt;

&lt;h2&gt;The 22 tools&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;🔍 Discovery (6):&lt;/strong&gt; find_issues, trending_repos, similar_repos, find_mentor_repos, weekend_issues, match_me&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📊 Evaluation (7):&lt;/strong&gt; repo_health, contribution_readiness, impact_estimator, repo_activity_pulse, compare_repos, repo_languages, dependency_check&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎯 Issue Intel (6):&lt;/strong&gt; check_issue_availability, issue_complexity, stale_issue_finder, label_explorer, recent_prs, generate_pr_plan&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;👤 Profile (3):&lt;/strong&gt; analyze_profile, first_timer_score, contributor_leaderboard&lt;/p&gt;

&lt;h2&gt;Design choices that mattered&lt;/h2&gt;

&lt;h3&gt;It's a data bridge, not an AI&lt;/h3&gt;

&lt;p&gt;Zero AI inference happens on my end. Your client (Claude/Cursor) does all the reasoning. OpenCollab just feeds it clean GitHub data.&lt;/p&gt;

&lt;p&gt;This means: zero cost to me, fully private (runs on your machine), and it inherits whatever model your client uses.&lt;/p&gt;

&lt;h3&gt;Pydantic for every input&lt;/h3&gt;

&lt;p&gt;Every tool's input is a Pydantic model with &lt;code&gt;extra="forbid"&lt;/code&gt; and &lt;code&gt;str_strip_whitespace=True&lt;/code&gt;. LLMs sometimes pass stray fields or whitespace — Pydantic catches it before any logic runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;IssueInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;model_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConfigDict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;str_strip_whitespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;extra&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;forbid&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;min_length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;min_length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;issue_number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;min_length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;issue_number: str&lt;/code&gt; is intentional — clients pass numbers as strings, and a permissive parser handles &lt;code&gt;'#123'&lt;/code&gt;, &lt;code&gt;'  123  '&lt;/code&gt;, and &lt;code&gt;'123'&lt;/code&gt; uniformly.&lt;/p&gt;
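
&lt;p&gt;A permissive parser like that fits in a few lines. This &lt;code&gt;parse_issue_number&lt;/code&gt; helper is illustrative, not the project's actual code:&lt;/p&gt;

```python
def parse_issue_number(raw: str) -> int:
    """Accept '#123', '  123  ', or '123' and return 123."""
    cleaned = raw.strip().lstrip("#").strip()
    if not cleaned.isdigit():
        raise ValueError(f"not an issue number: {raw!r}")
    return int(cleaned)
```

Validation stays in the Pydantic layer; the parser only normalizes the string once it arrives.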

&lt;h3&gt;Async + parallel for the heavy tools&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;match_me&lt;/code&gt; was originally 3 sequential API calls. Now it's &lt;code&gt;asyncio.gather&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;repos_raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;github_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/users/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nf"&gt;github_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/users/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/repos&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{...}),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same trick applies in &lt;code&gt;repo_health&lt;/code&gt; (3 calls), &lt;code&gt;compare_repos&lt;/code&gt; (4 calls per repo), &lt;code&gt;dependency_check&lt;/code&gt; (8 file lookups), and &lt;code&gt;generate_pr_plan&lt;/code&gt; (5 endpoints), giving a 3-5x latency improvement on the heavy paths.&lt;/p&gt;

&lt;h3&gt;In-memory TTL cache for rate limits&lt;/h3&gt;

&lt;p&gt;GitHub's unauthenticated rate limit is brutal. Even authenticated, 5 chained tool calls per question burns through it fast. I added a 5-minute in-memory cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_cache_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;entry&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="n"&gt;expires_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;expires_at&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hand-rolled because &lt;code&gt;functools.lru_cache&lt;/code&gt; doesn't do TTL and I didn't want a &lt;code&gt;cachetools&lt;/code&gt; dependency for 30 lines.&lt;/p&gt;

&lt;h3&gt;MockTransport for fast tests&lt;/h3&gt;

&lt;p&gt;All 45 tests run in 0.12 seconds. No real network. &lt;code&gt;httpx.MockTransport&lt;/code&gt; lets me return arbitrary status codes per path, which mattered for testing the GitHub-202-while-stats-compute case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Not Found (mock)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;spec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;spec&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;else &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Install in 60 seconds&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;opencollab-mcp
&lt;span class="c"&gt;# or&lt;/span&gt;
uvx opencollab-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add to your Claude Desktop config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"opencollab"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"opencollab-mcp"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"GITHUB_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your_token_here"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude. Done.&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;first_pr_generator&lt;/code&gt; — one-shot find + plan + draft my first PR&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;track_my_prs&lt;/code&gt; — dashboard with staleness nudges&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;skill_gap&lt;/code&gt; — compare your skills vs a target repo's stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you've been wanting to contribute to open source but couldn't find the right issue, give it a shot. And if it helps you land a PR — a ⭐ on the repo would genuinely make my week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/prakhar1605/Opencollab-mcp" rel="noopener noreferrer"&gt;https://github.com/prakhar1605/Opencollab-mcp&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;PyPI:&lt;/strong&gt; &lt;a href="https://pypi.org/project/opencollab-mcp/" rel="noopener noreferrer"&gt;https://pypi.org/project/opencollab-mcp/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>mcp</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
