<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anmol Raj Soni</title>
    <description>The latest articles on DEV Community by Anmol Raj Soni (@anmolrajsoni15).</description>
    <link>https://dev.to/anmolrajsoni15</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F944906%2F0656c234-5658-4d35-b167-f22f046b277e.jpeg</url>
      <title>DEV Community: Anmol Raj Soni</title>
      <link>https://dev.to/anmolrajsoni15</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anmolrajsoni15"/>
    <language>en</language>
    <item>
      <title>MCP Solved Tool Access. Tool Selection Is Still Unsolved</title>
      <dc:creator>Anmol Raj Soni</dc:creator>
      <pubDate>Mon, 11 May 2026 05:00:00 +0000</pubDate>
      <link>https://dev.to/anmolrajsoni15/mcp-solved-tool-access-tool-selection-is-still-unsolved-2a4f</link>
      <guid>https://dev.to/anmolrajsoni15/mcp-solved-tool-access-tool-selection-is-still-unsolved-2a4f</guid>
      <description>&lt;p&gt;I have been building agentic workflows for the better part of a year, and the same friction keeps coming back: my agents have &lt;strong&gt;access&lt;/strong&gt; to too many tools and &lt;strong&gt;judgement&lt;/strong&gt; about almost none of them.&lt;/p&gt;

&lt;p&gt;MCP fixed the access problem. It standardised how agents call tools, how clients connect, how servers describe themselves. That is real, and it has unlocked a wave of useful servers — browser automation, file systems, search, databases, you name it.&lt;/p&gt;

&lt;p&gt;But access is not selection. When I ask Claude Code to "build me a real-time chat app with auth," it does not need a list of every MCP server in the world. It needs to know: which framework, which database, which realtime transport, which auth provider, and whether those four picks are version-compatible with each other. That is a different problem, and it is the one I built &lt;strong&gt;ToolCairn&lt;/strong&gt; to solve.&lt;/p&gt;

&lt;h2&gt;The shape of the problem&lt;/h2&gt;

&lt;p&gt;If you have built anything serious on top of MCP, you have probably hit one of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tool overload.&lt;/strong&gt; Your agent has 30+ servers connected. Most are noise for the current task. Surface area becomes a context-window problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrong-package picks.&lt;/strong&gt; The model autocompletes to a popular but wrong library. (&lt;code&gt;requests&lt;/code&gt; vs. &lt;code&gt;httpx&lt;/code&gt;. &lt;code&gt;socket.io&lt;/code&gt; vs. native &lt;code&gt;WebSocket&lt;/code&gt;. &lt;code&gt;next-auth&lt;/code&gt; vs. &lt;code&gt;better-auth&lt;/code&gt;.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version drift.&lt;/strong&gt; The picks individually look fine; together they do not install, or they install and then crash at runtime on a peer-dep mismatch (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No "why".&lt;/strong&gt; A directory listing tells you a tool exists. It does not tell you why an agent should pick it for &lt;em&gt;this specific task&lt;/em&gt;, what it composes well with, or what the trust signals are.&lt;/li&gt;
&lt;/ul&gt;
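
&lt;p&gt;To make the version-drift failure concrete, here is a minimal sketch using the &lt;code&gt;semver&lt;/code&gt; package; the versions and the peer range are invented for illustration, not any real package's metadata:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import semver from "semver";

// A peer-dep check reduces to: does the version the agent picked
// satisfy the range a dependent package declares as a peer dep?
const installedHost = "15.3.0";                 // say the agent picked next@15
const declaredPeerRange = "^13.0.0 || ^14.0.0"; // a plugin that has not caught up

// Each pick looks fine in isolation; together they do not resolve.
console.log(semver.satisfies(installedHost, declaredPeerRange)); // false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;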

&lt;p&gt;These are not exotic problems. They show up in week one of any non-trivial agent build.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;ToolCairn is an MCP server. You install it the same way you install any MCP server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add toolcairn &lt;span class="nt"&gt;--&lt;/span&gt; npx @neurynae/toolcairn-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it is connected, your agent gets a small, focused toolkit (call shapes are sketched right after this list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;classify_prompt&lt;/code&gt; — decide whether a request is a single-tool need, a multi-layer stack build, a comparison, or unrelated.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;search_tools&lt;/code&gt; / &lt;code&gt;search_tools_respond&lt;/code&gt; — find the right tool for one specific need, with a clarification loop when the request is ambiguous.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;refine_requirement&lt;/code&gt; + &lt;code&gt;get_stack&lt;/code&gt; — for "build me a SaaS analytics dashboard"-shaped tasks, decompose into sub-needs and return a coherent stack with cross-tool compatibility.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compare_tools&lt;/code&gt; — head-to-head when the user asks "X vs Y."&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;check_compatibility&lt;/code&gt; — version-aware peer-dep evaluation across picks.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;check_issue&lt;/code&gt; — last-resort known-bug lookup before the agent burns three more retries on a problem.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;report_outcome&lt;/code&gt; — close the loop after the user actually uses the recommendation, so the graph learns.&lt;/li&gt;
&lt;/ul&gt;
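
&lt;p&gt;To pin down the shapes, here is a hedged TypeScript sketch of what &lt;code&gt;classify_prompt&lt;/code&gt; and &lt;code&gt;get_stack&lt;/code&gt; might return. The tool names and the four intent categories come from the list above; every field name is my illustration, not the published schema.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Illustrative result shapes only. Field names are assumptions,
// not ToolCairn's published schema; tool names and intents match
// the toolkit described above.
type Intent = "single_tool" | "stack_building" | "comparison" | "unrelated";

interface ClassifyPromptResult {
  intent: Intent;      // e.g. "stack_building" for a full app request
}

interface StackPick {
  need: string;        // e.g. "realtime-transport"
  tool: string;        // e.g. "socket.io"
  version: string;     // e.g. "^4.8.0"
  rationale: string;   // the "why" a directory listing lacks
}

interface GetStackResult {
  picks: StackPick[];
  compatible: boolean; // cross-tool compatibility verdict
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;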

&lt;p&gt;Underneath, the recommendations are drawn from a graph of tools indexed across &lt;strong&gt;35+ open-source registries&lt;/strong&gt; — npm, PyPI, Cargo, Maven, Go, Composer, RubyGems, NuGet, Homebrew, and more. The current graph carries thousands of tools with usage context, registry metadata, and version data — not a flat directory listing.&lt;/p&gt;
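
&lt;p&gt;As a mental model only (the real schema is not public), a node and an edge in that graph might carry roughly this much; every field below is an assumption for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Hypothetical graph entries, sketched from the description above.
interface ToolNode {
  name: string;              // e.g. "socket.io"
  registry: string;          // which of the 35+ registries it came from
  versions: string[];        // version data behind compatibility checks
  usageContexts: string[];   // task shapes the tool is known to fit
}

interface ComposesWith {
  from: string;              // e.g. "next"
  to: string;                // e.g. "socket.io-client"
  versionConstraint: string; // e.g. "next@15 with socket.io-client@4"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;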

&lt;h2&gt;A concrete example&lt;/h2&gt;

&lt;p&gt;Prompt: &lt;em&gt;"Build me a real-time chat app with auth."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What the agent does, with ToolCairn connected:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;classify_prompt&lt;/code&gt; → returns &lt;code&gt;stack_building&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;refine_requirement&lt;/code&gt; → decomposes into &lt;code&gt;web-framework&lt;/code&gt;, &lt;code&gt;realtime-transport&lt;/code&gt;, &lt;code&gt;auth-provider&lt;/code&gt;, &lt;code&gt;database&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;get_stack&lt;/code&gt; → returns a ranked stack: &lt;strong&gt;Next.js + Socket.IO + NextAuth + PostgreSQL&lt;/strong&gt;, with a cross-tool compatibility matrix.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;check_compatibility&lt;/code&gt; → confirms the picks resolve together: &lt;code&gt;next@15&lt;/code&gt; ✅, &lt;code&gt;socket.io-client@4&lt;/code&gt; ✅, with peer-dep evaluation across all four.&lt;/li&gt;
&lt;li&gt;The agent writes the project. After it ships, &lt;code&gt;report_outcome&lt;/code&gt; fires and the graph learns from the choice.&lt;/li&gt;
&lt;/ol&gt;
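
&lt;p&gt;If you want to poke at that flow outside an agent, the official MCP TypeScript SDK can drive the same tools by hand. A minimal sketch, assuming the stdio transport; the argument names (&lt;code&gt;prompt&lt;/code&gt;, &lt;code&gt;requirement&lt;/code&gt;) are my guesses, not the documented schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server the same way the Claude Code install does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["@neurynae/toolcairn-mcp"],
});

const client = new Client({ name: "toolcairn-demo", version: "0.0.1" });
await client.connect(transport);

// Step 1 of the walkthrough above, driven by hand.
const intent = await client.callTool({
  name: "classify_prompt",
  arguments: { prompt: "Build me a real-time chat app with auth" }, // "prompt" is a guess
});

// Steps 2-3 collapsed: ask for the coherent stack directly.
const stack = await client.callTool({
  name: "get_stack",
  arguments: { requirement: "real-time chat app with auth" }, // "requirement" is a guess
});

console.log(intent, stack);
await client.close();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;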

&lt;p&gt;That is a different shape of response than "here are 12 chat libraries, sorted by GitHub stars."&lt;/p&gt;

&lt;h2&gt;What this is not&lt;/h2&gt;

&lt;p&gt;I want to be very specific about scope, because the closest comparison is "directory" and that is the wrong frame.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It is not a directory.&lt;/strong&gt; Directories are for humans browsing. ToolCairn is for agents requesting context at task time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It is not a replacement for the official MCP Registry.&lt;/strong&gt; The MCP Registry is the canonical index of MCP servers. ToolCairn is one server inside that index, focused on selection, not listing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It is not a ranking algorithm dressed up.&lt;/strong&gt; Ranking matters, but the load-bearing piece is the &lt;em&gt;graph&lt;/em&gt; — how tools relate, what they compose with, what versions work together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It is not finished.&lt;/strong&gt; Trust signals, integration breadth, and the per-task recommendation quality all still have a long runway. That is most of what I want feedback on.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How to try it&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web:&lt;/strong&gt; &lt;a href="https://toolcairn.neurynae.com" rel="noopener noreferrer"&gt;toolcairn.neurynae.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs:&lt;/strong&gt; &lt;a href="https://toolcairn.neurynae.com/docs" rel="noopener noreferrer"&gt;toolcairn.neurynae.com/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture / trust:&lt;/strong&gt; &lt;a href="https://toolcairn.neurynae.com/about" rel="noopener noreferrer"&gt;toolcairn.neurynae.com/about&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/neurynae/toolcairn-mcp" rel="noopener noreferrer"&gt;github.com/neurynae/toolcairn-mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/@neurynae/toolcairn-mcp" rel="noopener noreferrer"&gt;@neurynae/toolcairn-mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install (Claude Code):&lt;/strong&gt; &lt;code&gt;claude mcp add toolcairn -- npx @neurynae/toolcairn-mcp&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What I want feedback on&lt;/h2&gt;

&lt;p&gt;I genuinely want blunt feedback, not validation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Recommendation relevance.&lt;/strong&gt; Are the picks actually the picks you would have made?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing categories.&lt;/strong&gt; Where does ToolCairn return nothing useful? Which ecosystems is the graph too thin in?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust signals.&lt;/strong&gt; What would make a recommendation trustworthy enough that you would let an agent act on it without reviewing every line?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client integrations.&lt;/strong&gt; Claude Code is supported today. Cursor, Codex, Windsurf, VS Code AI — which should come first?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can leave it as a GitHub issue, in a comment under this post, or in my DMs. I read everything.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
