
Prashant Patil


MCP server for C# development with real NuGet reflection

sharp-mcp:

Roslyn-Powered C# Analysis, Real NuGet DLL Reflection, and Safe Live File Editing for Claude, On Your Machine via MCP

If you've ever watched an AI confidently call a NuGet method removed two versions ago — and only found out when your build broke — this is for you.

Or if you've pasted five service classes into a chat, watched the context fill up, and still got a half-baked answer.

Or if you work in finance or healthcare and the phrase "your code is sent to our servers" is a non-starter.

GitHub: https://github.com/patilprashant6792-official/sharp-mcp


The problem nobody talks about

Every AI coding tool in 2025 has the same three silent killers for .NET developers specifically.

Hallucinated APIs. LLMs are frozen at their training cutoff. NuGet ships breaking changes constantly. System.Text.Json changed nullable handling between 6.0 and 8.0. EF Core changed DbContext configuration between 7.0 and 8.0. IgnoreNullValues was deprecated mid-lifecycle. The model doesn't know. It generates code that looks right and doesn't compile.

Context bloat. A 500-line service class costs ~2,000 tokens raw. Load ten files and you've burned your entire context budget before writing a single line. Copilot's #codebase search is widely documented as unreliable — developers end up manually attaching files and hitting the limit anyway.

Your code leaves your machine. Copilot, Cursor, Windsurf — every cloud AI tool sends your source to an external server with every request. For finance, healthcare, or any regulated industry, that's not a theoretical concern. It's a compliance issue.

sharp-mcp fixes all three. It runs entirely on your machine, exposes your codebase as structured MCP tools, and — the part no other tool does — reflects your actual installed NuGet DLLs via MetadataLoadContext.


The novel piece: real NuGet DLL reflection

When you ask sharp-mcp how to use a method from a NuGet package, it doesn't consult training data. Here's what actually happens:

  1. Resolves the exact version pinned in your .csproj via NuGet.Protocol — no guessing
  2. Downloads the .nupkg and picks the right net*/ target framework folder with automatic fallback chain
  3. Downloads transitive dependencies — MetadataLoadContext needs them to resolve cross-assembly types correctly; without this, reflection on generics and inherited types silently fails
  4. Loads the DLL into an isolated MetadataLoadContext — binary inspection only, never executed, zero risk of static constructors or process pollution
  5. Returns valid, copy-paste-ready C# signatures from your exact binary
  6. Disposes the context immediately — no assembly leaks, no AppDomain side effects
  7. Caches the result in Redis for 7 days, keyed on packageId:version:targetFramework — second call is a Redis read, not a download
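Steps 4 and 6 come down to a few lines of MetadataLoadContext. Here is a minimal sketch that inspects a runtime assembly instead of a downloaded package DLL, purely so the snippet is self-contained; in sharp-mcp the path would point at the extracted lib/net*/ folder plus its transitive dependencies:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

// The resolver sees the core runtime assemblies; for a real package you
// would also add the package DLL and its transitive deps (step 3).
string runtimeDir = Path.GetDirectoryName(typeof(object).Assembly.Location)!;
var resolver = new PathAssemblyResolver(Directory.GetFiles(runtimeDir, "*.dll"));

// `using` => the context is disposed when it leaves scope (step 6).
using var mlc = new MetadataLoadContext(resolver);

// Inspection only: no static constructors run, nothing executes (step 4).
Assembly asm = mlc.LoadFromAssemblyPath(typeof(Uri).Assembly.Location);
foreach (MethodInfo m in asm.GetType("System.Uri")!
             .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
             .Take(5))
    Console.WriteLine(
        $"{m.ReturnType.Name} {m.Name}({string.Join(", ", m.GetParameters().Select(p => p.ParameterType.Name))})");
```

The key property: the signatures printed come from the binary's metadata tables, not from anything the model remembers. This requires the System.Reflection.MetadataLoadContext package.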

No training cutoff. No hallucinated overloads. No deprecated methods that "still work" in the model's memory. Your DLL. Your truth.

Token cost comparison for NuGet exploration:

| What you're doing | Raw dump | sharp-mcp |
| --- | --- | --- |
| Explore a namespace | ~6,000 tokens | ~250 tokens |
| Explore a class | ~2,000 tokens | ~400 tokens |
| Fetch one method | ~2,000 tokens | ~120 tokens |

But NuGet reflection alone isn't a dev assistant

Here's the honest part: NuGet reflection solves one problem. What makes sharp-mcp actually useful for day-to-day .NET development is that all 22 tools form a closed loop. Each one makes the others more powerful.

```
┌──────────────────────────────────────────────────────────┐
│                     sharp-mcp loop                       │
│                                                          │
│  understand        explore         edit         verify   │
│  codebase  ──►  NuGet APIs  ──►  files   ──►   build    │
│     ▲                                              │     │
│     └──────────────────────────────────────────────┘     │
└──────────────────────────────────────────────────────────┘
```

You explore a NuGet API with real signatures → write code against those signatures → edit the right file using Roslyn-derived line numbers → build immediately to catch errors. Break any link in that chain and the whole thing degrades. This is the ecosystem.


Understand: Roslyn analysis, not grep

analyze_c_sharp_file uses a full Roslyn syntax tree walker — not text search — to extract structured metadata from every .cs file: DI constructor graphs, method signatures with exact start/end line numbers, attributes, XML doc comments, public/private toggle. Batch mode lets you pass Services/A.cs,Services/B.cs,Controllers/C.cs in one call.
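A minimal sketch of the Roslyn side, assuming the Microsoft.CodeAnalysis.CSharp package. The real walker extracts far more (DI graphs, attributes, doc comments), but the line-number mechanics look like this:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

var tree = CSharpSyntaxTree.ParseText("""
    public class OrderService
    {
        public decimal Total(int qty, decimal price) => qty * price;
        private void Log(string msg) { }
    }
    """);

// A syntax-tree walk, not text search: modifiers and exact line ranges
// come from parsed nodes, so strings and comments can never false-match.
foreach (var m in tree.GetRoot().DescendantNodes().OfType<MethodDeclarationSyntax>())
{
    var span = tree.GetLineSpan(m.Span);
    Console.WriteLine($"{m.Modifiers} {m.Identifier.Text}: " +
        $"lines {span.StartLinePosition.Line + 1}-{span.EndLinePosition.Line + 1}");
}
```

Those 1-based line numbers are exactly what gets handed to edit_lines later, which is why edits land where they should.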

fetch_method_implementation returns a complete method body with every line numbered. Those line numbers are what Claude uses directly in edit_lines patch operations — no guessing, no off-by-one errors.

analyze_method_call_graph walks every .cs file in your project before you touch a signature and returns every caller — file, class, exact line number. The difference between a safe refactor and a CI failure at 11pm.

get_project_skeleton gives you an ASCII folder tree with file sizes and NuGet package list. Pass "*" and it shows every registered project at once. search_code_globally finds classes, interfaces, methods, and properties by name across all projects simultaneously.

All of this is Redis-backed. Roslyn parses each file once on startup, serializes the AST metadata to Redis, and serves every subsequent call in milliseconds. A FileSystemWatcher with a 300ms debounce evicts and rewrites only the changed file on every save. Claude always sees your code as it exists on disk right now — never stale. This is what makes long feature implementation sessions practical — you never burn tokens re-reading files Claude already knows.
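The debounced-watcher pattern is simple to sketch. This is a toy version with a counter standing in for the Redis eviction and Roslyn re-parse; the hook name and the temp-directory setup are illustrative, not sharp-mcp's actual code:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Hypothetical eviction hook; the real one would delete the file's Redis
// entry and re-run the Roslyn parse for just that file.
int reindexCount = 0;
void EvictAndReparse(string path) => Interlocked.Increment(ref reindexCount);

string dir = Directory.CreateTempSubdirectory().FullName;
var timers = new ConcurrentDictionary<string, Timer>();
using var watcher = new FileSystemWatcher(dir, "*.cs")
{
    IncludeSubdirectories = true,
    EnableRaisingEvents = true,
};

// Debounce: every change (re)starts a 300 ms timer per file, so a burst
// of rapid saves settles into a single re-index once the burst ends.
watcher.Changed += (_, e) => timers.AddOrUpdate(e.FullPath,
    p => new Timer(_ => EvictAndReparse(p), null, 300, Timeout.Infinite),
    (p, t) => { t.Change(300, Timeout.Infinite); return t; });

string file = Path.Combine(dir, "A.cs");
for (int i = 0; i < 5; i++) { File.WriteAllText(file, $"// save {i}"); Thread.Sleep(50); }
Thread.Sleep(600); // let the debounce window close
Console.WriteLine($"re-indexed {reindexCount} time(s) for the whole burst");
```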


Explore: NuGet IntelliSense from your actual binary

The NuGet tools follow exactly the sequence a developer uses in an IDE — not a bulk dump of everything at once:

```
get_package_namespaces      ← "I installed OpenAI — what namespaces does it expose?"
get_namespace_types         ← "What types exist in OpenAI.Chat?" (~10 tokens/type)
get_type_surface(type)      ← "What can I call on ChatClient?"
get_type_shape(type)        ← "What does ChatCompletion look like?"
get_method_overloads(...)   ← expand specific overload groups on demand
```

Each step returns only what the next decision requires. Nothing is dumped until asked for. The entire exploration of an unfamiliar package costs ~800 tokens total vs ~6,000 for a full namespace dump. Over a multi-hour implementation session this compounds significantly.
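The payoff of step-wise disclosure is easy to see with plain reflection. A sketch of what a get_type_surface-style response might contain, overload groups with counts rather than full signatures (the real tool reads this from the MetadataLoadContext, and its exact output format may differ):

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Text;

// Surface view: one line per method *group*, not per overload. Full
// signatures are only fetched later, via a get_method_overloads-style call.
var groups = typeof(StringBuilder)
    .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
    .GroupBy(m => m.Name)
    .OrderByDescending(g => g.Count())
    .Take(5);

foreach (var g in groups)
    Console.WriteLine($"{g.Key,-16} {g.Count()} overload(s)");
```

A group line costs a handful of tokens; a full overload listing for something like StringBuilder.Append would cost hundreds. Deferring that expansion is where the ~800 vs ~6,000 token difference comes from.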


Edit: surgical file operations with real safety guarantees

Claude can create, edit, move, rename, and delete files, all guarded by real safety mechanisms:

  - Per-file semaphore locking: concurrent writes are serialized, never dropped
  - Atomic batch-move validation: every destination is validated before any file moves; one failure aborts the entire batch
  - Path sandboxing: traversal is structurally impossible because every path is resolved against the project root
  - Permanently blocked patterns: bin/, obj/, .git/, secrets, tokens, enforced at the service layer rather than behind a config flag
  - Automatic Redis cache eviction on every write, so the next analysis call sees the updated file immediately
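The sandboxing rule is worth spelling out because it is so cheap to get right. A minimal sketch (the helper name is mine, not sharp-mcp's): resolve the candidate path against the project root and reject anything that escapes it.

```csharp
using System;
using System.IO;

// Hypothetical helper illustrating the sandboxing rule: normalize both
// paths, then require the resolved target to sit under the root.
static bool IsInsideRoot(string root, string relativePath)
{
    string rootFull = Path.GetFullPath(root + Path.DirectorySeparatorChar);
    string target = Path.GetFullPath(Path.Combine(rootFull, relativePath));
    return target.StartsWith(rootFull, StringComparison.Ordinal);
}

Console.WriteLine(IsInsideRoot("/repo", "Services/A.cs")); // True
Console.WriteLine(IsInsideRoot("/repo", "../etc/passwd")); // False: ".." resolved away the root
```

Because the check runs on the *resolved* path, tricks like `Services/../../etc/passwd` fail structurally, not by pattern-matching on "..".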

edit_lines applies multiple patch/insert/delete/append operations to a single file atomically. Patches are validated for overlaps then applied bottom-up — original line numbers stay correct for every patch in the batch. This is what lets Claude make multi-location changes in one shot without line drift.
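The bottom-up trick is the whole reason batched patches compose. A minimal sketch of the idea, reduced to single-line replacements (the real edit_lines also handles inserts, deletes, appends, and multi-line ranges):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Apply non-overlapping patches in *descending* line order: edits near the
// bottom of the file can't shift the line numbers of edits above them.
static List<string> ApplyPatches(
    List<string> lines, IEnumerable<(int Line, string[] Replacement)> patches)
{
    var result = new List<string>(lines);
    foreach (var p in patches.OrderByDescending(p => p.Line))
    {
        result.RemoveAt(p.Line - 1);                // 1-based line numbers
        result.InsertRange(p.Line - 1, p.Replacement);
    }
    return result;
}

var src = new List<string> { "a", "b", "c", "d" };
var patched = ApplyPatches(src, new[]
{
    (2, new[] { "B1", "B2" }),  // replace line 2 with two lines
    (4, new[] { "D" }),         // replace line 4
});
Console.WriteLine(string.Join("|", patched)); // a|B1|B2|c|D
```

Had the patches been applied top-down, the line-2 replacement would have pushed line 4 to line 5 and the second patch would have landed on the wrong line. Descending order removes that class of bug entirely.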


Verify: build with structured Roslyn diagnostics

execute_dotnet_command runs dotnet clean + build and returns structured diagnostics — not raw stderr:

```json
{ "severity": "error", "code": "CS0246", "file": "Services/OrderService.cs", "line": 42, "message": "..." }
```

Claude reads the file path and line number, jumps straight to the problem using the analysis tools, and fixes it. The loop — explore → edit → build → fix — runs entirely inside the conversation without copy-pasting error output.
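Turning raw build output into that structure is mostly one regex over MSBuild's canonical `file(line,col): severity CODE: message` format. A sketch (sharp-mcp's actual parser may be more thorough about multi-line messages and project prefixes):

```csharp
using System;
using System.Text.RegularExpressions;

// One line of raw `dotnet build` stderr/stdout, in MSBuild's canonical shape.
string raw = "Services/OrderService.cs(42,13): error CS0246: " +
             "The type or namespace name 'Foo' could not be found";

var m = Regex.Match(raw,
    @"^(?<file>.+?)\((?<line>\d+),\d+\):\s+(?<sev>error|warning)\s+(?<code>\w+):\s+(?<msg>.*)$");

if (m.Success)
    Console.WriteLine(
        $"{{ \"severity\": \"{m.Groups["sev"].Value}\", \"code\": \"{m.Groups["code"].Value}\", " +
        $"\"file\": \"{m.Groups["file"].Value}\", \"line\": {m.Groups["line"].Value} }}");
```

Structured fields are what let the model jump straight to `Services/OrderService.cs:42` with the analysis tools instead of re-reading a wall of console text.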


Multi-project from day one

Register as many projects as you have. Every tool accepts projectName. Claude can read the skeleton of one microservice, fetch a method from another, edit a third, build a fourth — in one conversation, without context switching. Pass "*" to any tool to scope it across all registered projects simultaneously.


The privacy angle

Claude receives structured metadata — class names, method signatures, line ranges. Not your business logic. Not your proprietary algorithms. Not your customer data. Nothing leaves your machine.

For .NET developers in finance or healthcare this isn't a nice-to-have. With the EU AI Act in phased enforcement and data residency requirements tightening globally, a tool that processes source code locally is increasingly the only compliant option.


Stack

.NET 10 · Roslyn (Microsoft.CodeAnalysis.CSharp 5.0) · Redis (StackExchange.Redis + NRedisStack) · ModelContextProtocol 0.5 · NuGet.Protocol · System.Reflection.MetadataLoadContext · ngrok SSE transport

22 MCP tools across 8 tool classes: code analysis, project exploration, NuGet reflection, file operations, dotnet CLI, utility.


Get started

```shell
git clone https://github.com/patilprashant6792-official/sharp-mcp
cd sharp-mcp/LocalMcpServer
dotnet run
```

Start Redis:

```shell
docker run -d -p 6379:6379 redis:latest
```

Expose with ngrok:

```shell
ngrok http 5000
```

Register your projects via the web UI — no config files to hand-edit:

```
http://localhost:5000/config.html
```

Add the ngrok /sse URL to Claude.ai → Settings → Connectors. Claude discovers all 22 tools automatically. Full setup walkthrough in the README.


What this doesn't replace

This is not inline autocomplete. It doesn't suggest the next line as you type and doesn't integrate into your IDE as an extension.

What it replaces is the reasoning session — when you open a chat to understand a codebase, plan a refactor, check what breaks if you change a signature, look up how a dependency actually works, or implement a feature that spans multiple files. That's the scope it's built for. And that's where the closed loop — Roslyn analysis, real NuGet reflection, surgical editing, live build feedback — earns its keep.


Drop a comment if you've hit any of these walls. Genuinely curious what .NET packages and patterns people are trying to get Claude to reason about.
