DEV Community

Prashant Patil


MCP server for C# development with real NuGet reflection

Ensures data privacy and API accuracy

sharp-mcp:

Roslyn-Powered C# Analysis, Real NuGet DLL Reflection, and Safe Live File Editing for Claude, On Your Machine via MCP

If you've ever watched an AI confidently call a NuGet method removed two versions ago — and only found out when your build broke — this is for you.

Or if you've pasted five service classes into a chat, watched the context fill up, and still got a half-baked answer.

Or if you work in finance or healthcare and the phrase "your code is sent to our servers" is a non-starter.

GitHub: https://github.com/patilprashant6792-official/sharp-mcp


The problem nobody talks about

Every AI coding tool in 2025 has the same three silent killers for .NET developers specifically.

Hallucinated APIs. LLMs are frozen at their training cutoff. NuGet ships breaking changes constantly. System.Text.Json changed nullable handling between 6.0 and 8.0. EF Core changed DbContext configuration between 7.0 and 8.0. IgnoreNullValues was deprecated mid-lifecycle. The model doesn't know. It generates code that looks right and doesn't compile.

Context bloat. A 500-line service class costs ~2,000 tokens raw. Load ten files and you've burned your entire context budget before writing a single line. Copilot's #codebase search is widely documented as unreliable — developers end up manually attaching files and hitting the limit anyway.

Your code leaves your machine. Copilot, Cursor, Windsurf — every cloud AI tool sends your source to an external server with every request. For finance, healthcare, or any regulated industry, that's not a theoretical concern. It's a compliance issue.

sharp-mcp fixes all three. It runs entirely on your machine, exposes your codebase as structured MCP tools, and — the part no other tool does — reflects your actual installed NuGet DLLs via MetadataLoadContext.


The novel piece: real NuGet DLL reflection

When you ask sharp-mcp how to use a method from a NuGet package, it doesn't consult training data. Here's what actually happens:

  1. Resolves the exact version pinned in your .csproj via NuGet.Protocol — no guessing
  2. Downloads the .nupkg and picks the right net*/ target framework folder with automatic fallback chain
  3. Downloads transitive dependencies — MetadataLoadContext needs them to resolve cross-assembly types correctly; without this, reflection on generics and inherited types silently fails
  4. Loads the DLL into an isolated MetadataLoadContext — binary inspection only, never executed, zero risk of static constructors or process pollution
  5. Returns valid, copy-paste-ready C# signatures from your exact binary
  6. Parses the XML documentation file shipped alongside the DLL — before the temporary directory is deleted — and stores the full doc map in Redis keyed by member ID
  7. Disposes the context immediately — no assembly leaks, no AppDomain side effects
  8. Caches the result in Redis for 7 days, keyed on packageId:version:targetFramework — second call is a Redis read, not a download
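Step 4 is the heart of the pipeline. Here is a minimal sketch of what inspection-only loading looks like with MetadataLoadContext — illustrative only, not sharp-mcp's actual resolver, which also feeds in the transitive dependencies from step 3. It requires the System.Reflection.MetadataLoadContext NuGet package, and dllPath is a placeholder for the DLL extracted from the downloaded .nupkg:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Runtime.InteropServices;

// Placeholder: the DLL extracted from the downloaded .nupkg.
string dllPath = "/tmp/pkg/lib/net8.0/Some.Package.dll";

// Core runtime assemblies are needed so System.Object etc. resolve; real code
// would also add every transitive dependency here (step 3 above).
string[] runtimeAssemblies = Directory.GetFiles(RuntimeEnvironment.GetRuntimeDirectory(), "*.dll");
var resolver = new PathAssemblyResolver(runtimeAssemblies.Append(dllPath));

// MetadataLoadContext reads metadata only: no code runs, no static constructors fire.
using var mlc = new MetadataLoadContext(resolver);
Assembly asm = mlc.LoadFromAssemblyPath(dllPath);

foreach (Type t in asm.GetExportedTypes())
    foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        Console.WriteLine($"{t.FullName}.{m.Name}({string.Join(", ", m.GetParameters().Select(p => p.ParameterType.Name))})");
// Disposing the context (end of 'using') releases the loaded assemblies (step 7).
```

The key design property is that the loaded assembly is data, not code — there is no path by which inspecting it can execute anything from the package.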

No training cutoff. No hallucinated overloads. No deprecated methods that "still work" in the model's memory. Your DLL. Your truth.

Token cost comparison for NuGet exploration:

What you're doing      Raw dump         sharp-mcp
Explore a namespace    ~6,000 tokens    ~250 tokens
Explore a class        ~2,000 tokens    ~400 tokens
Fetch one method       ~2,000 tokens    ~120 tokens

But NuGet reflection alone isn't a dev assistant

Here's the honest part: NuGet reflection solves one problem. What makes sharp-mcp actually useful for day-to-day .NET development is that all 23 tools form a closed loop. Each one makes the others more powerful.

┌──────────────────────────────────────────────────────────┐
│                     sharp-mcp loop                       │
│                                                          │
│  understand        explore         edit         verify   │
│  codebase  ──►  NuGet APIs  ──►  files   ──►   build    │
│     ▲                                              │     │
│     └──────────────────────────────────────────────┘     │
└──────────────────────────────────────────────────────────┘

You explore a NuGet API with real signatures → write code against those signatures → edit the right file using Roslyn-derived line numbers → build immediately to catch errors. Break any link in that chain and the whole thing degrades. This is the ecosystem.


Understand: Roslyn analysis, not grep

analyze_c_sharp_file uses a full Roslyn syntax tree walker — not text search — to extract structured metadata from every .cs file: DI constructor graphs, method signatures with exact start/end line numbers, attributes, XML doc comments, public/private toggle. Batch mode lets you pass Services/A.cs,Services/B.cs,Controllers/C.cs in one call.

fetch_method_implementation returns a complete method body with every line numbered. Those line numbers are what Claude uses directly in edit_lines patch operations — no guessing, no off-by-one errors.

analyze_method_call_graph walks every .cs file in your project before you touch a signature and returns every caller — file, class, exact line number. The difference between a safe refactor and a CI failure at 11pm.

get_project_skeleton gives you an ASCII folder tree with file sizes and NuGet package list. Pass "*" and it shows every registered project at once. search_code_globally finds classes, interfaces, methods, and properties by name across all projects simultaneously.

All of this is Redis-backed. Roslyn parses each file once on startup, serializes the AST metadata to Redis, and serves every subsequent call in milliseconds. A FileSystemWatcher with a 300ms debounce evicts and rewrites only the changed file on every save. Claude always sees your code as it exists on disk right now — never stale. This is what makes long feature implementation sessions practical — you never burn tokens re-reading files Claude already knows.
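For a sense of what "Roslyn, not grep" buys you, here is a minimal sketch of the kind of syntax-tree walk involved — not sharp-mcp's actual code, just the underlying technique, using the Microsoft.CodeAnalysis.CSharp package:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

var tree = CSharpSyntaxTree.ParseText(@"
class OrderService
{
    public void PlaceOrder(int id) { }
    private bool Validate(int id) => id > 0;
}");

// Walk declarations, not text: exact start/end line numbers come from the tree,
// so a method named inside a string or comment can never be a false positive.
foreach (var m in tree.GetRoot().DescendantNodes().OfType<MethodDeclarationSyntax>())
{
    var span = tree.GetLineSpan(m.Span);
    Console.WriteLine($"{m.Identifier.Text}: lines {span.StartLinePosition.Line + 1}-{span.EndLinePosition.Line + 1}, " +
                      $"modifiers [{string.Join(" ", m.Modifiers)}]");
}
```

Those tree-derived line numbers are exactly what makes the later edit_lines operations safe to drive from a conversation.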


Explore: NuGet IntelliSense from your actual binary

The NuGet tools follow exactly the sequence a developer uses in an IDE — not a bulk dump of everything at once:

get_package_namespaces      ← "I installed OpenAI — what namespaces does it expose?"
get_namespace_types         ← "What types exist in OpenAI.Chat?" (~10 tokens/type)
get_type_surface(type)      ← "What can I call on ChatClient?"
get_type_shape(type)        ← "What does ChatCompletion look like?"
get_method_overloads(...)   ← expand specific overload groups on demand
get_member_xml_doc(member)  ← "What does CompleteChat actually do? What are the params for?"

Each step returns only what the next decision requires. Nothing is dumped until asked for. The entire exploration of an unfamiliar package costs ~800 tokens total vs ~6,000 for a full namespace dump. Over a multi-hour implementation session this compounds significantly.

get_member_xml_doc is the final step in this chain. Once you've identified a type and method, it fetches the full XML documentation for that specific member — summary, parameters, return value, remarks, exceptions, and code examples — straight from the XML doc file that shipped with the package. It covers any member kind: types, methods, properties, fields, events, constructors. Zero extra network cost — the XML file is parsed during the initial package load, before the temporary directory is cleaned up, and stored as a separate Redis entry. Every subsequent call is a pure Redis lookup. For packages that don't ship XML docs, it degrades gracefully with a clear message rather than failing.
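The XML doc file itself is the standard compiler output: a flat list of `<member>` elements keyed by member ID strings. A rough sketch of the lookup — illustrative only; the file path and member ID below are hypothetical examples, not values sharp-mcp hardcodes:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Placeholder: the .xml file that ships next to the DLL inside the .nupkg.
var doc = XDocument.Load("/tmp/pkg/lib/net8.0/Some.Package.xml");

// Member IDs follow the compiler's scheme: "T:" for types, "M:" for methods,
// "P:" for properties, and so on. This ID is a hypothetical example.
string id = "M:OpenAI.Chat.ChatClient.CompleteChat(System.String)";
var member = doc.Descendants("member")
                .FirstOrDefault(m => (string?)m.Attribute("name") == id);

Console.WriteLine(member?.Element("summary")?.Value.Trim() ?? "No XML docs shipped for this member.");
foreach (var p in member?.Elements("param") ?? Enumerable.Empty<XElement>())
    Console.WriteLine($"  {p.Attribute("name")?.Value}: {p.Value.Trim()}");
```

Because the whole map is parsed once at package load and cached, per-member lookups later never touch the file system or the network.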


Edit: surgical file operations with real safety guarantees

Claude can create, edit, move, rename, and delete files — all guarded by:

- per-file semaphore locking (concurrent writes serialized, never dropped)
- atomic batch-move validation (all destinations validated before any file moves; one failure aborts the entire batch)
- path sandboxing (traversal structurally impossible — every path resolved against the project root)
- permanent blocked patterns (bin/, obj/, .git/, secrets, tokens — enforced at the service layer, not a config flag)
- automatic Redis cache eviction on every write, so the next analysis call sees the updated file immediately
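A minimal sketch of the sandboxing idea — resolve first, then verify containment, then check blocked segments. This is illustrative, not sharp-mcp's actual implementation, and the blocked list here is a small subset:

```csharp
using System;
using System.IO;
using System.Linq;

static class Sandbox
{
    // Subset for illustration; the real blocked set is larger and not configurable.
    static readonly string[] BlockedDirs = { ".git", "bin", "obj" };

    public static string ResolveAndGuard(string projectRoot, string relativePath)
    {
        // Resolve first so "../" tricks are flattened before any check runs.
        string root = Path.GetFullPath(projectRoot);
        string full = Path.GetFullPath(Path.Combine(root, relativePath));

        if (!full.StartsWith(root + Path.DirectorySeparatorChar, StringComparison.Ordinal))
            throw new UnauthorizedAccessException($"Path escapes project root: {relativePath}");

        var segments = full.Substring(root.Length + 1).Split(Path.DirectorySeparatorChar);
        if (segments.Any(s => BlockedDirs.Contains(s, StringComparer.OrdinalIgnoreCase)))
            throw new UnauthorizedAccessException($"Blocked directory in path: {relativePath}");

        return full;
    }
}
```

The ordering matters: checking the raw relative path before canonicalizing it is the classic traversal bug, which is why resolution happens before any guard.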

edit_lines applies multiple patch/insert/delete/append operations to a single file atomically. Patches are validated for overlaps then applied bottom-up — original line numbers stay correct for every patch in the batch. This is what lets Claude make multi-location changes in one shot without line drift.
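Why bottom-up matters: applying the highest-numbered patch first means no earlier patch's line numbers ever shift. A toy version of the idea — illustrative only, not the edit_lines implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record Patch(int StartLine, int EndLine, string[] NewLines); // 1-based, inclusive

static class Patcher
{
    public static List<string> Apply(List<string> lines, IEnumerable<Patch> patches)
    {
        // Bottom-up: lines later in the file change first, so every remaining
        // patch's original StartLine still points at the right place.
        foreach (var p in patches.OrderByDescending(p => p.StartLine))
        {
            lines.RemoveRange(p.StartLine - 1, p.EndLine - p.StartLine + 1);
            lines.InsertRange(p.StartLine - 1, p.NewLines);
        }
        return lines;
    }
}
```

Combined with the overlap validation done up front, this is what lets a single batch touch many locations without any line drift between operations.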


Verify: build with structured Roslyn diagnostics

execute_dotnet_command runs dotnet clean + build and returns structured diagnostics — not raw stderr:

{ "severity": "error", "code": "CS0246", "file": "Services/OrderService.cs", "line": 42, "message": "..." }
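For context on where that shape can come from: MSBuild console output encodes the same fields in a fixed textual format, so one way (not necessarily sharp-mcp's) to produce the structured form is a simple parse of each line:

```csharp
using System;
using System.Text.RegularExpressions;

// Typical MSBuild console error line (sample input, fabricated for illustration):
string line = "Services/OrderService.cs(42,13): error CS0246: The type or namespace name 'IOrderRepo' could not be found";

var m = Regex.Match(line,
    @"^(?<file>.+?)\((?<line>\d+),\d+\): (?<sev>error|warning) (?<code>\w+): (?<msg>.+)$");

if (m.Success)
    Console.WriteLine($"severity={m.Groups["sev"].Value} code={m.Groups["code"].Value} " +
                      $"file={m.Groups["file"].Value} line={m.Groups["line"].Value}");
```

Either way, the payoff is the same: the model gets fields it can act on instead of a wall of stderr to re-read.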

Claude reads the file path and line number, jumps straight to the problem using the analysis tools, and fixes it. The loop — explore → edit → build → fix — runs entirely inside the conversation without copy-pasting error output.


Multi-project from day one

Register as many projects as you have. Every tool accepts projectName. Claude can read the skeleton of one microservice, fetch a method from another, edit a third, build a fourth — in one conversation, without context switching. Pass "*" to any tool to scope it across all registered projects simultaneously.


The privacy angle

Claude receives structured metadata — class names, method signatures, line ranges. Not your business logic. Not your proprietary algorithms. Not your customer data. Nothing leaves your machine.

For .NET developers in finance or healthcare this isn't a nice-to-have. With the EU AI Act in phased enforcement and data residency requirements tightening globally, a tool that processes source code locally is increasingly the only compliant option.


Stack

.NET 10 · Roslyn (Microsoft.CodeAnalysis.CSharp 5.0) · Redis (StackExchange.Redis + NRedisStack) · ModelContextProtocol 0.5 · NuGet.Protocol · System.Reflection.MetadataLoadContext · ngrok SSE transport

23 MCP tools across 8 tool classes: code analysis, project exploration, NuGet reflection, file operations, dotnet CLI, utility.


Get started

git clone https://github.com/patilprashant6792-official/sharp-mcp
cd sharp-mcp/LocalMcpServer
dotnet run

Start Redis:

docker run -d -p 6379:6379 redis:latest

Expose with ngrok:

ngrok http 5000

Register your projects via the web UI — no config files to hand-edit:

http://localhost:5000/config.html

Add the ngrok /sse URL to Claude.ai → Settings → Connectors. Claude discovers all 23 tools automatically. Full setup walkthrough in the README.


What this doesn't replace

This is not inline autocomplete. It doesn't suggest the next line as you type and doesn't integrate into your IDE as an extension.

What it replaces is the reasoning session — when you open a chat to understand a codebase, plan a refactor, check what breaks if you change a signature, look up how a dependency actually works, or implement a feature that spans multiple files. That's the scope it's built for. And that's where the closed loop — Roslyn analysis, real NuGet reflection, surgical editing, live build feedback — earns its keep.


Drop a comment if you've hit any of these walls. Genuinely curious what .NET packages and patterns people are trying to get Claude to reason about.

Top comments (12)

Wes • Edited

The MetadataLoadContext approach for inspecting real binaries instead of relying on training data is a clean solution to the stale-signature problem. Pulling actual type info from the exact package version in a .csproj eliminates a whole class of errors.

But how much of the hallucination problem does this actually cover? In my experience, the more painful failures are not "the model invented a method that doesn't exist" but "the model called a real method with the wrong assumptions about its behavior." EF Core's SaveChangesAsync exists in every version, but the implicit transaction semantics, change tracker behavior, and concurrency token handling have all shifted in ways that correct signatures alone won't reveal. Reflection tells you the shape of an API but not the contract behind it. XML doc comments help, but they tend to describe parameters, not gotchas.

Do you see a path toward surfacing behavioral documentation or version-specific usage patterns alongside the signatures, or does that fall outside what reflection can reasonably provide?

Prashant Patil

Hey Wes — this is a sharp and fair observation, and the EF Core example you picked is exactly
the right stress test.

The honest answer: reflection gives you the shape of an API, not the contract behind it. For
SaveChangesAsync, that means two clean signatures — but nothing about implicit transaction
wrapping, what acceptAllChangesOnSuccess actually does to tracked entity state, the MARS/savepoint
incompatibility, or how DbUpdateConcurrencyException behaves across versions. You're right that
those are the painful failures in practice.

What sharp-mcp is really doing is solving the prerequisite problem. An LLM with web search can
reason well about EF Core's behavioral nuances — Microsoft's docs are rich and current. But if
it's calling a signature that was removed or changed between 6.0 and 8.0, the correct behavioral
reasoning doesn't matter because the code won't compile. Real signatures first, then behavioral
context from training data or live search.

Your point about XML doc comments is noted as a concrete near-term improvement — the DLL
metadata does include them for most well-maintained packages, and surfacing them alongside
signatures would close part of the gap without needing a separate source. Beyond that, pairing
reflection output with a targeted web search against official docs is probably the more complete
path, and that's something I'm actively thinking about for the next iteration.

Good callout. This is exactly the kind of feedback that improves it.

Prashant Patil

Hey Wes — quick update: I actually shipped get_member_xml_doc since your comment. It pulls the full XML docs for any member — summary, params, returns, remarks, exceptions — parsed directly from the XML file inside the .nupkg at load time, zero extra network cost. Closes part of the gap you described. The behavioral contract problem you raised is still real, but at least the documented gotchas are now surfaced.

Archit Mittal

This is a great use of reflection — hallucinated NuGet APIs are easily the #1 reason I lose trust in AI-generated C# code. One thing worth considering: caching the reflection output keyed by (package, version) so repeat queries don't pay the assembly-loading cost every time. Package assemblies rarely change within a version, so a simple on-disk JSON cache can 10x response time for repeat lookups.

Also curious whether you surface XML doc comments alongside the member signatures — those carry a lot of the "intended usage" context that raw reflection misses.

Prashant Patil

Hey Archit — both points are already handled, and the caching is more aggressive than an on-disk JSON approach.
On caching: the entire reflection pipeline is Redis-backed with double-checked locking. The key format is {packageId}@{version}@{targetFramework} — e.g. Microsoft.EntityFrameworkCore@8.0.0@net10.0 — so hits are exact to your pinned version. The flow on every call:

  1. Check Redis → hit → return immediately, zero assembly loading
  2. Miss → acquire a per-key SemaphoreSlim, check Redis again (prevents a thundering herd on cold start), then drop into MetadataLoadContext
  3. Reflection completes → serialize to JSON → StringSet with a 7-day TTL

Assembly loading cost is paid exactly once per (package, version, framework) triple, ever. Every subsequent call is a Redis string read.

On XML doc comments: the .xml doc file ships inside the .nupkg alongside the DLL and is available after the extraction step — so technically surfacing summary, param, and returns content is within reach. It's intentionally excluded for now, though. Those comment blocks can be verbose, and the priority here is keeping token cost per tool call low — the whole NuGet exploration workflow is designed to be progressive and narrow, not a dump. The tradeoff is signatures without behavioral prose, which is a real gap, but the alternative is ballooning every get_type_surface response significantly. Something to revisit once there's a clean way to make it opt-in.
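That flow, sketched in code — names and wiring are illustrative, using the StackExchange.Redis API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

class ReflectionCache
{
    readonly IDatabase _db;
    readonly ConcurrentDictionary<string, SemaphoreSlim> _locks = new();

    public ReflectionCache(IDatabase db) => _db = db;

    public async Task<string> GetAsync(string packageId, string version, string tfm,
                                       Func<string> reflect) // the MetadataLoadContext work
    {
        string key = $"{packageId}@{version}@{tfm}";

        var cached = await _db.StringGetAsync(key);
        if (cached.HasValue) return cached.ToString();   // hot path: pure Redis read

        var gate = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            cached = await _db.StringGetAsync(key);      // double-check under the lock
            if (cached.HasValue) return cached.ToString();

            string json = reflect();                     // paid once per triple, ever
            await _db.StringSetAsync(key, json, TimeSpan.FromDays(7));
            return json;
        }
        finally { gate.Release(); }
    }
}
```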

Prashant Patil

Hey Archit — quick update: I actually shipped get_member_xml_doc. It pulls the full XML docs for any member — summary, params, returns, remarks, exceptions.

Vishal Kulkarni • Edited

Been using this daily for a few weeks now. No more pasting entire files into Claude just to ask about one method; it pulls exactly what it needs on its own. The answers are actually informative too — you can understand what the code is doing instead of getting generic guesses. And the edit tool has been superb; it handles big files without any issues, which has been a huge time saver.

Prashant Patil

Thanks, Vishal — means a lot!

Mykola Kondratiuk

write access over live files is where I would push back on safety. local just means you cannot blame a vendor when it goes wrong. had to add explicit audit logging before I trusted mine anywhere near production code.

Prashant Patil

Hey Mykola — the safety isn't just "trust the user" — it's structural.
Every write goes through ResolveAndGuard first: path traversal blocked at the resolver level, then every path segment checked against a hardcoded BlockedDirectories set (.git, bin, obj, .vs, node_modules, .ssh, backups, logs), then the filename matched against BlockedPatterns (secret, password, token, credential, etc.). There's also an extension allowlist — only source, project, doc, and safe config file types can be touched. Concurrent writes to the same file are serialized via per-path SemaphoreSlim. None of this is configurable — it's enforced at the service layer.
That said, the primary audit trail I'm leaning on intentionally is git. Every change Claude makes is a diff — visible immediately, revertible instantly. That's a mechanism that's already universally trusted, and I'd rather build on it than duplicate it with a parallel logging system.
Audit logging as an explicit opt-in is on the list as the write surface matures. Writes are functional but deliberately conservative — Roslyn analysis and NuGet reflection are the stable core, and I'm building write tooling out carefully. And when all of it comes together — real signatures from your actual binaries, surgical Roslyn-aware file edits, and live build feedback in the same loop — it makes Claude a genuinely capable C# assistant rather than a best-guess code generator.

Mykola Kondratiuk

ok yeah, hardcoded blocklist at the resolver beats anything configurable. was pushing back on the concept not the impl - this is the right pattern.
