TL;DR: Connecting Claude with Azure Data Explorer (Kusto) gives you lightweight natural-language query/reporting capabilities. The Remote MCP Server is exposed as just another protocol on the existing CQRS+-based microservices.
Introduction
The thirst for data and analytics in a company is insatiable, and many non-technical users try to learn SQL or specific dashboarding tools just to answer their own data questions without waiting for a specialist to find time. Ideally, though, business users would like to ask those questions in natural language, the same way they are already shooting questions at an AI/LLM.
This post is all about satisfying data-hungry non-technical users by connecting Claude Desktop with a database called Azure Data Explorer by means of a Remote MCP Server:
Diagram 1: Claude Desktop + Kusto Remote MCP Server + Azure Data Explorer (Kusto)
Additionally, the post discusses the architectural decision where to place this so-called MCP Tool within a CQRS+ Microservice Architecture.
Claude
By now (2026), Claude has turned into one of the best AI tools for coders (Claude Code) and non-coders (Claude Desktop/Cowork). Even though many organizations are still locked into the ecosystems of their existing vendors (e.g. Copilot in the Microsoft Office 365/Azure world or Amazon Q in the AWS world), more and more companies reach for the better and more ergonomic Claude. For example, Claude Desktop has an amazing split-screen view where the query remains on the left-hand side and a visualization ("Interactive Artefact") is displayed on the right-hand side.
Azure Data Explorer (aka Kusto or ADX)
Kusto/ADX is an amazing, lean, and inexpensive cloud-hosted (Azure-only) column-store database, designed for very fast reads, that hardly anybody knows of. It reminds me a bit of ClickHouse; however, it does not use SQL but KQL (Kusto Query Language), which looks very attractive to functional developers like me (F#), as it resembles code pipelines:
Transactions
| where CreatedOn >= last3_start and CreatedOn <= last3_end
| project TransactionId, Amount, Type, CreatedOn
| order by CreatedOn desc
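The pipeline shape translates almost one-to-one into ordinary functional code. Below is a rough, hypothetical Python analogue of the KQL above, using toy data invented for illustration (note that `CreatedOn` is kept in the projection so the final sort still works):

```python
from datetime import datetime

# Toy stand-in for the Transactions table from the KQL snippet above.
transactions = [
    {"TransactionId": 1, "Amount": 10.0, "Type": "debit",  "CreatedOn": datetime(2026, 1, 5)},
    {"TransactionId": 2, "Amount": 99.0, "Type": "credit", "CreatedOn": datetime(2026, 1, 7)},
    {"TransactionId": 3, "Amount": 5.0,  "Type": "debit",  "CreatedOn": datetime(2025, 12, 1)},
]

last3_start, last3_end = datetime(2026, 1, 1), datetime(2026, 1, 31)

# where -> filter, project -> key selection, order by ... desc -> sorted(reverse=True)
result = sorted(
    (
        {"TransactionId": t["TransactionId"], "Amount": t["Amount"],
         "Type": t["Type"], "CreatedOn": t["CreatedOn"]}
        for t in transactions
        if last3_start <= t["CreatedOn"] <= last3_end
    ),
    key=lambda t: t["CreatedOn"],
    reverse=True,
)
```

Each KQL pipe stage corresponds to one transformation over the rows, which is exactly why the syntax feels familiar to functional programmers.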
Kusto is the engine powering all Microsoft Azure logging systems and even serves as the core of many other products like Azure AI Foundry, Microsoft Sentinel, Azure Application Insights, etc.
Natural Language Query
Of course it is possible to write SQL (or, in the case of Kusto, KQL) to get data; however, that requires getting acquainted with some quirky keywords and symbols. Asking questions in natural language and getting the answers back in natural language plus a visualization is what non-technical people actually want. LLMs like Claude are amazing at generating code as well as SQL/KQL queries from English questions, but copying and pasting those queries into a database query editor is so 2024-ish ;)
MCP Server
MCP Servers come to the rescue, as they connect the AI tool to many systems, such as your database (e.g. Kusto). MCP Servers come in two kinds:
- Local MCP Server (aka stdio) - requires installing some software/program (e.g. a Python app) on your laptop, which the AI tool invokes locally. Your local config/permissions/logged-in account are used.
- Remote MCP Server (over HTTP) - comes ready to use out of the box, after some initial authentication/authorization via OAuth.
Ideally the database vendor would offer an integrated Remote MCP Server. As a matter of fact, I had a chat some time ago with a Product Manager on the ADX/Kusto team (as part of a support-ticket discussion; I have no connections to MS) who told me that they are working on one; however, it seems it would be focused on bigger/enterprise products like Azure AI Foundry rather than the relatively low-level Kusto.
Alternatively, such a connector can be built much like a REST Web API. The MCP protocol is based on JSON-RPC and amounts to relatively straightforward function/method calls over HTTP(S); the only complication is OAuth.
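To make "JSON-RPC over HTTP" concrete, here is a minimal sketch of building an MCP `tools/call` request envelope. The `jsonrpc`/`method`/`params` field names follow the MCP specification's JSON-RPC framing; the tool name and argument match the Kusto tool built later in this post, but the helper function itself is invented for illustration:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Illustrative call against the execute_kusto_query tool defined further below.
body = make_tool_call(1, "execute_kusto_query", {"query": "Transactions | take 10"})
```

An MCP client POSTs such a body to the server's `/mcp` endpoint and receives a JSON-RPC response with matching `id`; the transport really is that plain.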
CQRS+, Microservices, and how MCP fits in
In my older post I explain CQRS and CQRS+ in the context of microservices. In a nutshell, a software system is split into "services" based on functional areas or domains with their own context/language/terminology (aka bounded contexts), and those are subdivided into physical "microservices" (i.e. standalone running processes) based on technical aspects like command handling vs. query handling vs. event publishing vs. event handling, etc. So the question that immediately pops up is: where does MCP fit into this picture?
The first idea that comes to mind is to create a separate MCP microservice, responsible for implementing the MCP protocol.
Diagram 2: Xyz Service and its microservices, including a standalone MCP microservice
However, MCP is nothing more than another protocol for exposing the same command-handling or query-handling logic that already lives in your existing microservices. So the better choice seems to be for MCP to co-exist with REST within the same microservices. In the example below, where a Remote MCP Server for Kusto is created for querying only, the server is therefore co-hosted in the microservice responsible for Query Handling within the Reporting service/bounded context:
Diagram 3: Xyz Service and its microservices, with a MCP server integrated into the existing microservices
Sample Implementation
The sample below uses .NET and F#, my favorite combo, but can obviously be written in any tech stack. Actually, Microsoft's Local MCP Server for Fabric RTI, written in Python, "served as inspiration" to Claude Code, with which I wrote 75% of the code below ;)
This is what the main function looks like:
/// Application entry point: Key Vault decryption, host builder composition, and startup.
module Program

open System.Threading
open Microsoft.Extensions.Hosting
open Framework.AzureKeyVault.Environment
open Framework.Hosting.HostBuilder
open Api.Wiring

module McpDI = MCP.DependencyInjection

// this has to be first, otherwise modules initialize with env vars which have not been decrypted yet!
Environment.overwriteEnvironmentVariablesFromKVRef () |> Async.RunSynchronously

let webApis =
    [ WebApi.health
      WebApi.OAuth.register
      WebApi.OAuth.authorize
      WebApi.OAuth.token
      WebApi.OAuth.wellKnownProtectedResource
      WebApi.OAuth.wellKnownAuthServer ]

let mcpTools = [ McpTools.executeKustoQuery ]

[<EntryPoint>]
let main argv =
    let builder =
        createDefaultBuilder argv BackgroundServiceExceptionBehavior.StopHost
        |> McpDI.configureMcpServices mcpTools
        |> configureWebHost webApis McpDI.OAuth.requireBearerToken (Framework.Mcp.Hosting.mapMcpEndpoints "")

    use tokenSource = new CancellationTokenSource()
    use host = builder.Build()

    printfn "MCP Server running at %s" EnvVars.baseUrl
    host.RunAsync(tokenSource.Token) |> Async.AwaitTask |> Async.RunSynchronously
    0
The actual definition of the MCP Server methods/functions, called tools, is in Api.Wiring.fs:
module McpTools =
    let executeKustoQuery: McpServerToolDef =
        { Name = "execute_kusto_query"
          Description =
            "Executes a KQL query against the configured Azure Data Explorer (Kusto) cluster and database. Returns query results as a JSON array. The query runs under the authenticated user's identity."
          ReadOnly = true
          Destructive = false
          ExecuteOperation = Func<string, Task<string>>(McpDI.McpTools.executeKustoQuery) }
How MCP and REST co-exist in the same Microservice
The key insight is that MCP is literally just another set of routes alongside REST — like GraphQL or gRPC. Rather than a single monolithic setup function, the codebase separates concerns into composable pipeline stages.
Step 1 — Register MCP tools in the DI container (Mcp.Hosting.fs):
let configureMcpServices (mcpTools: McpServerToolDef list) (builder: IHostBuilder) : IHostBuilder =
    builder.ConfigureServices(fun _ services ->
        services.AddHttpContextAccessor() |> ignore

        let tools =
            mcpTools
            |> List.map (fun toolDef ->
                let options =
                    McpServerToolCreateOptions(
                        Name = toolDef.Name,
                        Description = toolDef.Description,
                        ReadOnly = Nullable toolDef.ReadOnly,
                        Destructive = Nullable toolDef.Destructive
                    )
                McpServerTool.Create(toolDef.ExecuteOperation, options))
            |> Array.ofList

        services.AddMcpServer().WithHttpTransport().WithTools(tools) |> ignore)
Step 2 — Configure the web host generically (Hosting.HostBuilder.fs). This function knows nothing about MCP — it just wires up REST endpoints, a middleware hook, and an additional-endpoints hook:
let configureWebHost
    (webApiDefs: WebApiDef list)
    (configureMiddleware: IApplicationBuilder -> unit)
    (configureAdditionalEndpoints: IEndpointRouteBuilder -> unit)
    (builder: IHostBuilder)
    : IHostBuilder =
    builder.ConfigureWebHostDefaults(fun webBuilder ->
        webBuilder.Configure(fun context app ->
            app.UseRouting() |> ignore
            configureMiddleware app

            app.UseEndpoints(fun endpoints ->
                webApiDefs
                |> List.iter (fun webApiDef ->
                    endpoints
                        .MapMethods(webApiDef.Path, [ webApiDef.Method.ToString() ], webApiDef.ExecuteOperation)
                        .AllowAnonymous()
                    |> ignore)

                configureAdditionalEndpoints endpoints)
            |> ignore)
        |> ignore)
Step 3 — Map MCP endpoints via a one-liner that plugs into the configureAdditionalEndpoints hook:
let mapMcpEndpoints (basePath: string) (endpoints: IEndpointRouteBuilder) =
    endpoints.MapGroup($"%s{basePath}/mcp").MapMcp() |> ignore
Composition in Program.fs — everything comes together as a pipeline:
let builder =
    createDefaultBuilder argv BackgroundServiceExceptionBehavior.StopHost
    |> McpDI.configureMcpServices mcpTools
    |> configureWebHost webApis McpDI.OAuth.requireBearerToken (Framework.Mcp.Hosting.mapMcpEndpoints "")
The single host now serves:
- GET /health — REST health check
- GET /.well-known/*, /oauth/* — REST OAuth endpoints
- POST /mcp — MCP JSON-RPC transport
Notice that configureWebHost has no dependency on MCP at all. MCP plugs in through the same configureAdditionalEndpoints hook that any other protocol (GraphQL, gRPC, SignalR) would use. Adding MCP to an existing service is just two pipeline stages — one for DI, one for routing — not a new deployment.
The OAuth 2.1 Proxy — The Hard Part
I mentioned above that the only complication with building a Remote MCP Server is "related to OAuth." Let me unpack that, because it turned out to be a bigger-than-expected engineering challenge in this project.
The problem is a three-way mismatch between what Claude expects, what Azure Entra ID supports, and what is actually needed:
- Claude expects Dynamic Client Registration (DCR) per RFC 7591 — it wants to call a /oauth/register endpoint and get back a client_id and client_secret. Microsoft Entra ID does not support DCR: app registrations are created in the Azure Portal or via scripts, not at runtime.
- Claude authenticates to the MCP server — so the OAuth scope it requests is scoped to the MCP server itself. But what is needed is the resulting token to be scoped to Kusto, not to the MCP server, because the user's own token will be passed directly to Azure Data Explorer.
- Claude sends a dummy client_secret — the one it received during DCR. But Entra ID needs the real client_secret from the app registration.
The solution is an OAuth Proxy — our MCP server impersonates an OAuth authorization server by implementing five endpoints that intercept, rewrite, and forward Claude's OAuth requests to Entra ID.
Discovery: "I am your OAuth server"
When Claude first connects and gets a 401, it looks for .well-known metadata. Two standard RFC endpoints are served that point Claude back at the Kusto MCP Server:
let wellKnownAuthServer (logInfo: string -> unit) (ctx: HttpContext) =
    async {
        let response =
            {| issuer = sprintf "https://login.microsoftonline.com/%s/v2.0" EnvVars.tenantId
               authorization_endpoint = sprintf "%s/oauth/authorize" EnvVars.baseUrl
               token_endpoint = sprintf "%s/oauth/token" EnvVars.baseUrl
               registration_endpoint = sprintf "%s/oauth/register" EnvVars.baseUrl
               scopes_supported =
                   [| EnvVars.ADX.connectionString + "/.default"
                      "openid"; "profile"; "offline_access" |]
               response_types_supported = [| "code" |]
               grant_types_supported = [| "authorization_code"; "refresh_token" |]
               code_challenge_methods_supported = [| "S256" |] |}

        ctx.Response.StatusCode <- 200
        do! ctx.Response.WriteAsJsonAsync(response) |> Async.AwaitTask
    }
Claude now thinks our server is the authorization server itself — all OAuth requests will come to us.
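Stripped of the ASP.NET plumbing, that metadata document is just a dictionary with the standard RFC 8414 field names. A small, hypothetical sketch (the function and its placeholder arguments are invented; the field names mirror the F# handler above) makes the shape easy to eyeball and test:

```python
def auth_server_metadata(tenant_id: str, base_url: str, kusto_cluster: str) -> dict:
    """RFC 8414-style authorization-server metadata that points the OAuth
    client back at the MCP server's own /oauth/* endpoints."""
    return {
        "issuer": f"https://login.microsoftonline.com/{tenant_id}/v2.0",
        "authorization_endpoint": f"{base_url}/oauth/authorize",
        "token_endpoint": f"{base_url}/oauth/token",
        "registration_endpoint": f"{base_url}/oauth/register",
        # First scope targets the Kusto cluster, not the MCP server itself.
        "scopes_supported": [f"{kusto_cluster}/.default",
                             "openid", "profile", "offline_access"],
        "response_types_supported": ["code"],
        "grant_types_supported": ["authorization_code", "refresh_token"],
        "code_challenge_methods_supported": ["S256"],
    }

# Placeholder values, not real endpoints.
meta = auth_server_metadata("tenant-123", "https://mcp.example.com",
                            "https://mycluster.kusto.windows.net")
```

The important detail is that every endpoint URL is built from the MCP server's own base URL, which is what makes Claude route all subsequent OAuth traffic through the proxy.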
Mock DCR: "Sure, here's your client_id"
Claude calls POST /oauth/register expecting DCR. The Kusto MCP Server accepts its request, echoes back its redirect_uris (per RFC 7591), and returns the pre-registered Entra ID client_id along with a dummy client_secret that Claude will use later — but which will be thrown away:
let register (logInfo: string -> unit) (ctx: HttpContext) =
    async {
        let! body = ctx.Request.ReadFromJsonAsync<JsonElement>().AsTask() |> Async.AwaitTask

        let redirectUris =
            match body.TryGetProperty("redirect_uris") with
            | true, uris -> uris.EnumerateArray() |> Seq.map (fun u -> u.GetString()) |> Array.ofSeq
            | _ -> [||]

        let dummySecret = Guid.NewGuid().ToString("N")

        let response =
            {| client_id = EnvVars.OAuth.clientId
               client_secret = dummySecret
               grant_types = [| "authorization_code"; "refresh_token" |]
               redirect_uris = redirectUris |}

        ctx.Response.StatusCode <- 201
        do! ctx.Response.WriteAsJsonAsync(response) |> Async.AwaitTask
    }
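The essence of that mock DCR exchange fits in a few lines of any language. A hypothetical Python sketch (function name, callback URL, and client id are all placeholders): per RFC 7591 the response echoes the client's redirect_uris, while the client_id is the pre-registered Entra ID one and the secret is a throwaway:

```python
import uuid

def mock_register(request_body: dict, real_client_id: str) -> dict:
    """Mock RFC 7591 Dynamic Client Registration: echo the redirect_uris,
    hand back the pre-registered client_id and a dummy secret."""
    return {
        "client_id": real_client_id,
        "client_secret": uuid.uuid4().hex,  # dummy; never forwarded to Entra ID
        "grant_types": ["authorization_code", "refresh_token"],
        "redirect_uris": request_body.get("redirect_uris", []),
    }

# Placeholder callback URL and client id for illustration.
resp = mock_register({"redirect_uris": ["https://client.example/callback"]},
                     "real-client-id")
```

Claude happily stores the dummy secret; the proxy's token endpoint will later discard it and substitute the real one.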
Scope Rewriting: "You think you're authenticating to me, but you're actually authenticating to Kusto"
When Claude calls GET /oauth/authorize, the call is intercepted and the scope parameter gets rewritten before redirecting to Entra ID. Claude asked for a scope targeting our MCP server, but that is replaced with the Kusto cluster scope. This is the key trick — the access token that comes back will have the Kusto cluster as its audience, not the MCP server:
let authorize (logInfo: string -> unit) (ctx: HttpContext) =
    async {
        let query = ctx.Request.Query

        // The magic: rewrite scope to target Kusto
        let scope =
            sprintf "%s/.default openid profile offline_access" EnvVars.ADX.connectionString

        let entraAuthorizeUrl =
            sprintf "https://login.microsoftonline.com/%s/oauth2/v2.0/authorize" EnvVars.tenantId

        // Pass through PKCE (code_challenge), state, and other params
        let queryParts =
            [ "client_id", EnvVars.OAuth.clientId
              "response_type", "code"
              "redirect_uri", query.["redirect_uri"].ToString()
              "scope", scope
              "state", query.["state"].ToString()
              "response_mode", "query" ]
            @ (if String.IsNullOrEmpty(query.["code_challenge"].ToString()) then
                   []
               else
                   [ "code_challenge", query.["code_challenge"].ToString()
                     "code_challenge_method", "S256" ])
            |> List.map (fun (k, v) -> sprintf "%s=%s" (Uri.EscapeDataString k) (Uri.EscapeDataString v))
            |> String.concat "&"

        ctx.Response.Redirect(sprintf "%s?%s" entraAuthorizeUrl queryParts)
    }
The user now sees the familiar Microsoft login screen. After authenticating, Entra ID redirects back to Claude's callback with an authorization code.
Credential Injection: "Let me fix that secret for you"
Claude exchanges the auth code by calling POST /oauth/token — but it sends the dummy client_secret from the mock DCR. The MCP Server strips Claude's credentials, injects the real ones from our Entra ID app registration, and forwards the request to Entra ID:
let token
    (logInfo: string -> unit)
    (callOverHttp: string -> IDictionary<string, string> -> Async<int * string>)
    (ctx: HttpContext) =
    async {
        let! form = ctx.Request.ReadFormAsync() |> Async.AwaitTask

        let entraTokenUrl =
            sprintf "https://login.microsoftonline.com/%s/oauth2/v2.0/token" EnvVars.tenantId

        let formData = Dictionary<string, string>() :> IDictionary<string, string>

        // Copy everything EXCEPT client credentials
        for kvp in form do
            if kvp.Key <> "client_secret" && kvp.Key <> "client_id" then
                formData.[kvp.Key] <- kvp.Value.ToString()

        // Inject real credentials
        formData.["client_id"] <- EnvVars.OAuth.clientId
        formData.["client_secret"] <- EnvVars.OAuth.clientSecret

        let! statusCode, responseBody = callOverHttp entraTokenUrl formData

        ctx.Response.StatusCode <- statusCode
        ctx.Response.ContentType <- "application/json"
        do! ctx.Response.WriteAsync(responseBody) |> Async.AwaitTask
    }
Entra ID returns an access_token scoped to the Kusto cluster. Claude stores it and sends it as a Bearer token on every subsequent MCP request. Our MCP server then passes that same token directly to Azure Data Explorer — it never sees or stores the user's credentials.
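The injection step itself is a copy-and-overwrite over the form fields. A hypothetical sketch (function name and sample values invented; the field names are the standard OAuth token-endpoint parameters used above):

```python
def inject_credentials(client_form: dict, real_client_id: str,
                       real_client_secret: str) -> dict:
    """Drop the client's (dummy) credentials from the token request and
    substitute the real app registration's before forwarding upstream."""
    forwarded = {k: v for k, v in client_form.items()
                 if k not in ("client_id", "client_secret")}
    forwarded["client_id"] = real_client_id
    forwarded["client_secret"] = real_client_secret
    return forwarded

# Sample form with the dummy secret from the mock DCR step.
out = inject_credentials(
    {"grant_type": "authorization_code", "code": "AUTHCODE",
     "client_id": "whatever", "client_secret": "dummy", "code_verifier": "v"},
    "real-id", "real-secret")
```

Grant type, auth code, and PKCE verifier pass through unchanged, so the upstream identity provider sees a perfectly ordinary confidential-client token request.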
Per-User Token Passthrough
Some tutorials and examples for MCP servers that connect to databases use a shared service principal — a single identity with broad access that executes all queries on behalf of all users. This is the easy path, but it has serious drawbacks:
- every user sees the same data regardless of their actual permissions,
- audit logs show a generic service account instead of the real person, and
- a bug or prompt injection in the LLM could expose data the user should never have access to.
Here a different approach is taken. The user's own OAuth token — the one they obtained by logging into Microsoft Entra ID through the OAuth proxy flow described above — is passed directly to Azure Data Explorer:
let executeKustoQuery
    (logError: string -> exn -> unit)
    (serviceUrl: string)
    (getBearerToken: unit -> string)
    (query: string)
    : Async<string> =
    async {
        match QueryValidation.validate query with
        | Error msg -> return sprintf "Query rejected: %s" msg
        | Ok() ->
            try
                let userToken = getBearerToken ()

                let csb =
                    KustoConnectionStringBuilder(serviceUrl)
                        .WithAadUserTokenAuthentication(userToken)

                use client = KustoClientFactory.CreateCslQueryProvider(csb)
                let! results = executeQuery<JsonObject> client (TimeSpan.FromSeconds 30L) query []
                return JsonSerializer.Serialize(results)
            with ex ->
                logError "Error executing Kusto query" ex
                return "Error executing query"
    }
The critical line is .WithAadUserTokenAuthentication(userToken). This is not a shared service principal — it is the actual user's token. Azure Data Explorer sees the real caller. This means:
- Database roles apply per-user. If a user only has viewer access to the Sales database, they cannot query Engineering. This is enforced directly by Kusto.
- Row-level security policies work. If the company has RLS policies that restrict sales reps to seeing only their own region's data, those policies apply. The LLM cannot bypass them because the token identifies the actual user.
- Audit logs show who really queried. When compliance or security wants to know who ran a particular query, the answer is "Alice from Finance" — not "mcp-server-service-principal."
The trade-off is that tokens expire (typically after one hour), so Claude needs to handle token refresh. The OAuth proxy handles refresh_token grants through the same credential-injection pattern as the initial token exchange, so this is transparent to the user.
The Full Flow
Here is the complete OAuth dance visualized:
sequenceDiagram
    participant C as Claude Desktop
    participant M as MCP Server
    participant E as Entra ID
    participant K as Kusto

    C->>M: POST /mcp
    M-->>C: 401 (WWW-Authenticate → /.well-known/*)
    C->>M: GET /.well-known/oauth-*
    M-->>C: {endpoints point to self}
    C->>M: POST /oauth/register
    M-->>C: 201 {client_id, dummy secret}
    C->>M: GET /oauth/authorize
    M->>E: 302 (scope rewritten to kusto/.default)
    E-->>C: 302 + auth code (user logs in)
    C->>M: POST /oauth/token (dummy secret)
    M->>E: POST /token (real secret injected)
    E-->>M: access_token (scoped to Kusto)
    M-->>C: access_token
    C->>M: POST /mcp (Bearer token)
    M->>K: KQL query (user's own token)
    K-->>M: results
    M-->>C: query results
This flow is reusable for any scenario where you need Claude to authenticate against an identity provider that does not support DCR, while obtaining tokens scoped to a backend service different from the MCP server itself.
Security — Multi-Layer Defense
When an LLM generates database queries, you have to assume it will occasionally generate something you do not want executed. The solution is defense in depth — multiple independent validation layers, each catching different classes of problems.
Layer 1: Token Pre-Validation (Middleware)
Before any MCP tool sees the request, middleware validates the JWT token's structure, expiration, and issuer. This is not cryptographic verification — Azure Data Explorer does that later. This is an early-rejection layer that avoids wasting Kusto resources on obviously invalid tokens:
let validateBearerToken (expectedTenantId: string) (token: string) : Result<unit, string> =
    let parts = token.Split('.')

    if parts.Length <> 3 then
        Error "Malformed token"
    else
        let payloadBytes = Base64UrlEncoder.DecodeBytes(parts.[1])
        let payload = Encoding.UTF8.GetString(payloadBytes)
        use doc = JsonDocument.Parse(payload)
        let root = doc.RootElement
        // Check expiration (with 5-minute clock skew)
        // Check issuer contains expected tenant ID
        // → Ok() or Error "reason"
Expired token? Rejected at the door. Wrong tenant? Never reaches Kusto. No network call needed.
Layer 2: Whitelist-Based Query Validation
This is the most important layer for LLM-generated queries. Instead of blacklisting dangerous patterns (a losing game), the MCP Server whitelists exactly which KQL operators and plugins are allowed.
Only 44 safe, read-only tabular operators are permitted (where, project, summarize, join, extend, render, etc.). For evaluate plugins, only 17 safe ones are allowed — critically blocking python, r, sql_request, http_request, and anything else that could reach outside the Kusto cluster.
Several classes of queries are blocked outright:
let private blockedSourcePatterns =
    [ Regex(@"\bexternaldata\b", RegexOptions.IgnoreCase ||| RegexOptions.Compiled),
      "externaldata is not allowed"
      Regex(@"\bexternal_table\s*\(", RegexOptions.IgnoreCase ||| RegexOptions.Compiled),
      "external_table() is not allowed"
      Regex(@"\bcluster\s*\(", RegexOptions.IgnoreCase ||| RegexOptions.Compiled),
      "cross-cluster queries are not allowed"
      Regex(@"\bdatabase\s*\(", RegexOptions.IgnoreCase ||| RegexOptions.Compiled),
      "cross-database queries are not allowed" ]
And any statement starting with . (a Kusto management command) is rejected immediately. When a query is rejected, the tool returns the validation error directly to Claude as text — no query ever reaches Kusto. Claude then typically understands what went wrong and rewrites the query using only allowed operators.
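A condensed, hypothetical sketch of the whitelist idea shows the shape of such a validator. It uses a handful of operators instead of the full 44 and only two of the blocked patterns, and it naively splits pipe stages on `|` (the real validator would have to handle string literals and `||`):

```python
import re

# Small illustrative subset of the allowed read-only tabular operators.
ALLOWED_OPERATORS = {"where", "project", "summarize", "order", "take",
                     "extend", "join", "render"}
BLOCKED_PATTERNS = [
    (re.compile(r"\bexternaldata\b", re.IGNORECASE), "externaldata is not allowed"),
    (re.compile(r"\bcluster\s*\(", re.IGNORECASE), "cross-cluster queries are not allowed"),
]

def validate_query(query: str) -> tuple[bool, str]:
    """Whitelist validation: reject management commands, blocked patterns,
    and any pipe stage whose leading operator is not explicitly allowed."""
    q = query.strip()
    if q.startswith("."):  # Kusto management command
        return False, "management commands are not allowed"
    for pattern, reason in BLOCKED_PATTERNS:
        if pattern.search(q):
            return False, reason
    for stage in q.split("|")[1:]:  # first segment is the table source
        op = stage.strip().split()[0] if stage.strip() else ""
        if op not in ALLOWED_OPERATORS:
            return False, f"operator '{op}' is not allowed"
    return True, ""
```

Anything not explicitly on the list is rejected, so a brand-new dangerous operator is blocked by default rather than requiring a blacklist update.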
Layer 3: Azure Data Explorer Server-Side Enforcement
Even after passing both previous layers, Kusto enforces its own security: full cryptographic token validation, database role enforcement, row-level security policies, and audit logging with the real user's identity.
No single layer is sufficient on its own. The middleware catches bad tokens cheaply. The query validation catches dangerous queries before they leave our server. And Kusto enforces the actual data access permissions. A failure in any one layer is caught by the others.
Functional Dependency Injection
To the surprise of C# developers, for example, this F# project has no IoC container, no constructor injection, and no interfaces. Dependencies are wired through plain functions and partial application.
Each module in DependencyInjection.fs creates partially applied functions whose dependencies are "baked in" at startup. For example, the executeKustoQuery implementation accepts four parameters (log, getAuthorizationHeader, createAdxClient, and query), but the DI module partially applies the first three, leaving a function of type string -> Task<string>, exactly what the MCP framework needs:
module McpTools =
    let private getAuthorizationHeader = Framework.Http.getAuthorizationHeader accessor

    let private createAdxClient (userToken: string) : ICslQueryProvider =
        KustoConnectionStringBuilder(EnvVars.ADX.serviceUrl)
            .WithAadUserTokenAuthentication(userToken)
        |> KustoClientFactory.CreateCslQueryProvider

    let executeKustoQuery (query: string) : Task<string> =
        MCP.Api.Functions.McpTools.executeKustoQuery log getAuthorizationHeader createAdxClient query
        |> Async.StartAsTask
This is the functional equivalent of constructor injection, but without the ceremony. Testing benefits similarly: you call the underlying function with fake loggers, test URLs, and stub token providers without requiring mocking frameworks.
A notable detail: getAuthorizationHeader is passed as a function (unit -> string) rather than invoked eagerly. It captures HttpContextAccessor, reading the current request's Authorization header only at query execution time — after validation passes. Similarly, createAdxClient is a function (string -> ICslQueryProvider), not a pre-built client — each query gets a fresh client authenticated with the current user's token. This approach avoids threading infrastructure types through domain logic.
The same pattern scales to the OAuth handlers — each one partially applies config (tenant ID, client ID, secrets, scopes) and leaves just HttpContext -> Async<unit>.
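The same partial-application trick exists in most languages. In Python it would be functools.partial; the sketch below is entirely hypothetical (names and the stub dependencies are made up), but shows the same move of baking in the first three dependencies and leaving exactly the one-argument shape a framework would want:

```python
from functools import partial

def execute_kusto_query(log, get_bearer_token, create_client, query: str) -> str:
    """Full-dependency version: everything explicit, trivially testable."""
    token = get_bearer_token()       # deferred: read only at execution time
    client = create_client(token)    # fresh client per query, per user token
    return client(query)

# "DI" at startup: bake in the first three dependencies with stubs...
tool = partial(execute_kusto_query,
               lambda msg: None,                            # no-op logger
               lambda: "user-token",                        # stub token provider
               lambda tok: (lambda q: f"ran {q} as {tok}"))  # stub client factory

# ...leaving exactly the (query: str) -> str shape a framework would register.
result = tool("Transactions | take 1")
```

Swapping the stubs for fakes in a test is just calling `execute_kusto_query` directly with different arguments, with no mocking framework involved.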
Local Development with Cloudflare Tunnel
Remote MCP servers require public URLs for OAuth callbacks. During local development, https://localhost:5001 is not reachable from the internet. While ngrok was traditionally used, ngrok's free tier now shows a browser interstitial page that breaks OAuth redirects.
Cloudflare Tunnel offers a superior alternative: free, no signup required, and no interstitial blocking. The project includes two scripts that automate the setup.
cloudflared_tunnel.sh is a reusable helper that starts a tunnel and returns the public URL. It auto-installs cloudflared if needed (via Homebrew on macOS, direct download on Linux), handles HTTPS local targets with --no-tls-verify, and outputs the tunnel URL and PID on stdout for the caller to parse and clean up:
output=$(./cloudflared_tunnel.sh --url https://localhost:5001)
TUNNEL_URL=$(echo "$output" | head -1)
CLOUDFLARED_PID=$(echo "$output" | tail -1)
trap "kill $CLOUDFLARED_PID 2>/dev/null" EXIT
dotnet_run.sh orchestrates local startup. With the --mcp flag, it:
- Loads environment variables from launchSettings.json
- Reads the listen URL from the LocalDev launch profile
- Starts a Cloudflare Tunnel via cloudflared_tunnel.sh and exports MCP_BASE_URL with the tunnel address
- Launches the .NET server
./dotnet_run.sh --mcp true
# Loading environment variables from launchSettings.json...
# Starting cloudflared tunnel to https://localhost:5001...
# cloudflared tunnel: https://random-words.trycloudflare.com -> https://localhost:5001
# MCP endpoint: https://random-words.trycloudflare.com/mcp
Without --mcp, it simply loads the env vars and runs the server — useful when working on REST-only features.
The script prints the MCP endpoint URL to add as a connector in Claude Desktop:
MCP endpoint: https://random-words.trycloudflare.com/mcp
The server differentiates between listen address (MCP_LISTEN_URL on localhost, defaulting to https://localhost:5001) and public address (MCP_BASE_URL via tunnel, defaulting to MCP_LISTEN_URL if unset). OAuth endpoints advertise the tunnel URL in .well-known metadata while actually listening locally. Claude communicates with the tunnel URL, which routes to localhost, which redirects to Entra ID, completing the OAuth flow.
Tunnel URLs regenerate on restart — this is acceptable since Claude rediscovers endpoints via .well-known on each connection.
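The listen/public split described above reduces to a two-step fallback over two environment variables. A hypothetical sketch (function name invented; the variable names and the default are the ones from the text):

```python
def resolve_urls(env: dict) -> tuple[str, str]:
    """listen_url: where the server binds locally; base_url: what OAuth
    metadata advertises. MCP_BASE_URL (the tunnel) falls back to
    MCP_LISTEN_URL when unset."""
    listen_url = env.get("MCP_LISTEN_URL", "https://localhost:5001")
    base_url = env.get("MCP_BASE_URL", listen_url)
    return listen_url, base_url

# With a tunnel exported, the advertised URL differs from the bind address.
listen, base = resolve_urls({"MCP_BASE_URL": "https://random-words.trycloudflare.com"})
```

In production the two values coincide, so the same resolution logic serves both local development and deployment.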
Conclusion
What started as a simple idea — "let business users ask questions about their data in natural language" — turned into an interesting exercise in protocol bridging, security layering, and architectural decisions.
The key takeaways:
MCP is just another protocol. It sits alongside REST in the same microservice, shares the same middleware and authentication, and does not deserve its own deployment boundary. In a CQRS+ architecture, MCP tools belong in the microservice that already handles the relevant queries or commands.
The OAuth proxy pattern is reusable. If you need Claude (or any RFC-compliant OAuth client) to authenticate against Microsoft Entra ID — or any identity provider that does not support Dynamic Client Registration — the proxy approach of mock DCR + scope rewriting + credential injection works generically. Swap out the Kusto scope for a Microsoft Graph scope, and you have a different integration with the same plumbing.
Per-user token passthrough is worth the effort. Passing the user's own token to the database instead of using a shared service principal means that existing access controls, row-level security, and audit logging work without any additional code. The database already knows how to enforce permissions — let it.
Whitelist, do not blacklist, LLM-generated queries. When an LLM is writing your database queries, you cannot anticipate every dangerous pattern it might produce. Whitelisting allowed operators is a safer default — anything not explicitly permitted is rejected.
F# is a natural fit for this kind of work. Pipeline-style KQL queries, pipeline-style F# code, partial application for dependency injection, discriminated unions for validation results — the language aligns well with both the problem domain and the implementation patterns.
The complete source code is available on GitHub.
If you are building something similar for a different database or identity provider, the OAuth proxy and query validation patterns should translate directly — only the query language and connection builder change.