In the previous post, we tunneled a local MCP server with ngrok to expose internal services externally (for testing and integration, demo access, branch-office access, and other scenarios). Now let’s do the same for a Kubernetes-hosted workload managed by ToolHive. This mirrors a common production setup, where MCP servers run inside Kubernetes clusters; with ToolHive and ngrok, we can keep the approach simple. Once you’ve got ToolHive and ngrok up and running, just follow the steps below:
1. Deploy ToolHive to your cluster, then the fetch MCP server
Follow the ToolHive Kubernetes Operator quickstart to install the operator and deploy an MCP server in your cluster (I’m using the fetch server here). The operator turns MCP servers into first-class Kubernetes resources you can manage declaratively:
kubectl apply -f https://raw.githubusercontent.com/stacklok/toolhive/refs/heads/main/examples/operator/mcp-servers/mcpserver_fetch.yaml
After applying your manifests/CRs, you’ll see Services like:
kubectl get service -n toolhive-system
# NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
# mcp-fetch-headless   ClusterIP   None            <none>        8080/TCP   12m
# mcp-fetch-proxy      ClusterIP   10.96.166.106   <none>        8080/TCP   12m
These are ClusterIP Services, which are intentionally in-cluster only (no host access yet). We’ll bridge them to the host next.
2. Port-forward the Service to your laptop
Use kubectl port-forward to map the Service’s port to localhost:8080 so you can reach it from your machine:
kubectl -n toolhive-system port-forward svc/mcp-fetch-proxy 8080:8080
Now http://127.0.0.1:8080 forwards to the in-cluster Service.
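If you want to verify the bridge before going further, a quick sanity check is to confirm something is listening on the forwarded port. Here’s a small Python sketch (the host and port match the port-forward command above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Should print True while the kubectl port-forward from step 2 is running
print(port_open("127.0.0.1", 8080))
```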
3. Add a simple ngrok Traffic Policy (HTTP Basic Auth)
Before we open this to the internet, let’s require a username/password via ngrok Traffic Policy. Save a policy file like /tmp/policy.yaml:
on_http_request:
  - actions:
      - type: basic-auth
        config:
          credentials:
            - stacklok:p4ssw0rd
ngrok’s Basic Auth policy validates the Authorization: Basic … header, returning 200 OK when credentials match and 401 Unauthorized otherwise.
Tip: echo -n 'stacklok:p4ssw0rd' | base64 helps you generate the header value locally.
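The same header value can also be produced programmatically. Here’s a minimal Python sketch using the credentials from the policy file above:

```python
import base64

# Credentials from the Traffic Policy above
creds = "stacklok:p4ssw0rd"

# Base64-encode them the way HTTP Basic Auth expects
token = base64.b64encode(creds.encode()).decode()
header = f"Basic {token}"
print(header)  # Basic c3RhY2tsb2s6cDRzc3cwcmQ=
```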
4. Launch the tunnel with ToolHive’s proxy
With the Service forwarded to 127.0.0.1:8080, start a ToolHive tunnel pointing at that local address, telling ToolHive to use ngrok and your policy file:
thv proxy tunnel http://127.0.0.1:8080 test \
--tunnel-provider ngrok \
--provider-args '{"auth-token":"${NGROK_TOKEN}","traffic-policy-file":"/tmp/policy.yaml"}'
ToolHive will bring up an ngrok HTTPS endpoint and print the public URL for the fetch MCP server — something like:
"fetch": {
"url": "https://bf18062fef8a.ngrok-free.app/mcp",
"description": "Fetch MCP server for testing",
"headers": {
"Authorization": "Basic c3RhY2tsb2s6cDRzc3cwcmQ="
},
"type": "http"
}
Send requests with the Authorization header and you’ll get through; omit it and you’ll see a 401 by design.
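For a scripted check, here’s a Python sketch that builds an authorized request; the URL is the example tunnel address from above, so substitute the one ToolHive prints for you:

```python
import base64
import urllib.request

def mcp_request(url: str, user: str, password: str) -> urllib.request.Request:
    """Build a request carrying the Basic Auth header the Traffic Policy expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# Example tunnel URL from above -- replace with your own
req = mcp_request("https://bf18062fef8a.ngrok-free.app/mcp", "stacklok", "p4ssw0rd")
print(req.get_header("Authorization"))  # Basic c3RhY2tsb2s6cDRzc3cwcmQ=
# urllib.request.urlopen(req) now passes ngrok's edge check;
# the same request without the header gets a 401 from ngrok.
```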
Summarizing the benefits of this approach
- Kubernetes-native management: ToolHive’s operator defines and manages MCP servers as Kubernetes resources, which is great for multi-user and production workflows.
- Safe local bridge: kubectl port-forward exposes the internal Service to your host without changing cluster networking.
- Hardened public edge: ngrok’s Traffic Policy adds Basic Auth at the edge so your tunnel isn’t wide open during tests/demos.
With these few steps, you’ve taken a Kubernetes-hosted MCP server, bridged it to your localhost safely, and published it behind a secure, temporary ngrok URL. This is perfect for quick external tests, demos, or sharing an endpoint without touching production.
We’re excited about the integration of ToolHive and ngrok and how it quickly and elegantly solves a problem that more enterprises will encounter as they adopt MCP. If you have questions or ideas, we’d love to hear from you. Please check out ToolHive and ngrok, and connect with us on Discord.