One URL for any agent runtime: the hosted Hashlock MCP endpoint
When the Model Context Protocol spec landed, the canonical transport was stdio: an agent spawns the MCP server as a child process and talks to it over standard input and output. That choice was reasonable. Stdio is fast, has no network surface, and on a developer's laptop it's the path of least resistance.
It is also a poor fit for almost every production deployment of an agent.
Hashlock Markets ships the same six MCP tools over both transports. The npm package, hashlock-tech/mcp (scoped on npmjs), is the stdio path. The other path is a hosted endpoint at https://hashlock.markets/mcp, served as Streamable HTTP MCP. This post is about the second one — what it is, why it matters, and when to reach for it instead of stdio.
The protocol home page is at https://hashlock.markets and the canonical repository is at https://github.com/Hashlock-Tech/hashlock-mcp.
Why a hosted endpoint matters
The short version: most agent runtimes that run anywhere other than a developer laptop have a hard time spawning local subprocesses, and a really hard time keeping one alive across requests.
Browser extensions can't spawn subprocesses at all. Claude in Chrome, for example, talks to MCP servers over HTTP, not stdio. Serverless platforms — Vercel functions, Cloudflare Workers, Lambda — either can't spawn subprocesses at all or spin up a fresh container per invocation, which means a stdio server would have to be cold-booted on every tool call. That's not workable for a tool surface that needs auth state, in-flight RFQs, and live HTLC subscriptions.
Hosted orchestration platforms — the kind of system a team builds to run agents at scale — usually centralize MCP routing through an HTTP layer for the same reasons they centralize everything else: load balancing, observability, secret management, and not having one fork-bomb of an agent take down the host.
For all of those, "spawn npx hashlock-tech/mcp and pipe its stdout" is the wrong question. The right question is "what URL do I point my MCP client at?" The answer is https://hashlock.markets/mcp.
What the endpoint is, technically
It's a Streamable HTTP MCP server. The transport is the standard one defined in the MCP specification: long-lived HTTP requests carrying JSON-RPC messages, with server-sent events for streaming responses back. Any MCP client that implements Streamable HTTP — which is most of them at this point — can connect.
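To make the transport concrete, here is a sketch of the wire format a Streamable HTTP MCP client produces: one HTTP POST per JSON-RPC message, with an `Accept` header that allows the server to reply with either a plain JSON body or an SSE stream. The client name, version, and protocol-revision date below are illustrative, not Hashlock specifics; check your client SDK for the revision it targets.

```typescript
// A JSON-RPC 2.0 "initialize" message, the first thing any MCP client sends.
const initialize = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // the MCP spec revision that defined Streamable HTTP
    capabilities: {},              // what this client supports (empty for the sketch)
    clientInfo: { name: "example-agent", version: "0.1.0" }, // placeholder identity
  },
};

// The HTTP request a Streamable HTTP client would issue (constructed here,
// not actually sent, so the sketch stays self-contained):
const request = {
  method: "POST",
  url: "https://hashlock.markets/mcp",
  headers: {
    "Content-Type": "application/json",
    // Clients advertise that they can consume either a JSON reply
    // or a server-sent-event stream for the response.
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify(initialize),
};

console.log(JSON.parse(request.body).method); // "initialize"
```

In practice an MCP client library builds and sends this for you; the point is that it is ordinary HTTP, which is exactly why serverless platforms and browser extensions can speak it.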
The server exposes exactly the same six tools as the npm package:
- create_rfq — declare a trading intent and open the auction window
- respond_rfq — used by market makers to submit sealed bids
- create_htlc — lock funds on-chain against a hash
- get_htlc — read HTLC state across the supported chains
- withdraw_htlc — claim with the preimage
- refund_htlc — reclaim if the counterparty doesn't complete in time
Same names, same shapes, same semantics. The MCP client doesn't know it's talking to a hosted server versus a local one — that's the whole point of MCP as a protocol.
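A tool call illustrates the point. The `tools/call` envelope below is the standard MCP shape and is byte-identical whether it travels over stdio or HTTP; only the transport underneath changes. The argument names (`sellAsset`, `buyAsset`, `amount`) are placeholders invented for this sketch, not the real `create_rfq` schema, which lives in the repo's tool reference.

```typescript
// A hypothetical tools/call message for create_rfq. The envelope shape
// (jsonrpc / method / params.name / params.arguments) is standard MCP;
// the argument names are illustrative placeholders only.
const callCreateRfq = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/call",
  params: {
    name: "create_rfq",
    arguments: {
      sellAsset: "ETH", // placeholder field name
      buyAsset: "BTC",  // placeholder field name
      amount: "1.5",    // placeholder field name
    },
  },
};

console.log(callCreateRfq.params.name); // "create_rfq"
```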
Authentication is SIWE — Sign-In with Ethereum — across both transports. The agent signs a message with the wallet you've delegated to it, the server verifies the signature, and a session is opened. There are no API keys, no bearer tokens to leak, no .env file to babysit.
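For readers who haven't used SIWE, the message the wallet signs follows the EIP-4361 format, which can be assembled by hand. Everything below is illustrative: the statement text, address, and nonce are placeholders, and in the real flow the server issues the nonce and verifies the signature before opening a session.

```typescript
// A minimal EIP-4361 (Sign-In with Ethereum) message, built field by field.
// All values are placeholders for illustration; the server supplies the nonce.
function buildSiweMessage(opts: {
  domain: string; address: string; statement: string; uri: string;
  chainId: number; nonce: string; issuedAt: string;
}): string {
  return [
    `${opts.domain} wants you to sign in with your Ethereum account:`,
    opts.address,
    "",
    opts.statement,
    "",
    `URI: ${opts.uri}`,
    "Version: 1",
    `Chain ID: ${opts.chainId}`,
    `Nonce: ${opts.nonce}`,
    `Issued At: ${opts.issuedAt}`,
  ].join("\n");
}

const msg = buildSiweMessage({
  domain: "hashlock.markets",
  address: "0x0000000000000000000000000000000000000001", // placeholder address
  statement: "Sign in to open an MCP session.",           // placeholder statement
  uri: "https://hashlock.markets/mcp",
  chainId: 1,
  nonce: "a1b2c3d4", // placeholder; a real nonce comes from the server
  issuedAt: new Date().toISOString(),
});

console.log(msg.split("\n")[0]);
// "hashlock.markets wants you to sign in with your Ethereum account:"
```

The agent signs this string with its delegated key and sends the signature back; no static secret ever sits in a config file, which is the property the post is pointing at.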
When to pick which transport
The decision is mostly about where the agent runs.
Reach for stdio (npx -y hashlock-tech/mcp configured as a local MCP server) when:
- You're developing or prototyping on a workstation. The local-process boot is fast and you don't pay any network round-trip.
- Your agent runtime is a desktop application that already manages MCP server processes (Claude desktop is the canonical example).
- You want zero network surface between the agent and the MCP server.
- You're running the agent in a long-lived host where one process per agent is fine.
Reach for https://hashlock.markets/mcp (Streamable HTTP) when:
- You're deploying to a hosted runtime, a serverless platform, or a browser extension. Anything that doesn't have a friendly relationship with subprocess spawning.
- Your agent is going to be called with high concurrency and you don't want to hold N stdio servers open per tenant.
- You want a single environment-independent URL — same one across local, staging, and production. The MCP client config doesn't change between environments.
- You're integrating with a hosted orchestrator or platform that exposes MCP servers as HTTP routes.
A non-trivial number of teams use both. Local development against stdio for fast iteration; production against the hosted URL because the runtime can't spawn subprocesses. The MCP client config is the only thing that changes between the two.
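As a rough sketch of what "only the config changes" means, here are the two shapes side by side, in the style used by Claude Desktop and similar MCP clients. The exact keys vary by client (some express remote servers with a `url` field, others with a dedicated remote-server section), so treat these as illustrative rather than canonical and check your client's documentation.

```json
{
  "mcpServers": {
    "hashlock": {
      "command": "npx",
      "args": ["-y", "hashlock-tech/mcp"]
    }
  }
}
```

And the hosted equivalent:

```json
{
  "mcpServers": {
    "hashlock": {
      "url": "https://hashlock.markets/mcp"
    }
  }
}
```

Nothing in the agent's own code references either block; the tool names and call sites stay identical.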
What it doesn't change
It's worth being explicit about a few things that are identical between transports because the question comes up.
The trading guarantees are the same. Sealed-bid RFQ means quotes stay private until the trader picks one. HTLC settlement means both legs of a cross-chain trade complete or both refund. The chain coverage is the same — Ethereum, Bitcoin, and Sui live, with Solana and Arbitrum on the roadmap. The auth model is the same — SIWE with a delegated wallet that the agent never holds long-lived secrets for.
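The HTLC invariant mentioned above is worth seeing in miniature: funds lock against a hash, and only the matching preimage can claim them. The sketch below demonstrates the primitive locally with SHA-256, which is an assumption for illustration; the actual hash function and on-chain mechanics are chain-specific and omitted here.

```typescript
import { createHash, randomBytes } from "node:crypto";

// The core hashlock invariant behind create_htlc / withdraw_htlc:
// a random 32-byte preimage, and a lock that is its hash.
const preimage = randomBytes(32);
const hashlock = createHash("sha256").update(preimage).digest("hex");

// A withdraw succeeds only if the revealed preimage hashes to the lock.
function canWithdraw(revealed: Buffer, lock: string): boolean {
  return createHash("sha256").update(revealed).digest("hex") === lock;
}

console.log(canWithdraw(preimage, hashlock));        // true
console.log(canWithdraw(randomBytes(32), hashlock)); // false: wrong preimage
```

Because revealing the preimage on one chain lets the counterparty claim on the other, both legs complete together, and timeouts (`refund_htlc`) cover the case where they don't.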
The set of tools is the same. The names are the same. The argument shapes are the same. If you've built an agent against the stdio version and you want to redeploy it against the hosted endpoint, the MCP server config is the only thing that changes. No business logic to rewrite, no second SDK to learn.
A small but important practical note
The hosted endpoint is a real production server, not a developer-mode demo. It's behind the same uptime monitoring, logging, and rate-limiting as the rest of the platform. A trading agent pointed at it can rely on it the way it would rely on any production API.
That said, if you're operating a high-volume agent and you want lower-latency local execution against the same protocol, stdio remains a perfectly good answer. The choice is yours, and it's not load-bearing — both transports are first-class.
Where to start
If you're building an MCP-capable agent and Hashlock Markets is somewhere on the integration list, here's the practical decision tree.
For a local development setup, configure your MCP client with npx -y hashlock-tech/mcp (the scoped package on npmjs) as a local server. Sign in via SIWE. Call create_rfq to test the connection.
For a production or hosted deployment, point your MCP client at https://hashlock.markets/mcp. Sign in via SIWE. Call create_rfq to test the connection.
The home page is https://hashlock.markets and the canonical repository — including the README, tool reference, and example configs — is at https://github.com/Hashlock-Tech/hashlock-mcp.
The point of the dual-transport design is that you should not have to rewrite your agent when you move it from a workstation to production. One URL when you need it, one local process when you don't, the same protocol underneath.