Gursharan Singh
MCP in Practice — Part 9: From Concepts to a Hands-On Example

Part 9 of the MCP in Practice Series · Back: Part 8 — Your MCP Server Is Authenticated. It Is Not Safe Yet.

In Part 5, you built a working MCP server. Three tools, two resources, two prompts, and one local client — all connected over stdio. The protocol worked. The order assistant answered questions, looked up orders, and cancelled one.

Then Parts 6 through 8 explained what changes when that server leaves your machine: production deployment, transport decisions, auth, and the security risks that come with the protocol itself. Those were concept articles. They explained the thinking. They did not change the code.

This part closes the gap. We take the same TechNova order assistant and move it from stdio to Streamable HTTP. Same tools. Same business logic. Same protocol messages. Different transport, different deployment model, and a different set of concerns around it.

This is not Part 5 again. It is the transition that Parts 6–8 prepared you for.

Why This Part Exists

Part 5 gave you a working local server. Parts 6 through 8 explained what changes in production. This final part brings those two sides together.

Part 9 fills that gap with one focused example. It is not trying to build a production-ready deployment. It is trying to show the transition clearly enough that a developer who has followed the series can see exactly what changes and what stays the same.

If Part 5 was "build it locally," this part is "now run it as a service."

The Same Example, a Different Deployment Model

Left: Part 5 — host launches server as a child process on the same machine. Right: Part 9 — server runs independently, clients connect over HTTP.

The TechNova order assistant is the same. The same three tools: get_order_status, get_order_items, cancel_order. The same two resources: order by ID and recent orders summary. The same two prompts. The same seeded order data. The same business workflow.

What changes is how the server runs and how clients reach it. In Part 5, the host launched the server as a child process. Communication happened over stdin and stdout. Trust was inherited from the local machine. No network was involved.

In this part, the server runs as an independent HTTP service. It listens on a port. Clients connect to it over the network — or, for this walkthrough, over localhost. The MCP messages are identical. The deployment model is completely different.

What Changes When You Move from stdio to Streamable HTTP

The protocol does not change. The same JSON-RPC messages flow between client and server. The same initialize → list → call sequence happens. The server still exposes tools, resources, and prompts. The client still discovers them and invokes them.
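That transport-independence is easy to see at the wire level. Here is a minimal sketch of the JSON-RPC envelope for a tools/call request (the field names follow the JSON-RPC 2.0 and MCP specifications; the helper function and the order ID from Part 5's sample data are used for illustration):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for MCP's tools/call method.

    The same bytes are valid over stdio and over Streamable HTTP --
    only the channel that carries them differs.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The identical message a stdio client would write to stdin
# and an HTTP client would POST to the /mcp endpoint:
msg = make_tool_call(1, "get_order_status", {"order_id": "ORD-10042"})
```

In Part 5, this message traveled over stdin; in this part, it travels in the body of an HTTP POST. Nothing inside it changes.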

What changes is everything around the protocol. In stdio, the host controlled the server's lifecycle — it started the process and stopped it. With Streamable HTTP, the server is already running. The client does not launch it; the client connects to it.

That single shift — from launching a process to connecting to a service — is why Parts 6 through 8 exist. Once the server is an independent service, you need to think about who can connect, how they prove identity, what each caller is allowed to do, and whether the server's tool descriptions can be trusted.

For this walkthrough, we skip auth and security. We are running on localhost. The goal is to see the transport transition clearly, without production concerns clouding the picture. Parts 6–8 already covered what you would add next.

The Server Side

The Part 5 server (server.py) ended with one line that chose the transport. The Part 9 server (server_http.py) changes that single line:

```python
# Part 5 — stdio (local process)
app.run(transport="stdio")
```

```python
# Part 9 — Streamable HTTP (independent service)
app.run(transport="streamable-http")
```

The server now runs as an HTTP service at http://127.0.0.1:8000/mcp — the default endpoint for this example. When a client sends a POST request to that endpoint with a JSON-RPC message, the server processes it and returns the response.
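To make the HTTP framing concrete, here is a sketch of how such a POST is shaped. The SDK builds this request for you; the sketch only shows what goes over the wire. Per the Streamable HTTP transport spec, the client must advertise that it accepts both a plain JSON response and an SSE stream, hence the dual Accept header:

```python
import json
import urllib.request

MCP_ENDPOINT = "http://127.0.0.1:8000/mcp"  # default endpoint in this example

def build_mcp_post(body: dict) -> urllib.request.Request:
    """Frame a JSON-RPC message as a Streamable HTTP POST request.

    The request is constructed but not sent; the SDK normally
    handles this framing (plus session management) internally.
    """
    return urllib.request.Request(
        MCP_ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )

req = build_mcp_post({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
```

The server may also issue a session identifier in a response header that subsequent requests echo back; the SDK client handles that detail transparently.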

Everything above that line stays the same. The tool definitions, the resource handlers, the prompt templates, the data helpers — none of that changes. The server's business logic does not know or care which transport is carrying its messages.

That is the whole point of MCP's transport abstraction. You write your tools once. The transport is a deployment decision, not a code decision. Part 7 explained this conceptually. Here you see it in practice: one line changed, and the server is now a network service instead of a child process.
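One way to make that deployment decision explicit is to select the transport from configuration instead of hard-coding it, which would collapse server.py and server_http.py into a single file. A small sketch, assuming an environment variable named MCP_TRANSPORT (the variable name is invented for illustration; app stands for the FastMCP instance from the server):

```python
import os

VALID_TRANSPORTS = {"stdio", "streamable-http"}

def pick_transport(default: str = "stdio") -> str:
    """Read the transport from the environment so the same server code
    can run as a local child process or as an independent HTTP service."""
    transport = os.environ.get("MCP_TRANSPORT", default)
    if transport not in VALID_TRANSPORTS:
        raise ValueError(f"unknown transport: {transport!r}")
    return transport

# The single line at the bottom of the server would then become:
#     app.run(transport=pick_transport())
```

The tool definitions above that line never learn which value was chosen.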

Running and Testing It Locally

Open two terminals. In the first, start the server:

```shell
bash run_server.sh
```

On first run, the script creates a virtual environment, installs dependencies, seeds the order data, and starts the Streamable HTTP server. You should see: "Endpoint: http://127.0.0.1:8000/mcp" — the server is now listening.

If you want to validate the endpoint with MCP Inspector before running the client, the GitHub README includes a short Inspector walkthrough and an example of what a successful connection looks like.

The Client Side

In Part 5, client.py launched the server as a subprocess and communicated over stdio. The connection was implicit — stdin and stdout were the channel.

In Part 9, client_http.py connects to a URL instead. Where Part 5 imported stdio_client, the new client imports streamablehttp_client from the MCP SDK and points it at http://127.0.0.1:8000/mcp. The connection is explicit: you tell the client where to find the server.

In the second terminal, run:

```shell
bash run_client.sh
```

Once connected, the client's code is nearly identical to Part 5. It calls session.initialize(), then session.list_tools(), then session.call_tool() — the same sequence, the same methods, the same results. The only difference is how the session was established.

That is the transition in one sentence: the client stops launching a process and starts connecting to a service.

One Focused End-to-End Walkthrough

Same tools, same protocol, same business workflow. Different transport, different deployment, different operational concerns.

Here is one complete workflow that runs through the full MCP cycle over Streamable HTTP, using the same order data from Part 5. This is exactly what client_http.py does when you run it against the server.

Step 1: The client connects to http://127.0.0.1:8000/mcp and initializes the MCP session. The server responds with its capabilities — the same tools, resources, and prompts the stdio version exposed.

Step 2: The client discovers the server's tools. It sees get_order_status, get_order_items, and cancel_order — exactly as before.

Step 3: The client calls get_order_status for order ORD-10042. The server reads the local order data and returns the status, carrier, and delivery estimate. The JSON-RPC exchange is identical to Part 5 — only the transport layer underneath has changed.

Step 4: The client calls get_order_items for the same order to see what is in it.

Step 5: The client calls cancel_order for order ORD-10099. The server marks the order as cancelled and returns confirmation.

Step 6: The client calls get_order_status for ORD-10099 again to confirm the cancellation took effect.

Every step in this walkthrough would produce the same result over stdio. The difference is that the server was already running, the client connected to it over HTTP, and no subprocess was involved. That is the entire transition.

If you compare this with Part 5, the business workflow is identical. What changed is not the order assistant — it is how the client reaches it.
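The six steps above can be sketched roughly as follows, assuming the official MCP Python SDK. The exact import paths can differ between SDK releases, and the order_id parameter name is taken on trust from Part 5's tool signatures; the SDK imports are deferred into the function so the sketch can be read (and loaded) without the package installed:

```python
import asyncio

MCP_URL = "http://127.0.0.1:8000/mcp"

async def run_walkthrough() -> None:
    # Deferred SDK imports: this sketch assumes the official MCP Python
    # SDK; exact module paths may vary between releases.
    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    # Step 1: connect to the already-running server -- no subprocess.
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 2: discover the same three tools the stdio server exposed.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Steps 3-4: read one order's status, then its items.
            await session.call_tool("get_order_status", {"order_id": "ORD-10042"})
            await session.call_tool("get_order_items", {"order_id": "ORD-10042"})

            # Steps 5-6: cancel another order, then re-check its status.
            await session.call_tool("cancel_order", {"order_id": "ORD-10099"})
            await session.call_tool("get_order_status", {"order_id": "ORD-10099"})

# Run only with the server from run_server.sh already listening:
#     asyncio.run(run_walkthrough())
```

Swap the connection block for stdio_client and a subprocess launch, and everything inside the session would run unchanged.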

What This Still Does Not Solve

Moving from stdio to Streamable HTTP is a real step forward. The server is now an independent service that multiple clients can reach. But running over HTTP on localhost is not the same as being production-ready.

For a real deployment, you would add TLS to encrypt the connection. You would add authentication so the server knows who is calling. You would add authorization so each caller only accesses the tools they should. You would separate the server's backend credentials from the client's token. And you would review tool descriptions and monitor for changes, because the security risks from Part 8 apply the moment your server is reachable over a network.
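As one concrete illustration of the first item, TLS is commonly terminated in a reverse proxy in front of the service rather than inside it. A hedged nginx sketch, where the hostname and certificate paths are placeholders and buffering is disabled because Streamable HTTP responses may arrive as a long-lived SSE stream:

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;                  # placeholder hostname

    ssl_certificate     /etc/ssl/certs/mcp.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/mcp.key;

    location /mcp {
        proxy_pass http://127.0.0.1:8000;         # the server from this part
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_buffering off;                      # needed for SSE streaming
        proxy_read_timeout 3600s;                 # tolerate long-lived streams
    }
}
```

Authentication and authorization would layer on top of this, as Part 8 discussed.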

This walkthrough deliberately skips those layers to keep the transport transition clear. Parts 6 through 8 already explained each one. The goal here was not to build a production system — it was to show the transition that makes those production concerns real.

Three Takeaways

First, the protocol stayed the same. The same JSON-RPC messages, the same initialize → list → call sequence, the same tools and resources. Moving from stdio to Streamable HTTP did not change a single tool definition.

Second, the deployment changed everything around it. The server went from a child process to an independent service. The client went from launching a process to connecting to an endpoint. That shift is why transport, auth, and security needed their own articles.

Third, this is where the series comes together. Part 5 gave you the local build. Parts 6 through 8 gave you the production thinking. This part showed the transition between them. The protocol is the easy part. The deployment decisions are where the real engineering happens.

The Part 9 repo on GitHub includes server_http.py, client_http.py, the original Part 5 files, and a README with complete local setup instructions.

With this final hands-on example, the MCP in Practice series comes full circle. The full series — from fundamentals through production — is available on the series hub page.

If this series helped you understand MCP, or if there is a topic you would like covered next, I would love to hear it in the comments.
