Gunnar Grosch for AWS

DEV Track Spotlight: Compile blazing-fast MCP servers in Rust (DEV405)

Model Context Protocol (MCP) servers have become essential tools for extending LLM capabilities, but most implementations come with a familiar pain point: dependency management. Python environments with uv, JavaScript projects requiring npm, Docker containers to orchestrate it all. What if you could compile your MCP server into a single standalone binary that runs anywhere, starts instantly, and requires zero dependency installation?

Darko Mesaros, Principal Developer Advocate at AWS and self-described Rust enthusiast, delivered an engaging code talk exploring exactly this possibility. His session combined live coding, real hardware demonstrations (including a network receipt printer), and practical insights from building production MCP servers in Rust.

Why Rust for MCP Servers?

Darko opened with a compelling performance comparison. His Rust-based MCP servers loaded in under a second, while a TypeScript equivalent took over four seconds. When you're spinning up coding assistants multiple times per day, those seconds add up.

But performance is just one advantage. The real value proposition centers on distribution and deployment:

Standalone binaries - No npm, pipx, or uvx required. Just point your MCP configuration at a compiled binary on your filesystem.

Zero runtime dependencies - The binary contains everything it needs. No virtual environments, no package managers, no version conflicts.

Type safety - Rust's type system catches errors at compile time, not when your LLM tries to invoke a tool with incorrect parameters.

Cross-platform compilation - Build once, run on Linux, macOS, or Windows without modification.

As Darko put it: "Since it's no longer hard to write Rust with a coding assistant, it's so easy. So I keep telling all my friends, stop writing Python."

The Rust MCP SDK

The session focused on the official Rust implementation of the Model Context Protocol, available as the rmcp crate. While relatively new (version 0.10 at the time of the session), it supports the full MCP specification and provides an elegant API for building servers.

The SDK consists of two main crates: rmcp for the core implementation and rmcp-macros for the procedural macro library. When adding the dependency to your Cargo.toml, you'll need to specify features like server and transport-io to enable the functionality you need.
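Assuming the 0.10-era crate mentioned in the session, the dependency section of Cargo.toml might look like the sketch below. The server and transport-io feature names come from the session; the tokio dependency is an assumption here, since rmcp's server side runs on an async runtime:

```toml
[dependencies]
# Core MCP implementation; "server" + "transport-io" enable a STDIO-based server
rmcp = { version = "0.10", features = ["server", "transport-io"] }
# Async runtime commonly paired with rmcp (assumed, not from the session)
tokio = { version = "1", features = ["full"] }
```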

The core architecture requires just a few key components:

Tool Router struct - Contains the routing logic for your MCP tools

Tool Router macro - Automatically generates routing code from annotated functions

Server Handler implementation - Handles MCP protocol requirements like tool listing and invocation

Tool functions - Individual functions annotated with the #[tool] attribute macro

Darko emphasized the beauty of Rust's macro system here. The #[tool_router] and #[tool_handler] macros generate all the boilerplate protocol handling code, letting developers focus on business logic.
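To get a feel for what that boilerplate looks like, here is a hypothetical, SDK-free sketch of the idea behind #[tool_router]: a struct holding a dispatch table from tool names to handler functions. The rmcp macros generate a far more capable version of this (plus schema generation and protocol plumbing) automatically; nothing below is rmcp's actual API.

```rust
use std::collections::HashMap;

// Stand-in for rmcp's CallToolResult: just a text payload or an error string.
type ToolResult = Result<String, String>;
type ToolFn = fn(&str) -> ToolResult;

struct ToolRouter {
    routes: HashMap<&'static str, ToolFn>,
}

impl ToolRouter {
    fn new() -> Self {
        let mut routes: HashMap<&'static str, ToolFn> = HashMap::new();
        // In rmcp, #[tool]-annotated methods are registered for you;
        // here we wire a single tool up by hand.
        routes.insert("echo", echo_tool);
        Self { routes }
    }

    // Dispatch a tool call by name, as the generated router does.
    fn call(&self, name: &str, args: &str) -> ToolResult {
        match self.routes.get(name) {
            Some(f) => f(args),
            None => Err(format!("unknown tool: {name}")),
        }
    }
}

fn echo_tool(args: &str) -> ToolResult {
    Ok(format!("echo: {args}"))
}

fn main() {
    let router = ToolRouter::new();
    println!("{:?}", router.call("echo", "hello"));
}
```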

Building a To-Do MCP Server (With Receipt Printing!)

The live coding demonstration built a complete MCP server that interacted with a local to-do API and controlled a network receipt printer. The progression showed three stages of complexity:

Stage 1: Basic Tool Without Parameters

The first tool simply fetched all to-dos from a JSON API. Key implementation details:

Using CallToolResult as the return type for all tool functions

Returning results as vectors of content (JSON or text)

Proper error handling with MCPError types

Separating business logic (API calls) from tool definitions
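One way to keep that last separation is to give the business logic no knowledge of MCP at all. The stdlib-only sketch below illustrates the shape (in the real server the tool layer would return rmcp's CallToolResult and MCPError; the URL and line-per-todo "parsing" are placeholders):

```rust
// Business logic: knows about the to-do API, nothing about MCP.
// A real implementation would perform an HTTP GET against the JSON API;
// here the transport is injected so the logic stays testable.
fn fetch_todos(http_get: impl Fn(&str) -> Result<String, String>) -> Result<Vec<String>, String> {
    let body = http_get("http://localhost:3000/todos")?;
    // Stand-in for JSON parsing: one to-do per line.
    Ok(body.lines().map(str::to_string).collect())
}

// Tool layer: knows about MCP result shapes, nothing about HTTP details.
fn list_todos_tool(http_get: impl Fn(&str) -> Result<String, String>) -> Result<String, String> {
    let todos = fetch_todos(http_get).map_err(|e| format!("tool error: {e}"))?;
    Ok(todos.join("\n"))
}

fn main() {
    let fake = |_url: &str| Ok("buy milk\nwrite rust".to_string());
    println!("{}", list_todos_tool(fake).unwrap());
}
```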

Stage 2: Tools With Parameters

The second stage added a tool to create new to-dos, introducing parameterized tool calls. This required:

Parameter structs - Type-safe parameter definitions using JSON Schema

Environment variables - Accessing configuration through standard Rust env vars

Structured responses - Returning formatted text instead of raw JSON to preserve LLM context
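In rmcp the JSON Schema for parameters is derived from a plain struct (via the schemars crate). The stdlib-only sketch below shows the contract conceptually, with the schema written by hand; the struct fields, env var name, and schema are all invented for illustration:

```rust
use std::env;

// In rmcp, this struct would derive Deserialize and schemars::JsonSchema,
// and the SDK would generate the schema below for you.
struct CreateTodoParams {
    title: String,
    done: bool,
}

// Hand-written stand-in for the derived JSON Schema.
fn create_todo_schema() -> String {
    r#"{"type":"object","properties":{"title":{"type":"string"},"done":{"type":"boolean"}},"required":["title"]}"#.to_string()
}

// Configuration via standard env vars, as in the session (name is hypothetical).
fn api_base() -> String {
    env::var("TODO_API_URL").unwrap_or_else(|_| "http://localhost:3000".to_string())
}

// Return formatted text, not raw JSON, to preserve LLM context.
fn format_created(p: &CreateTodoParams) -> String {
    format!("Created to-do '{}' (done: {}) via {}", p.title, p.done, api_base())
}

fn main() {
    let p = CreateTodoParams { title: "ship it".into(), done: false };
    println!("{}", format_created(&p));
    println!("{}", create_todo_schema());
}
```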

Darko highlighted an important best practice here: "Do not just blow up JSON back. Return the data that you need. Maybe a little bit of instructions with your data. So this helps the LLM at the other side get the correct stuff."

He shared an example from Microsoft where an internal MCP server returned kilobytes of unnecessary JSON, quickly filling up context windows. Thoughtful response design matters.
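That design can be as simple as projecting out only the fields the model needs and prepending a one-line instruction. A hypothetical sketch (field names invented):

```rust
struct Todo {
    id: u32,
    title: String,
    // Imagine many more fields the LLM does not need:
    // audit trails, internal URLs, metadata blobs...
    internal_metadata: String,
}

// Instead of serializing whole records, return only what the LLM needs,
// plus a short instruction to guide the next step.
fn summarize(todos: &[Todo]) -> String {
    let mut out = String::from("Open to-dos (reference them by id):\n");
    for t in todos {
        out.push_str(&format!("- [{}] {}\n", t.id, t.title));
    }
    out
}

fn main() {
    let todos = vec![Todo {
        id: 1,
        title: "fix the build".into(),
        internal_metadata: String::new(),
    }];
    print!("{}", summarize(&todos));
}
```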

Stage 3: Hardware Integration

The final stage demonstrated the real power of compiled binaries: direct hardware control. Using the recibo crate for thermal printer support, the MCP server could:

Query the to-do API for specific tasks

Format the output for receipt printing

Send commands over the network to a physical printer

Cut and feed the paper automatically

The result? An LLM that could analyze code, create improvement tasks, and physically print them as tear-off receipts. Impractical? Perhaps. Impressive demonstration of MCP capabilities? Absolutely.
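The printing in the demo went through the recibo crate; the wire protocol underneath thermal printers like this is ESC/POS, which can be sketched with nothing but byte sequences. The commands below are standard ESC/POS; the printer address is a placeholder, and the network write is left commented out so the sketch runs anywhere:

```rust
// Build a minimal ESC/POS print job: initialize, print text, feed, cut.
fn build_receipt(text: &str) -> Vec<u8> {
    let mut job = Vec::new();
    job.extend_from_slice(&[0x1B, 0x40]);       // ESC @ : initialize printer
    job.extend_from_slice(text.as_bytes());     // receipt body
    job.push(b'\n');
    job.extend_from_slice(&[0x1B, 0x64, 0x04]); // ESC d 4 : feed 4 lines
    job.extend_from_slice(&[0x1D, 0x56, 0x00]); // GS V 0 : full cut
    job
}

fn main() {
    let job = build_receipt("TODO: refactor the parser");
    // Network receipt printers typically listen on TCP port 9100;
    // the address below is hypothetical.
    // use std::{io::Write, net::TcpStream};
    // TcpStream::connect("192.168.1.50:9100")?.write_all(&job)?;
    println!("{} bytes ready to send", job.len());
}
```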

Configuration and Distribution

One of Rust's biggest advantages emerged in the configuration discussion. A typical MCP configuration in Kiro or Cursor simply points to the compiled binary path with optional environment variables. No npx, no pipx, no cargo install needed. If the binary is in your PATH, you can reference it directly by name.
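In practice that configuration is a few lines of JSON in the client's MCP settings. The server name, binary path, and env var below are hypothetical:

```json
{
  "mcpServers": {
    "todo": {
      "command": "/home/you/bin/todo-mcp",
      "env": { "TODO_API_URL": "http://localhost:3000" }
    }
  }
}
```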

This simplicity extends to distribution. Rust binaries can be:

Compiled once and copied to any compatible system

Stored in version control (if size permits)

Distributed through package managers

Hosted on GitHub releases for easy download

Development Workflow and Tooling

Darko demonstrated his development workflow using several tools:

MCP Inspector - Essential for testing MCP servers during development. Run with npx @modelcontextprotocol/inspector /path/to/binary to get a web UI for testing tool calls, viewing messages, and debugging responses.

Just - A modern alternative to Make for task automation. Darko's justfile included commands for building, testing, and running the MCP Inspector automatically.
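A justfile along those lines might contain something like this (recipe names and binary path are illustrative, not Darko's actual file):

```just
build:
    cargo build --release

inspect: build
    npx @modelcontextprotocol/inspector ./target/release/todo-mcp
```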

Kiro - His preferred coding assistant, configured to load the MCP server and provide AI-powered development assistance.

The development cycle was remarkably fast: make code changes, recompile (often under a second for incremental builds), and test immediately in the MCP Inspector or coding assistant.

Production Considerations

While the session focused on local STDIO-based MCP servers, Darko addressed several production topics:

Remote MCP servers - The Rust SDK supports HTTPS-based remote servers, though not all clients support them yet. The IAM Policy Autopilot MCP server (open source from AWS) demonstrates both STDIO and HTTPS implementations.

Security - Local MCP servers running over STDIO have direct filesystem access. Remote servers require careful authentication and authorization design.

Error handling - Proper error types and messages help LLMs understand what went wrong and potentially retry with corrected parameters.

Context management - Sharing state across asynchronous tool calls with Rust's Arc (atomic reference counting), typically paired with a Mutex, keeps data consistent.
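That shared-state pattern can be shown with the standard library alone. In an rmcp server the Arc would typically live in the server struct and async code would often use tokio's Mutex; std's suffices for this thread-based sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` workers that each append to a shared to-do list.
// Arc gives shared ownership across threads/tasks; Mutex guards
// mutation so concurrent tool calls stay consistent.
fn record_tasks(n: usize) -> usize {
    let todos: Arc<Mutex<Vec<String>>> = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let todos = Arc::clone(&todos);
            thread::spawn(move || todos.lock().unwrap().push(format!("task {i}")))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let count = todos.lock().unwrap().len();
    count
}

fn main() {
    println!("{} tasks recorded", record_tasks(4));
}
```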

Real-World Examples

Darko shared two production Rust MCP servers:

IAM Policy Autopilot - An open source AWS project that generates IAM policies by analyzing code. It wraps a Rust CLI tool with MCP interfaces for both local and remote use.

Grimoire MCP - A personal tool for storing and retrieving code patterns. When Darko finds a useful Rust pattern, he tells his coding assistant to save it, and the MCP server stores it in a markdown file for future reference.

Both examples demonstrate practical applications beyond toy demonstrations, showing how Rust MCP servers can enhance real development workflows.

Key Takeaways

Performance matters - Sub-second startup times make Rust MCP servers feel instant compared to interpreted language alternatives.

Distribution is simpler - Standalone binaries eliminate dependency management headaches for both developers and users.

Type safety prevents errors - Rust's type system catches parameter mismatches and other errors at compile time.

Hardware integration is possible - Compiled binaries can interact with any hardware or system resource your computer can access.

Start with MCP Inspector - Always test your MCP servers with the inspector before integrating with coding assistants.

Design responses carefully - Return only necessary data to preserve LLM context windows.

Separate concerns - Keep business logic (API calls, hardware control) separate from tool definitions for cleaner code.

Darko's closing advice captured the spirit of the session: "With the world of LLMs, with coding assistants, with everything at your fingertips, you don't need to write Python anymore. You can write Rust."

For developers building MCP servers, Rust offers a compelling combination of performance, safety, and deployment simplicity. The rmcp crate has matured rapidly, and the ecosystem of supporting tools makes development straightforward. Whether you're building internal tools, open source projects, or even hardware-integrated demonstrations, Rust provides a solid foundation for production-ready MCP servers.


About This Series

This post is part of DEV Track Spotlight, a series highlighting the incredible sessions from the AWS re:Invent 2025 Developer Community (DEV) track.

The DEV track featured 60 unique sessions delivered by 93 speakers from the AWS Community - including AWS Heroes, AWS Community Builders, and AWS User Group Leaders - alongside speakers from AWS and Amazon. These sessions covered cutting-edge topics including:

  • 🤖 GenAI & Agentic AI - Multi-agent systems, Strands Agents SDK, Amazon Bedrock
  • 🛠️ Developer Tools - Kiro, Kiro CLI, Amazon Q Developer, AI-driven development
  • 🔒 Security - AI agent security, container security, automated remediation
  • 🏗️ Infrastructure - Serverless, containers, edge computing, observability
  • 🔄 Modernization - Legacy app transformation, CI/CD, feature flags
  • 📊 Data - Amazon Aurora DSQL, real-time processing, vector databases

Each post in this series dives deep into one session, sharing key insights, practical takeaways, and links to the full recordings. Whether you attended re:Invent or are catching up remotely, these sessions represent the best of our developer community sharing real code, real demos, and real learnings.

Follow along as we spotlight these amazing sessions and celebrate the speakers who made the DEV track what it was!
