Laboratory instruments are some of the most sophisticated hardware on the planet. A modern liquid handler can dispense volumes down to nanoliters with sub-percent precision. A mass spectrometer can identify molecules in a mixture at parts-per-billion concentrations. Yet the software controlling these instruments often looks like it was designed in 2005 - because it was.
Meanwhile, AI agents are connecting to databases, APIs, and cloud services through increasingly standardized protocols. The Model Context Protocol (MCP) - originally released by Anthropic for connecting AI assistants to tools - is rapidly becoming the standard for agent-to-tool integration.
We asked a simple question: what if we connected AI agents to physical lab instruments the same way?
This article explains how we did it, what we learned, and why it matters for the future of laboratory automation.
Lab Instrument Software Is Stuck
If you work in a lab, you know the pain. Every instrument ships with its own desktop application. These applications rarely talk to each other. Integrating two instruments into a single workflow means one of three things:
- Manual copy-paste - a scientist exports data from instrument A, opens instrument B's software, and manually enters parameters. Error-prone and slow.
- Vendor-specific SDK - if the manufacturer provides one. Typically a COM interface, a .NET DLL, or a REST API documented in a 200-page PDF. Each vendor's approach is different.
- Middleware platforms - commercial lab automation middleware that costs six figures annually and still requires custom scripting for each instrument.
None of these approaches scale. And none of them are accessible to the scientists who actually run the experiments.
The underlying problem is interoperability. Lab instruments speak dozens of different protocols - SiLA2, OPC-UA, SCPI, proprietary serial commands, HTTP APIs - and there is no universal adapter.
What MCP Brings to the Table
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools. It defines a simple contract: a server exposes tools (callable functions), resources (readable data), and prompts (reusable templates). A client - typically an AI agent - discovers these capabilities and uses them to accomplish tasks.
What makes MCP interesting for lab instruments is not the AI part. It is the standardization.
MCP gives us a uniform interface layer. Instead of teaching every AI agent about SiLA2 commands, OPC-UA node structures, and SCPI syntax, we wrap each instrument in an MCP server. The agent sees a clean set of tools:
# A liquid handler wrapped as an MCP server (using the official
# MCP Python SDK's FastMCP; hardware_api is the instrument driver layer)
from mcp.server.fastmcp import FastMCP

mcp_server = FastMCP("liquid-handler")

@mcp_server.tool()
def aspirate(well: str, volume_ul: float) -> str:
    """Aspirate liquid from a specified well.

    Args:
        well: Well position (e.g., 'A1', 'B3')
        volume_ul: Volume in microliters (0.1 - 1000)
    """
    # Translate to the instrument-specific protocol
    result = hardware_api.execute_aspirate(well, volume_ul)
    return f"Aspirated {volume_ul} uL from {well}: {result.status}"

@mcp_server.tool()
def dispense(well: str, volume_ul: float) -> str:
    """Dispense liquid into a specified well."""
    result = hardware_api.execute_dispense(well, volume_ul)
    return f"Dispensed {volume_ul} uL into {well}: {result.status}"

@mcp_server.tool()
def get_plate_status() -> dict:
    """Get current status of all wells on the plate."""
    return hardware_api.read_plate_map()
The agent does not need to know whether the instrument speaks SiLA2 or serial commands. It calls aspirate(well="A1", volume_ul=50.0) and the MCP server handles the translation.
Architecture: From Natural Language to Hardware
We built this architecture in LiquidBridge, our open demo for AI-controlled liquid handling. Here is how it works:
Scientist: "Transfer 50 uL from wells A1-A4 to wells B1-B4"
│
▼
┌───────────────┐
│ AI Agent │ Plans multi-step workflow
│ (CrewAI) │ Validates against constraints
└───────┬───────┘
│
▼
┌───────────────┐
│ MCP Server │ Translates tool calls to
│ (FastMCP) │ instrument protocol
└───────┬───────┘
│
▼
┌───────────────┐
│ Hardware API │ SiLA2 / OPC-UA / Serial
│ (FastAPI) │ commands to instrument
└───────┬───────┘
│
▼
┌───────────────┐
│ Liquid │ Physical execution
│ Handler │
└───────────────┘
The key design decisions:
MCP as the integration layer, not the control layer. The MCP server does not talk directly to hardware. It calls a Hardware API that handles the actual instrument protocol. This separation means we can swap instruments without touching the agent or MCP layer.
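In code, that separation looks roughly like the sketch below. This is an illustration, not LiquidBridge's actual implementation: the endpoint name and payload shape are assumptions, and the HTTP call is stubbed so the snippet is self-contained.

```python
# Sketch: the MCP tool layer never touches the instrument directly;
# it forwards every call to a separate Hardware API service.
# Endpoint names and payload shapes are illustrative assumptions.

def call_hardware_api(endpoint: str, payload: dict) -> dict:
    """Stand-in for an HTTP POST to the Hardware API service
    (e.g. a FastAPI app); stubbed here so the sketch runs on its own."""
    return {"status": "ok"}

def aspirate(well: str, volume_ul: float) -> str:
    """MCP-facing tool: pure translation, no protocol knowledge."""
    result = call_hardware_api("aspirate", {"well": well, "volume_ul": volume_ul})
    return f"Aspirated {volume_ul} uL from {well}: {result['status']}"
```

Swapping instruments then means replacing the Hardware API implementation; the tool signature the agent sees stays the same.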
One MCP server per instrument class. A liquid handler MCP server exposes tools for aspiration, dispensing, tip handling, and plate management. A plate reader MCP server exposes tools for reading absorbance, fluorescence, and luminescence. Each server is a self-contained adapter.
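A second adapter might look like the following sketch. The tool name, driver interface, and returned fields are assumptions for illustration, with the reader driver stubbed out.

```python
# Sketch of a plate reader MCP server's tool surface.
# Driver interface and field names are illustrative assumptions.

class PlateReaderDriver:
    """Stand-in for the vendor-specific reader interface."""
    def measure_absorbance(self, well: str, wavelength_nm: int) -> float:
        return 0.42  # stubbed optical density reading

reader = PlateReaderDriver()

def read_absorbance(well: str, wavelength_nm: int = 600) -> dict:
    """Would carry an @mcp_server.tool() decorator in a real server."""
    od = reader.measure_absorbance(well, wavelength_nm)
    return {"well": well, "wavelength_nm": wavelength_nm, "od": od}
```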
Agent plans, human approves, hardware executes. The AI agent receives the natural language request, breaks it into a sequence of MCP tool calls, and presents the plan for approval before any physical action happens.
The Human-in-the-Loop Problem
This is where lab instruments diverge from typical MCP use cases. When an AI agent calls a tool to query a database, the worst case is a bad query that returns wrong data. When an AI agent calls a tool to aspirate liquid from a well, the worst case is a contaminated sample, a broken tip, or a ruined experiment.
In regulated environments - GxP, GLP, GMP - every action on a sample must be traceable. An AI agent making autonomous decisions about physical operations is a non-starter for compliance.
Our approach: the agent proposes, the human disposes.
Agent generates plan:
1. Pick up tips from rack position 1
2. Aspirate 50 uL from A1
3. Dispense 50 uL into B1
4. Aspirate 50 uL from A2
5. Dispense 50 uL into B2
... (9 steps total)
Scientist reviews: [Approve] [Modify] [Reject]
The UI shows the full execution plan with volumes, positions, and sequence. The scientist reviews and either approves the entire plan, modifies specific steps, or rejects it and re-describes their intent. Only after explicit approval does the agent execute against the hardware.
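The gate itself can be as simple as refusing to touch the hardware layer until an approval callback returns true. A minimal sketch, assuming a plan represented as a list of step descriptions (not LiquidBridge's actual data model):

```python
from typing import Callable

def execute_plan(plan: list[str],
                 approve: Callable[[list[str]], bool],
                 run_step: Callable[[str], str]) -> list[str]:
    """Execute steps only after explicit human approval."""
    if not approve(plan):
        return ["rejected: no hardware action taken"]
    return [run_step(step) for step in plan]

# Example wiring with stubbed approval and execution:
plan = ["Pick up tips", "Aspirate 50 uL from A1", "Dispense 50 uL into B1"]
results = execute_plan(plan, approve=lambda p: True,
                       run_step=lambda s: f"done: {s}")
```

The important property is structural: the only code path that reaches `run_step` passes through `approve` first.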
Every step is logged with timestamps, the original prompt, the agent's reasoning, the human's decision, and the hardware response. This gives you an audit trail that satisfies GxP requirements - arguably a better audit trail than manual pipetting, where documentation depends on the scientist remembering to record each step.
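One way to structure such a record is an append-only JSON line per step. The field names below are illustrative assumptions, not a GxP-certified schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One audit record per executed step; fields are illustrative."""
    timestamp: str
    prompt: str
    agent_reasoning: str
    human_decision: str
    hardware_response: str

def log_step(prompt: str, reasoning: str, decision: str, response: str) -> str:
    """Serialize an append-only audit line (e.g. for a JSONL log file)."""
    entry = AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        prompt=prompt,
        agent_reasoning=reasoning,
        human_decision=decision,
        hardware_response=response,
    )
    return json.dumps(asdict(entry))
```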
What We Learned Building This
Tool granularity matters more than you think. Our first version exposed fine-grained tools - move_to_position, lower_tips, aspirate, raise_tips. The agent struggled with sequencing. Our second version exposed workflow-level tools - aspirate_from_well, dispense_to_well, pick_up_tips - where each tool encapsulates a safe sequence of hardware actions. The agent's success rate went from ~60% to over 95%.
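The second design can be sketched as follows. The low-level method names are assumptions standing in for the real motion primitives, and the hardware is stubbed:

```python
# Sketch of a coarse-grained tool: the safe motion sequence lives
# inside the tool, so the agent never orders the fine steps itself.

class FineGrainedHardware:
    """Stand-in for low-level motions; method names are assumptions."""
    def move_to_position(self, well: str) -> None: ...
    def lower_tips(self) -> None: ...
    def aspirate(self, volume_ul: float) -> None: ...
    def raise_tips(self) -> None: ...

hw = FineGrainedHardware()

def aspirate_from_well(well: str, volume_ul: float) -> str:
    """Workflow-level tool encapsulating a fixed, known-safe sequence."""
    hw.move_to_position(well)
    hw.lower_tips()
    hw.aspirate(volume_ul)
    hw.raise_tips()
    return f"Aspirated {volume_ul} uL from {well}"
```

The agent can still compose workflows, but it composes at the level where every building block is safe on its own.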
State management is fundamentally different from software tools. A database MCP server is stateless - each query is independent. A liquid handler MCP server is deeply stateful. The agent needs to know: are tips loaded? What volume is currently held? Which wells have been used? We solved this by making the MCP server expose a get_instrument_state resource that the agent checks before planning.
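A minimal sketch of that pattern, with illustrative field names (the real resource schema is richer):

```python
# The server, not the agent, is the source of truth for physical state.
_state = {
    "tips_loaded": False,
    "held_volume_ul": 0.0,
    "used_wells": [],
}

def get_instrument_state() -> dict:
    """Would be exposed as an MCP resource; returns a copy so the
    agent reads but never mutates internal state."""
    return {**_state, "used_wells": list(_state["used_wells"])}

def record_aspirate(well: str, volume_ul: float) -> None:
    """The server updates state after each successful hardware call."""
    _state["held_volume_ul"] += volume_ul
    _state["used_wells"].append(well)
```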
Error handling must be physical-world-aware. Software errors are recoverable - retry the API call. Hardware errors have physical consequences. A tip collision can damage the instrument. Our MCP server implements pre-execution validation: before calling the hardware, it checks positions, volumes, and tip status against known constraints.
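A sketch of that validation step, assuming a 96-well plate and the 0.1 - 1000 uL range from the tool docstring above:

```python
import re

def validate_aspirate(well: str, volume_ul: float, tips_loaded: bool) -> list[str]:
    """Return a list of constraint violations; empty means safe to execute."""
    errors = []
    # 96-well plates: rows A-H, columns 1-12
    if not re.fullmatch(r"[A-H](1[0-2]|[1-9])", well):
        errors.append(f"invalid well position: {well}")
    if not 0.1 <= volume_ul <= 1000:
        errors.append(f"volume out of range: {volume_ul} uL")
    if not tips_loaded:
        errors.append("no tips loaded")
    return errors
```

Only a plan whose every step validates cleanly is forwarded to the Hardware API; anything else is returned to the agent as a planning error rather than discovered as a crash on the deck.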
What is Next: The Lab of 2030
We are building toward a future where any lab instrument with a digital interface can be wrapped as an MCP server and controlled through natural language. Not to replace scientists - to give them a better interface to their own equipment.
The technical foundation is here. MCP provides the standardization. AI agents provide the natural language understanding. Human-in-the-loop provides the safety. What is missing is coverage - more instrument protocols wrapped, more edge cases handled, more labs running this in production.
If you are building instrument control software, or if you are a scientist frustrated with vendor GUIs, this is the architecture to watch. The protocol adapter pattern - instrument protocol to MCP to AI agent - works. We have shipped it. And we are open to collaborating with instrument manufacturers who want to make their hardware AI-accessible.
Iacob is the Technical Lead at QPillars, a Zurich-based team building agentic AI for scientific instruments. QPillars delivers instrument control systems, data platforms, and AI integration for biotech and medtech companies. Learn more about LiquidBridge or reach out at iacob@qpillars.com.