Introduction
In Part 1, we fine-tuned a neural network to detect coffee first crack from audio using PyTorch and the Audio Spectrogram Transformer. In Part 2, we built two MCP (Model Context Protocol) servers - one to control my Hottop KN-8828B-2K+ roaster and another to detect first crack using a microphone in real time.
This is where we put it all together. But first: can .NET Aspire orchestrate Python MCP servers and n8n workflows to autonomously roast coffee?
Spoiler alert: Yes, it can. And the coffee tastes spot on.
The Challenge
Autonomous coffee roasting isn't just about detecting when first crack happens. It's a complex orchestration problem involving:
- Multiple systems: Python MCP servers to interact with hardware, an agent layer for orchestration (n8n workflows to begin with), and containerised services.
- Real-time decision making: Monitoring sensors every few seconds and deciding on actions depending on the status.
- Safety-critical control: Managing heat and fan speed to avoid burning / wasting green beans.
- Precise timing: Detecting the bean charge event (when beans are added during the preheating stage) and first crack, and hitting the target development time percentage with the controls available.
- Observability: Tracking telemetry across Python, n8n, and .NET components.
The solution? .NET Aspire 13 orchestrating everything.
Why Aspire 13?
Aspire 13.0 (released with .NET 10) brings significant improvements for Python integration and container orchestration—perfect for this use case:
Simplified Python Hosting with AddPythonModule
Aspire 13 replaces the old AddPythonApp API with three specialized methods:
- AddPythonModule: Runs Python modules with the -m flag (e.g., python -m src.mcp_servers.roaster_control.sse_server).
- AddPythonScript: Runs standalone Python scripts.
- AddPythonExecutable: Runs executables from virtual environments (e.g., uvicorn, gunicorn).
For MCP servers running as modules, AddPythonModule is cleaner and more explicit:
// Old way (Aspire 9)
builder.AddPythonApp("roaster-control", projectRoot, "-m", venvPath)
.WithArgs("src.mcp_servers.roaster_control.sse_server")
// New way (Aspire 13)
builder.AddPythonModule("roaster-control", projectRoot, "src.mcp_servers.roaster_control.sse_server")
.WithVirtualEnvironment(venvPath)
Cleaner AppHost Project Structure
The new Aspire.AppHost.Sdk/13.0.0 simplifies project files:
- No separate <Sdk Name="..." /> element needed.
- Aspire.Hosting.AppHost package included automatically.
- Removes the IsAspireHost property (it's now implicit).
Enhanced Container Orchestration
- Better lifecycle management for containers (n8n in this project).
- Improved health check support.
- More granular control over container runtime arguments.
Built-in OpenTelemetry Integration
Out-of-the-box observability with .WithOtlpExporter() for:
- Structured logging from Python processes.
- Distributed tracing across MCP calls.
- Real-time metrics in the Aspire dashboard.
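On the Python side, the OTLP endpoint arrives as standard OTEL_* environment variables that Aspire injects, so each MCP server only needs a small amount of wiring. Here's a minimal sketch, assuming the opentelemetry-sdk and OTLP gRPC exporter packages are installed (the actual servers in this project may configure this differently):

# Minimal OpenTelemetry tracing setup for a Python MCP server.
# OTLPSpanExporter reads OTEL_EXPORTER_OTLP_ENDPOINT, which Aspire injects.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "roaster-control"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("roaster_control")

with tracer.start_as_current_span("read_roaster_status"):
    pass  # spans emitted here show up in the Aspire dashboard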
Architecture Overview
Here's what .NET Aspire orchestrates: two Python MCP servers (roaster control and first crack detection), the n8n workflow engine running as a container, and the OpenTelemetry pipeline feeding the unified Aspire dashboard.
Why Aspire?
- Single command startup: dotnet run starts all 3 services with proper dependency ordering
- Shared configuration: Environment variables, Auth0 credentials, OpenTelemetry
- Python support: Built-in virtual environment management with AddPythonModule
- Container orchestration: Manages n8n container
- Observability: Unified dashboard with logs, traces, and metrics from all components
- Development velocity: Changes to Python code auto-reload, no container rebuilds needed
The n8n Autonomous Roasting Workflow
As a first step, n8n was selected for the agent layer. Its visual workflow editor and built-in constructs allowed rapid verification of the agentic roasting process. The heart of the system is an n8n workflow that acts as the "roasting brain." Here's what it does:
Phase 1: Initialisation & Preheating (Preheating Agent)
Start → Read Roaster Status → Start Roaster → Monitor Temperature
The workflow:
- Connects to both MCP servers via SSE (Server-Sent Events).
- Starts the roaster at 100% heat, 30% fan.
- Monitors bean temperature rising toward ~170°C during preheating.
- Uses an AI Agent node (Preheating Agent) with custom instructions to detect preheating completion.
Key metrics tracked:
- Bean temperature.
- Rate of Rise (°C/min).
- Fan speed (%).
- Heat level (%).
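Rate of Rise is just the slope of the bean temperature curve over a short lookback window. A rough sketch of how it can be computed from timestamped readings (illustrative only; the actual MCP server may calculate it differently):

from datetime import datetime

def rate_of_rise(samples: list[tuple[datetime, float]], window_s: float = 60.0) -> float:
    """Approximate Rate of Rise in °C/min from chronological (timestamp, bean_temp_c) samples."""
    if len(samples) < 2:
        return 0.0
    latest_t, latest_temp = samples[-1]
    # Oldest sample still inside the lookback window.
    oldest_t, oldest_temp = next(
        ((t, temp) for t, temp in samples if (latest_t - t).total_seconds() <= window_s),
        samples[0],
    )
    elapsed_s = (latest_t - oldest_t).total_seconds()
    if elapsed_s == 0:
        return 0.0
    return (latest_temp - oldest_temp) / elapsed_s * 60.0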
Phase 2: Bean Charge Detection (Preheating Agent)
Monitor Temp → Detect Temperature Delta threshold → Mark T0 Timestamp
When green beans are added to the hot roaster, the temperature suddenly drops (e.g., from 170+°C → less than 90°C). Then the workflow:
- Tracks rolling temperature averages.
- Detects sudden drops > 40°C.
- Marks "T0" - the beginning of roast time.
- All subsequent metrics are relative to T0.
From the logs:
{
"t0_detected": true,
"beans_added_temp_c": 96,
"t0_timestamp_utc": "2025-11-15T21:21:56.490259+00:00"
}
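A simplified sketch of the drop-detection logic above - rolling average plus a drop threshold (the real logic lives in the n8n workflow and MCP server, so details may differ):

from collections import deque

class ChargeDetector:
    """Detect the bean charge (T0) from a sudden drop in bean temperature."""

    def __init__(self, window: int = 5, drop_threshold_c: float = 40.0):
        self.recent = deque(maxlen=window)   # rolling window of recent readings
        self.drop_threshold_c = drop_threshold_c
        self.t0 = None

    def update(self, timestamp, bean_temp_c: float):
        if self.t0 is None and self.recent:
            rolling_avg = sum(self.recent) / len(self.recent)
            # A drop of more than ~40°C below the rolling average means beans went in.
            if rolling_avg - bean_temp_c > self.drop_threshold_c:
                self.t0 = timestamp
        self.recent.append(bean_temp_c)
        return self.t0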
Phase 3: First Crack Detection (Roast Agent)
Loop: Poll First Crack MCP → Check Status → Wait
The workflow continuously calls the First Crack Detection MCP server:
- Streams microphone audio to the PyTorch model.
- Uses sliding window inference (10-second windows).
- Implements "pop-confirmation" logic (minimum 3 pops within 30 seconds).
- Reports when first crack is confirmed.
Detection event:
{
"first_crack_temp_c": 184.0,
"first_crack_time_display": "08:42",
"roast_elapsed_seconds": 522
}
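The pop-confirmation rule is essentially a debounce: a single pop-like prediction isn't enough, and the detector only reports first crack once at least 3 pops land inside a 30-second span. A rough sketch of that check (parameter names are assumptions; the actual server from Part 2 may differ):

def first_crack_confirmed(pop_timestamps: list[float],
                          min_pops: int = 3,
                          window_s: float = 30.0) -> bool:
    """Return True once min_pops detections fall within any window_s span."""
    pops = sorted(pop_timestamps)
    for i in range(len(pops) - min_pops + 1):
        if pops[i + min_pops - 1] - pops[i] <= window_s:
            return True
    return False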
Phase 4: Development Time Management (Roast Agent)
This phase is critical: getting it wrong leads to under-roasted or over-roasted beans. The agent's objective is to adjust fan and heat to extend the development time.
Development time percentage is the share of the total roast time spent between first crack and the end of the roast. The goal is to keep this period around 15-20%. On my machine, I have noticed this needs to be achieved before the bean temperature goes above 196°C to get the results I prefer.
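Concretely, it's just development time divided by total roast time. A quick sketch of the calculation, using today's numbers from the final roast profile (first crack at 522 s, drop at 584 s):

def development_time_percent(first_crack_s: float, total_roast_s: float) -> float:
    """Share of the roast spent between first crack and drop, as a percentage."""
    return (total_roast_s - first_crack_s) / total_roast_s * 100

development_time_percent(522, 584)  # -> ~10.6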
Loop: Adjust Heat/Fan → Monitor Development % → Check Target
Once first crack is detected, the critical development phase begins. The workflow's AI agent:
Monitors:
- Current bean temperature
- Rate of Rise (to prevent stalling or rushing)
- Development time percentage (target: 15-20%)
- Time since first crack
Controls:
- Reduces heat (100% → 60% → 40%)
- Increases fan speed (30% → 50% → 70%)
- Slows the roast to extend development time
Decision logic (via AI agent):
IF development_time_percent >= 15% AND development_time_percent <= 20%:
IF bean_temp_c >= 190 AND bean_temp_c <= 195:
→ DROP BEANS (optimal light roast)
ELSE IF bean_temp_c > 195:
→ DROP BEANS (approaching medium roast)
ELSE:
→ CONTINUE MONITORING
Actual output from workflow:
{
"phase": "development",
"action": "monitor",
"bean_temp_c": 191,
"message": "Development: 191°C, 8.9%"
}
Then moments later:
{
"phase": "cooling",
"action": "drop",
"bean_temp_c": 193,
"message": "Optimal! Dropping beans."
}
Phase 5: Completion
Drop Beans → Set Cooling Fan to 100% → Stop Heat → Cool
The workflow:
- Commands the roaster to drop beans into cooling tray.
- Sets the cooling fan to 100% for maximum cooling.
- Cuts heat to 0%.
- Records final metrics for analysis.
Final roast profile:
{
"roast_elapsed_seconds": 584,
"roast_elapsed_display": "09:44",
"beans_added_temp_c": 175.0,
"first_crack_temp_c": 184.0,
"first_crack_time_display": "08:42",
"development_time_seconds": 62,
"development_time_display": "01:02",
"development_time_percent": 10.6,
"total_roast_duration_seconds": 584
}
The Aspire Orchestration Code
Here's how .NET Aspire 13 makes this all work (from Program.cs):
Python MCP Servers
// Roaster Control MCP Server
var roasterControl = builder.AddPythonModule(
"roaster-control",
projectRoot,
"src.mcp_servers.roaster_control.sse_server")
.WithVirtualEnvironment(sharedVenvPath)
.WithHttpEndpoint(port: 5002, env: "ROASTER_CONTROL_PORT")
.WithEnvironment("AUTH0_DOMAIN", auth0Domain)
.WithEnvironment("AUTH0_AUDIENCE", auth0Audience)
.WithEnvironment("USE_MOCK_HARDWARE", useMockHardware)
.WithEnvironment("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc")
.WithOtlpExporter();
// First Crack Detection MCP Server
var firstCrackDetection = builder.AddPythonModule(
"first-crack-detection",
projectRoot,
"src.mcp_servers.first_crack_detection.sse_server")
.WithVirtualEnvironment(sharedVenvPath)
.WithHttpEndpoint(port: 5001, env: "FIRST_CRACK_DETECTION_PORT")
.WithEnvironment("AUTH0_DOMAIN", auth0Domain)
.WithEnvironment("AUTH0_AUDIENCE", auth0Audience)
.WithEnvironment("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc")
.WithOtlpExporter();
What's happening here:
- AddPythonModule: New in Aspire 13, replaces the old AddPythonApp API
- WithVirtualEnvironment: Points to the shared Python 3.11 venv at the repo root
- WithHttpEndpoint: Configures SSE endpoints for n8n to connect
- WithOtlpExporter: Sends telemetry to the Aspire dashboard
- Modules run with the -m flag implicitly (e.g., python -m src.mcp_servers.roaster_control.sse_server)
Container Services
// n8n Workflow Engine
var n8n = builder.AddContainer("n8n", "n8nio/n8n", "latest")
.WithHttpEndpoint(port: 5678, targetPort: 5678, name: "n8n-ui")
.WithBindMount("./n8n-data", "/home/node/.n8n")
.WithEnvironment("N8N_HOST", "0.0.0.0")
.WithEnvironment("N8N_PORT", "5678")
.WithEnvironment("WEBHOOK_URL", "http://localhost:5678/")
.WithEnvironment("N8N_METRICS", "true");
Key features:
- Bind mount for persisting workflows and credentials.
- Exposes port 5678 for web UI.
- Metrics enabled for observability.
- Auto-restarts on failure.
Real-World Results
The First Autonomous Roast
Stats:
- Total roast time: 9:44 (584 seconds)
- First crack: 8:42 at 184°C
- Development time: 1:02 (10.6% - slightly under target but acceptable)
- Final temperature: 193°C
- Result: Light roast, consistent colour and smooth taste.
What worked:
- Temperature drop detection caught bean addition instantly.
- First crack detection was accurate (within 20 seconds of my ears), which is why the 10% development percentage is not a real issue.
- Heat/fan adjustments prevented burning.
- Development % monitoring kept roast in safe zone.
What could improve:
- Development time was 10.6% instead of target 15-20%.
- Could start reducing heat earlier after first crack.
- Rate of Rise could be smoother in final phase.
The Aspire Dashboard Experience
The unified Aspire dashboard shows:
Services:
- roaster-control (Python) - Running
- first-crack-detection (Python) - Running
- n8n (Container) - Running
Lessons Learned
1. MCP Server Design Matters
The current design has two MCP servers: one for roaster control, one for first crack detection. The original idea was that the roaster control MCP server could run on a low-powered device connected to the roaster, while the first crack detector could run on the laptop due to its hardware requirements.
This design adds coordination overhead to the agent and makes it more complicated than necessary. A unified MCP server that returns all metrics in a single call would simplify the agent logic and likely lead to more predictable behaviour. Before moving on to comparing multiple agent frameworks, this will be one area to improve.
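For illustration only, a unified server could return sensors and first-crack state in one payload so the agent reasons over a single tool result. This is purely hypothetical, not the current implementation, and the field names and values are made up for the sketch:

# Hypothetical combined payload a unified read_roast_status tool could return.
unified_status = {
    "bean_temp_c": 191.0,
    "rate_of_rise_c_per_min": 4.2,
    "heat_percent": 60,
    "fan_percent": 50,
    "first_crack_detected": True,
    "roast_elapsed_seconds": 540,
    "development_time_percent": 3.4,
}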
2. Aspire's Python Support is Production-Ready
Before Aspire:
- Multiple terminal windows or docker compose
- Manual venv activation
- Additional effort to add OpenTelemetry collectors and dashboards
With Aspire:
- One command: dotnet run.
- Automatic venv management.
- Shared configuration.
- Structured logging and tracing.
- Custom metrics.
3. n8n is Powerful for Agent Orchestration
Why n8n worked well:
- Visual debugging: See workflow execution in real-time.
- Built-in AI Agent node: Uses OpenAI with tool calling.
- MCP client support: Native SSE connections.
- Error handling: Built-in retry logic and error branches.
- State management: Workflow variables persist between runs.
4. MCP Protocol Makes Tool Integration a Breeze
The MCP servers exposed simple HTTP/SSE endpoints:
Roaster Control Tools:
- read_roaster_status → Returns current sensors + metrics
- adjust_heat(level: int) → Sets heat 0-100%
- adjust_fan(speed: int) → Sets fan 0-100%
- stop_roaster() → Emergency stop
- start_roaster() → Begin roast
First Crack Tools:
- start_first_crack_detection() → Start audio monitoring
- get_first_crack_status() → Check if first crack detected
- stop_first_crack_detection() → Stop monitoring
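Under the hood, each tool is just a decorated Python function. A minimal sketch using the official Python MCP SDK's FastMCP, with an in-memory stand-in for the hardware so it's self-contained (the actual servers in this project may be structured differently):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("roaster-control")

# In the real server this state comes from the Hottop hardware driver;
# here it's a stand-in dict so the sketch runs on its own.
state = {"bean_temp_c": 0.0, "heat_percent": 0, "fan_percent": 0}

@mcp.tool()
def read_roaster_status() -> dict:
    """Return current bean temperature, heat and fan settings."""
    return dict(state)

@mcp.tool()
def adjust_heat(level: int) -> str:
    """Set heat output, 0-100%."""
    state["heat_percent"] = max(0, min(100, level))
    return f"Heat set to {state['heat_percent']}%"

if __name__ == "__main__":
    mcp.run(transport="sse")  # serve over SSE so n8n can connect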
The n8n AI Agent called these tools naturally:
Agent: "I need to check the roaster status"
→ Calls read_roaster_status
→ Receives JSON with temp, fan, heat, metrics
→ Makes decision
→ Calls adjust_heat(60) to reduce heat
5. Observability is Critical
When the roast is in progress, you need:
- Real-time monitoring: See temperature changing every 2 seconds
- Error visibility: Know immediately if MCP server crashes
- Performance metrics: Ensure control commands complete in <500ms
- Historical data: Review roast profile after completion
Aspire's OpenTelemetry integration gave us all of this for free.
The Development Experience with Warp Agent
Throughout this project, I used Warp Agent extensively:
For Aspire upgrade (9 → 13):
- Warp Agent searched Microsoft Learn docs via MCP
- Found breaking changes in AddPythonApp → AddPythonModule
- Generated migration plan with test steps
- Verified builds and runtime behavior
For n8n workflow debugging:
- Analyzed MCP server logs to diagnose connection issues
- Suggested retry logic for transient network errors
- Helped structure AI agent prompts for decision-making
For Python model optimization:
- Profiled inference latency
- Suggested caching strategies for feature extraction
- Optimized sliding window parameters
What made Warp Agent effective:
- Context awareness: Understood the full stack (C#, Python, n8n)
- MCP integration: Could fetch the latest Microsoft docs, and n8n docs via Context7.
- Iterative debugging: Quickly test → analyse → fix cycles.
- Code generation: Created boilerplate while I focused on logic.
Next Steps
Short Term
- Roast profile tuning: Adjust heat/fan curves to hit 15-20% development consistently.
- Data collection: Log every roast for analysis (temp curves, timestamps, outcomes).
- Add support for automatically exporting roast statistics and the ability to rate roasts later.
- Improved first crack detection: Capture manual recording sessions in different environmental setups to improve detection. What we have is impressive given we only had 9 roasting sessions for fine-tuning, but we can do better.
- Implement multiple agent frameworks and compare pros and cons.
- Test the MCP servers running on a Raspberry Pi 5.
Medium Term
- Roast emulation model: Train an emulation model from historical roast logs. This will allow experimentation without the actual hardware, producing realistic responses from heat, fan, and time inputs to emulate the roaster's heating process.
- Machine learning on roast profiles: Train a model to predict optimal heat/fan adjustments once there are enough roast samples and ratings.
- Custom UI: Build dedicated roasting interface (replace n8n for end users) to allow unified experience across agent frameworks.
- Multi-origin support: Adjust profiles based on bean origin (Kenya vs Brazil)
Conclusion
Can .NET Aspire roast coffee? Absolutely.
More importantly, it provided:
- Unified orchestration for polyglot services (C#, Python, Node.js containers)
- Developer productivity with single-command startup and hot reload
- Production observability with unified logs, traces, and metrics
- Flexibility to iterate quickly on both code and workflows
The combination of:
- PyTorch model for first crack detection (Part 1)
- MCP servers for hardware control and detection (Part 2)
- .NET Aspire orchestration with n8n workflows (Part 3)
...resulted in a fully autonomous coffee roasting system that produces genuinely good coffee.
From 9 raw audio recordings to autonomous coffee roasting—all orchestrated with a single command: dotnet run
The coffee tastes great. The code is open source. And yes, .NET Aspire can definitely roast coffee.
For reference, today's roast incurred $0.76 in OpenAI API usage costs.
Resources
Code and Articles
.NET Aspire Documentation
- Aspire Overview: .NET Aspire documentation
- Upgrade to Aspire 13: Upgrade guide
- Python Hosting in Aspire: Orchestrate Python apps
Tools and Protocols
- n8n Workflow Automation: n8n.io
- Model Context Protocol: modelcontextprotocol.io
- OpenTelemetry: opentelemetry.io
Model & ML:
- Audio Spectrogram Transformer (AST) - Pre-trained model
- AST Documentation
- Fine-Tuning AST Tutorial
- Original AST Paper - Gong et al., 2021




