<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jui-Hung Yuan</title>
    <description>The latest articles on DEV Community by Jui-Hung Yuan (@juihungyuan).</description>
    <link>https://dev.to/juihungyuan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3754943%2F401003a3-ba60-4275-978a-a42377e5a3b9.jpg</url>
      <title>DEV Community: Jui-Hung Yuan</title>
      <link>https://dev.to/juihungyuan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/juihungyuan"/>
    <language>en</language>
    <item>
      <title>Event-Driven AI Orchestration: Local-First Smart Home Automation</title>
      <dc:creator>Jui-Hung Yuan</dc:creator>
      <pubDate>Tue, 10 Mar 2026 14:25:07 +0000</pubDate>
      <link>https://dev.to/juihungyuan/my-apartment-now-dims-the-lights-on-guests-on-purpose-598l</link>
      <guid>https://dev.to/juihungyuan/my-apartment-now-dims-the-lights-on-guests-on-purpose-598l</guid>
      <description>&lt;p&gt;In my previous blog, I built a &lt;a href="https://dev.to/juihungyuan/from-local-to-cloud-what-i-learned-building-a-remote-mcp-server-on-aws-for-smart-home-control-3a24"&gt;remote MCP server connected to Claude so I could control my smart light bulb by just sending a chat message&lt;/a&gt;. It worked! But looking back, the architecture was a glorious overkill.&lt;/p&gt;

&lt;p&gt;All of this, for two people who just want to dim the lights in the evening 🫠.&lt;/p&gt;

&lt;p&gt;Around the same time, I had been hearing a lot about the trend of “Skills over MCP” and the hype around OpenClaw. OpenClaw is popular because it’s a local-first AI assistant — the whole orchestration runs on your own devices, no cloud infrastructure needed, just enough flexibility to build something genuinely useful. That sounded like exactly the right learning opportunity. &lt;/p&gt;

&lt;p&gt;So I decided to build an &lt;strong&gt;OpenClaw-inspired personal assistant&lt;/strong&gt; for smart home control — simpler, local, and tailored to just what I needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlgg1i0pi8klv4al74hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlgg1i0pi8klv4al74hz.png" alt="light_schedule_illustration" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spoiler⚠️&lt;/strong&gt;: it worked out well. My partner and I can now control the light bulb from Slack, and we can schedule light adjustments via chat. I’m obsessed with a healthy lifestyle, so naturally I scheduled the light to slowly dim itself over the evening. It’s very peaceful. The only unintended side effect is that when we have guests over, the room gets darker and darker as the night goes on, and somehow they always end up leaving earlier than planned. I feel a little bad about it. But also — they’re getting more sleep now, so really, I’m doing them a favor.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenClaw basics
&lt;/h2&gt;

&lt;p&gt;To understand how OpenClaw works, I reimplemented its &lt;strong&gt;4 core components&lt;/strong&gt; with the help of Claude — purely for learning. No shortcuts, no copy-paste. Just building each piece by hand until it clicked.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Beauty is in the eye of the beholder — or as the teacher in Spy x Family would say, &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt;. In the following sections, I'll share what I learned from each component, the key insights that surprised me, and what I find elegant about the design.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7enerdw4ywh7t8a4lx1o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7enerdw4ywh7t8a4lx1o.jpg" alt="alt image" width="686" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Memory system
&lt;/h3&gt;

&lt;p&gt;OpenClaw keeps the assistant’s memory as simple Markdown files on disk. The assistant’s personality and tone live in &lt;code&gt;SOUL.md&lt;/code&gt;, user preferences in &lt;code&gt;USER.md&lt;/code&gt;, and notable conversations get appended to a daily log. During a conversation, the agent can selectively write to these files, and the system prompt guides which information goes where. It’s not over-engineered — just enough structure to make memory feel real and &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤓 In my implementation, this is exposed via two tools: &lt;code&gt;memory_search&lt;/code&gt; (hybrid keyword + vector search over past logs) and &lt;code&gt;memory_write&lt;/code&gt; (append or overwrite a memory file).&lt;/p&gt;
&lt;/blockquote&gt;
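
&lt;p&gt;As a sketch of that tool boundary (the signature and file layout are from my reimplementation, not OpenClaw's actual code), &lt;code&gt;memory_write&lt;/code&gt; is little more than a guarded file append:&lt;/p&gt;

```python
from pathlib import Path

def memory_write(memory_dir: Path, filename: str, content: str,
                 mode: str = "append") -> str:
    """Append to or overwrite a Markdown memory file (USER.md, a daily log, ...)."""
    memory_dir.mkdir(parents=True, exist_ok=True)
    path = memory_dir / filename
    if mode == "append":
        with path.open("a", encoding="utf-8") as f:
            f.write(content.rstrip() + "\n")
    else:
        path.write_text(content, encoding="utf-8")
    return f"wrote to {filename} ({mode})"
```

&lt;p&gt;The system prompt does the routing ("preferences go to USER.md, notable moments to the daily log"); the tool itself stays dumb on purpose.&lt;/p&gt;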

&lt;h3&gt;
  
  
  2. Skill registry
&lt;/h3&gt;

&lt;p&gt;Skills give the agent new capabilities in a Markdown-based format — like a cheatsheet it can pull up on demand. What I find &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt; is the &lt;strong&gt;progressive disclosure&lt;/strong&gt; mechanism. At startup, the agent only sees a one-line summary of each available skill. When it needs to use one, it calls &lt;code&gt;describe_skill&lt;/code&gt; to load the full cheatsheet — actions, parameters, examples — and that doc gets injected into the system prompt for the rest of the session. This keeps the context lean until it’s actually needed, and the same pattern naturally extends to revealing new tools or even new skills over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤓 In my implementation, each skill is a folder under &lt;code&gt;skills/&lt;/code&gt; with a &lt;code&gt;SKILL.md&lt;/code&gt; file (frontmatter = summary, body = full docs) and a Python script that exposes an &lt;code&gt;execute(action, params)&lt;/code&gt; function. The agent calls &lt;code&gt;describe_skill&lt;/code&gt; first, then &lt;code&gt;execute_skill&lt;/code&gt;. My light bulb skill lives in &lt;code&gt;src/smarthome/agent/skills/light-control&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
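
&lt;p&gt;Here's roughly how my progressive-disclosure lookup works (a simplified sketch of my implementation, not OpenClaw's code): the frontmatter summary is all the agent sees until it explicitly asks for the full doc.&lt;/p&gt;

```python
from pathlib import Path

def split_skill_md(text: str):
    """Split SKILL.md into (frontmatter summary, body docs).
    Assumes '---'-delimited frontmatter, as in my skill format."""
    parts = text.split("---")
    if len(parts) >= 3:
        return parts[1].strip(), "---".join(parts[2:]).strip()
    return "", text.strip()

def list_skills(skills_dir: Path) -> dict:
    """One-line summaries only; this is all the agent sees at startup."""
    summaries = {}
    for skill_md in skills_dir.glob("*/SKILL.md"):
        summary, _ = split_skill_md(skill_md.read_text(encoding="utf-8"))
        summaries[skill_md.parent.name] = summary.splitlines()[0] if summary else ""
    return summaries

def describe_skill(skills_dir: Path, name: str) -> str:
    """Full cheatsheet, loaded only when the agent decides it needs this skill."""
    text = (skills_dir / name / "SKILL.md").read_text(encoding="utf-8")
    _, body = split_skill_md(text)
    return body
```

&lt;p&gt;The output of &lt;code&gt;describe_skill&lt;/code&gt; is what gets injected into the system prompt for the rest of the session.&lt;/p&gt;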

&lt;h3&gt;
  
  
  3. Scheduled jobs (CRON and Heartbeat)
&lt;/h3&gt;

&lt;p&gt;Beyond just responding to messages, OpenClaw can also proactively run tasks on a schedule. CRON-style jobs fire at specific times, while the Heartbeat is better suited for background monitoring — it batches tasks together so they can share context and inform each other’s results.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤓 My implementation is closer to CRON: the scheduler wakes up on a regular interval, checks if any task’s scheduled time falls within the elapsed window, and fires it directly — no LLM call involved. That works &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt; enough for simple device commands like dimming the light at 9pm. If I wanted smarter tasks (e.g. checking my calendar and sending a notification), it would need to trigger an actual agent turn. Tasks are managed via the &lt;code&gt;schedule_task&lt;/code&gt; tool (add/remove/list) and persist in &lt;code&gt;SCHEDULE.md&lt;/code&gt; across restarts.&lt;/p&gt;
&lt;/blockquote&gt;
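
&lt;p&gt;The window check itself is tiny (this is my implementation's shape; the task fields are my own convention):&lt;/p&gt;

```python
from datetime import datetime

def due_tasks(tasks, last_wake: datetime, now: datetime):
    """Return tasks whose scheduled time falls inside the elapsed window.

    Each task is a dict like {"name": ..., "at": datetime}. Checking a window
    (rather than exact equality with 'now') means a task still fires even if
    the scheduler wakes up a few seconds late.
    """
    return [t for t in tasks if t["at"] > last_wake and now >= t["at"]]
```

&lt;p&gt;Anything returned here is executed directly as a device command; no LLM turn involved.&lt;/p&gt;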

&lt;h3&gt;
  
  
  4. Channel Adapter
&lt;/h3&gt;

&lt;p&gt;OpenClaw supports many messaging apps — WhatsApp, Telegram, Slack, Discord, and more. The channel adapter is the translator between a specific app and the agent: it normalizes incoming messages into a unified format, and formats the agent’s response back into whatever the app expects. It also acts as the authentication boundary — rather than managing per-user credentials like MCP does, OpenClaw’s approach is perimeter-based: “as long as you can reach me, I trust you.”&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤓 I only implemented Slack, which supports both direct messages and &lt;code&gt;@mentions&lt;/code&gt; in channels. One part I find particularly &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt; is the color picker: when the user wants to change the bulb color, a &lt;code&gt;show_palette&lt;/code&gt; action returns a Slack dropdown block. The agent loop intercepts this block and passes it directly to the Slack adapter, which renders it as an interactive dropdown — no extra roundtrip needed. The user picks a color, and it triggers &lt;code&gt;set_color&lt;/code&gt; directly.&lt;/p&gt;
&lt;/blockquote&gt;
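
&lt;p&gt;A minimal sketch of that boundary (simplified from my Slack adapter; the inbound event fields are Slack's standard ones, everything else is my convention):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class InboundMessage:
    """The one internal format the agent loop understands, whatever the app."""
    user_id: str
    text: str
    channel: str

def from_slack_event(event: dict) -> InboundMessage:
    """Slack 'message' and 'app_mention' events both carry these fields."""
    return InboundMessage(
        user_id=event.get("user", ""),
        text=event.get("text", "").strip(),
        channel=event.get("channel", ""),
    )

def to_slack_payload(reply) -> dict:
    """Plain text goes out as-is; an interactive block (like the color
    palette dropdown) is passed through untouched for Slack to render."""
    if isinstance(reply, dict):  # already a Slack Block Kit payload
        return {"blocks": [reply]}
    return {"text": str(reply)}
```

&lt;p&gt;The pass-through branch is exactly how the &lt;code&gt;show_palette&lt;/code&gt; dropdown reaches Slack without an extra roundtrip.&lt;/p&gt;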

&lt;h2&gt;
  
  
  Key learning
&lt;/h2&gt;

&lt;p&gt;After building this, my biggest takeaway is that all these components — memory, skills, scheduling — work as well as they do because of how good modern LLMs are at tool calling. Once you wrap the right functionality as tools, the agent just... uses them. Memory feels persistent, skills feel modular, scheduling feels effortless. The skill system especially: you can add a new folder with a Markdown file and a Python script, and the agent picks it up automatically — that extensibility is truly &lt;strong&gt;🕺ELEGANTOOOO🕺&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One more thing I want to share: I used Claude Code throughout this build, and it went better when I slowed down at the start. My first instinct was to clone the OpenClaw repo, dump it into context, and ask Claude to plan everything at once. That didn’t go well — I didn’t know enough about OpenClaw myself to tell if the plan made sense, and Claude can’t read your mind about what trade-offs matter to you. What actually helped was spending time upfront to clarify scope and intent together, before handing off any implementation. The clearer the brief, the better the output. That’s not specific to Claude — it’s just good collaboration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick start for everyone
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Clone the repo&lt;/span&gt;
git clone https://github.com/jui-hung-yuan/smarthome-mcp-lab
cd smarthome-mcp-lab

&lt;span class="gh"&gt;# Add your Anthropic API key&lt;/span&gt;
mkdir -p ~/.smarthome
echo 'ANTHROPIC_API_KEY=sk-...' &amp;gt;&amp;gt; ~/.smarthome/.env

&lt;span class="gh"&gt;# Seed memory files (optional but recommended)&lt;/span&gt;
mkdir -p ~/.smarthome/memory
echo "# Memory" &amp;gt; ~/.smarthome/memory/MEMORY.md
echo "# User Preferences" &amp;gt; ~/.smarthome/memory/USER.md

&lt;span class="gh"&gt;# Run with mock bulb (no hardware needed)&lt;/span&gt;
uv run python -m smarthome.agent --mock

&lt;span class="gh"&gt;# Run with real bulb — add `TAPO_USERNAME`, `TAPO_PASSWORD`, `TAPO_IP_ADDRESS` to `~/.smarthome/.env`, then:&lt;/span&gt;
uv run python -m smarthome.agent

&lt;span class="gh"&gt;# Run as a Slack bot — add `SLACK_BOT_TOKEN`, `SLACK_APP_TOKEN`, `SLACK_SIGNING_SECRET` to `~/.smarthome/.env`, then:&lt;/span&gt;
uv run python -m smarthome.agent --slack --mock   # mock bulb
uv run python -m smarthome.agent --slack          # real bulb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thanks for reading! 🙏 This was an &lt;strong&gt;🕺ELEGANTO🕺&lt;/strong&gt; project to build and to write about. If you've built something similar, or have thoughts on the architecture, I'd love to hear from you — drop a comment below or open an issue on the repo. Always happy to exchange ideas.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>openclaw</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>From Local to Cloud: What I Learned Building a Remote MCP Server on AWS for Smart Home Control</title>
      <dc:creator>Jui-Hung Yuan</dc:creator>
      <pubDate>Fri, 20 Feb 2026 14:21:59 +0000</pubDate>
      <link>https://dev.to/juihungyuan/from-local-to-cloud-what-i-learned-building-a-remote-mcp-server-on-aws-for-smart-home-control-3a24</link>
      <guid>https://dev.to/juihungyuan/from-local-to-cloud-what-i-learned-building-a-remote-mcp-server-on-aws-for-smart-home-control-3a24</guid>
      <description>&lt;p&gt;I wanted to tell Claude to turn off my bedroom light. Not just from my laptop at home — but from &lt;strong&gt;anywhere&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxophwtvufsya3j9mgb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxophwtvufsya3j9mgb5.png" alt="illustration of smart home control" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What started as "&lt;em&gt;let's try MCP&lt;/em&gt;" became "&lt;em&gt;why does OAuth keep failing?&lt;/em&gt;" and "&lt;em&gt;how do cloud services reach devices behind my router?&lt;/em&gt;".&lt;/p&gt;

&lt;p&gt;This post walks through the architecture decisions and the stuff that tripped me up. Not every choice was obvious, and some things only made sense after I'd already built them wrong once.&lt;/p&gt;

&lt;p&gt;The full code is &lt;a href="https://github.com/jui-hung-yuan/smarthome-mcp-lab" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;I recently attended a session organized by the Berlin AWS User Group about Amazon Bedrock AgentCore (&lt;em&gt;shoutout to the organizers — the session was really helpful!!&lt;/em&gt;). I'd been using MCP for a while but never built one myself. When I looked around, most posts covered the concept and building local MCP servers — not much about &lt;strong&gt;deploying remote MCP servers to the cloud&lt;/strong&gt; or what actually trips you up when you build one. So I picked a concrete use case: control my TAPO smart light bulb via Claude, and build the whole thing on AWS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The goal was simple and deliberately small. &lt;strong&gt;One bulb. A handful of tools. But with enough real infrastructure to actually learn from.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Phase 1: Local Prototype with FastMCP
&lt;/h2&gt;

&lt;p&gt;I started with &lt;a href="https://gofastmcp.com" rel="noopener noreferrer"&gt;FastMCP&lt;/a&gt; and Claude Desktop. Getting a working local MCP server took one afternoon.&lt;/p&gt;

&lt;p&gt;The developer experience is genuinely impressive. You define a tool like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@app.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;turn_on&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Turn on the TAPO smart light bulb.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;bulb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;get_bulb&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bulb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;turn_on&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Light turned on&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. FastMCP reads your function and auto-generates everything Claude needs — the tool name, description, and input schema. You write the function. FastMCP handles the rest.&lt;/p&gt;

&lt;p&gt;Claude Desktop runs the FastMCP server as a local subprocess. The full path is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxnyqaq5uvzx91tvy7nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxnyqaq5uvzx91tvy7nt.png" alt="local MCP" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It worked. I could chat with Claude and control my light. But then I left home.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Turning Point
&lt;/h2&gt;

&lt;p&gt;Claude Desktop's MCP only works with local subprocesses. The moment you close your laptop or step outside, it's gone. I wanted the Claude web app to work too — partly because it's more convenient, partly because building the remote version is where the real learning happens.&lt;/p&gt;

&lt;p&gt;That's a fundamentally different problem. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude web app needs a &lt;strong&gt;publicly accessible&lt;/strong&gt; MCP server with &lt;strong&gt;proper authentication&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;MCP server somehow needs to &lt;strong&gt;reach a device on your local network&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where AWS comes in.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bzm0ktawd121c5l2rdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bzm0ktawd121c5l2rdn.png" alt="Remote MCP" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each component has a specific job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cognito&lt;/strong&gt;: Handles user login.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AgentCore Gateway&lt;/strong&gt;: The MCP server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda&lt;/strong&gt;: Serverless handler (no always-on server to manage).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IoT Core&lt;/strong&gt;: Cloud-to-device message broker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local Bridge&lt;/strong&gt;: Runs at home and controls the bulb.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Key Lesson 1: Why AWS IoT Core (and not just Lambda → HTTP)?
&lt;/h2&gt;

&lt;p&gt;AWS IoT Core is a managed cloud service that acts as a message broker between cloud services and physical devices. It uses &lt;strong&gt;MQTT&lt;/strong&gt; — a lightweight protocol designed for devices with unreliable connections — to route messages through a publish/subscribe model. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Think of it like a radio station: Lambda broadcasts on a channel, and any device tuned to that channel receives the message.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The obvious question: &lt;em&gt;why not have Lambda call the local bridge directly over HTTP?&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. NAT Traversal — Lambda can't reach your home network
&lt;/h4&gt;

&lt;p&gt;Lambda lives in AWS. Your home bridge lives behind your router. Your router blocks all inbound connections — Lambda has no address to call. To make direct HTTP work, you'd need a static IP and port forwarding (security risk), a VPN tunnel (operational overhead), or a reverse tunnel like ngrok (fragile, costs money).&lt;/p&gt;

&lt;p&gt;IoT Core flips the direction. Your local bridge reaches out to AWS and holds a persistent MQTT connection open. Lambda publishes to a topic, IoT Core delivers it over that already-open connection. Your home network never needs a public address.&lt;/p&gt;
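
&lt;p&gt;On the Lambda side this ends up as a one-line publish. A sketch, assuming a boto3 &lt;code&gt;iot-data&lt;/code&gt; client is passed in (the topic scheme and payload shape are my conventions, not anything IoT Core prescribes):&lt;/p&gt;

```python
import json

def build_command(action: str, params: dict) -> str:
    """Serialize the command payload (shape is my convention)."""
    return json.dumps({"action": action, "params": params})

def send_command(iot_client, device_id: str, action: str, params: dict) -> None:
    """Publish onto the device's command topic; the bridge at home is
    subscribed over its long-lived MQTT connection and receives it there."""
    iot_client.publish(
        topic=f"smarthome/{device_id}/commands",  # hypothetical topic scheme
        qos=1,                                    # at-least-once delivery
        payload=build_command(action, params),
    )
```

&lt;p&gt;Note there's no address of my home network anywhere in this code — the bridge's open connection is the only route in.&lt;/p&gt;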

&lt;h4&gt;
  
  
  2. Cost and efficiency — MQTT is event-driven, HTTP requires polling
&lt;/h4&gt;

&lt;p&gt;With HTTP, if Claude wants to know the current brightness, Lambda would need to make a request every time — or poll regularly to keep state fresh. That's an HTTP call (and cost) for every status check.&lt;/p&gt;

&lt;p&gt;With MQTT, the bridge reports state changes to IoT Core's Device Shadow automatically. Claude asks for brightness? The Shadow answers instantly from cache. No new request needed. The bridge only sends updates when something actually changes.&lt;/p&gt;
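
&lt;p&gt;That's why &lt;code&gt;get_status&lt;/code&gt; is cheap. A sketch of the read path (the thing name and reported fields are placeholders from my setup; the client is a boto3 &lt;code&gt;iot-data&lt;/code&gt; client):&lt;/p&gt;

```python
import json

def get_status(iot_client, thing_name: str = "tapo-bulb") -> dict:
    """Read the Device Shadow document IoT Core already holds.
    This never touches the physical bulb."""
    resp = iot_client.get_thing_shadow(thingName=thing_name)
    doc = json.loads(resp["payload"].read())
    # 'reported' is whatever the bridge last published, served from cache.
    return doc["state"]["reported"]
```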

&lt;h4&gt;
  
  
  3. Resilience — MQTT handles disconnections automatically
&lt;/h4&gt;

&lt;p&gt;HTTP assumes both sides are reliably reachable. If your bridge restarts or your internet hiccups during a Lambda call, the request just fails.&lt;/p&gt;

&lt;p&gt;MQTT is designed for unreliable connections. If your bridge goes offline, IoT Core queues messages. When it reconnects, pending commands are delivered automatically. No retry logic to write, no state to track manually.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Lesson 2: FastMCP vs. AgentCore Gateway
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock AgentCore Gateway is a fully managed service that turns your backend functions into an MCP-compliant server that AI clients like Claude can talk to. It handles OAuth authentication, protocol translation, and tool discovery — so you only write business logic. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Think of it as the bouncer and translator standing between Claude and your Lambda: it checks credentials, speaks MCP fluently, and routes the right instructions inward.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's how it compares to FastMCP as an MCP hosting approach:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Winner&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tool Schema&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tie🤝&lt;/td&gt;
&lt;td&gt;FastMCP auto-generates from decorators (better DX). AgentCore requires explicit JSON (&lt;em&gt;~70 lines for 4 tools&lt;/em&gt;), but makes the contract reviewable.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hosting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AgentCore🏆&lt;/td&gt;
&lt;td&gt;FastMCP needs a persistent runtime (container, EC2, Fargate) — costs money even when idle. AgentCore is serverless — Lambda only runs when invoked.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Auth Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tie🤝&lt;/td&gt;
&lt;td&gt;Both handle OAuth well now (FastMCP 2.11+ added JWT, OAuth proxy, full OAuth server).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
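
&lt;p&gt;For a sense of that schema verbosity, here's roughly what one tool's entry looks like, written as a Python dict (an illustrative trim in the MCP tool-definition shape — not AgentCore's exact config format, and the names are mine):&lt;/p&gt;

```python
# One tool's explicit schema, MCP-style. Multiply by every tool and you
# get the ~70 lines mentioned above; the upside is a reviewable contract.
SET_BRIGHTNESS_TOOL = {
    "name": "set_brightness",
    "description": "Set the TAPO bulb brightness.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "level": {
                "type": "integer",
                "description": "Brightness percentage, 1-100.",
            }
        },
        "required": ["level"],
    },
}
```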




&lt;h2&gt;
  
  
  Key Lesson 3: OAuth — The Part That Actually Took Time
&lt;/h2&gt;

&lt;p&gt;If I'm honest, the infrastructure was the easy part. Authentication is where I spent most of my debugging time.&lt;/p&gt;

&lt;p&gt;The problem started with how I set up my Cognito configuration. It defaulted to the &lt;code&gt;client_credentials&lt;/code&gt; flow — machine-to-machine (M2M) auth where a service exchanges a client ID and secret directly for a token. No login page, no user interaction.&lt;/p&gt;

&lt;p&gt;That works fine for service-to-service communication. But the Claude web app is a browser-based client. It needs to redirect the user to a login page, have them authenticate, and receive an authorization code back — the &lt;code&gt;authorization_code&lt;/code&gt; flow. These are fundamentally different OAuth patterns. It took me a while to figure out I was using the wrong flow entirely.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Again think of it like a nightclub with a strict bouncer. The bouncer (&lt;strong&gt;AgentCore Gateway&lt;/strong&gt;) doesn't know you personally, but trusts the ID checker down the street (&lt;strong&gt;Cognito&lt;/strong&gt;). You walk down to &lt;strong&gt;Cognito&lt;/strong&gt;, prove who you are, and &lt;strong&gt;Cognito&lt;/strong&gt; gives you a wristband. You bring that wristband back to the bouncer at &lt;code&gt;/auth_callback&lt;/code&gt;, and now you're in.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That callback address is the key. The &lt;code&gt;authorization_code&lt;/code&gt; flow exists precisely because Claude (the browser client) needs a human to authenticate interactively. The code is the club's way of receiving confirmation from Cognito without the user handing over their password directly.&lt;/p&gt;

&lt;p&gt;Once I understood that distinction, I knew what to fix: create a separate Cognito app client configured for &lt;code&gt;authorization_code&lt;/code&gt; flow with the correct callback URL.&lt;/p&gt;
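
&lt;p&gt;In boto3 terms the fix looks roughly like this (the pool ID, scope names, and callback URL are placeholders from my setup, so treat it as a sketch rather than a recipe):&lt;/p&gt;

```python
def create_browser_client(cognito, user_pool_id: str) -> str:
    """Create a second Cognito app client for the browser-based
    authorization_code flow, with scopes pinned to what Claude requests."""
    resp = cognito.create_user_pool_client(
        UserPoolId=user_pool_id,
        ClientName="claude-web-client",
        GenerateSecret=True,
        AllowedOAuthFlowsUserPoolClient=True,
        AllowedOAuthFlows=["code"],  # authorization_code, NOT client_credentials
        AllowedOAuthScopes=[
            "openid",
            "smarthome-gateway/read",
            "smarthome-gateway/write",
        ],
        CallbackURLs=["https://claude.ai/api/mcp/auth_callback"],  # placeholder
        SupportedIdentityProviders=["COGNITO"],
    )
    return resp["UserPoolClient"]["ClientId"]
```

&lt;p&gt;The &lt;code&gt;cognito&lt;/code&gt; argument is a boto3 &lt;code&gt;cognito-idp&lt;/code&gt; client; keeping the M2M client separate means neither flow's config can break the other.&lt;/p&gt;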

&lt;p&gt;There was a second, subtler issue. When the AgentCore Gateway's resource metadata (&lt;code&gt;/.well-known/oauth-protected-resource&lt;/code&gt;) doesn't specify a scope, Claude falls back to Cognito's OIDC discovery endpoint (&lt;code&gt;/.well-known/openid-configuration&lt;/code&gt;), which advertises standard scopes: &lt;code&gt;openid&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;phone&lt;/code&gt;, &lt;code&gt;profile&lt;/code&gt;. But my Cognito app client only allowed &lt;code&gt;openid&lt;/code&gt;, &lt;code&gt;smarthome-gateway/read&lt;/code&gt;, and &lt;code&gt;smarthome-gateway/write&lt;/code&gt;. Claude requesting &lt;code&gt;email&lt;/code&gt; and &lt;code&gt;phone&lt;/code&gt; caused an &lt;code&gt;invalid_scope&lt;/code&gt; error. The fix: explicitly configure the allowed scopes on the client to match exactly what Claude will request.&lt;/p&gt;

&lt;p&gt;Neither of these issues had anything to do with MCP itself. They were pure OAuth configuration problems. But you can only diagnose them if you understand the handshake well enough to know which step is failing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real-World Performance
&lt;/h2&gt;

&lt;p&gt;Once everything was wired up, I measured actual round-trip times from Claude's perspective:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Round-trip&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;get_status&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1,131 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;turn_on&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;3,367 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;get_status&lt;/code&gt; only hits the IoT Device Shadow — no trip to the physical bulb. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;turn_on&lt;/code&gt; goes the full path: Lambda → IoT Core → bridge → bulb → confirmation back. Three seconds is noticeable but acceptable for flipping a light from chat. For anything latency-sensitive, you'd want to think harder about this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The monthly cost for 1,000 tool calls across all services: &lt;strong&gt;$0.07&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;A few things I want to clean up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; for infrastructure provisioning. Right now I have five boto3 scripts that need to be run in a specific order. It works, but it's tedious. Terraform would make this reproducible and shareable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dockerize the local bridge&lt;/strong&gt; to run on a Raspberry Pi or similar edge device, so it doesn't depend on a laptop being on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More device types.&lt;/strong&gt; The architecture already supports it — &lt;code&gt;BaseDevice&lt;/code&gt; is an abstract interface, and the &lt;code&gt;DeviceRegistry&lt;/code&gt; manages multiple devices. Adding a smart plug or thermostat is mostly a new implementation, not a new architecture.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;I started this project to understand MCP better. I ended up learning more about AWS IoT, OAuth flows, and serverless architecture than I expected. That's usually the sign of a good learning project — the stated goal was an excuse to dig into something real.&lt;/p&gt;

&lt;p&gt;A few things I'd tell someone starting this from scratch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Get the local FastMCP version working first&lt;/strong&gt;. It takes an afternoon and gives you immediate feedback. Only then add the AWS layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The OAuth debugging will take longer than the infrastructure&lt;/strong&gt;. Learn the &lt;code&gt;authorization_code&lt;/code&gt; vs &lt;code&gt;client_credentials&lt;/code&gt; distinction before you start configuring Cognito — it'll save you hours.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AgentCore Gateway is genuinely easy to set up compared to what I expected&lt;/strong&gt;. The tool schema verbosity is real, but it's a one-time cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IoT Core is the right tool for this specific problem&lt;/strong&gt;. The NAT traversal alone justifies it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;If you're building something similar or have questions about any of the architectural decisions, I'd love to hear from you in the comments. And if you spot something I got wrong — even better.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/jui-hung-yuan/smarthome-mcp-lab" rel="noopener noreferrer"&gt;Full code on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mcp</category>
      <category>sideprojects</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
