
Jui-Hung Yuan

My Apartment Now Dims the Lights on Guests (On Purpose🙈)

In my previous blog, I built a remote MCP server connected to Claude so I could control my smart light bulb just by sending a chat message. It worked! But looking back, the architecture was glorious overkill.

All of this, for two people who just want to dim the lights in the evening 🫠.

Around the same time, I had been hearing a lot about the trend of “Skills over MCP” and the hype around OpenClaw. OpenClaw is popular because it’s a local-first AI assistant — the whole orchestration runs on your own devices, no cloud infrastructure needed, just enough flexibility to build something genuinely useful. That sounded like exactly the right learning opportunity.

So I decided to build an OpenClaw-inspired personal assistant for smarthome control — simpler, local, and tailored to just what I needed.

(Illustration: the evening light schedule)

Spoiler⚠️: it worked out well. My partner and I can now control the light bulb from Slack, and we can schedule light adjustments via chat. I'm obsessed with a healthy lifestyle, so naturally I scheduled the light to slowly dim itself over the evening. It's very peaceful. The only unintended side effect is that when we have guests over, the room gets darker and darker as the night goes on, and somehow they always end up leaving earlier than planned. I feel a little bad about it. But also — they're getting more sleep now, so really, I'm doing them a favor.

OpenClaw basics

To understand how OpenClaw works, I reimplemented its four core components with the help of Claude — purely for learning. No shortcuts, no copy-paste. Just building each piece by hand until it clicked.

Beauty is in the eye of the beholder — or as the teacher in Spy x Family would say, 🕺ELEGANTO🕺. In the following sections, I'll share what I learned from each component, the key insights that surprised me, and what I find elegant about the design.


1. Memory system

OpenClaw keeps the assistant’s memory as simple Markdown files on disk. The assistant’s personality and tone live in SOUL.md, user preferences in USER.md, and notable conversations get appended to a daily log. During a conversation, the agent can selectively write to these files, and the system prompt guides which information goes where. It’s not over-engineered — just enough structure to make memory feel real and 🕺ELEGANTO🕺.

🤓 In my implementation, this is exposed via two tools: memory_search (hybrid keyword + vector search over past logs) and memory_write (append or overwrite a memory file).
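As a rough sketch of what those two tools might boil down to (the names mirror mine, but the keyword-only search here is a simplification — the real version also mixes in vector similarity):

```python
from pathlib import Path

# Hypothetical memory root; holds SOUL.md, USER.md, and daily log files.
MEMORY_DIR = Path.home() / ".smarthome" / "memory"

def memory_write(filename: str, content: str, append: bool = True) -> str:
    """Append to (or overwrite) a memory file such as USER.md or a daily log."""
    MEMORY_DIR.mkdir(parents=True, exist_ok=True)
    mode = "a" if append else "w"
    with (MEMORY_DIR / filename).open(mode, encoding="utf-8") as f:
        f.write(content.rstrip() + "\n")
    return f"wrote {len(content)} chars to {filename}"

def memory_search(query: str) -> list[str]:
    """Keyword half of the hybrid search: lines containing every query term."""
    terms = query.lower().split()
    hits = []
    for path in sorted(MEMORY_DIR.glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if all(t in line.lower() for t in terms):
                hits.append(f"{path.name}: {line.strip()}")
    return hits
```

Because the files are plain Markdown, you can also open them in any editor and fix the agent's memory by hand — that's half the charm.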

2. Skill registry

Skills give the agent new capabilities in a Markdown-based format — like a cheatsheet it can pull up on demand. What I find 🕺ELEGANTO🕺 is the progressive disclosure mechanism. At startup, the agent only sees a one-line summary of each available skill. When it needs to use one, it calls describe_skill to load the full cheatsheet — actions, parameters, examples — and that doc gets injected into the system prompt for the rest of the session. This keeps the context lean until it’s actually needed, and the same pattern naturally extends to revealing new tools or even new skills over time.

🤓 In my implementation, each skill is a folder under skills/ with a SKILL.md file (frontmatter = summary, body = full docs) and a Python script that exposes an execute(action, params) function. The agent calls describe_skill first, then execute_skill. My light bulb skill lives in src/smarthome/agent/skills/light-control.

3. Scheduled jobs (CRON and Heartbeat)

Beyond just responding to messages, OpenClaw can also proactively run tasks on a schedule. CRON-style jobs fire at specific times, while the Heartbeat is better suited for background monitoring — it batches tasks together so they can share context and inform each other’s results.

🤓 My implementation is closer to CRON: the scheduler wakes up on a regular interval, checks if any task’s scheduled time falls within the elapsed window, and fires it directly — no LLM call involved. That works 🕺ELEGANTO🕺 enough for simple device commands like dimming the light at 9pm. If I wanted smarter tasks (e.g. checking my calendar and sending a notification), it would need to trigger an actual agent turn. Tasks are managed via the schedule_task tool (add/remove/list) and persist in SCHEDULE.md across restarts.
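The "elapsed window" check is the whole trick: each tick fires any task whose scheduled time falls between the previous wake-up and now, so nothing is missed even if a tick comes late. A minimal sketch (task shape and names are my invention, and it ignores windows that cross midnight):

```python
from datetime import datetime, time, timedelta

# Hypothetical task shape: a daily fire time plus a device command string.
TASKS = [
    {"at": time(21, 0), "command": "light.dim 30"},
]

def due_tasks(last_tick: datetime, now: datetime) -> list[str]:
    """Return commands whose scheduled time fell inside (last_tick, now]."""
    fired = []
    for task in TASKS:
        scheduled = datetime.combine(now.date(), task["at"])
        if last_tick < scheduled <= now:
            fired.append(task["command"])
    return fired
```

A loop around this just sleeps for the tick interval, calls `due_tasks`, executes the results, and remembers `now` as the next `last_tick`.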

4. Channel Adapter

OpenClaw supports many messaging apps — WhatsApp, Telegram, Slack, Discord, and more. The channel adapter is the translator between a specific app and the agent: it normalizes incoming messages into a unified format, and formats the agent’s response back into whatever the app expects. It also acts as the authentication boundary — rather than managing per-user credentials like MCP does, OpenClaw’s approach is perimeter-based: “as long as you can reach me, I trust you.”

🤓 I only implemented Slack, which supports both direct messages and @mentions in channels. One part I find particularly 🕺ELEGANTO🕺 is the color picker: when the user wants to change the bulb color, a show_palette action returns a Slack dropdown block. The agent loop intercepts this block and passes it directly to the Slack adapter, which renders it as an interactive dropdown — no extra roundtrip needed. The user picks a color, and it triggers set_color directly.
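To make the dropdown trick concrete, here's roughly what a `show_palette` action might return — a Slack Block Kit section with a `static_select` accessory (the palette colors and `action_id` wiring are my assumptions, not lifted from the repo):

```python
# Hypothetical palette; values are the hex colors set_color would receive.
PALETTE = {"Warm White": "#FFD8A8", "Soft Red": "#FF6B6B", "Ocean Blue": "#4DABF7"}

def show_palette() -> dict:
    """Build a Slack Block Kit dropdown the adapter can render as-is."""
    return {
        "type": "section",
        "text": {"type": "mrkdwn", "text": "Pick a bulb color:"},
        "accessory": {
            "type": "static_select",
            "action_id": "set_color",  # the user's pick triggers set_color directly
            "options": [
                {"text": {"type": "plain_text", "text": name}, "value": hex_code}
                for name, hex_code in PALETTE.items()
            ],
        },
    }
```

Because the agent loop passes this dict straight through to the adapter, the LLM never has to describe the UI in prose — the block is the UI.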

Key learning

After building this, my biggest takeaway is that all these components — memory, skills, scheduling — work as well as they do because of how good modern LLMs are at tool calling. Once you wrap the right functionality as tools, the agent just... uses them. Memory feels persistent, skills feel modular, scheduling feels effortless. The skill system especially: you can add a new folder with a Markdown file and a Python script, and the agent picks it up automatically — that expandability is truly 🕺ELEGANTOOOO🕺.

One more thing I want to share: I used Claude Code throughout this build, and it went better when I slowed down at the start. My first instinct was to clone the OpenClaw repo, dump it into context, and ask Claude to plan everything at once. That didn’t go well — I didn’t know enough about OpenClaw myself to tell if the plan made sense, and Claude can’t read your mind about what trade-offs matter to you. What actually helped was spending time upfront to clarify scope and intent together, before handing off any implementation. The clearer the brief, the better the output. That’s not specific to Claude — it’s just good collaboration.

Quick start for everyone

# Clone the repo
git clone https://github.com/jui-hung-yuan/smarthome-mcp-lab
cd smarthome-mcp-lab

# Add your Anthropic API key
mkdir -p ~/.smarthome
echo 'ANTHROPIC_API_KEY=sk-...' >> ~/.smarthome/.env

# Seed memory files (optional but recommended)
mkdir -p ~/.smarthome/memory
echo "# Memory" > ~/.smarthome/memory/MEMORY.md
echo "# User Preferences" > ~/.smarthome/memory/USER.md

# Run with mock bulb (no hardware needed)
uv run python -m smarthome.agent --mock

# Run with real bulb — add `TAPO_USERNAME`, `TAPO_PASSWORD`, `TAPO_IP_ADDRESS` to `~/.smarthome/.env`, then:
uv run python -m smarthome.agent

# Run as a Slack bot — add `SLACK_BOT_TOKEN`, `SLACK_APP_TOKEN`, `SLACK_SIGNING_SECRET` to `~/.smarthome/.env`, then:
uv run python -m smarthome.agent --slack --mock   # mock bulb
uv run python -m smarthome.agent --slack          # real bulb

Thanks for reading! 🙏 This was an 🕺ELEGANTO🕺 project to build and to write about. If you've built something similar, or have thoughts on the architecture, I'd love to hear from you — drop a comment below or open an issue on the repo. Always happy to exchange ideas.
