<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Petrus Pennanen</title>
    <description>The latest articles on DEV Community by Petrus Pennanen (@petruspennanen).</description>
    <link>https://dev.to/petruspennanen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3786810%2F5bf095fc-effa-48ee-a2d0-79410837b9d4.jpg</url>
      <title>DEV Community: Petrus Pennanen</title>
      <link>https://dev.to/petruspennanen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/petruspennanen"/>
    <language>en</language>
    <item>
      <title>I Put an AI Agent on a Smartwatch</title>
      <dc:creator>Petrus Pennanen</dc:creator>
      <pubDate>Mon, 02 Mar 2026 17:01:36 +0000</pubDate>
      <link>https://dev.to/petruspennanen/i-put-an-ai-agent-on-a-smartwatch-302p</link>
      <guid>https://dev.to/petruspennanen/i-put-an-ai-agent-on-a-smartwatch-302p</guid>
      <description>&lt;p&gt;Last week I wondered: can you run a real AI agent on a smartwatch? Not a remote control for your phone. Not a web view. An actual agent runtime, processing locally, talking to you through the speaker.&lt;/p&gt;

&lt;p&gt;Turns out you can. I built ClawWatch and it works.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;Every "AI on a watch" demo I have seen is just a thin client. Your voice goes to the cloud for transcription, the cloud calls the model, the cloud sends back audio. The watch is a microphone with a screen.&lt;/p&gt;

&lt;p&gt;I wanted the opposite: run as much as possible on the watch itself.&lt;/p&gt;

&lt;h2&gt;What runs on the watch&lt;/h2&gt;

&lt;p&gt;The install is 2.8 MB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;NullClaw&lt;/strong&gt; handles agent logic. It is a Zig binary, statically compiled for ARM. Uses about 1 MB of RAM. Starts in under 8 ms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vosk&lt;/strong&gt; does speech-to-text entirely on-device. No Google, no cloud STT.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Android TextToSpeech&lt;/strong&gt; speaks the response. Pre-installed, zero added cost.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SQLite&lt;/strong&gt; stores conversation memories locally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only thing that leaves the watch is a single HTTPS call to the LLM API. I use Claude, but NullClaw supports 22+ providers, so you can point it at whatever you want.&lt;/p&gt;

&lt;h2&gt;Why NullClaw instead of a normal runtime&lt;/h2&gt;

&lt;p&gt;A Galaxy Watch has 1.5 to 2 GB of RAM. Most agent frameworks would eat all of it. NullClaw is written in Zig and compiles to a static binary with no dependencies. It does not need Node.js, Python, or a JVM. It just runs.&lt;/p&gt;

&lt;p&gt;I cross-compiled it with one command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;zig build -Dtarget=arm-linux-musleabihf -Doptimize=ReleaseSmall&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Drop the binary into the Android app, call it via ProcessBuilder. Done.&lt;/p&gt;
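&lt;p&gt;On the watch the app shells out to the binary with ProcessBuilder; the same spawn-write-read plumbing can be sketched in Python for illustration (the line-based stdin/stdout protocol here is an assumption, not NullClaw's documented interface):&lt;/p&gt;

```python
import subprocess

def ask_agent(binary_path, prompt):
    """Spawn the agent binary, feed the prompt on stdin, read the reply from stdout."""
    proc = subprocess.run([binary_path], input=prompt, capture_output=True, text=True)
    return proc.stdout.strip()

# Using `cat` as a stand-in for the agent binary: it echoes the prompt back,
# which is enough to show the plumbing without a real ARM build.
print(ask_agent("cat", "hello"))  # hello
```

&lt;p&gt;The Android ProcessBuilder version is the same shape: start the process, write the prompt, block on stdout.&lt;/p&gt;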

&lt;h2&gt;How it works&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;[tap mic] -&amp;gt; Vosk STT (on-device) -&amp;gt; NullClaw agent -&amp;gt; LLM API -&amp;gt; Android TTS -&amp;gt; [watch speaks]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Tap the mic button. Speak your question. The watch transcribes locally, sends the text through NullClaw to the LLM, and speaks the answer back. Tap again to interrupt at any point.&lt;/p&gt;

&lt;h2&gt;What I learned&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ARM ABI matters.&lt;/strong&gt; I built for aarch64 first, but my watch needed 32-bit ARM. Check your target with &lt;code&gt;adb shell getprop ro.product.cpu.abi&lt;/code&gt; before building.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voice agents need different prompts.&lt;/strong&gt; The system prompt says: no markdown, no lists, 1-3 sentences max. The user hears the response spoken aloud. Nobody wants to listen to a bullet list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTS duration is hard to predict.&lt;/strong&gt; I started with a heuristic (character count times 55 ms) and switched to Android's &lt;code&gt;UtteranceProgressListener&lt;/code&gt; for the actual finish event.&lt;/p&gt;
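&lt;p&gt;The heuristic boils down to one multiply; a sketch in Python (function name is mine, and the 55 ms/character constant is only a rough average, which is why the real finish event from the listener callback wins):&lt;/p&gt;

```python
def estimate_tts_ms(text, ms_per_char=55):
    """Rough guess at how long TTS will take to speak `text`:
    assume ~55 ms per character, ignoring pauses and speech rate."""
    return len(text) * ms_per_char

print(estimate_tts_ms("The weather is nice."))  # 1100
```

&lt;p&gt;A 20-character reply is estimated at about 1.1 seconds, but actual playback varies with voice, rate, and punctuation, so an event callback beats any constant.&lt;/p&gt;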

&lt;p&gt;&lt;strong&gt;On-device STT is good enough.&lt;/strong&gt; Vosk's small English model is 68 MB and handles conversational speech well. Not perfect, but the LLM is forgiving of transcription errors.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;The code is open source (AGPL-3.0): &lt;a href="https://github.com/ThinkOffApp/ClawWatch" rel="noopener noreferrer"&gt;https://github.com/ThinkOffApp/ClawWatch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need a Galaxy Watch 4 or newer, a Mac or Linux machine for building, and an API key for whatever LLM provider you choose. I would love to hear your feedback and discuss further development.&lt;/p&gt;

&lt;p&gt;I think we are going to see more agents running on edge devices. The runtimes are getting smaller and the hardware is getting better. A 2.8 MB agent on your wrist is just the start.&lt;/p&gt;

&lt;p&gt;Is the next step SmartRings, an agent on every finger? :D&lt;/p&gt;

</description>
      <category>ai</category>
      <category>wearables</category>
      <category>zig</category>
      <category>android</category>
    </item>
    <item>
      <title>How I Got 9 AI Agents to Work Together Across 3 Different IDEs</title>
      <dc:creator>Petrus Pennanen</dc:creator>
      <pubDate>Mon, 23 Feb 2026 16:39:36 +0000</pubDate>
      <link>https://dev.to/petruspennanen/how-i-got-9-ai-agents-to-work-together-across-3-different-ides-1kbm</link>
      <guid>https://dev.to/petruspennanen/how-i-got-9-ai-agents-to-work-together-across-3-different-ides-1kbm</guid>
      <description>&lt;p&gt;I run 9 AI agents on a Mac mini - Claude Code, Gemini, GPT, Kimi, and others - each in their own IDE or terminal session. Getting them to actually coordinate was the hard part.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;Agent Teams in Claude Code works great when all your agents are Claude. But my setup is mixed: some agents run through OpenClaw, some through Cursor, one through Codex CLI. The built-in team messaging doesn't work across providers, and the tmux-based coordination breaks down once you go headless or run in CI.&lt;/p&gt;

&lt;p&gt;I kept hitting the same issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Messages were silently dropped when agent names didn't match exactly&lt;/li&gt;
&lt;li&gt;Plan files got overwritten when multiple sessions shared a directory&lt;/li&gt;
&lt;li&gt;There was no way to join an existing session into a team without restarting it&lt;/li&gt;
&lt;li&gt;Windows and headless environments couldn't use the tmux coordination layer at all&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ThinkOffApp/ide-agent-kit" rel="noopener noreferrer"&gt;IDE Agent Kit&lt;/a&gt; is a lightweight coordination layer that works across different IDE agents. The core idea is simple: agents communicate through a filesystem-based message bus instead of relying on any single IDE's built-in team features.&lt;/p&gt;

&lt;p&gt;Each agent gets an inbox directory. A poll script watches for new JSON task files and routes them to the right agent. Agents write results back. No sockets, no tmux dependency, works on any OS.&lt;/p&gt;
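&lt;p&gt;The inbox/poll pattern fits in a few lines; a minimal Python sketch (the file layout and the rename-after-read convention here are illustrative, not the kit's actual schema):&lt;/p&gt;

```python
import json
from pathlib import Path

def poll_inbox(inbox_dir):
    """Read every pending JSON task in an agent's inbox, then rename each
    file so the next poll doesn't pick it up again."""
    tasks = []
    for task_file in sorted(Path(inbox_dir).glob("*.json")):
        tasks.append(json.loads(task_file.read_text()))
        task_file.rename(task_file.with_suffix(".done"))
    return tasks
```

&lt;p&gt;Each agent runs something like this in a loop every few seconds; the relay only ever has to drop a file into the right directory.&lt;/p&gt;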

&lt;p&gt;The setup has three pieces:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Webhook relay&lt;/strong&gt; - receives tasks from external services and drops them into agent inboxes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tmux session runner&lt;/strong&gt; - manages agent lifecycles (optional; you can use whatever process manager you want)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Append-only receipt logs&lt;/strong&gt; - every action gets logged for audit trails&lt;/li&gt;
&lt;/ol&gt;
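&lt;p&gt;The receipt log in the last piece is the simplest of the three; a Python sketch of the append-only discipline (field names are illustrative, not the kit's actual record format):&lt;/p&gt;

```python
import json
import time

def write_receipt(log_path, agent, task_id, result):
    """Append one JSON line per action. Nothing is ever rewritten,
    so the log doubles as an audit trail."""
    entry = {"ts": time.time(), "agent": agent, "task": task_id, "result": result}
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
```

&lt;p&gt;When an agent misbehaves, grepping this file for the task id shows exactly what it received and what it produced.&lt;/p&gt;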

&lt;h2&gt;How it works in practice&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; ide-agent-kit

&lt;span class="c"&gt;# Start the relay&lt;/span&gt;
ide-agent-kit serve

&lt;span class="c"&gt;# An external service posts a task&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/tasks &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"agent": "backend", "action": "review", "files": ["src/api.js"]}'&lt;/span&gt;

&lt;span class="c"&gt;# The backend agent picks it up from its inbox&lt;/span&gt;
&lt;span class="c"&gt;# and writes results to the shared log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agents don't need to know about each other directly. They read from their inbox, do their work, and write receipts. The coordination layer handles routing.&lt;/p&gt;

&lt;h2&gt;What I learned running this for a month&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Mixed model fleets are worth the complexity.&lt;/strong&gt; Having Claude handle architecture decisions, Gemini do bulk code generation, and GPT handle documentation means each model plays to its strengths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filesystem messaging is surprisingly robust.&lt;/strong&gt; I expected to need a proper message queue eventually, but file watches with a 5-second poll interval have been solid across 6 agents on two machines. The simplicity makes debugging trivial - you can just &lt;code&gt;ls&lt;/code&gt; the inbox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Append-only logs are essential.&lt;/strong&gt; When an agent does something unexpected (and they will), being able to trace exactly what task it received and what it produced saves hours of debugging.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;The repo is at &lt;a href="https://github.com/ThinkOffApp/ide-agent-kit" rel="noopener noreferrer"&gt;github.com/ThinkOffApp/ide-agent-kit&lt;/a&gt;. It's AGPL-3.0, works with Node 18+, and has OpenClaw integration built in if you're using that for agent management.&lt;/p&gt;

&lt;p&gt;If you're running multi-agent setups and fighting with coordination, I'd like to hear what approaches you've tried. The GitHub issues are open for discussion.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
