<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Emre Sarbak</title>
    <description>The latest articles on DEV Community by Emre Sarbak (@emresarbak).</description>
    <link>https://dev.to/emresarbak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3525022%2Fdda5277d-03fc-4516-9197-9344d152200a.png</url>
      <title>DEV Community: Emre Sarbak</title>
      <link>https://dev.to/emresarbak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emresarbak"/>
    <language>en</language>
    <item>
      <title>Give Your AI Agent iMessage in 5 Minutes — Claude Code, Codex, Cursor</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Tue, 07 Apr 2026 00:01:30 +0000</pubDate>
      <link>https://dev.to/emresarbak/give-your-ai-agent-imessage-in-5-minutes-claude-code-codex-cursor-387l</link>
      <guid>https://dev.to/emresarbak/give-your-ai-agent-imessage-in-5-minutes-claude-code-codex-cursor-387l</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx skills add emotion-machine-org/imessage-with-no-mac
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one command gives your AI agent iMessage, RCS, and SMS. It works in Claude Code, Codex, Cursor, Gemini CLI, Windsurf, GitHub Copilot, and 20+ other AI coding agents.&lt;/p&gt;

&lt;p&gt;No Mac. No phone hardware. No webhook server.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this is
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://clawmessenger.com" rel="noopener noreferrer"&gt;Claw Messenger&lt;/a&gt; is a managed API that gives AI agents a real phone number for iMessage (blue bubbles), RCS, and SMS. You get a dedicated number, WebSocket connection for real-time messaging, and full iMessage features like tapbacks, read receipts, and media.&lt;/p&gt;

&lt;p&gt;The Agent Skill we just published teaches any compatible AI agent how to set up and use Claw Messenger. The skill follows the &lt;a href="https://agentskills.io" rel="noopener noreferrer"&gt;Agent Skills spec&lt;/a&gt;, which means it works across platforms without modification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo: zero to first message
&lt;/h2&gt;

&lt;p&gt;Here is what the flow looks like in Claude Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install the skill&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx skills add emotion-machine-org/imessage-with-no-mac
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The skill is now available in your agent's context. It loads automatically when you ask about messaging, iMessage, SMS, or phone numbers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Ask your agent to set up messaging&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Set up iMessage for my agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The skill walks your agent through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Signing up at clawmessenger.com&lt;/li&gt;
&lt;li&gt;Getting an API key (&lt;code&gt;cm_live_*&lt;/code&gt;) from the dashboard&lt;/li&gt;
&lt;li&gt;Connecting via WebSocket to &lt;code&gt;wss://claw-messenger.onrender.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configuring preferred service (iMessage, RCS, or SMS)&lt;/li&gt;
&lt;/ul&gt;
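
&lt;p&gt;Once set up, the connection itself is a small amount of code. Here is a hedged sketch in Python: the event name and payload fields are our illustrative assumptions, not the documented schema (see clawmessenger.com/llms.txt for the real one).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch only: the event name and payload fields below are assumptions,
# not the documented Claw Messenger schema.
import asyncio
import json
import os

WS_URL = "wss://claw-messenger.onrender.com"

def build_send_payload(to, body, service="imessage"):
    """Assemble an outbound message event (field names are illustrative)."""
    return {"type": "send", "to": to, "body": body, "service": service}

async def send_test_message():
    import websockets  # pip install websockets (v14+)

    api_key = os.environ.get("CLAW_API_KEY", "cm_live_example")
    headers = {"Authorization": f"Bearer {api_key}"}
    async with websockets.connect(WS_URL, additional_headers=headers) as ws:
        await ws.send(json.dumps(build_send_payload("+15551234567", "Hello from my agent")))
        print(await ws.recv())  # wait for the delivery acknowledgement

if __name__ == "__main__":
    asyncio.run(send_test_message())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In practice your agent writes this kind of glue for you; the skill's job is to give it the right endpoints and auth conventions.&lt;/p&gt;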

&lt;p&gt;&lt;strong&gt;3. Send a test message&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Send a test iMessage to +15551234567
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your agent connects, authenticates, and sends the message. The recipient sees a standard iMessage from your dedicated number.&lt;/p&gt;

&lt;p&gt;The whole process takes under 5 minutes. Most of that time is account creation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why an Agent Skill
&lt;/h2&gt;

&lt;p&gt;Agent Skills are the native way AI coding agents discover and learn new capabilities. Instead of copy-pasting API docs into your prompt, the skill loads the right instructions at the right time.&lt;/p&gt;

&lt;p&gt;The skill uses progressive disclosure: the agent sees a lightweight summary (~100 tokens) when scanning available skills, then loads full instructions only when messaging is relevant to the task. This keeps your context window clean.&lt;/p&gt;

&lt;p&gt;Since the spec is cross-platform, one skill definition works everywhere. We tested on Claude Code, Codex, Cursor, Gemini CLI, Antigravity, OpenCode, and others. The install command is the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Claw Messenger&lt;/th&gt;
&lt;th&gt;Sendblue&lt;/th&gt;
&lt;th&gt;Blooio&lt;/th&gt;
&lt;th&gt;BlueBubbles&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Price&lt;/td&gt;
&lt;td&gt;$5/mo&lt;/td&gt;
&lt;td&gt;$100/mo&lt;/td&gt;
&lt;td&gt;$39/mo&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mac required&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iMessage&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RCS&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SMS&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WebSocket API&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dedicated number&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No (uses your number)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Skill&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Media support&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Sendblue is solid but 20x the price. Blooio sits in the middle. BlueBubbles is free but requires a Mac running 24/7, which defeats the purpose if your agent runs on a VPS or in Docker.&lt;/p&gt;

&lt;p&gt;Claw Messenger is the only option with a published Agent Skill and RCS support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Messages/mo&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Base&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;$5/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Plus&lt;/td&gt;
&lt;td&gt;6,000&lt;/td&gt;
&lt;td&gt;$25/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pro&lt;/td&gt;
&lt;td&gt;15,000&lt;/td&gt;
&lt;td&gt;$50/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;All plans include iMessage, RCS, SMS, WebSocket API, and a dedicated phone number.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supported platforms
&lt;/h2&gt;

&lt;p&gt;The skill works on any platform that supports the Agent Skills spec:&lt;/p&gt;

&lt;p&gt;Claude Code, Codex, Cursor, Gemini CLI, Windsurf, GitHub Copilot, Antigravity, OpenCode, Cline, Aider, Continue, Roo Code, Trae, Kilo Code, and others. The full list of 26+ compatible agents is at &lt;a href="https://agentskills.io" rel="noopener noreferrer"&gt;agentskills.io&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install&lt;/strong&gt;: &lt;code&gt;npx skills add emotion-machine-org/imessage-with-no-mac&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/emotion-machine-org/imessage-with-no-mac" rel="noopener noreferrer"&gt;emotion-machine-org/imessage-with-no-mac&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard&lt;/strong&gt;: &lt;a href="https://clawmessenger.com/dashboard" rel="noopener noreferrer"&gt;clawmessenger.com/dashboard&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API docs&lt;/strong&gt;: &lt;a href="https://clawmessenger.com/llms.txt" rel="noopener noreferrer"&gt;clawmessenger.com/llms.txt&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Skills spec&lt;/strong&gt;: &lt;a href="https://agentskills.io" rel="noopener noreferrer"&gt;agentskills.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>imessage</category>
      <category>ai</category>
      <category>agents</category>
      <category>claude</category>
    </item>
    <item>
      <title>Three Memory Architectures for AI Companions</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Sat, 04 Apr 2026 03:19:29 +0000</pubDate>
      <link>https://dev.to/emresarbak/three-memory-architectures-for-ai-companions-1f7a</link>
      <guid>https://dev.to/emresarbak/three-memory-architectures-for-ai-companions-1f7a</guid>
      <description>&lt;p&gt;This essay documents our evolution through three memory system versions for AI companions. Each iteration addressed limitations of its predecessor, reflecting different conceptual approaches to what "memory" means for conversational AI.&lt;/p&gt;

&lt;p&gt;AI agent memory is how a conversational system decides what to remember about a user, stores it, and retrieves it in future conversations. At Emotion Machine, we've built three distinct architectures for this: pgvector with importance scoring, an LLM-managed scratchpad, and a filesystem that agents navigate with bash. Each solves different problems. None of them replaced the others entirely.&lt;/p&gt;

&lt;p&gt;The core challenge: what to remember, when to remember it, and how to surface it without making the companion feel like it's reading from a dossier.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does the AI memory landscape look like?
&lt;/h2&gt;

&lt;p&gt;The field's current state heavily influenced our design decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/2310.08560" rel="noopener noreferrer"&gt;MemGPT&lt;/a&gt; treats the LLM context window like RAM and external storage like disk, using function calls (&lt;code&gt;core_memory_append&lt;/code&gt;, &lt;code&gt;archival_memory_search&lt;/code&gt;) for paging management.&lt;/p&gt;

&lt;p&gt;From a cognitive science lens, four memory types map to things you can actually build. Working memory is the context window. Semantic memory is facts. Episodic memory is experiences. Procedural memory is instructions and skills.&lt;/p&gt;

&lt;p&gt;Context is a finite resource with diminishing marginal returns. Every token competes for attention. &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;Context rot is real.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's also a tension in the field between &lt;a href="https://ar5iv.labs.arxiv.org/html/2512.05470" rel="noopener noreferrer"&gt;"filesystems are sufficient"&lt;/a&gt; advocates and "filesystems are just bad databases" critics. We adopted a pragmatic middle ground: real files for agent navigation, database caching for fast chat access.&lt;/p&gt;

&lt;p&gt;For conversational products, V2's simpler scratchpad model covers most needs. V3's filesystem approach suits autonomous agent workflows requiring sandboxed execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does pgvector memory work? (V1)
&lt;/h2&gt;

&lt;p&gt;Our first system was classic RAG with importance weighting for selective retrieval.&lt;/p&gt;

&lt;p&gt;An LLM (gpt-4o-mini) scores each piece of information 1-10. Identity statements and deadlines score 9-10. Preferences and goals land at 7-9. Interests and tasks get 5-7. Transient details score 1-4.&lt;/p&gt;

&lt;p&gt;We added heuristic floors as safety nets. Pattern-based rules override the LLM's normalized score: "my name is" gets a minimum of 0.85, goals get 0.75, constraints get 0.65, and preferences get 0.60.&lt;/p&gt;

&lt;p&gt;Retrieval worked in two stages: pgvector's HNSW index fetches ~300 candidates, then a re-ranker scores each as similarity × importance × user_weight × recency_decay.&lt;/p&gt;
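
&lt;p&gt;The second stage reduces to a product of four terms. A sketch, assuming scores are normalized to 0-1 (the floor values are ours; the 30-day half-life is an invented default, not our tuned value):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hedged sketch of V1's re-ranker. Only the floor values and the product
# formula come from our system; the half-life is an assumed default.
import time

HEURISTIC_FLOORS = {"identity": 0.85, "goal": 0.75, "constraint": 0.65, "preference": 0.60}

def importance(llm_score_1_to_10, kind):
    """Normalize the 1-10 LLM score to 0-1, then apply the heuristic floor."""
    return max(llm_score_1_to_10 / 10.0, HEURISTIC_FLOORS.get(kind, 0.0))

def recency_decay(stored_at, now=None, half_life_days=30.0):
    """Exponential decay by age; half_life_days is an illustrative knob."""
    age_days = ((now or time.time()) - stored_at) / 86400.0
    return 0.5 ** (age_days / half_life_days)

def rerank(candidates, user_weight=1.0, top_k=5):
    """candidates: dicts with similarity, llm_score, kind, stored_at keys."""
    def score(c):
        return (c["similarity"] * importance(c["llm_score"], c["kind"])
                * user_weight * recency_decay(c["stored_at"]))
    return sorted(candidates, key=score, reverse=True)[:top_k]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;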

&lt;p&gt;We gated retrieval to avoid unnecessary latency. It only triggers on keywords like "remember" or "my name," or on a periodic cadence (~2 turns or 30-second gaps).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Conversation Turn
        │
        ▼
  ┌─────────────┐    no     ┌──────────────────┐
  │ Gate check: │───────────│  Skip retrieval,  │
  │ should we   │           │  respond directly │
  │ retrieve?   │           └──────────────────┘
  └─────┬───────┘
        │ yes
        ▼
  ┌─────────────┐           ┌──────────────────┐
  │  Embed      │──────────▶│  pgvector HNSW   │
  │  query      │           │  ~300 candidates  │
  └─────────────┘           └────────┬─────────┘
                                     │
                                     ▼
                            ┌──────────────────┐
                            │  Re-rank:        │
                            │  sim × importance │
                            │  × weight × decay│
                            └────────┬─────────┘
                                     │
                                     ▼
                            ┌──────────────────┐
                            │  Top-k memories  │
                            │  → system prompt │
                            └──────────────────┘

  ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─

  Async Ingestion (background, never blocks user)

  User/Assistant message
        │
        ▼
  ┌─────────────┐           ┌──────────────────┐
  │ LLM scores  │──────────▶│  Heuristic floor  │
  │ importance  │           │  (identity: 0.85  │
  │ (1-10)      │           │   goals: 0.75...) │
  └─────────────┘           └────────┬─────────┘
                                     │ above threshold?
                                     ▼
                            ┌──────────────────┐
                            │  Embed + store   │
                            │  in pgvector     │
                            └──────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
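
&lt;p&gt;The gate check at the top of this flow fits in a few lines. The trigger keywords and cadence values are the ones described above; the function signature is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the retrieval gate; keywords and cadence from the text above,
# everything else illustrative.
TRIGGER_KEYWORDS = ("remember", "my name")

def should_retrieve(message, turns_since_retrieval, seconds_since_retrieval):
    text = message.lower()
    if any(kw in text for kw in TRIGGER_KEYWORDS):
        return True
    # periodic cadence: roughly every 2 turns or after a 30-second gap
    return turns_since_retrieval &gt;= 2 or seconds_since_retrieval &gt;= 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;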



&lt;p&gt;A separate knowledge base (&lt;a href="https://platform.openai.com/docs/assistants/tools/file-search" rel="noopener noreferrer"&gt;OpenAI Vector Store&lt;/a&gt;) handles document retrieval for PDFs and FAQs. Conceptually distinct from personal memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What went wrong:&lt;/strong&gt; Selective retrieval misses important information when embedding similarity is low. The importance scoring adds latency and cost. The rubric requires adjustment for every new use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a memory scratchpad? (V2)
&lt;/h2&gt;

&lt;p&gt;The core insight: maintain a small, curated semantic entry list and inject all of it into the system prompt every turn. No selective retrieval.&lt;/p&gt;

&lt;p&gt;We also shifted the abstraction from "companions and conversations" to "relationships." A relationship is a user-companion pair, persistent across sessions and devices. Memory belongs to the relationship, not the companion.&lt;/p&gt;

&lt;p&gt;Three state buckets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Profile&lt;/strong&gt;: developer-owned, permanent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt;: scratchpad entries, permanent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session state&lt;/strong&gt;: temporary, cleared on session end&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Entry types are straightforward: identity, preference, goal, event, relationship, other. Each with content and timestamps.&lt;/p&gt;

&lt;p&gt;After each turn, an async background worker feeds current entries plus recent messages to an LLM (Gemini 2.0 Flash by default). It returns JSON operations: ADD, UPDATE, or DELETE. This runs in the background and never blocks the user's response.&lt;/p&gt;
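
&lt;p&gt;Applying those operations is deliberately simple. A sketch, with the exact operation shape (&lt;code&gt;op&lt;/code&gt;, &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;content&lt;/code&gt;) assumed for illustration rather than taken from our schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of applying the worker's JSON operations; the operation shape
# is an assumption, not our exact schema.
import json
import time

def apply_ops(entries, ops_json):
    """entries: dict mapping entry id to entry. Mutates and returns it."""
    for op in json.loads(ops_json):
        if op["op"] == "ADD":
            entries[op["id"]] = {"type": op["type"], "content": op["content"],
                                 "updated_at": time.time()}
        elif op["op"] == "UPDATE":
            entries[op["id"]].update(content=op["content"], updated_at=time.time())
        elif op["op"] == "DELETE":
            entries.pop(op["id"], None)
    return entries

def format_for_prompt(entries):
    """Full injection: every entry as a bullet, newest first."""
    ordered = sorted(entries.values(), key=lambda e: e["updated_at"], reverse=True)
    return "\n".join(f"- [{e['type']}] {e['content']}" for e in ordered)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;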

&lt;p&gt;Retrieval strategy: load all entries, format as a bullet list, inject into the system prompt. Full visibility, no gating, no relevance scoring. Trades scalability for simplicity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Every Conversation Turn
        │
        ├───────────────────────────────────────┐
        │                                       │
        ▼                                       ▼
  ┌─────────────┐                     ┌──────────────────┐
  │ Load full   │                     │ Async worker     │
  │ scratchpad  │                     │ (background)     │
  │ (cached 30s)│                     │                  │
  └──────┬──────┘                     │ Feeds turn to LLM│
         │                            │        │         │
         ▼                            │        ▼         │
  ┌──────────────┐                    │ ┌──────────────┐ │
  │ Format as    │                    │ │ LLM returns  │ │
  │ bullet list  │                    │ │ operations:  │ │
  │              │                    │ │  ADD / UPDATE│ │
  │ Inject into  │                    │ │  / DELETE    │ │
  │ system prompt│                    │ └──────┬───────┘ │
  │ (all entries)│                    │        │         │
  └──────┬──────┘                     │        ▼         │
         │                            │ ┌──────────────┐ │
         ▼                            │ │ Apply ops    │ │
  ┌──────────────┐                    │ │ to DB        │ │
  │ LLM responds │                    │ └──────────────┘ │
  │ with full    │                    └──────────────────┘
  │ memory       │
  │ visibility   │
  └──────────────┘

  Scratchpad entries: [ identity | preference | goal | event | relationship | other ]
  Sorted by: last modified (newest first)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Developers can override the ingestion prompt to control what gets stored, when, and how. Specify entry types with examples. Users can directly ADD/UPDATE/DELETE via API or UI.&lt;/p&gt;

&lt;p&gt;Context assembly is layered: core prompt, behavior injections, memory context, knowledge, profile, session state, recent messages, current input. Each layer is independently pluggable, orchestrated in parallel.&lt;/p&gt;

&lt;p&gt;V2 also includes a behavior system (priority behaviors before LLM, async behaviors after), auto-summarization at 200/400/600 message thresholds, and a config cascade (turn &amp;gt; relationship &amp;gt; companion).&lt;/p&gt;
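
&lt;p&gt;The config cascade reduces to a few dictionary merges; the key names here are illustrative, not our actual config schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the V2 config cascade: turn overrides relationship overrides
# companion. Keys are illustrative.
def resolve_config(turn, relationship, companion):
    merged = dict(companion)      # base: companion-level defaults
    merged.update(relationship)   # relationship-level overrides
    merged.update(turn)           # turn-level overrides win
    return merged
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;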

&lt;p&gt;&lt;strong&gt;V2 vs V1 tradeoffs:&lt;/strong&gt; Simpler, full visibility vs. better for large memory stores. LLM-managed operations are more intuitive than importance rubrics. Per-relationship (correct) vs. per-companion (incorrect). Weaker at scale (hundreds of entries burn tokens) but handles typical context windows.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does filesystem-based memory work? (V3)
&lt;/h2&gt;

&lt;p&gt;Agent mode requires autonomous complex task execution (research, tool use, multi-step workflows) in sandboxed environments. This triggered V3.&lt;/p&gt;

&lt;p&gt;The core concept: materialize all context as real files on a Modal Volume at &lt;code&gt;/em/&lt;/code&gt;. Agents navigate with bash (&lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;cat&lt;/code&gt;). LLMs understand file operations natively, and benchmarks show agents using plain file tools can outperform specialized retrieval tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/em/
├── memory/       (hot_context.md, scratchpad.md)
├── knowledge/    (documents/)
├── profile/      (user.yaml)
├── workspace/    (AGENTS.md, outputs/)
├── tools/
├── .claude/skills/
├── .git/
└── .locks/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key file is &lt;code&gt;hot_context.md&lt;/code&gt;. It's an agent-curated relationship summary, roughly 500 words. User profile, recent context, preferences, tasks, facts. After each session, a curation step updates it. This replaces V1's rubric and V2's LLM entry management.&lt;/p&gt;

&lt;p&gt;For real-time chat, hot_context syncs to a database cache (&lt;code&gt;relationship_context_cache&lt;/code&gt;). Chat reads take ~1ms. The filesystem world and the real-time chat world stay connected through this bridge.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ┌──────────────────────────────────────────────────────────┐
  │                    Modal Volume /em/                      │
  │                                                          │
  │  memory/hot_context.md    profile/user.yaml   tools/     │
  │  memory/scratchpad.md     workspace/AGENTS.md  .claude/  │
  └────────────┬──────────────────────┬──────────────────────┘
               │                      │
       ┌───────┴───────┐      ┌───────┴───────┐
       │  Agent Mode   │      │  Chat Mode    │
       │               │      │               │
       │  Agent reads  │      │  Reads from   │
       │  /em/ with    │      │  DB cache     │
       │  bash (ls,    │      │  (~1ms)       │
       │  grep, cat)   │      │               │
       │       │       │      └───────────────┘
       │       ▼       │              ▲
       │  Does work,   │              │
       │  updates files│              │
       │       │       │              │
       │       ▼       │              │
       │ ┌───────────┐ │     ┌────────┴────────┐
       │ │ Curation  │ │     │  DB cache:      │
       │ │ step:     │─┼────▶│  relationship_  │
       │ │ update    │ │     │  context_cache  │
       │ │ hot_ctx   │ │     │  (sync on end)  │
       │ └───────────┘ │     └─────────────────┘
       └───────────────┘

  Pre-hydrate:  DB → Volume (before sandbox)
  Sandbox exec: Agent in /em/, tools via Gateway
  Post-sync:    Volume → DB (after sandbox, conflict detection)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Session lifecycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pre-hydrate&lt;/strong&gt;: DB to Volume (load hot_context, profile, AGENTS.md, track versions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute in sandbox&lt;/strong&gt;: agent in dedicated directory, no direct DB calls, tools via em-tool CLI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-sync&lt;/strong&gt;: Volume to DB with conflict detection&lt;/li&gt;
&lt;/ol&gt;
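
&lt;p&gt;The lifecycle boils down to a hydrate/sync pair with version tracking. A sketch, with a plain dict standing in for the real database:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the pre-hydrate / post-sync cycle with version-based conflict
# detection. Paths follow the post; the dict stands in for the real DB.
from pathlib import Path

def pre_hydrate(db, root):
    """DB to Volume: write hot_context into memory/ and record its version."""
    memory_dir = Path(root) / "memory"
    memory_dir.mkdir(parents=True, exist_ok=True)
    (memory_dir / "hot_context.md").write_text(db["hot_context"])
    return {"hot_context_version": db["version"]}

def post_sync(db, root, loaded):
    """Volume to DB: detect whether another session wrote while we ran."""
    updated = (Path(root) / "memory" / "hot_context.md").read_text()
    if db["version"] != loaded["hot_context_version"]:
        # Conflict: hand both versions to the curation step to merge.
        return {"conflict": True, "ours": updated, "theirs": db["hot_context"]}
    db["hot_context"] = updated
    db["version"] += 1
    return {"conflict": False}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;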

&lt;p&gt;Concurrency uses git worktrees. Each session gets its own branch at &lt;code&gt;/em/.worktrees/session-{id}/&lt;/code&gt;, merging back to main with conflict resolution. File-based locks prevent race conditions.&lt;/p&gt;

&lt;p&gt;If both a chat session and an agent session update hot_context simultaneously, the curation step sees both versions and merges them naturally.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do these memory approaches compare?
&lt;/h2&gt;

&lt;p&gt;Here's how the three architectures compare across the dimensions that matter in production:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;V1: pgvector&lt;/th&gt;
&lt;th&gt;V2: Scratchpad&lt;/th&gt;
&lt;th&gt;V3: Filesystem&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Retrieval&lt;/td&gt;
&lt;td&gt;Selective (similarity + importance)&lt;/td&gt;
&lt;td&gt;Full injection every turn&lt;/td&gt;
&lt;td&gt;Agent navigates with bash&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Latency overhead&lt;/td&gt;
&lt;td&gt;Moderate (embedding + re-rank)&lt;/td&gt;
&lt;td&gt;None (already in prompt)&lt;/td&gt;
&lt;td&gt;None (agent reads what it needs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Large memory stores (1000+ facts)&lt;/td&gt;
&lt;td&gt;Conversational products&lt;/td&gt;
&lt;td&gt;Autonomous agent workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scaling limit&lt;/td&gt;
&lt;td&gt;Embedding quality&lt;/td&gt;
&lt;td&gt;~100 entries (token budget)&lt;/td&gt;
&lt;td&gt;Disk space&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Management&lt;/td&gt;
&lt;td&gt;Importance rubric&lt;/td&gt;
&lt;td&gt;LLM-managed (ADD/UPDATE/DELETE)&lt;/td&gt;
&lt;td&gt;Agent-curated files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blocks user response?&lt;/td&gt;
&lt;td&gt;No (gated)&lt;/td&gt;
&lt;td&gt;No (async)&lt;/td&gt;
&lt;td&gt;No (post-session)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Per-relationship&lt;/td&gt;
&lt;td&gt;Git worktrees per session&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What coexists in production?
&lt;/h2&gt;

&lt;p&gt;These systems serve different purposes. They don't replace each other.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory V1/V2&lt;/strong&gt;: Personal user facts (preferences, goals, events). V1 for large selective stores, V2 for full-visibility scratchpads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge base&lt;/strong&gt;: Document retrieval (classic RAG).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hot context&lt;/strong&gt; (V3): Agent-curated relationship summary for fast chat access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversation summaries&lt;/strong&gt;: Incremental summarization at thresholds (200/400/600 messages) for long relationships.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All ingestion is async, never blocking the user's response. Context assembly is layered and pluggable. Each source (prompt, memory, knowledge, tools, behaviors) runs independently in parallel.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the unsolved problems in AI agent memory?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Consolidation and forgetting.&lt;/strong&gt; Scratchpad entries accumulate. No mechanism for merging related entries or controlled forgetting. Arguably the hardest unsolved challenge in agent memory. The framework is intentionally left extensible for developer-specific policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fuzzy retrieval and temporal reasoning.&lt;/strong&gt; Simulating imperfect memory ("that rings a bell but...") and temporal reasoning ("we talked about this previously, you might not have reflected yet"). Unimplemented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-relationship search.&lt;/strong&gt; V3's per-relationship design protects privacy but prevents pattern discovery across relationships at the companion level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checkpoint restore.&lt;/strong&gt; Modal's &lt;code&gt;snapshot_filesystem()&lt;/code&gt; doesn't capture mounted volumes, preventing session restoration to previous states. Git or S3 tarballs are possible but suboptimal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which architecture should you use?
&lt;/h2&gt;

&lt;p&gt;For agent mode, the filesystem approach is the correct abstraction. Models understand files natively, developer experience is intuitive, and agents load what they need.&lt;/p&gt;

&lt;p&gt;For most companions, V2 is optimal. LLM-managed scratchpad, developer-customizable ingestion prompts, fast, transparent. Works for coaching bots, customer support, tutoring, any regular-returning-user product. No sandbox needed.&lt;/p&gt;

&lt;p&gt;V3's hot_context bridges agent and chat modes. Agents curate after sessions. Chat reads from database cache in ~1ms. Combines the rich agent-mode filesystem world with real-time chat latency requirements.&lt;/p&gt;

&lt;p&gt;The hardest problem remains: what should a companion remember? What should it surface, and when? Ingestion prompt customization (V2) and AGENTS.md (V3) are our current answers, but there's a lot more to figure out here.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Packer et al. (2023). &lt;a href="https://arxiv.org/pdf/2310.08560" rel="noopener noreferrer"&gt;MemGPT: Towards LLMs as Operating Systems&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Xu et al. &lt;a href="https://ar5iv.labs.arxiv.org/html/2512.05470" rel="noopener noreferrer"&gt;Everything is Context: Agentic File System Abstraction for Context Engineering&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic. &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;Effective Context Engineering for AI Agents&lt;/a&gt; (Sep 2025)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.letta.com/" rel="noopener noreferrer"&gt;Letta&lt;/a&gt; (formerly MemGPT)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/assistants/tools/file-search" rel="noopener noreferrer"&gt;OpenAI Vector Stores API&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;We're &lt;a href="https://www.emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt;. We help AI connect with people across voice, memory, and phone agents.&lt;/p&gt;

&lt;p&gt;The memory system described here powers &lt;a href="https://www.personalitymachine.com" rel="noopener noreferrer"&gt;Personality Machine&lt;/a&gt;, our infrastructure for AI companions that maintain consistent identity across conversations. You create companions in a builder, deploy them to a custom URL, and debug conversations in a dashboard. Memory, personality, and relationship state are handled for you.&lt;/p&gt;

&lt;p&gt;If you're working on something in this space, reach out: &lt;a href="mailto:hello@emotionmachine.ai"&gt;hello@emotionmachine.ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
    <item>
      <title>Overnight: Turn Linear Issues Into Pull Requests</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Thu, 02 Apr 2026 19:17:43 +0000</pubDate>
      <link>https://dev.to/emresarbak/overnight-turn-linear-issues-into-pull-requests-3cnp</link>
      <guid>https://dev.to/emresarbak/overnight-turn-linear-issues-into-pull-requests-3cnp</guid>
      <description>&lt;p&gt;Terminal agents got surprisingly good this year. Anthropic's Claude Code launched in February, OpenAI's Codex CLI got much better in August with gpt-5(thinking-high) and again in September with gpt-5-codex(high). We've been delegating bug fixes, UI features, backend updates, comprehensive testing, and even larger architectural changes to these agents at Emotion Machine. It works.&lt;/p&gt;

&lt;p&gt;The shift from vibe coding to what Simon Willison calls &lt;a href="https://simonwillison.net/2025/Oct/7/vibe-engineering/" rel="noopener noreferrer"&gt;vibe engineering&lt;/a&gt; means we can finally bring actual software engineering practices into terminal agent workflows: detailed planning specs, context from all stakeholders (not just developers), and proper testing in deployment pipelines, all while being more ambitious and running 10-20 agent sessions per person per day. But to make this work at scale, agents need to pull from the same context pool where product discussions, design decisions, and technical specs already live. You need tighter integration between how teams plan and how agents execute.&lt;/p&gt;

&lt;p&gt;That's what Overnight is. A way to fire coding agents from Linear issues and keep them isolated enough that they won't break things.&lt;/p&gt;

&lt;p&gt;Here's how it works. When you label an issue in Linear, Overnight spins up a Modal sandbox (cheap, isolated, disposable), clones your repo into a dedicated branch, and runs Codex or Claude Code with full codebase access. The repo clone is cached in a Modal volume and kept in sync with main, so you're not waiting on git operations every time. The first pass is planning: the agent reads your issue, explores the code, writes an implementation plan, and posts it back to Linear for review. You comment with changes or just say go, and it does a second run in a fresh sandbox to implement everything and open a PR. Modal sandboxes keep your secrets contained, and agents can't do anything weird to your actual infrastructure.&lt;/p&gt;

&lt;p&gt;The reason Linear fits well here is that it already has all the context engineers and designers are putting into issues anyway: descriptions, comments, attached designs, links to docs. And the mobile app is actually usable, which means you can kick off agent runs or review plans without being at your desk. There's no backend to maintain: Linear webhook events trigger Modal endpoints directly, which spawn the sandboxes, so you manage everything from Linear. We're planning to open-source this soon, once we feel confident it won't eat anyone's codebase.&lt;/p&gt;
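&lt;p&gt;The label-to-sandbox trigger reduces to a small filter over the webhook payload. This is an illustrative sketch, not Overnight's actual code; the payload fields and the "overnight" label name are assumptions:&lt;/p&gt;

```python
# Hypothetical sketch: decide whether a Linear webhook event should
# trigger an agent run. Payload shape and the "overnight" label name
# are assumptions for illustration, not Overnight's actual schema.

TRIGGER_LABEL = "overnight"

def should_spawn_agent(event: dict) -> bool:
    """Return True when an issue create/update carries the trigger label."""
    if event.get("type") != "Issue":
        return False
    if event.get("action") not in {"create", "update"}:
        return False
    labels = event.get("data", {}).get("labels", [])
    return any(label.get("name") == TRIGGER_LABEL for label in labels)

# Example payload, shaped loosely like a Linear issue webhook:
event = {
    "type": "Issue",
    "action": "update",
    "data": {
        "title": "Fix onboarding crash",
        "labels": [{"name": "bug"}, {"name": "overnight"}],
    },
}
print(should_spawn_agent(event))  # True
```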

&lt;p&gt;&lt;em&gt;By &lt;a href="https://www.emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>productivity</category>
      <category>automation</category>
    </item>
    <item>
      <title>Adding Open-Ended Conversations to Your Products</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Thu, 02 Apr 2026 19:17:08 +0000</pubDate>
      <link>https://dev.to/emresarbak/adding-open-ended-conversations-to-your-products-4906</link>
      <guid>https://dev.to/emresarbak/adding-open-ended-conversations-to-your-products-4906</guid>
      <description>&lt;p&gt;&lt;em&gt;TL;DR: If you build products, know that shifting from a tool to an open-ended companion rewires the user experience. Conversation becomes the most salient surface, users judge it like a person, and churn reasons get opaque. Treat it as a new primary product surface.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Building an application with an embedded, conversational AI companion is an exciting idea: think "a fitness app with a coach you can talk to."&lt;/p&gt;

&lt;p&gt;Many products can be more engaging and useful if people can talk to them in an open-ended way. Whether it is a fitness app, calendar app, Bible app, or any other app, a conversational companion can understand more about what the user really wants and provide a richer, more emotionally stimulating experience. In fact, &lt;a href="https://www.emotionmachine.com/blog/ai-companions-are-the-next-interface" rel="noopener noreferrer"&gt;AI companions are becoming the next interface&lt;/a&gt; for human-computer interaction.&lt;/p&gt;

&lt;p&gt;However, adding a conversational companion is also tricky.&lt;/p&gt;

&lt;p&gt;There's a simple spectrum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(A) No conversational experience.&lt;/li&gt;
&lt;li&gt;(B) A task bot: narrow, slot-filling, goal-bounded, e.g. "Tell me the countries you want to travel to and I will find the cheapest ticket for your next trip."&lt;/li&gt;
&lt;li&gt;(C) A companion: open-ended, with a relationship layer, identity, memory, tone, and proactive behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you offer (C), it changes the whole dynamic. You promise the user that you can handle their messy, fuzzy, emotionally rich speech. The user switches modes and interacts with your app not as a tool, but as a human-like being.&lt;/p&gt;

&lt;p&gt;In this interaction mode, the conversational experience is visceral and natural for most people. That shift has a few repercussions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Salience shift: the conversational surface becomes the most salient experience in your product. Other elements feel secondary.&lt;/li&gt;
&lt;li&gt;Mode switch: users bring ambiguity and emotion; they expect understanding, not just completion.&lt;/li&gt;
&lt;li&gt;Harsh evaluation: people judge tone, timing, wording, and recall in a split second.&lt;/li&gt;
&lt;li&gt;Debugging opacity: when the conversation fails, it can turn people off from the whole product, and it is hard to know exactly why.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, once you add an open-ended conversational experience, it often dominates the whole product experience. There is no middle ground.&lt;/p&gt;

&lt;p&gt;Imagine you are offering a personal trainer to a customer. There is a utilitarian aspect to whether the physical training is good. Yet what the user will emotionally remember most from the first session is the human interaction.&lt;/p&gt;

&lt;p&gt;In this example, when potential clients bounce after their experience with a personal trainer, it is hard to understand whether they are the wrong clients, you are offering the wrong training, your trainer is unlikable, or you set the wrong expectation. The human-to-human experience makes debugging harder; churn reasons become opaque.&lt;/p&gt;

&lt;p&gt;In the same way, when you have an open-ended conversational experience in your product and your users bounce, it is hard to understand exactly why. Was it the expectation you set? The companion's tone? A memory failure? A safety refusal that felt off? The underlying utility?&lt;/p&gt;

&lt;p&gt;We observed this in a few products. When the product doesn't work, you try making tweaks and nothing works reliably. When you try making bigger changes, you need to redesign the whole experience. You get stuck in a loop of tweaks and disappointments.&lt;/p&gt;

&lt;p&gt;The classic process for designing digital products doesn't work in this situation.&lt;/p&gt;

&lt;p&gt;Most teams are used to designing products starting with visual sketches and user flows. When the experience is primarily visual and tool-like, that process works.&lt;/p&gt;

&lt;p&gt;However, when the experience is primarily conversational, you need a different design process. The unit of value is the exchange, not the screen. Getting the conversational design right becomes one of the most important, if not the most important, parts of the design process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By &lt;a href="https://www.emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt;. We build &lt;a href="https://www.emotionmachine.com/personality-machine" rel="noopener noreferrer"&gt;Personality Machine&lt;/a&gt;, developer infrastructure for AI characters with persistent memory, behavioral state, and context engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productdesign</category>
      <category>ux</category>
      <category>conversationalai</category>
    </item>
    <item>
      <title>AI Companions Are the Next Interface</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Thu, 02 Apr 2026 19:16:13 +0000</pubDate>
      <link>https://dev.to/emresarbak/ai-companions-are-the-next-interface-10e5</link>
      <guid>https://dev.to/emresarbak/ai-companions-are-the-next-interface-10e5</guid>
      <description>&lt;p&gt;Computer interfaces have evolved from mainframes to smartphones. Portable personal computers, graphical user interfaces, and always-available mobile devices gradually made technology more convenient and accessible.&lt;/p&gt;

&lt;p&gt;Today, many assume computers must become even more convenient and physically intimate, and believe AR glasses or brain-computer interfaces are the next step. They ask: &lt;em&gt;what if a person could execute a task the moment a thought arises?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But is convenience really what holds people back from meaningful action? Do any of us truly wish computers were even more present in our lives?&lt;/p&gt;

&lt;h2&gt;
  
  
  People are too overwhelmed to take action
&lt;/h2&gt;

&lt;p&gt;People are not blocked because they lack access to technology. If anything, they are blocked because they have too much of it. Continuous streams of information and contradictory signals make our thinking incoherent and exhaust us. People are overwhelmed.&lt;/p&gt;

&lt;p&gt;People want to make progress in their lives. They want to solve their problems. Yet, in this scattered and depleted state, they can't process and synthesize their thoughts and form a clear, coherent plan.&lt;/p&gt;

&lt;p&gt;If we were to plot it out, we would see that most people are stuck at the earlier levels of decision-making. They have some thoughts about what they may want to do or should consider, but only a few reach the clarity to take action.&lt;/p&gt;

&lt;p&gt;For example, most people already have a lot of information about what they should do to improve their health, but there is an internal conflict that prevents motivated action.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI agents make taking action even easier
&lt;/h2&gt;

&lt;p&gt;A complementary shift is unfolding in parallel: AI agents are rapidly becoming capable of executing complex tasks on our behalf. Many of the cumbersome actions we perform today, navigating menus, sending emails, coordinating with intermediaries, will soon be delegated to autonomous systems.&lt;/p&gt;

&lt;p&gt;If you can tell an AI agent to transfer your investments between two accounts and trust it to execute that task, why would you need any of the visual interfaces, clicks, forms and logins?&lt;/p&gt;

&lt;p&gt;These AI agents are increasingly automating the execution of tasks.&lt;/p&gt;

&lt;p&gt;As a result, our time and cognitive effort will shift toward interpreting the world and making sense of our own lives.&lt;/p&gt;

&lt;p&gt;Our main work will be to reflect, deliberate, and synthesize. However, in our overwhelmed state, bringing clarity is what we have been getting worse at.&lt;/p&gt;

&lt;h2&gt;
  
  
  We need solutions that bring clarity
&lt;/h2&gt;

&lt;p&gt;What we need are tools and solutions that help restore clarity, coherence, meaning, and motivation. They must give us space to work through conflicting thoughts, examine our intentions, and experiment with different ways of living.&lt;/p&gt;

&lt;p&gt;In this cultural and psychological landscape, pushing technology closer to our brains is not the answer. We don't need more information, faster, nor do we need tools to take unintentional actions.&lt;/p&gt;

&lt;p&gt;The forces that created this state, ever-increasing convenience and constant technological stimulus, cannot guide us forward anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  People don't process things out loud with each other anymore
&lt;/h2&gt;

&lt;p&gt;Historically, people made sense of their thoughts by talking to one another.&lt;/p&gt;

&lt;p&gt;Conversation offered a way to untangle thoughts, test interpretations, and find meaning. In those exchanges, we received three essential forms of support: attention, knowledge, and articulation. Someone listened, offered perspective, and helped us put our thoughts into words.&lt;/p&gt;

&lt;p&gt;This is a very fundamental process. We are born to speak. It is one of the most sophisticated cognitive processes we have. We need to talk things through to make sense of them in the physical and social context we are in.&lt;/p&gt;

&lt;p&gt;Today, that function has eroded.&lt;/p&gt;

&lt;p&gt;People have less time and far less attention to give. They are more stimulated and lonelier than ever before. The result is a shared state of overwhelm and isolation. We are all stuck. We need to feel heard before we can listen to others.&lt;/p&gt;

&lt;p&gt;If humans can no longer reliably provide that depth of attention at scale, something else will have to step in.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI companions start to fill that gap
&lt;/h2&gt;

&lt;p&gt;This is the moment when LLM-based AI assistants entered our lives.&lt;/p&gt;

&lt;p&gt;We can speak to them for as long as we wish. They offer unbroken attention, possess broad knowledge, articulate ideas clearly, and can retain substantial context about who we are and what we've discussed.&lt;/p&gt;

&lt;p&gt;Most people started using them as sophisticated search engines, but very quickly, the nature of our interactions evolved. People found themselves sharing personal uncertainties, dilemmas, and aspirations.&lt;/p&gt;

&lt;p&gt;What began as AI &lt;em&gt;assistants&lt;/em&gt; rapidly evolved into AI &lt;em&gt;companions&lt;/em&gt;. "Therapy/companionship", "organizing my life", and "finding purpose" became the top three use cases for AI.&lt;/p&gt;

&lt;p&gt;By the summer of 2025, half of U.S. teens were speaking with AI companions regularly, and one-third reported that these companions felt as good as, or better than, their human relationships (Common Sense Media).&lt;/p&gt;

&lt;h2&gt;
  
  
  AI companions are the next interface
&lt;/h2&gt;

&lt;p&gt;These AI companions are now shaping how people handle their lives: they sit with us in difficult moments, help us sort competing priorities, and offer a non-judgmental place to speak. They help us get unstuck from our overwhelmed and isolated state.&lt;/p&gt;

&lt;p&gt;That is precisely what we needed an interface for.&lt;/p&gt;

&lt;p&gt;Far more people, in far more moments, need to process their thoughts than need a new technological interface to execute the resulting decisions.&lt;/p&gt;

&lt;p&gt;We started using AI companions as the interface for all our needs.&lt;/p&gt;

&lt;p&gt;ChatGPT is only the first generation of these tools. More capable and specialized AI companions are emerging every day.&lt;/p&gt;

&lt;p&gt;They will also integrate with AI agents in the background. We will start interacting with them before we reach clarity on what to do about our investments. We will talk to our financial companion in an unstructured way. We will share our thoughts, concerns, aspirations, and talk it through. Once we make the decisions, the AI companion will use AI agents to carry them out.&lt;/p&gt;

&lt;p&gt;In that world, we will reach out to technology when we feel bad or conflicted about something or when we feel overwhelmed. But, instead of doom-scrolling on social media and getting even more stimulated, we will talk through our thoughts with our AI companions and leave feeling better and more motivated, with our thoughts clarified, and with our problems resolved.&lt;/p&gt;

&lt;p&gt;In 5-10 years, we will spend hours every day interacting with these AI companions, maybe as assistants, friends, coaches, or partners.&lt;/p&gt;

&lt;p&gt;They will be the next interface for technology.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By &lt;a href="https://www.emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt;. We build &lt;a href="https://www.emotionmachine.com/personality-machine" rel="noopener noreferrer"&gt;Personality Machine&lt;/a&gt;, developer infrastructure for AI characters with persistent memory, behavioral state, and context engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ux</category>
      <category>future</category>
      <category>conversationalai</category>
    </item>
    <item>
      <title>How to Build Realistic AI Companions</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Thu, 02 Apr 2026 19:16:12 +0000</pubDate>
      <link>https://dev.to/emresarbak/how-to-build-realistic-ai-companions-5g6c</link>
      <guid>https://dev.to/emresarbak/how-to-build-realistic-ai-companions-5g6c</guid>
      <description>&lt;p&gt;You can build an "AI companion" simply with a GPT-style model and not much else. In comparison to what you could do years ago, it will be impressive. It is a fun toy.&lt;/p&gt;

&lt;p&gt;However, when you actually try talking to it like you would talk to a human companion, you immediately notice its limitations. It is not something you feel compelled to build a relationship with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.emotionmachine.com/blog/ai-companions-are-the-next-interface" rel="noopener noreferrer"&gt;AI companions are becoming the next interface&lt;/a&gt; for how we interact with technology, and as this shift accelerates, building realistic companions becomes essential.&lt;/p&gt;

&lt;p&gt;People are insanely good at assessing how a conversation is going. From the first sentence someone speaks to how fast they respond in the dialogue, there are so many small cues that get picked up. With most AI companions, you notice the pretense of dialogue very quickly.&lt;/p&gt;

&lt;p&gt;For people to seriously talk to any person, they need to respect them and feel respected by them. AI companions that are incoherent, exaggerated, boring, or repetitive fail that bar.&lt;/p&gt;

&lt;p&gt;After building and testing companions that thousands of people used, and playing with many existing AI companions, we identified some elements required to simulate a human-like realistic conversation partner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversation phases:&lt;/strong&gt; People often don't immediately open up when you start talking to them. There is a gradual process of opening up. Most GPT-based companions are unusually verbose and spirited at the beginning of conversations. Similarly, when you reconnect with someone you haven't seen in a while, there is a procedure to quickly warm up the conversation. AI companions need to define phases/modes of a relationship to adjust their approach to users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dialogue patterns:&lt;/strong&gt; People use repeatable patterns of conversations that have a high chance of improving relationships. When the conversation gets boring, you change the topic. When someone shares a personal comment, you ask a deep question to bring out meaningful reflections. When the conversation gets too tense, you make a self-deprecating joke to defuse the tension. Such patterns make the conversation more enjoyable for most people. AI companions need to inject such dialogue patterns into the flow of the conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory:&lt;/strong&gt; One major signal of trust and respect is whether your conversation partner remembers what you shared. This capacity makes what you say matter. Most GPT-based companions have good short-term memory because some of the chat history is used to generate the next response. However, AI companions need a system to record conversations over the long term.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-memory:&lt;/strong&gt; AI models make stuff up. They make stuff up about themselves as well. While you are talking about soccer, a companion can talk about how much it loves the English Premier League. Then, when you come back to the topic later, it can say it doesn't know anything about soccer. AI companions need a system of self-memory to stay consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory retrieval:&lt;/strong&gt; Once you talk to a companion for 15 minutes, you start accumulating so many memories that it is impossible to keep all of them in the prompt. AI companions need a robust mechanism to retrieve memories based on recency, relevance, and importance (e.g. emotional weight).&lt;/p&gt;
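&lt;p&gt;A minimal sketch of such scoring, assuming equal weights and an exponential recency decay (both illustrative choices, not a reference implementation):&lt;/p&gt;

```python
# Score memories by recency, relevance, and importance, then take the
# top-k. Weights and the 24-hour half-life are illustrative assumptions.

def score_memory(memory: dict, query_relevance: float, now: float,
                 half_life_hours: float = 24.0) -> float:
    age_hours = (now - memory["timestamp"]) / 3600
    recency = 0.5 ** (age_hours / half_life_hours)  # exponential decay
    return recency + query_relevance + memory["importance"]

def retrieve(memories, relevances, now, k=3):
    # relevances: per-memory similarity to the current query
    # (e.g. cosine similarity of embeddings), computed elsewhere
    ranked = sorted(zip(memories, relevances),
                    key=lambda pair: score_memory(pair[0], pair[1], now),
                    reverse=True)
    return [m for m, _ in ranked[:k]]

memories = [
    {"text": "user's dog is named Biscuit", "timestamp": 0, "importance": 0.9},
    {"text": "user said 'hmm' earlier", "timestamp": 86400, "importance": 0.1},
]
now = 86400  # one day after the first memory
top = retrieve(memories, [0.1, 0.1], now, k=1)
print(top[0]["text"])  # user's dog is named Biscuit
```

Note how the older memory still wins on importance: emotional weight can outlast recency.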

&lt;p&gt;&lt;strong&gt;Memory reflection:&lt;/strong&gt; Memories are very granular. Humans automatically synthesize them. If someone stayed up late to read about gentrification and, on a separate occasion, told you a fun fact about your city, you deduce that they may be interested in urban topics. AI companions need to run reflection processes based on memories they accumulate to (1) fill in the gaps in observations and (2) arrive at higher-level observations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sense of time:&lt;/strong&gt; Silences in the conversation are part of the dialogue. A five-second gap means a very different development in the dialogue than a five-day gap. Most AI companions respond without any acknowledgement of this. AI companions need to account for this info.&lt;/p&gt;
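&lt;p&gt;One lightweight way to account for this is to bucket the silence before responding, and let the companion condition its opening on the bucket. The thresholds and bucket names below are illustrative assumptions, not a standard:&lt;/p&gt;

```python
def describe_gap(seconds: float) -> str:
    # Map silence duration to a conversational framing the companion
    # can condition on. Thresholds are illustrative assumptions.
    if seconds < 60:
        return "same_exchange"        # mid-conversation pause
    if seconds < 6 * 3600:
        return "same_day_break"       # stepped away, same session
    if seconds < 3 * 86400:
        return "next_day_reconnect"   # greet briefly, then resume
    return "long_absence"             # warm the conversation up again

print(describe_gap(5 * 86400))  # long_absence
```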

&lt;p&gt;&lt;strong&gt;Sense of self and embodiment:&lt;/strong&gt; Once you are engaged in a compelling conversation, you assume you are talking to a human. Lack of some physical awareness breaks this assumption and forces users to step back. AI companions need to have a consistent sense of self and embodiment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive engagement:&lt;/strong&gt; Because of the prompt-response nature of AI companions, they often need to be triggered to speak. However, that's not how people talk. Both sides need to have and show agency for it to feel like a dialogue. AI companions need to proactively talk and engage users. To enable this, AI companions need an independent process that reflects on where the conversation is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Active listening:&lt;/strong&gt; People normally give visual and audio feedback while listening to the speaking party. They nod, they say "yeah" when they agree, or look off when they are surprised. This feedback loop encourages more precise disclosure by the speaker. Most AI companions use the latest voice models, but they also need "active listening models."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual feedback:&lt;/strong&gt; A simple visual representation, an orb, a pulsing light, a shape that changes color, can provide immediate feedback to the user, reflecting both the companion's and potentially the user's emotional states. Even minimal visuals, when timed and congruent with the interaction, can enhance the feeling of presence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Emotion detection:&lt;/strong&gt; Only relying on someone's words will make you miss a lot of what they are expressing. How something is said conveys a lot about their emotional state. AI companions need to integrate emotion detection from voice data and incorporate those into the conversations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent lives:&lt;/strong&gt; When you leave a conversation, others don't freeze in time. They go and do stuff and live a life. Hearing those stories is part of what makes a conversation enjoyable. AI companions need to simulate a realistic life independent of the conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy:&lt;/strong&gt; People are less careful about sharing personal information when they are talking than they are while filling out online forms. We have noticed many users who unknowingly share information. The emotional engagement of a companion hides how much is being exchanged. AI companions need to ensure people's personal information is private and, if possible, stored locally.&lt;/p&gt;




&lt;p&gt;Each of these techniques can be layered on top of a basic conversational model. Together, they make the companion feel less like a responder and more like a presence, a mind that engages, remembers, and evolves.&lt;/p&gt;

&lt;p&gt;That kind of realism isn't about imitation. It's about design choices that encourage a particular kind of experience: a dialogue that feels like it matters.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By &lt;a href="https://www.emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt;. We build &lt;a href="https://www.emotionmachine.com/personality-machine" rel="noopener noreferrer"&gt;Personality Machine&lt;/a&gt;, developer infrastructure for AI characters with persistent memory, behavioral state, and context engineering.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>conversationalai</category>
      <category>machinelearning</category>
      <category>llm</category>
    </item>
    <item>
      <title>How to Add iMessage to OpenClaw Without a Mac</title>
      <dc:creator>Emre Sarbak</dc:creator>
      <pubDate>Thu, 02 Apr 2026 17:41:48 +0000</pubDate>
      <link>https://dev.to/emresarbak/we-built-an-imessage-api-for-openclaw-agents-so-you-dont-need-a-mac-2foo</link>
      <guid>https://dev.to/emresarbak/we-built-an-imessage-api-for-openclaw-agents-so-you-dont-need-a-mac-2foo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Claw Messenger&lt;/strong&gt; is a managed relay that connects any AI agent to iMessage, RCS, and SMS without requiring macOS or Apple hardware. It works with OpenClaw, n8n, LangChain, or any custom agent via WebSocket or REST API. Setup takes about five minutes. Plans start at $5/mo for 1,000 messages.&lt;/p&gt;

&lt;p&gt;We built this because the default OpenClaw iMessage path has a dependency most people can't satisfy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why does OpenClaw iMessage require a Mac?
&lt;/h2&gt;

&lt;p&gt;The built-in integration uses BlueBubbles, which reads the iMessage database stored locally on macOS. That database only exists on Apple hardware. So if you want your OpenClaw agent to send and receive iMessages, the official path is: buy or rent a Mac, leave it running 24/7, grant Full Disk Access, and maintain it.&lt;/p&gt;

&lt;p&gt;That's fine if you already own a Mac. Most people deploying OpenClaw in production don't. They run their agents on Linux VPSes, in Docker containers, on cloud VMs. The Mac is an infrastructure dependency for a single messaging channel, and it was blocking adoption.&lt;/p&gt;

&lt;p&gt;We kept hearing the same question in the OpenClaw community: how do I add iMessage to my agent on Linux? The answer was always "you can't, unless you get a Mac." We wanted a different answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to set up OpenClaw iMessage on Linux or Docker
&lt;/h2&gt;

&lt;p&gt;Install the Claw Messenger plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; @emotion-machine/claw-messenger
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an account at &lt;a href="https://clawmessenger.com" rel="noopener noreferrer"&gt;clawmessenger.com&lt;/a&gt; and copy your API key from the dashboard.&lt;/p&gt;

&lt;p&gt;Add the channel block to &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"channels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claw-messenger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cm_your_key_here"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"serverUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"wss://relay.clawmessenger.com"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the gateway:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw gateway restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Register the phone numbers you want to allow in the Claw Messenger dashboard. Only registered numbers can reach your agent. This is a security default, not a limitation. Open inbound from any number is a spam vector we decided to block from day one.&lt;/p&gt;

&lt;p&gt;Text your agent's Claw Messenger number from your iPhone. You should get a reply back over iMessage.&lt;/p&gt;

&lt;p&gt;No Xcode. No Full Disk Access. No Mac.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Claw Messenger route iMessage without macOS?
&lt;/h2&gt;

&lt;p&gt;Your OpenClaw gateway opens a persistent WebSocket to &lt;code&gt;wss://relay.clawmessenger.com&lt;/code&gt;. When someone texts your agent's number, the carrier delivers the message to our relay infrastructure. We forward it to your gateway over the WebSocket. Your agent processes it. The reply goes back the same way.&lt;/p&gt;

&lt;p&gt;Three things worth knowing about the architecture:&lt;/p&gt;

&lt;p&gt;Message content is not stored on our relay servers. The WebSocket is TLS-encrypted. And protocol selection (iMessage vs. RCS vs. SMS) happens automatically at the carrier level based on the recipient's device. You configure one channel; we handle the routing.&lt;/p&gt;

&lt;p&gt;For agents that do anything beyond basic Q&amp;amp;A, add HMAC webhook verification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"channels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claw-messenger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cm_..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"serverUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"wss://relay.clawmessenger.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"webhookSecret"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"whsec_..."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We sign every inbound payload. Your gateway verifies the signature before processing. This prevents spoofed messages from reaching your agent's logic.&lt;/p&gt;
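&lt;p&gt;Verification itself needs nothing beyond a standard HMAC check. The sketch below uses Python's standard library; the hex encoding and payload shape are assumptions for illustration, not Claw Messenger's documented wire format:&lt;/p&gt;

```python
import hashlib
import hmac

# Sketch of HMAC webhook verification. Hex digests and this payload
# shape are illustrative assumptions, not the documented format.

def sign(secret: str, body: bytes) -> str:
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

def verify(secret: str, body: bytes, signature: str) -> bool:
    expected = sign(secret, body)
    # constant-time comparison guards against timing attacks
    return hmac.compare_digest(expected, signature)

body = b'{"from":"+15551234567","text":"hello"}'
sig = sign("whsec_example", body)
print(verify("whsec_example", body, sig))   # True
print(verify("whsec_example", body, "bad")) # False
```

The key detail is `hmac.compare_digest`: a naive `==` comparison can leak the signature one byte at a time.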

&lt;h2&gt;
  
  
  What iMessage features work with OpenClaw agents?
&lt;/h2&gt;

&lt;p&gt;We built Claw Messenger expecting people to use it for text Q&amp;amp;A. What actually happened was more interesting. People started using native iMessage features as agent inputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tapbacks as confirmations.&lt;/strong&gt; One early user set up thumbs-up to approve calendar entries and thumbs-down to reject them. No typing, just a long-press reaction. We hadn't designed tapbacks as an input mechanism, but the interaction cost is so low that it works better than "reply YES to confirm" patterns. Your agent receives tapback events with the reaction type and the referenced message.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typing indicators change perception.&lt;/strong&gt; When the agent sends a typing bubble before responding, the conversation reads like texting a person. Without it, replies appear out of nowhere and the experience feels mechanical. We made typing indicators a single function call because we found the difference matters more than the feature sounds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic RCS and SMS fallback.&lt;/strong&gt; Your agent's contacts aren't all on iPhones. When a non-Apple device texts the agent, the system falls back to RCS where supported, then SMS. You don't configure this. One number, three protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read receipts signal processing.&lt;/strong&gt; The agent can mark messages as read, which tells the sender their message was received and is being handled. Useful when your agent takes a few seconds to process a complex request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claw Messenger vs BlueBubbles for OpenClaw
&lt;/h2&gt;

&lt;p&gt;BlueBubbles is open source and free. If you have a Mac you can dedicate to this, it's a reasonable option and we're not trying to replace it. Here's how the two approaches compare:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;BlueBubbles&lt;/th&gt;
&lt;th&gt;Claw Messenger&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requires macOS&lt;/td&gt;
&lt;td&gt;Yes, running 24/7&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Free (you supply the Mac)&lt;/td&gt;
&lt;td&gt;$5/mo for 1,000 messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup time&lt;/td&gt;
&lt;td&gt;30–60 minutes&lt;/td&gt;
&lt;td&gt;~5 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance&lt;/td&gt;
&lt;td&gt;You manage the Mac&lt;/td&gt;
&lt;td&gt;Managed relay service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linux / Docker / VPS&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;td&gt;Fully supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RCS + SMS fallback&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Automatic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tapbacks + typing indicators&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HMAC webhook verification&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Choose BlueBubbles&lt;/strong&gt; if you already own a Mac, want zero monthly cost, and are comfortable maintaining the hardware. It's a solid open-source project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose Claw Messenger&lt;/strong&gt; if your OpenClaw agent runs on Linux, a VPS, or in Docker and you don't want to add Apple hardware to your stack. That's the specific gap we built this to fill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does Claw Messenger work with AI agents other than OpenClaw?
&lt;/h2&gt;

&lt;p&gt;Yes. The OpenClaw plugin is one integration, but the underlying relay is a WebSocket and REST API that works with any agent framework. We've seen people connect agents built with n8n, LangChain, LangGraph, and custom Python/TypeScript/Go setups.&lt;/p&gt;

&lt;p&gt;The general pattern looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;websockets&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;imessage_loop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handle_message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;uri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://relay.clawmessenger.com?key=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;websockets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;reply&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;handle_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;reply&lt;/span&gt;
            &lt;span class="p"&gt;}))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;code&gt;handle_message&lt;/code&gt; function is where the agent lives. It can be a LangChain agent's &lt;code&gt;invoke()&lt;/code&gt;, an n8n workflow, a LangGraph stateful graph, or plain if/else logic. Claw Messenger is just the transport layer. It doesn't care what generates the reply.&lt;/p&gt;
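&lt;p&gt;For instance, the smallest possible &lt;code&gt;handle_message&lt;/code&gt; is plain if/else logic, which you'd later swap for a real agent without touching the transport loop:&lt;/p&gt;

```python
async def handle_message(message: dict) -> str:
    """Plain if/else standing in for a real agent. Replace the body with a
    LangChain agent's invoke(), an n8n webhook call, or a LangGraph graph."""
    text = message["text"].strip().lower()
    if text in {"hi", "hello", "hey"}:
        return "Hey! Text me a task and I'll get on it."
    if "remind" in text:
        return "Reminder noted."
    return f"Got it: {message['text']}"
```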

&lt;p&gt;For serverless or webhook-based agents, there's also a REST API. Receive messages via HMAC-verified webhook POST, send replies with a simple HTTP call. No persistent WebSocket needed.&lt;/p&gt;
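&lt;p&gt;The send side is one POST. A stdlib-only sketch that builds the request; the endpoint URL and bearer-token header follow common REST conventions and are assumptions, so confirm both against the docs:&lt;/p&gt;

```python
import json
import urllib.request

def build_send_request(api_key: str, to: str, text: str) -> urllib.request.Request:
    """Assemble the outbound-message POST (hypothetical endpoint and auth)."""
    return urllib.request.Request(
        "https://api.clawmessenger.com/v1/messages",  # hypothetical endpoint
        data=json.dumps({"to": to, "text": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# deliver with: urllib.request.urlopen(build_send_request(key, to, text))
```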

&lt;p&gt;The message format is minimal JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"from"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"+15551234567"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hey, what's the weather like tomorrow?"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-02T14:30:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"imessage"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More on the &lt;a href="https://clawmessenger.com/blog/imessage-bot" rel="noopener noreferrer"&gt;iMessage bot integration&lt;/a&gt; in our docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who builds Claw Messenger?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://emotionmachine.com" rel="noopener noreferrer"&gt;Emotion Machine&lt;/a&gt; is a conversational AI company working on how AI converses and builds relationships, from voice models to conversational products to agent plugins. We're based in San Francisco.&lt;/p&gt;

&lt;p&gt;Claw Messenger is on the agent plugin layer of that stack. We build &lt;a href="https://personalitymachine.com" rel="noopener noreferrer"&gt;Personality Machine&lt;/a&gt; (AI character infrastructure with persistent memory and behavioral state), domain-specific speech-to-speech voice models, and Dialog Machine (an AI phone agent that calls real people). Claw Messenger came out of realizing that AI agents need messaging channels that don't depend on specific hardware. iMessage was the first one we tackled.&lt;/p&gt;

&lt;p&gt;If you hit issues, the &lt;a href="https://clawmessenger.com/blog/openclaw-imessage-setup-guide" rel="noopener noreferrer"&gt;setup guide&lt;/a&gt; covers edge cases and the &lt;a href="https://clawmessenger.com/blog/openclaw-imessage-troubleshooting" rel="noopener noreferrer"&gt;troubleshooting page&lt;/a&gt; has fixes for connection drops, delivery failures, and registration problems. Or reach us at &lt;a href="mailto:hello@emotionmachine.ai"&gt;hello@emotionmachine.ai&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>imessage</category>
      <category>agents</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
