<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ably Blog</title>
    <description>The latest articles on DEV Community by Ably Blog (@ablyblog).</description>
    <link>https://dev.to/ablyblog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F987387%2Ff9d0ea92-d06e-46d6-8efc-6e92b510943e.png</url>
      <title>DEV Community: Ably Blog</title>
      <link>https://dev.to/ablyblog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ablyblog"/>
    <language>en</language>
    <item>
      <title>LiveObjects now available: shared state without the infrastructure overhead</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Tue, 05 May 2026 08:48:23 +0000</pubDate>
      <link>https://dev.to/ablyblog/liveobjects-now-available-shared-state-without-the-infrastructure-overhead-hin</link>
      <guid>https://dev.to/ablyblog/liveobjects-now-available-shared-state-without-the-infrastructure-overhead-hin</guid>
      <description>&lt;p&gt;Shared state is a hard problem. Not hard in the abstract, computer-science sense (the concepts are well understood). Hard in the &lt;em&gt;someone has to actually build this&lt;/em&gt; sense, where every team that wants a live leaderboard, a shared config panel, or a poll that updates in real time ends up reinventing the same wheels: conflict resolution, reconnection handling, state recovery.&lt;/p&gt;

&lt;p&gt;Most teams do not want to spend their time building and maintaining that layer. They want to ship the feature that depends on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That is what &lt;a href="https://ably.com/docs/liveobjects" rel="noopener noreferrer"&gt;LiveObjects&lt;/a&gt; is for.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  From experimental to production-ready
&lt;/h2&gt;

&lt;p&gt;When we first shipped LiveObjects, the API was explicitly experimental. We had the primitives (LiveMap for synchronized key-value state, LiveCounter for distributed counting) but the ergonomics needed work. Early adopters were clear: working directly with object instances felt brittle, especially when objects were replaced. Subscriptions broke. Navigating nested structures was cumbersome. The mental model didn't fit how people actually wanted to build.&lt;/p&gt;

&lt;p&gt;So we rebuilt the API from the ground up. The result shipped in the JavaScript SDK before the end of last year, moving LiveObjects into Public Preview. It's centered on path-based operations. Instead of binding to specific object instances, you work with PathObjects that resolve at runtime against whatever exists at that location. Replace the object underneath, and your subscriptions follow automatically.&lt;/p&gt;

&lt;p&gt;That feedback loop, from experimental signal to a redesigned API, is what today's release reflects. The API is stable and ready for production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A new API designed around how you think about state
&lt;/h3&gt;

&lt;p&gt;The old approach required holding references to specific object instances, which meant reasoning about object identity rather than the shape of your data. The new PathObject API flips this: you describe a path, and the SDK handles the rest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Ably&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ably&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LiveObjects&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveMap&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveCounter&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ably/liveobjects&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Ably&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Realtime&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; 
  &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-api-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LiveObjects&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;game:room-42&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;modes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;OBJECT_SUBSCRIBE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;OBJECT_PUBLISH&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;leaderboard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;alice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LiveCounter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;bob&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LiveCounter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;round&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveCounter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;leaderboard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt; &lt;span class="nx"&gt;object&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;renderLeaderboard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compact&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;leaderboard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;alice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Subscriptions now observe paths rather than instances. If the underlying object is replaced, your subscription keeps working. No rewiring.&lt;/p&gt;
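&lt;p&gt;To see why path binding survives replacement, here's a toy sketch in plain JavaScript (not the SDK): the subscription closes over a path rather than an instance, and re-resolves on every event.&lt;/p&gt;

```javascript
// Toy illustration only, not the Ably SDK. The subscription keeps the path,
// not a resolved instance, so replacing the object at 'leaderboard'
// wholesale does not orphan the subscriber.
const tree = { leaderboard: { alice: 0, bob: 0 } };

function resolve(root, path) {
  return path.split('.').reduce((node, key) => node?.[key], root);
}

function subscribe(path, callback) {
  // Re-resolve against the current tree on every notification.
  return () => callback(resolve(tree, path));
}

let latest = null;
const notify = subscribe('leaderboard', (value) => { latest = value; });

notify();                          // latest is { alice: 0, bob: 0 }
tree.leaderboard = { carol: 10 };  // replace the object underneath
notify();                          // same subscription: latest is { carol: 10 }
```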

&lt;p&gt;&lt;strong&gt;Object resets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most requested features: reset an object to a clean state without tearing down and recreating the channel. Previously, the workaround was destroying the channel entirely, which forced clients to reconnect, reattach subscribers, re-establish presence, and race the teardown. Object resets remove all of that. Useful for a new game round, a cleared poll, or a reset config without losing connection state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliable data expiry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;State now expires reliably after 90 days by default. Previously this was best-effort. If you're building anything with ephemeral sessions or time-bounded content, you can depend on this rather than writing your own cleanup logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revised object limits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The 100-object-per-channel limit now applies to top-level objects only. Applications with nested structures can model data naturally without counting every nested object or designing workarounds to stay under the limit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easier map handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;.compact()&lt;/code&gt; and &lt;code&gt;.compactJson()&lt;/code&gt; convert any LiveMap tree to a plain JavaScript object in one call, useful for rendering, serialization, or passing state to code that doesn't know about LiveObjects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;leaderboard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;compact&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;// { alice: 120, bob: 95 }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
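&lt;p&gt;For intuition, compaction is a recursive walk that replaces each live node with its plain value. This sketch uses a simplified stand-in node shape, not the SDK's real internals:&lt;/p&gt;

```javascript
// Illustrative stand-in for what .compact() does; real LiveMap/LiveCounter
// internals differ. Live nodes become plain values, recursively.
function compact(node) {
  if (node && node.type === 'counter') return node.value;
  if (node && node.type === 'map') {
    const out = {};
    for (const [key, child] of Object.entries(node.entries)) {
      out[key] = compact(child);   // nested live objects flatten too
    }
    return out;
  }
  return node;                     // plain JSON values pass through unchanged
}

const leaderboard = {
  type: 'map',
  entries: {
    alice: { type: 'counter', value: 120 },
    bob: { type: 'counter', value: 95 },
  },
};
// compact(leaderboard) → { alice: 120, bob: 95 }
```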



&lt;h2&gt;
  
  
  What you can build
&lt;/h2&gt;

&lt;p&gt;Live polls, leaderboards, collaborative forms, shared dashboards: any feature where multiple clients write to the same state and see each other's changes immediately. LiveObjects handles these well. But the use case we're most focused on right now is AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State sync for AI sessions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agents need to share context. Not respond to a single request and forget it, but maintain a live picture of what's happening: what the user is working on, what tasks are in progress, what the session looks like.&lt;/p&gt;

&lt;p&gt;The naive approach is polling or rebuilding session context on every request. That works until it doesn't. Agents diverge, state drifts, and the coordination layer becomes the thing your team maintains instead of the product.&lt;/p&gt;

&lt;p&gt;LiveObjects is a cleaner mechanism. Multiple clients and agents read and write shared state simultaneously, conflicts are resolved automatically, and every subscriber sees updates the moment they land.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;session&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;current_task&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Summarizing document&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;progress&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveCounter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;context&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LiveMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;page_title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Q3 Report&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;selected_text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Revenue grew 24% YoY...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;session&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt; &lt;span class="nx"&gt;object&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;updateAgentStatusPanel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compact&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're building AI applications and using Ably for token streaming, LiveObjects handles the state layer: what the model is working with, what it's doing, and what users can steer in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple SDKs, production-ready
&lt;/h2&gt;

&lt;p&gt;LiveObjects is available in JavaScript today, with Swift and Java following in the coming weeks. Platforms without a native SDK yet can use LiveObjects now through inband objects and the REST API.&lt;/p&gt;

&lt;p&gt;We're making a stability commitment for each SDK when it reaches the bar, not flipping a global flag while only one runtime is actually ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;The LiveObjects plugin ships as part of the standard SDK.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm install ably
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://ably.com/docs" rel="noopener noreferrer"&gt;LiveObjects docs&lt;/a&gt; have quick-start guides and a migration guide if you're upgrading from the experimental API. If you're building AI applications, the &lt;a href="https://ably.com/docs/ai-transport" rel="noopener noreferrer"&gt;AI Transport docs&lt;/a&gt; cover how LiveObjects fits into the state sync layer.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>distributedsystems</category>
      <category>news</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Multi-device AI session continuity: how cross-device conversation sync works</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Tue, 05 May 2026 08:29:44 +0000</pubDate>
      <link>https://dev.to/ablyblog/multi-device-ai-session-continuity-how-cross-device-conversation-sync-works-2i1d</link>
      <guid>https://dev.to/ablyblog/multi-device-ai-session-continuity-how-cross-device-conversation-sync-works-2i1d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Written by Amber Dawson&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You start a research task on your laptop, the network drops during a meeting, and when you open your phone to continue, the conversation is gone – you re-prompt, get partial duplicate results, and lose 30 minutes of work. The delivery layer dropped it. That's one of the most consistent problems teams hit when building AI applications.&lt;/p&gt;

&lt;p&gt;It's particularly acute in customer support, where a session belongs to the conversation – not to any single device, connection, or participant. An AI agent handles a query, the user switches from desktop to mobile mid-interaction, a human needs to step in. Every one of those transitions is a point where the session can silently break.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this breaks
&lt;/h2&gt;

&lt;p&gt;HTTP streaming is stateless. Each connection is independent, tied to a specific device and browser session, so when the user switches devices, refreshes, or loses connectivity, the new device has no position in the stream. It doesn't know which tokens the previous device received, it can't resume mid-response, and it starts over.&lt;/p&gt;

&lt;p&gt;There's no shared state across connections. Device B has no visibility into what Device A received, and without session tracking built into the architecture, the server treats each connection as a new actor. A stateless delivery layer wasn't designed for conversations that span sessions, devices, or time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhqad63sn462yjv6ln78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhqad63sn462yjv6ln78.png" alt=" " width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What breaks in production
&lt;/h2&gt;

&lt;p&gt;Teams building multi-device AI experiences without dedicated infrastructure hit the same set of edge cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lost responses.&lt;/strong&gt; The model finished generating while the user was offline or mid-switch. Nobody saw the output. The compute was wasted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Duplicate effort.&lt;/strong&gt; The user doesn't know if the previous session completed, so they re-prompt. You pay for the same response twice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State conflicts.&lt;/strong&gt; A new prompt arrives on the phone while the laptop tab still shows an incomplete response. Which version is canonical? The server doesn't know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mobile-specific failures.&lt;/strong&gt; iOS and Android background apps aggressively drop connections. WiFi-to-cellular handoffs are frequent. A conversation that works fine on desktop will fall apart on mobile without explicit reconnection and resume handling.&lt;/p&gt;

&lt;p&gt;These failures don't show up in demos. They appear in production, under real network conditions, with real users – and they erode trust quickly because AI conversations often carry context the user spent time building.&lt;/p&gt;

&lt;h2&gt;
  
  
  What most teams build first
&lt;/h2&gt;

&lt;p&gt;The standard workaround is a Redis buffer between the AI backend and the client. It handles full page reloads reasonably well. It doesn't handle tab switches. It breaks on mobile backgrounding. And it has no path for multi-device delivery – the session state is scoped to one client, not to the user.&lt;/p&gt;

&lt;p&gt;Every serious production team discovers this wall independently and ends up engineering some version of the same architecture. Vercel's own lead maintainer acknowledged the gap directly: "to solve this we would need to have a channel to the server that allows transporting that information. WebSockets are one option." That's the right diagnosis. The Redis buffer is an approximation of the real fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architectural shift: state lives in the channel, not the connection
&lt;/h2&gt;

&lt;p&gt;The underlying problem is that session state is coupled to the connection. The fix is decoupling them.&lt;/p&gt;

&lt;p&gt;Instead of streaming directly over an HTTP connection, the server publishes messages to a channel. Any device subscribing to that channel receives the same messages. The state is in the channel. The connection is the transport, nothing more.&lt;/p&gt;
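&lt;p&gt;The shift is easiest to see in a minimal in-memory model (illustrative only; a real channel adds durable storage, ordering, and auth on top of this idea): the channel retains what was published, so a late-joining device reads the same conversation as one that was there from the start.&lt;/p&gt;

```javascript
// Minimal in-memory model of "state lives in the channel". Not the Ably SDK:
// a real channel adds persistence, ordering guarantees, and auth.
class Channel {
  constructor() {
    this.history = [];              // everything published is retained
    this.subscribers = new Set();
  }
  publish(message) {
    this.history.push(message);
    for (const fn of this.subscribers) fn(message);  // fan out to live devices
  }
  subscribe(fn) {
    this.subscribers.add(fn);
    return this.history.slice();    // late joiners get the backlog
  }
}

const session = new Channel();
const laptop = [];
session.subscribe((m) => laptop.push(m));
session.publish({ role: 'assistant', text: 'Here is the summary...' });

// A phone joins later and still sees the full conversation.
const phoneBacklog = session.subscribe(() => {});
// phoneBacklog holds the assistant message; the laptop received it live
```

&lt;p&gt;Note that no device owns the conversation in this model: the channel does, and connections come and go around it.&lt;/p&gt;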

&lt;p&gt;This is the foundation of what's increasingly called a durable session – a persistent, addressable session between agents and users that outlives any single connection, device, or participant. Durable execution makes the backend crash-proof; durable sessions make the experience crash-proof. They sit on opposite sides of the agent and complement each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnddrlfjk81uvid2fjpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnddrlfjk81uvid2fjpz.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In practice this changes the behavior fundamentally. Any device can join – same browser tab, phone, or tablet. Subscribing to the channel gives that device access to the conversation. Reconnection becomes catch-up rather than restart: &lt;a href="https://ably.com/docs/storage-history/history" rel="noopener noreferrer"&gt;channels persist message history&lt;/a&gt;, and when a device reconnects, it replays what it missed and transitions to live delivery. From the user's perspective, they pick up where they left off.&lt;/p&gt;
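&lt;p&gt;The catch-up-then-live transition can be sketched as a merge that deduplicates by a per-message serial, so replayed history and the live stream stitch together without gaps or repeats (the serial field here is an assumption for illustration):&lt;/p&gt;

```javascript
// Sketch: stitch replayed history and the live stream together, deduplicating
// by a per-message serial so nothing is dropped or rendered twice.
function resume(history, liveBuffer) {
  const seen = new Set();
  const merged = [];
  for (const msg of [...history, ...liveBuffer].sort((a, b) => a.serial - b.serial)) {
    if (seen.has(msg.serial)) continue;  // already delivered via history replay
    seen.add(msg.serial);
    merged.push(msg);
  }
  return merged;
}

// History ends at serial 3; the live stream began while serial 3 was in flight.
const history = [
  { serial: 1, text: 'prompt' },
  { serial: 2, text: 'answer' },
  { serial: 3, text: 'follow-up' },
];
const live = [
  { serial: 3, text: 'follow-up' },
  { serial: 4, text: 'reply' },
];
// resume(history, live) yields serials 1, 2, 3, 4 exactly once
```

&lt;p&gt;The point of pushing this into the transport layer is that clients never write this merge themselves.&lt;/p&gt;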

&lt;p&gt;Conflicts route through the server. User actions – sending prompts, interrupting, deleting messages – go to the server, which publishes the authoritative result to the channel. All devices receive the same update. There's no client-side state to reconcile.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the transport layer has to handle
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Identity-aware fan-out.&lt;/strong&gt; The system needs to recognize all active sessions associated with a single user and propagate updates across all of them. When a user sends a message on one device, every other active device should reflect the change immediately. This requires mapping user identity to active connections at the infrastructure level, not the application layer.&lt;/p&gt;
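&lt;p&gt;A minimal sketch of that mapping (illustrative, not how any particular provider implements it): a registry keyed by user identity fans each update out to every attached connection.&lt;/p&gt;

```javascript
// Sketch: identity-aware fan-out. A registry maps one user identity to all of
// that user's active connections so a single update reaches every device.
class FanOut {
  constructor() {
    this.connections = new Map();   // userId -> Set of delivery callbacks
  }
  attach(userId, sink) {
    if (!this.connections.has(userId)) this.connections.set(userId, new Set());
    this.connections.get(userId).add(sink);
  }
  broadcast(userId, update) {
    for (const sink of this.connections.get(userId) ?? []) sink(update);
  }
}

const router = new FanOut();
const desktop = [];
const mobile = [];
router.attach('user-1', (u) => desktop.push(u));
router.attach('user-1', (u) => mobile.push(u));
router.broadcast('user-1', { type: 'message', text: 'On it.' });
// both desktop and mobile now hold the update
```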

&lt;p&gt;&lt;strong&gt;Ordering and &lt;a href="https://ably.com/docs/connect/states" rel="noopener noreferrer"&gt;session recovery&lt;/a&gt;.&lt;/strong&gt; If the connection drops – from a device switch, a network blip, or a page refresh – the user shouldn't lose messages or see them out of sequence. A well-designed transport layer replays missed events and keeps message sequences intact. History loads first, then the live stream resumes. The client doesn't need to manage the transition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token stream compaction.&lt;/strong&gt; Replaying thousands of individual tokens to a reconnecting device is wasteful. A better pattern compacts token streams into complete responses in channel history: one message per AI response, not hundreds of tokens. New devices load the complete response instantly, then receive new tokens for any in-progress generation.&lt;/p&gt;
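&lt;p&gt;A compaction pass is simple to sketch: group token messages by response and concatenate (the message shape here is illustrative):&lt;/p&gt;

```javascript
// Sketch of token-stream compaction: hundreds of token messages collapse into
// one message per response, so a reconnecting device replays responses, not tokens.
function compactTokens(messages) {
  const responses = new Map();
  for (const msg of messages) {
    responses.set(msg.responseId, (responses.get(msg.responseId) || '') + msg.token);
  }
  return [...responses.entries()].map(([responseId, text]) => ({ responseId, text }));
}

const tokens = [
  { responseId: 'r1', token: 'Revenue ' },
  { responseId: 'r1', token: 'grew ' },
  { responseId: 'r1', token: '24% YoY.' },
];
// compactTokens(tokens) → [{ responseId: 'r1', text: 'Revenue grew 24% YoY.' }]
```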

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkct1pegstdtpxtgz2bue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkct1pegstdtpxtgz2bue.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://ably.com/docs/presence-occupancy/presence" rel="noopener noreferrer"&gt;Presence tracking&lt;/a&gt;.&lt;/strong&gt; The backend needs to know which devices are currently active. This matters for more than UX. Should the model keep streaming if the user closed the tab? Should a background task escalate if all devices have disconnected? Presence answers these questions from a live membership set rather than polling or timeout heuristics. Without it, systems rely on assumptions that produce missed interactions, wasted compute, and handoffs that arrive too late.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presence-aware cost controls.&lt;/strong&gt; AI agents can quietly generate output that delivers no value but incurs real cost – streaming to an empty room, running tool calls after the user navigates away. Tying agent activity to presence means the infrastructure pauses or deprioritizes automatically when no devices are engaged and resumes when they return. Costs scale with actual usage, not connection count.&lt;/p&gt;
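&lt;p&gt;As a sketch, the decision reduces to a predicate over the live membership set; the member shape and agent interface here are assumptions for illustration:&lt;/p&gt;

```javascript
// Sketch: gate agent activity on the live presence set instead of timeouts.
// The member shape ({ state }) and agent interface are illustrative assumptions.
function shouldStream(members) {
  return members.some((m) => m.state === 'active');
}

function onPresenceChange(members, agent) {
  if (shouldStream(members)) {
    agent.resume();  // someone is watching: keep delivering output
  } else {
    agent.pause();   // empty room: stop paying for unseen generation
  }
}

const agent = {
  paused: false,
  pause() { this.paused = true; },
  resume() { this.paused = false; },
};
onPresenceChange([{ state: 'background' }], agent);  // all backgrounded: paused
onPresenceChange([{ state: 'active' }], agent);      // a device returns: resumed
```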

&lt;h2&gt;
  
  
  Mobile is the hardest case
&lt;/h2&gt;

&lt;p&gt;Mobile devices are the toughest environment for connection continuity.&lt;/p&gt;

&lt;p&gt;Network instability is constant – WiFi-to-cellular handoffs, tunnel blackouts, dead zones. Resume capability isn't optional. Apps get backgrounded aggressively, so the model might finish generating while the app is suspended, and when the user returns they should see the completed response, not an empty screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/push" rel="noopener noreferrer"&gt;Push notifications&lt;/a&gt; bridge the gap. When significant events occur while the app is backgrounded – task complete, human takeover required – notifications alert the user and deep-link directly to the conversation. The payload should carry enough context for the app to restore state without a full reload. Push notification infrastructure (FCM, APNs, Web Push) ships as a supported capability; AI-specific end-to-end delivery patterns are still being documented, so implementation details vary by platform.&lt;/p&gt;

&lt;p&gt;Battery is also a real constraint. Holding open WebSocket connections when the app is backgrounded drains battery, so intelligent reconnection strategies close connections when backgrounded, reconnect on foreground, and use push notifications to trigger reconnects for important updates.&lt;/p&gt;
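&lt;p&gt;A sketch of that policy, with a hypothetical connection and event-bus interface standing in for platform APIs:&lt;/p&gt;

```javascript
// Sketch of a background/foreground connection policy. The event bus and
// connection interface are hypothetical stand-ins, not a specific SDK.
function makeBus() {
  const handlers = {};
  return {
    on(name, fn) { (handlers[name] ||= []).push(fn); },
    emit(name, arg) { (handlers[name] || []).forEach((fn) => fn(arg)); },
  };
}

function attachLifecycle(connection, events) {
  events.on('background', () => connection.close());  // no idle socket while suspended
  events.on('foreground', () => connection.open());   // reconnect; history replay catches up
  events.on('push', (note) => {
    if (note.important) connection.open();            // push wakes the app for key updates
  });
}

const connection = {
  isOpen: true,
  open() { this.isOpen = true; },
  close() { this.isOpen = false; },
};
const app = makeBus();
attachLifecycle(connection, app);
app.emit('background');                 // connection closes to save battery
app.emit('push', { important: true });  // important update triggers a reconnect
```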

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kwk36pddse08v611m10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kwk36pddse08v611m10.png" alt=" " width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Gen-1 AI vs Gen-2 AI: the real decision
&lt;/h2&gt;

&lt;p&gt;Not every AI application needs cross-device support. HTTP streaming works well for Gen-1 AI products – a user sends a prompt, the model returns a response, the interaction is complete. Single session, single device, seconds to complete. For that use case, HTTP streaming is the right call.&lt;/p&gt;

&lt;p&gt;Gen-2 AI products look structurally different. Sessions last minutes or hours. Agents make tool calls mid-conversation, coordinate with other agents, and run tasks in the background while the user is elsewhere. Humans need to step in – approving actions, taking over from an agent that has reached its limits, handing control back. Users move between devices and expect the conversation to follow them.&lt;/p&gt;

&lt;p&gt;The question isn't whether your architecture is complex. It's which generation of product you're building. If sessions outlive a single connection, if users will move between devices, if a human might need to join a running conversation – channel-based architecture is the right call. 32 of 37 vendors evaluated have no multi-device fan-out capability at all, which means most teams building Gen-2 products are either rebuilding this layer from scratch or shipping without it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this makes possible
&lt;/h2&gt;

&lt;p&gt;Channel-based sessions change what teams can build. A user starts a complex analysis on their phone during a commute, continues on their laptop at the office, and receives a push notification when the background task completes. In a customer support workflow, an AI agent handles a query, the conversation follows the user from desktop to mobile mid-interaction, and a human operator can step in on any device with full session context intact – then hand control back to the agent when they're done.&lt;/p&gt;

&lt;p&gt;Users already expect this from messaging applications. AI conversations are next.&lt;/p&gt;

&lt;p&gt;The infrastructure decision is whether to build session synchronization yourself or use systems designed for it. Building it means pub/sub channels, &lt;a href="https://ably.com/docs/storage-history/storage" rel="noopener noreferrer"&gt;message persistence with configurable retention&lt;/a&gt;, client SDKs that handle subscription and &lt;a href="https://ably.com/docs/storage-history/history" rel="noopener noreferrer"&gt;history replay&lt;/a&gt;, &lt;a href="https://ably.com/docs/presence-occupancy/presence" rel="noopener noreferrer"&gt;presence tracking&lt;/a&gt;, mobile SDKs with background handling, &lt;a href="https://ably.com/docs/push" rel="noopener noreferrer"&gt;push notification support&lt;/a&gt;, and identity-scoped authorization. That's weeks to months of engineering, and the edge cases don't appear until production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt; implements this model – the docs on &lt;a href="https://ably.com/docs/storage-history/history" rel="noopener noreferrer"&gt;channel history&lt;/a&gt; and &lt;a href="https://ably.com/docs/connect/states" rel="noopener noreferrer"&gt;connection state recovery&lt;/a&gt; cover what the infrastructure layer needs to handle in detail.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Why we're betting on Durable Sessions</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 27 Apr 2026 11:45:08 +0000</pubDate>
      <link>https://dev.to/ablyblog/why-were-betting-on-durable-sessions-2gck</link>
      <guid>https://dev.to/ablyblog/why-were-betting-on-durable-sessions-2gck</guid>
      <description>&lt;p&gt;Written by Matthew O'Riordan&lt;/p&gt;

&lt;p&gt;Over the past year, I've spoken to more than 40 engineering teams building production AI agents. Different companies, different frameworks, different use cases. The same conversation kept happening.&lt;/p&gt;

&lt;p&gt;"Our streams break when users switch tabs." "We can't tell if the agent crashed or is still thinking." "We built a custom reconnection layer and it took three months." "Our users can't switch from laptop to phone mid-conversation." Every team described it differently, but they were all describing the same gap. Between the agent and the user, there's no dedicated infrastructure for the session itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjptaq1of0vdam8d41fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjptaq1of0vdam8d41fi.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We backed this up with research across 37 AI infrastructure platforms, hundreds of GitHub issues and community threads, and 40+ customer discovery calls. 35 of those 37 platforms have no stream resumption after a disconnect. 33 have no way to detect an agent crash. The gap is universal, and the framework maintainers know it. Vercel built a &lt;a href="https://vercel.com/blog/ai-sdk-5" rel="noopener noreferrer"&gt;pluggable ChatTransport in AI SDK 5&lt;/a&gt; so developers can bring their own transport. TanStack AI shipped a &lt;a href="https://tanstack.com/ai/latest/docs/guides/connection-adapters" rel="noopener noreferrer"&gt;ConnectionAdapter&lt;/a&gt; for third-party providers. They've diagnosed the problem and built the plugin points. They're waiting for specialist infrastructure to show up.&lt;/p&gt;

&lt;p&gt;Nobody did anything wrong. Everyone focused on the right thing first: the intelligence, the orchestration, the models. But as AI experiences have gotten more sophisticated, the transport layer between the agent and the user has become the constraint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agents are becoming human-like, and they need human infrastructure
&lt;/h2&gt;

&lt;p&gt;The insight that changed how we think about this came from an unexpected direction. As agents get more sophisticated, they start behaving like human participants in a conversation. They think for a while before responding. They work on tasks in the background. They hand off to a human colleague when they hit their limits. Users walk away, come back later, and expect to pick up exactly where things were.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfrfutaduy9xfk2hjg38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfrfutaduy9xfk2hjg38.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are the exact communication challenges we've been solving for human-to-human interaction for 10 years. Presence, reliable delivery, session continuity across devices, bidirectional control. Every messaging app since WhatsApp has solved these problems for humans, and the moment agents become participants in conversations, they need the same infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  We've been building this for a decade
&lt;/h2&gt;

&lt;p&gt;I'll be honest. We almost dismissed the AI space entirely. When every company suddenly needed an "AI strategy," my instinct was skepticism. We're an infrastructure company. We process trillions of transactions across billions of devices. Why would we need an AI-specific product?&lt;/p&gt;

&lt;p&gt;I was wrong. Companies like Intercom and HubSpot were already building AI agent experiences on top of our Pub/Sub messaging infrastructure, the realtime layer that handles reliable delivery between servers, devices, and services. They needed ordered delivery, presence, session state, multi-device support. They were using the infrastructure we'd already built, without waiting for us to package it as an AI product.&lt;/p&gt;

&lt;p&gt;Ably has been a durable session layer for 10 years. We never called it that because the term didn't exist. We called it realtime infrastructure, messaging, pub/sub. But the capabilities are the same. Persistent sessions that survive disconnects. Ordered delivery with automatic catch-up. Multi-device fan-out, presence, bidirectional communication. We built all of this for human communication at scale, and it turns out it's exactly what AI-to-human communication needs too.&lt;/p&gt;

&lt;h2&gt;
  
  
  A category is forming
&lt;/h2&gt;

&lt;p&gt;We're not inventing this term. We're recognizing something that's already happening.&lt;/p&gt;

&lt;p&gt;ElectricSQL published a &lt;a href="https://electric-sql.com/blog/2026/01/12/durable-sessions-for-collaborative-ai" rel="noopener noreferrer"&gt;"Durable Sessions" blog post&lt;/a&gt; earlier this year defining it as a pattern for collaborative AI. EMQX has used "Durable Sessions" as &lt;a href="https://docs.emqx.com/en/emqx/latest/durability/durability_introduction.html" rel="noopener noreferrer"&gt;a named feature in their MQTT&lt;/a&gt; broker for years. Convex is building agent components with persistent threads and durable workflows. Vercel is building a DurableAgent class. At least 12 companies are converging on the same problem space from different angles.&lt;/p&gt;

&lt;p&gt;The pattern mirrors Durable Execution. Temporal existed before AI agents needed it, then suddenly every team building production agents needed backend workflows that couldn't fail. Temporal went from niche to &lt;a href="https://temporal.io/blog/temporal-raises-usd300m-series-d-at-a-usd5b-valuation" rel="noopener noreferrer"&gt;a $5 billion valuation.&lt;/a&gt; AWS adopted the term for Lambda Durable Functions. The category debate was over.&lt;/p&gt;

&lt;p&gt;Durable Execution made the backend crash-proof. Durable Sessions makes the experience crash-proof. They're complementary layers on opposite sides of the agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  This needs to be bigger than Ably
&lt;/h2&gt;

&lt;p&gt;A category with one company in it isn't a category. It's a product pitch. We want other companies in this space. We want developers to recognize "durable sessions" as an infrastructure layer they need, regardless of who provides it.&lt;/p&gt;

&lt;p&gt;We've published &lt;a href="https://durablesessions.ai/" rel="noopener noreferrer"&gt;durablesessions.ai&lt;/a&gt; as a community resource that defines the concept, documents vendor convergence, and tracks how the ecosystem is forming. I'm personally committed to pushing this forward. Not because it helps Ably specifically, but because I believe it will improve how we all build and experience AI. I've been doing this for a long time and I've never been more energized about what's ahead.&lt;/p&gt;

&lt;p&gt;If you're at AI Engineer Europe next week, our tech lead will be presenting on durable sessions and why this layer matters. I'll be there too. Come find me and the team. If you're building in this space, whether as a competitor, a complement, or a fellow traveler, I want to talk. Getting the people working on this in the same room, having honest conversations about what developers actually need, is worth more than any blog post.&lt;/p&gt;

&lt;p&gt;This is the first in a series. Over the coming weeks, we'll go deeper into the evidence, the ecosystem, and the practical framework for evaluating what your AI sessions actually need. Follow along here or connect with me on &lt;a href="https://www.linkedin.com/in/mattoriordan" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Why AI agents need a transport layer: Solving the realtime sync problem</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 27 Apr 2026 11:37:29 +0000</pubDate>
      <link>https://dev.to/ablyblog/why-ai-agents-need-a-transport-layer-solving-the-realtime-sync-problem-k4o</link>
      <guid>https://dev.to/ablyblog/why-ai-agents-need-a-transport-layer-solving-the-realtime-sync-problem-k4o</guid>
      <description>&lt;p&gt;Building AI agents that work reliably in production requires solving problems that have nothing to do with AI. While teams focus on prompt engineering, model selection, and agent orchestration, a different class of challenges emerges at deployment. These have little to do with LLMs and everything to do with keeping agents and clients synchronized in realtime.&lt;/p&gt;

&lt;p&gt;Over the past few months, we've spoken with engineers at over 40 companies building AI assistants, copilots, and agentic workflows. The same infrastructure problems surfaced repeatedly – problems with distributed systems, not models.&lt;/p&gt;

&lt;h2&gt;
  
  
  The infrastructure gap in AI applications
&lt;/h2&gt;

&lt;p&gt;When you're building an AI agent that streams responses to users, you're not just building an AI system. You're building a distributed realtime application where state needs to stay synchronized across components that connect, disconnect, and reconnect unpredictably.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4nlrbmq897i379jipft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4nlrbmq897i379jipft.png" alt=" " width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are the technical challenges that came up consistently:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection management at scale.&lt;/strong&gt; Managing WebSocket or SSE connections between agents and clients becomes complex quickly. Connections drop during mobile network handoffs, page refreshes, and tab switches. Each disconnection requires handling buffering, replay logic, and state reconciliation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Client-specific state tracking.&lt;/strong&gt; Agents need to track what each individual client has received, across multiple devices and multiple users. When a client reconnects, the agent must determine exactly which messages they missed and replay only those, without gaps or duplicates.&lt;/p&gt;
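&lt;p&gt;The usual building block for gap-free, duplicate-free replay is a per-client cursor over an ordered message log. A minimal sketch, assuming each message carries a monotonically increasing &lt;code&gt;serial&lt;/code&gt; (the names here are illustrative, not a specific SDK's API):&lt;/p&gt;

```javascript
// Replay only what a reconnecting client hasn't seen, given an ordered
// history and the last serial the client acknowledged.
function missedMessages(history, lastSeenSerial) {
  return history.filter((msg) => msg.serial > lastSeenSerial);
}

// Client-side cursor: drops duplicates, advances on each new message.
class ClientCursor {
  constructor() {
    this.lastSeenSerial = 0;
  }
  receive(msg) {
    if (msg.serial <= this.lastSeenSerial) return null; // duplicate: drop
    this.lastSeenSerial = msg.serial;
    return msg;
  }
}
```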

&lt;p&gt;&lt;strong&gt;Distributed agent routing.&lt;/strong&gt; In distributed deployments, reconnecting clients need to reach the correct agent instance. This gets harder still with durable execution patterns, where agent state persists but the instance handling it may change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuity between historical and live data.&lt;/strong&gt; Clients loading a conversation need continuity between historical messages and live streaming responses. Gaps in this transition break the user experience.&lt;/p&gt;

&lt;p&gt;What teams actually wanted wasn't complicated: token streams that survive network interruptions, conversations that work across device switches, multi-user sessions that stay synchronized, and long-running agent work that continues when users go offline.&lt;/p&gt;

&lt;p&gt;These requirements describe a transport layer problem – the infrastructure between agents and clients that handles delivery, synchronization, and state management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vwl8ddrhluaw16punj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vwl8ddrhluaw16punj4.png" alt=" " width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical patterns for AI workloads
&lt;/h2&gt;

&lt;p&gt;Several technical patterns emerged from observing how teams build AI applications on top of pub/sub infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/ai-transport/token-streaming" rel="noopener noreferrer"&gt;&lt;strong&gt;Token streaming&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;with message appends.&lt;/strong&gt; LLMs stream tokens individually, but storing thousands of separate token messages per response creates inefficient channel history. Loading a conversation would require replaying thousands of individual tokens.&lt;/p&gt;

&lt;p&gt;The solution is a message append operation: publish an initial message, then append subsequent tokens to it by referencing the message serial. Clients joining mid-stream receive the complete response so far in a single update, then receive subsequent appends. Channel history contains one compacted message per AI response rather than thousands of token fragments.&lt;/p&gt;
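&lt;p&gt;The compaction effect can be sketched in a few lines. This is an illustrative model of the append semantics, not the actual wire format – &lt;code&gt;CompactedHistory&lt;/code&gt; and its methods are hypothetical names:&lt;/p&gt;

```javascript
// Illustrative model of publish-then-append: an initial publish creates
// a message with a serial; appends reference that serial and extend it.
// History ends up with one compacted message per AI response.
class CompactedHistory {
  constructor() {
    this.messages = new Map(); // serial -> accumulated text
    this.nextSerial = 1;
  }
  publish(text) {
    const serial = this.nextSerial++;
    this.messages.set(serial, text);
    return serial;
  }
  append(serial, tokens) {
    this.messages.set(serial, this.messages.get(serial) + tokens);
  }
  // A client joining mid-stream reads one complete message, not fragments.
  get(serial) {
    return this.messages.get(serial);
  }
}
```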

&lt;p&gt;Server-side rollups batch appends within a configurable time window (default 40ms) to stay within rate limits while maintaining smooth streaming UX. This handles the impedance mismatch between token-by-token streaming from models and efficient message storage.&lt;/p&gt;
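&lt;p&gt;A rollup batcher of this kind might look like the following sketch (illustrative only; a production timer would drive &lt;code&gt;flush()&lt;/code&gt; automatically on the window boundary):&lt;/p&gt;

```javascript
// Buffer tokens and emit one batched append per window instead of one
// publish per token. Window defaults to 40ms, matching the default
// mentioned above; this sketch also exposes flush() for direct use.
class RollupBatcher {
  constructor(emit, windowMs = 40) {
    this.emit = emit; // called with one joined string per flush
    this.windowMs = windowMs;
    this.buffer = [];
    this.timer = null;
  }
  add(token) {
    this.buffer.push(token);
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }
  flush() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    this.emit(this.buffer.join(""));
    this.buffer = [];
  }
}
```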

&lt;p&gt;&lt;strong&gt;Annotations for citations.&lt;/strong&gt; AI responses that reference external sources need citation metadata attached without modifying the response content itself. Publishing &lt;a href="https://ably.com/docs/ai-transport/messaging/citations" rel="noopener noreferrer"&gt;citations as annotations&lt;/a&gt; – metadata referencing a message serial – keeps the response clean while enabling rich client-side rendering.&lt;/p&gt;

&lt;p&gt;Annotations include a type (e.g., citations:multiple.v1) and arbitrary data: URLs, titles, character offsets for inline citation markers. The transport aggregates annotations automatically – clients receive a summary ("3 citations from wikipedia.org, 2 from nasa.gov") rather than processing every individual event.&lt;/p&gt;
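&lt;p&gt;Client-side, the aggregation step reduces to counting citations per source. A minimal sketch, assuming a hypothetical annotation shape with a &lt;code&gt;data.citations&lt;/code&gt; array of URLs (not the real schema):&lt;/p&gt;

```javascript
// Summarize citation annotations per source domain so a client can
// render "2 from en.wikipedia.org, 1 from www.nasa.gov" rather than
// processing every individual citation event.
function summarizeCitations(annotations) {
  const counts = new Map();
  for (const annotation of annotations) {
    if (annotation.type !== "citations:multiple.v1") continue;
    for (const citation of annotation.data.citations) {
      const domain = new URL(citation.url).hostname;
      counts.set(domain, (counts.get(domain) || 0) + 1);
    }
  }
  return [...counts].map(([domain, n]) => `${n} from ${domain}`).join(", ");
}
```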

&lt;p&gt;&lt;strong&gt;Messaging patterns for agentic workflows.&lt;/strong&gt; The bi-directional nature of channels enables several agent interaction patterns:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/ai-transport/messaging/tool-calls" rel="noopener noreferrer"&gt;Tool calls&lt;/a&gt;: Agents publish tool invocations with a toolCallId for correlation. Clients can render generative UI (displaying a weather card when get_weather is invoked) or execute client-side tools (agent requests GPS location, client executes locally and publishes the result back).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/ai-transport/messaging/human-in-the-loop" rel="noopener noreferrer"&gt;Human-in-the-loop&lt;/a&gt;: Agents publish approval requests. Authorized users review and respond over the same channel. The agent verifies the approver's clientId or userClaim before executing sensitive operations. The request-response pattern fits naturally into bi-directional channels.&lt;/p&gt;

&lt;p&gt;Chain-of-thought streaming: Streaming reasoning alongside output can happen inline (single channel, distinguished by message name) or threaded (separate reasoning channel per response, subscribed to on demand to reduce bandwidth).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5klq3s33z7o0zxao4x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5klq3s33z7o0zxao4x3.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters for production AI
&lt;/h2&gt;

&lt;p&gt;The gap between prototype and production AI isn't primarily about model capabilities. It's about infrastructure that handles the messy realities of distributed systems: network interruptions, device switches, concurrent users, and agent failures.&lt;/p&gt;

&lt;p&gt;When agents and clients communicate through a proper transport layer rather than direct connections, entire classes of complexity disappear. Agents don't track connection state. Reconnection logic isn't custom code in every agent. &lt;a href="https://ably.com/blog/cross-device-ai-sync" rel="noopener noreferrer"&gt;Multi-device support&lt;/a&gt; isn't a feature you build, it's a property of the architecture.&lt;/p&gt;

&lt;p&gt;The interesting problems in AI infrastructure aren't always where you expect them. Sometimes the hard part isn't the AI – it's keeping everything synchronized.&lt;/p&gt;

&lt;p&gt;Ready to build resilient AI applications? Explore the &lt;a href="https://ably.com/docs/ai-transport" rel="noopener noreferrer"&gt;AI Transport documentation&lt;/a&gt; for implementation patterns, code examples, and architectural guidance.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>The missing transport layer in user-facing AI applications</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 13 Apr 2026 14:17:39 +0000</pubDate>
      <link>https://dev.to/ablyblog/the-missing-transport-layer-in-user-facing-ai-applications-3j90</link>
      <guid>https://dev.to/ablyblog/the-missing-transport-layer-in-user-facing-ai-applications-3j90</guid>
      <description>&lt;p&gt;Most AI applications start the same way: wire up an LLM, stream tokens to the browser, ship. That works for simple request-response. It breaks when sessions outlast a connection, when users switch devices, or when an agent needs to hand off to a human.&lt;/p&gt;

&lt;p&gt;The cracks appear in the delivery layer, not the model. Every serious production team discovers this independently and builds their own workaround. Those workarounds don't hold once users start hitting them in production.&lt;/p&gt;

&lt;p&gt;Here's what breaks, and what the transport layer needs to handle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shift that creates the problem
&lt;/h2&gt;

&lt;p&gt;Simple AI applications are synchronous. User sends a message, model returns a response, done. A dropped connection restarts cleanly.&lt;/p&gt;

&lt;p&gt;Agentic applications aren't like that. They run in a loop: perceive the user's intent, reason with the model, act by calling tools or sub-agents, and observe the result. Then they go around again until the task is done.&lt;/p&gt;

&lt;p&gt;A research agent might loop a dozen times over several minutes, calling APIs and querying databases. The user is present throughout, watching, waiting, potentially needing to redirect. The connection might drop mid-loop, the user might switch devices, or they might realize mid-stream that the agent is heading the wrong way.&lt;/p&gt;

&lt;p&gt;That's a different problem, and one HTTP streaming wasn't designed to solve. The backend surviving and the session surviving are two different things. What's missing is a layer that treats the conversation as durable state: persisting across connections, devices, and participants.&lt;/p&gt;

&lt;p&gt;Durable execution makes the backend crash-proof. Durable sessions makes what the user actually sees crash-proof. Most teams building agentic products need both.&lt;/p&gt;

&lt;h2&gt;
  
  
  What breaks in production
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tokens disappear and reconnects corrupt state.&lt;/strong&gt; HTTP streaming delivers tokens once. A dropped connection loses them. Most workarounds handle full page reloads but not tab switches or mobile backgrounding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2odrarurbi84dlej26kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2odrarurbi84dlej26kt.png" alt=" " width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Worse, naive reconnect implementations replay the same output and produce duplicates: fragments, repeated tokens, or an interface in an indeterminate state. The Vercel AI SDK makes the tradeoff explicit: its resume and stop features are incompatible. You can resume a dropped stream or cancel it, but not both. &lt;a href="https://ably.com/blog/token-streaming-for-ai-ux" rel="noopener noreferrer"&gt;A full breakdown of what resumable streaming requires at the infrastructure level is here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users can't see what the agent is doing.&lt;/strong&gt; The agent is running tool calls, checking backend systems, orchestrating sub-agents. From the user's perspective it's a spinner and silence. Users abandon tasks they can't see progressing.&lt;/p&gt;

&lt;p&gt;There's no standard mechanism for surfacing intermediate results as first-class events on the session channel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There's no way to interrupt.&lt;/strong&gt; Once generation starts, the user is locked out. Interruption requires bi-directional communication on the same channel: user input arriving while agent output is still streaming, without breaking state. One company disabled user input entirely during agent responses because the backend couldn't distinguish an intentional cancel from a dropped connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre343jt98g88k35r6ovo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre343jt98g88k35r6ovo.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agent keeps working after the user has left.&lt;/strong&gt; No signal tells the agent the user closed the tab. Compute and token costs accumulate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/presence-occupancy/presence" rel="noopener noreferrer"&gt;Presence&lt;/a&gt; is a live membership set showing who is active in the session. Agents use it to pause expensive operations when nobody is there and resume when they return.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple agents collide.&lt;/strong&gt; When two specialist agents are working on the same request, every intermediate update routes through the orchestrator. The orchestrator becomes a bottleneck: when it's relaying progress it doesn't care about, the architecture starts to fight itself. &lt;a href="https://ably.com/blog/multi-agent-ai-systems" rel="noopener noreferrer"&gt;The multi-agent coordination post goes deeper on how this plays out with concurrent specialist agents.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents fail silently.&lt;/strong&gt; Most infrastructure has no agent health mechanism at the transport level. Without one, a crash looks like a slow response: clients wait on a dead stream until a timeout guesses at failure. With presence, a crash fires a disconnect event immediately instead of being inferred from a dead stream. Build recovery logic on the wrong signal and it breaks under real failure conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4erssut4y9x7u106v15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4erssut4y9x7u106v15.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human handovers lose context.&lt;/strong&gt; When an agent escalates, most implementations open a different interface, summarize what happened, and hope the transfer works. The user explains their problem again. A &lt;a href="https://ably.com/docs/ai-transport/messaging/human-in-the-loop" rel="noopener noreferrer"&gt;unified channel where agents and humans can both participate&lt;/a&gt; addresses this: the human arrives with full history and picks up mid-thread.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are no transport-level diagnostics.&lt;/strong&gt; Model-level tooling shows what the model decided to do. Nothing shows what happened between the agent and the user's screen: whether a message arrived, whether a reconnection worked, whether delivery stalled. Debugging a failed session means stitching together server logs that rarely reconstruct what actually happened.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funrbaou5u34xrpo9yszd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funrbaou5u34xrpo9yszd.png" alt=" " width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the transport layer needs to handle
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Resumable streaming.&lt;/strong&gt; Output persists in the channel, not the connection. When a client reconnects, it rejoins from its last received position with no gaps and no duplicates. Mutable messages handle retry corruption: republish to the same message ID and the client sees clean updated state, not a second copy. Vercel built a pluggable &lt;a href="https://ai-sdk.dev/docs/ai-sdk-ui/transport" rel="noopener noreferrer"&gt;ChatTransport interface&lt;/a&gt; specifically to support this pattern; TanStack AI shipped a &lt;a href="https://tanstack.com/ai/latest/docs/guides/connection-adapters" rel="noopener noreferrer"&gt;ConnectionAdapter&lt;/a&gt; for the same reason. The ecosystem has diagnosed the problem and built the plug-in points.&lt;/p&gt;
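&lt;p&gt;The mutable-message behavior described above amounts to keying rendered messages by ID, so a retried publish replaces the earlier copy instead of duplicating it. A minimal client-side sketch (illustrative, not a specific SDK):&lt;/p&gt;

```javascript
// Upsert semantics for rendered messages: same id overwrites cleanly,
// new id appends. A retried publish to the same id can never duplicate.
function applyMessage(state, msg) {
  const next = new Map(state);
  next.set(msg.id, msg.text);
  return next;
}
```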

&lt;p&gt;&lt;strong&gt;Multi-device continuity.&lt;/strong&gt; &lt;a href="https://ably.com/docs/ai-transport/sessions-identity" rel="noopener noreferrer"&gt;Session state lives on the channel, not any individual client.&lt;/a&gt; Any device subscribing gets the same history and live updates. The session follows the user, not the connection.&lt;/p&gt;

&lt;p&gt;23 of 26 AI platforms evaluated in recent market research have no multi-device session continuity, including ChatGPT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bi-directional communication on a shared channel.&lt;/strong&gt; User input and agent output flow on the same channel simultaneously. A redirect from the user arrives as an explicit signal while the agent is mid-stream, not as an ambiguous TCP side effect. The backend can now distinguish an intentional cancel from a dropped connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Progress as structured events.&lt;/strong&gt; Agent reasoning steps, tool call progress, and intermediate results should be &lt;a href="https://ably.com/docs/ai-transport/messaging" rel="noopener noreferrer"&gt;first-class events on the channel&lt;/a&gt;, subscribable independently of the main response stream. Specialized agents publish progress directly. The orchestrator stops relaying events it doesn't care about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presence.&lt;/strong&gt; A live membership set for users, agents, and human operators. Agents make real decisions based on it: pause when the user is gone, resume when they return. Crash detection is a presence event: when an agent disconnects, the event fires immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session-level diagnostics.&lt;/strong&gt; Channel history serves as both the live diagnostic feed and the persistent audit record: structured, timestamped, and identity-attributed. This covers the delivery layer between agent and user, separate from model-level observability, and both surfaces matter in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  The underlying principle
&lt;/h2&gt;

&lt;p&gt;Each of these problems is tractable in isolation. Solving all of them together, without a dedicated infrastructure layer, is where engineering budget quietly disappears. None of it has anything to do with the AI product itself.&lt;/p&gt;

&lt;p&gt;The workaround that seemed to hold breaks as soon as teams need cancellation, multi-device continuity, or human handover without a context break. The result is a growing layer of glue code that keeps teams away from the features they're actually trying to ship.&lt;/p&gt;

&lt;p&gt;The category forming around this problem, durable sessions, does for the session layer what durable execution did for backend workflows. The infrastructure requirement is the same: a layer built for the failure modes that actually occur, not workarounds patched onto infrastructure designed for something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Ably AI Transport fits
&lt;/h2&gt;

&lt;p&gt;Ably AI Transport is a drop-in durable session layer that absorbs this complexity. Developers publish to a session. The infrastructure handles resumable streaming, multi-device continuity, presence, shared state, and bi-directional communication. No changes required to your model calls or agent orchestration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/ai-transport" rel="noopener noreferrer"&gt;Docs go deeper →&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Why your AI response restarts on page refresh (and what it takes to prevent it)</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 13 Apr 2026 13:58:52 +0000</pubDate>
      <link>https://dev.to/ablyblog/why-your-ai-response-restarts-on-page-refresh-and-what-it-takes-to-prevent-it-gd2</link>
      <guid>https://dev.to/ablyblog/why-your-ai-response-restarts-on-page-refresh-and-what-it-takes-to-prevent-it-gd2</guid>
      <description>&lt;p&gt;Your AI assistant is mid-sentence explaining a complex debugging strategy. The user refreshes the page. The response starts over from the beginning, or worse, vanishes entirely.&lt;/p&gt;

&lt;p&gt;This isn't a model problem. It's a delivery problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What breaks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most AI applications &lt;a href="https://ably.com/docs/ai-transport/token-streaming" rel="noopener noreferrer"&gt;stream LLM responses&lt;/a&gt; over HTTP using Server-Sent Events or fetch streams. The connection delivers tokens in order until the response completes. If the user refreshes, closes the tab, or loses network connectivity, the stream ends. When they reconnect, there's no mechanism to resume from where they left off.&lt;/p&gt;

&lt;p&gt;The application has two options: start the entire response over (wasting tokens and user time) or lose everything that was streamed before the disconnection (losing context the user already read).&lt;/p&gt;

&lt;p&gt;Neither option works in production. Users refresh pages. Networks drop. Browsers crash. Mobile apps background. These aren't edge cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F861n0s6abmkchmlecz2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F861n0s6abmkchmlecz2j.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why naive approaches fail&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client-side buffering:&lt;/strong&gt; You can cache tokens in memory or localStorage, but this only handles intentional refreshes on the same device. It doesn't help with network interruptions, crashes, or users switching devices mid-conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response regeneration:&lt;/strong&gt; Re-requesting the full response from the LLM costs tokens, adds latency, and often produces different output. The user sees the response change on reload, breaking continuity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateless HTTP streaming:&lt;/strong&gt; Standard SSE and fetch streams have no concept of session recovery. When the connection closes, the client has no way to tell the server "resume from token 847."&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How resumable streaming actually works&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The system needs three components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session identity:&lt;/strong&gt; Each AI response gets a unique session ID that persists across connections. When the client reconnects, it presents this ID to resume the same logical response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Offset tracking:&lt;/strong&gt; The server tracks which tokens have been delivered. The client tracks which tokens it has received and rendered. On reconnect, the client requests "start from token N."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ordered delivery with history:&lt;/strong&gt; The transport layer guarantees token ordering and maintains a replayable history. When a client reconnects with an offset, the server resumes delivery from that point without re-invoking the LLM.&lt;/p&gt;
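&lt;p&gt;The client half of the first two components can be sketched briefly. The names and storage here are illustrative, not a specific API: the client keeps a session ID and the index of the last token it rendered, and asks to resume from the next one.&lt;/p&gt;

```javascript
// Sketch of client-side resume state: a persistent session ID plus the
// last token offset rendered. Names are illustrative; in a browser this
// state would live in localStorage to survive a refresh.
function makeResumeState(sessionId) {
  return {
    sessionId,
    offset: 0,                      // last token index rendered
    record(tokenIndex) { this.offset = tokenIndex; },
    nextToken() { return this.offset + 1; },  // where resume starts
  };
}

const state = makeResumeState('sess-42');
state.record(1247);                 // connection drops at token 1,247
console.log(state.nextToken());     // 1248
```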

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquckez75mq23z3u3tlsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquckez75mq23z3u3tlsn.png" alt=" " width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tradeoffs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building this yourself means managing session state, handling offset synchronisation across multiple connections, and ensuring tokens arrive in order even if network packets don't. You'll need persistent storage for token history and logic to handle race conditions when users reconnect from multiple tabs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A concrete example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;User asks an AI assistant to explain a codebase. The LLM streams 2,000 tokens over 30 seconds. At token 1,247, the user's network drops for eight seconds. Without resumability, the user sees a frozen response, then either loses everything or watches it restart.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://ably.com/blog/token-streaming-for-ai-ux" rel="noopener noreferrer"&gt;resumable streaming&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client detects the disconnection and stores offset 1,247&lt;/li&gt;
&lt;li&gt;Network recovers, client reconnects with session ID and offset&lt;/li&gt;
&lt;li&gt;Server resumes delivery from token 1,248&lt;/li&gt;
&lt;li&gt;User sees the response continue exactly where it stopped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The user never knows there was an interruption.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Multi-device continuity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ably.com/blog/ably-ai-transport" rel="noopener noreferrer"&gt;Resumable streaming also enables conversation continuity across devices&lt;/a&gt;. The user starts a question on their laptop, switches to their phone, and sees the AI response pick up mid-stream. Same session ID, same offset tracking, different client.&lt;/p&gt;

&lt;p&gt;This matters for AI workflows that span locations: research started at a desk, continued on a commute, finished in a meeting room. Without transport-level session management, each device restart loses context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvla9ev7prpo067bqhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvla9ev7prpo067bqhl.png" alt=" " width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why this matters for AI reliability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Unreliable delivery creates unreliable AI experiences. Users learn not to trust that responses will complete. They avoid asking complex questions because they might lose the answer. They stop using AI features on mobile networks.&lt;/p&gt;

&lt;p&gt;Fixing this isn't about better models or smarter prompts. It's about ensuring delivery is as dependable as the intelligence behind it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Next steps&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you're building AI features where responses take more than a few seconds, or where users might switch devices or encounter network issues, you need resumable streaming. You can build session management and offset tracking yourself, or use infrastructure like &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt; that handles it for you.&lt;/p&gt;

&lt;p&gt;Either way, design for reconnection from day one. Your users will refresh. Your network will drop. Production isn't a stable connection.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Resume tokens and last-event IDs for LLM streaming: How they work &amp; what they cost to build</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 13 Apr 2026 13:32:33 +0000</pubDate>
      <link>https://dev.to/ablyblog/resume-tokens-and-last-event-ids-for-llm-streaming-how-they-work-what-they-cost-to-build-4l7e</link>
      <guid>https://dev.to/ablyblog/resume-tokens-and-last-event-ids-for-llm-streaming-how-they-work-what-they-cost-to-build-4l7e</guid>
      <description>&lt;p&gt;When an AI response reaches token 150 and the connection drops, most implementations have one answer: start over. The user re-prompts, you pay for the same tokens twice, and the experience breaks.&lt;/p&gt;

&lt;p&gt;Resume tokens and last-event IDs are the mechanism that prevents this. They make streams addressable – every message gets an identifier, clients track their position, and reconnections pick up from exactly where they left off. The concept is straightforward. The production scope is not: storage design, deduplication, gap detection, distributed routing, and multi-device continuity all follow from the same first decision.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e66kdpxwi5xaz8ib8jy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e66kdpxwi5xaz8ib8jy.png" alt=" " width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How resume tokens actually work
&lt;/h2&gt;

&lt;p&gt;Resumable streaming has four moving parts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message identifiers.&lt;/strong&gt; Every token or message gets a sequential ID when published – monotonically increasing, so each new message has a higher ID than the previous one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Client state.&lt;/strong&gt; The client tracks the ID of the last message it successfully received. In a browser, that's typically held in memory or local storage. On mobile, it needs to survive app backgrounding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reconnection protocol.&lt;/strong&gt; When the connection drops, the client presents the last ID it saw. The server responds with everything that arrived after that ID, then transitions to live streaming.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catchup delivery.&lt;/strong&gt; The client receives missed messages in order before live tokens resume. The seam should be invisible.&lt;/p&gt;

&lt;p&gt;The stream itself becomes the source of truth. The client doesn't reconstruct what it missed – the stream delivers it.&lt;/p&gt;
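&lt;p&gt;A minimal sketch of the catchup step makes the mechanism concrete. This is in-memory and single-process, purely for illustration: the server replays everything after the last ID the client presents, then the stream goes live.&lt;/p&gt;

```javascript
// Sketch of server-side catchup: replay everything after the client's
// last-seen ID, then continue live. An in-memory buffer for illustration;
// production needs shared, expiring storage.
const buffer = [];                       // ordered message history
let nextId = 0;

function publish(data) {
  buffer.push({ id: nextId, data });
  nextId += 1;
}

function resumeFrom(lastSeenId) {
  // everything the client missed, in order
  return buffer.filter((m) => m.id > lastSeenId);
}

publish('Hello');
publish(' world');
publish('!');
console.log(resumeFrom(0).map((m) => m.data).join(''));  // " world!"
```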

&lt;h2&gt;
  
  
  What SSE's Last-Event-ID header gives you
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ably.com/topic/server-sent-events" rel="noopener noreferrer"&gt;Server-Sent Events&lt;/a&gt; implements this natively. When an SSE connection drops, the browser automatically includes a Last-Event-ID header on reconnection. The server sees which event the client last received and resumes from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnssh7fnzz544paspo1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnssh7fnzz544paspo1k.png" alt=" " width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser handles reconnect logic. Application code doesn't change between initial connection and reconnection. For the happy path – stable connection, single device, short responses – SSE with Last-Event-ID works well.&lt;/p&gt;
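&lt;p&gt;The server half of that contract is small. This is a minimal Node-style sketch: the in-memory history and handler shape are illustrative, but the Last-Event-ID header and the id/data framing are standard SSE behaviour.&lt;/p&gt;

```javascript
// Sketch: a Node-style SSE handler that honours Last-Event-ID.
// `events` is an illustrative in-memory history; in production it lives
// wherever your server-side storage design puts it.
const events = [
  { id: 1, data: 'tok-1' },
  { id: 2, data: 'tok-2' },
  { id: 3, data: 'tok-3' },
];

function handleSse(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/event-stream' });
  // Browsers resend the last ID they saw on automatic reconnection.
  const lastId = Number(req.headers['last-event-id'] || 0);
  for (const ev of events) {
    if (ev.id > lastId) {
      res.write(`id: ${ev.id}\ndata: ${ev.data}\n\n`);
    }
  }
}
```

&lt;p&gt;A client that last saw event 2 gets only event 3 on reconnect; a fresh client gets all three.&lt;/p&gt;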

&lt;p&gt;The problems start at the boundary of what SSE can do.&lt;/p&gt;

&lt;p&gt;SSE is unidirectional and HTTP-only. It has no native history beyond what you implement server-side. It doesn't handle bidirectional messaging, so live steering – users redirecting the AI mid-response – requires a separate channel. On distributed infrastructure, a reconnecting client may reach a different server instance that has no record of the original session. SSE handles the reconnect handshake. Everything else – distributed state, per-instance routing, multi-device history – is still your problem. For use cases that need bidirectional messaging, &lt;a href="https://ably.com/blog/websockets-vs-sse" rel="noopener noreferrer"&gt;WebSockets vs SSE&lt;/a&gt; covers the tradeoffs in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building resume into WebSockets
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ably.com/topic/websockets" rel="noopener noreferrer"&gt;WebSockets&lt;/a&gt; don't include resume semantics. When a WebSocket closes, the connection is gone. Reconnecting creates a new socket with no knowledge of the previous one.&lt;/p&gt;

&lt;p&gt;Building resume on WebSockets means building all of it yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session IDs generated at stream start, stored server-side, presented by the client on reconnection.&lt;/li&gt;
&lt;li&gt;Message IDs assigned sequentially.&lt;/li&gt;
&lt;li&gt;Server logic to look up a session, find the position, replay history, then transition to live.&lt;/li&gt;
&lt;li&gt;Buffer management to decide how long to keep messages for sessions that haven't reconnected yet.&lt;/li&gt;
&lt;li&gt;Cleanup logic to expire stale sessions without cutting off legitimate reconnects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each piece is straightforward in isolation. The edge cases are where the weeks go.&lt;/p&gt;

&lt;h2&gt;
  
  
  The storage problem teams underestimate
&lt;/h2&gt;

&lt;p&gt;Token-level storage is where most implementations hit an unexpected wall.&lt;/p&gt;

&lt;p&gt;A 500-word response generates roughly 625 tokens. If you store each token as a separate record, loading one response means retrieving 625 records. A conversation with 20 exchanges is 12,500 records. Multiply across thousands of concurrent users and history retrieval becomes the performance bottleneck.&lt;/p&gt;

&lt;p&gt;This matters because history retrieval is on the critical path for multi-device continuity. When a user switches from laptop to phone, the speed of catchup determines whether the experience feels continuous or broken.&lt;/p&gt;

&lt;p&gt;The more practical model is to treat each AI response as a single logical message and append tokens to it rather than publishing them individually. Clients joining mid-stream receive the full message so far, then get new tokens as they arrive. One record per response instead of hundreds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Duplicates and gaps: the two failure modes that break trust
&lt;/h2&gt;

&lt;p&gt;Duplicates happen when the connection drops after the client receives a message but before the acknowledgement reaches the server. On reconnect, the server doesn't know whether to replay that message. Without deduplication logic, the client renders the same token twice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmq3n9zn1kh5sy5x77r9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmq3n9zn1kh5sy5x77r9.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fix is using message IDs as deduplication keys on the client – straightforward in principle, but it needs to survive page reloads and work across tabs.&lt;/p&gt;

&lt;p&gt;Gaps happen when sequential IDs arrive out of order or not at all. If a client receives message 153 after 150, messages 151 and 152 are missing. Without gap detection, the client silently renders an incomplete response. With it, you need logic to request missing messages, decide what to do if they can't be retrieved, and handle the state when the client gives up waiting.&lt;/p&gt;
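&lt;p&gt;Both checks hang off the same sequential IDs. As a sketch, with illustrative names and gap recovery left as a stub: duplicates are filtered by a seen-ID set, and skipped IDs are recorded so the client can request a replay.&lt;/p&gt;

```javascript
// Sketch of client-side deduplication plus gap detection, keyed on
// sequential message IDs. Names are illustrative; requesting replays
// and giving up on unrecoverable gaps are left out.
function makeReceiver() {
  const seen = new Set();   // dedupe keys; persist to survive reloads
  let expected = 1;         // next sequential ID we expect
  let gaps = [];            // IDs we know we are missing
  return {
    receive(msg) {
      if (seen.has(msg.id)) return 'duplicate';  // replayed after reconnect
      seen.add(msg.id);
      if (msg.id > expected) {
        // IDs were skipped: record the gap so we can request a replay
        for (let i = expected; i !== msg.id; i += 1) gaps.push(i);
      }
      gaps = gaps.filter((id) => id !== msg.id);  // a late arrival fills a gap
      if (msg.id >= expected) expected = msg.id + 1;
      return 'rendered';
    },
    missing() { return gaps.slice(); },
  };
}

const rx = makeReceiver();
rx.receive({ id: 1 });
rx.receive({ id: 3 });       // 2 was skipped
console.log(rx.missing());   // [ 2 ]
```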

&lt;p&gt;Both failure modes are rare enough to be invisible in testing. Both surface under real network conditions: mobile handoffs, flaky WiFi, corporate proxy timeouts. The first time you see them is usually a support ticket.&lt;/p&gt;

&lt;h2&gt;
  
  
  What distributed deployment adds
&lt;/h2&gt;

&lt;p&gt;A single-server implementation can tie session state to process memory and mostly work. As soon as you run multiple instances – which you will, for reliability and scale – a routing problem appears.&lt;/p&gt;

&lt;p&gt;A client that connected to instance A reconnects to instance B. Instance B has no record of the session. Your options: route all reconnections back to the originating instance (a pinning strategy that creates hotspots and defeats the purpose of multiple instances), or store session state in shared infrastructure that all instances can read.&lt;/p&gt;

&lt;p&gt;Shared session storage means Redis or equivalent: network round-trips on reconnect, cache invalidation logic, and failure handling when the cache is unavailable. This is solvable. It's also not in the first implementation.&lt;/p&gt;
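&lt;p&gt;The lookup path itself is simple; the operational cost is in what sits behind it. In this sketch a plain &lt;code&gt;Map&lt;/code&gt; stands in for Redis or equivalent shared storage, so any instance can resolve a reconnecting client's session, and a miss fails explicitly rather than silently starting a fresh stream.&lt;/p&gt;

```javascript
// Sketch of instance-agnostic session lookup. `store` stands in for a
// shared store such as Redis; in real life these calls are network
// round-trips with their own failure handling.
const store = new Map();

async function saveSession(id, state) {
  store.set(id, { ...state, updatedAt: Date.now() });
}

async function resumeSession(id) {
  const state = store.get(id);
  if (!state) {
    // Cache miss, expired session, or store outage: surface it rather
    // than silently restarting the response.
    throw new Error(`unknown session ${id}`);
  }
  return state;
}
```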

&lt;h2&gt;
  
  
  The multi-device gap
&lt;/h2&gt;

&lt;p&gt;Multi-device continuity is where connection-oriented design hits a wall.&lt;/p&gt;

&lt;p&gt;When state lives in the connection – or in server memory tied to that connection – device switching loses context. The phone doesn't know what the laptop received. Without a shared source of truth for message history that any device can query, each reconnect from a new device is a new session.&lt;/p&gt;

&lt;p&gt;True multi-device continuity requires decoupling state from connections entirely. The conversation lives in a channel or persistent store. Devices subscribe and catch up rather than resuming a connection.&lt;/p&gt;

&lt;p&gt;This is a different architectural model than resuming an HTTP stream. For most teams, that realisation arrives after the first implementation is already in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k5mq6yl5vgbly6h6auo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k5mq6yl5vgbly6h6auo.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When resumable streaming matters most
&lt;/h2&gt;

&lt;p&gt;Not every streaming application needs this. For short-lived, single-session interactions on stable connections, standard HTTP streaming is fine.&lt;/p&gt;

&lt;p&gt;Resume becomes critical under specific conditions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mobile clients handle network handoffs between WiFi and cellular constantly. Each one is a potential disconnection.&lt;/li&gt;
&lt;li&gt;Long responses – anything over 30 seconds – have a high probability of encountering a transient failure.&lt;/li&gt;
&lt;li&gt;Multi-device usage means the conversation needs to live in a channel, not a connection.&lt;/li&gt;
&lt;li&gt;Multi-agent systems, where several agents publish updates to a shared channel: a reconnecting client needs to catch up on everything all agents published, not just the primary response thread.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The alternative is forcing users to restart on every interruption. That breaks trust fast, and the cost compounds on longer or more complex tasks where restarting is most painful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you're actually signing up for to build this
&lt;/h2&gt;

&lt;p&gt;Teams that have shipped resumable streaming in production describe a consistent arc: the first implementation takes a week, the edge cases take a month, and cross-device reliability is still not fully solved six months later.&lt;/p&gt;

&lt;p&gt;The full scope of a production-grade build: session management, message storage with efficient retrieval by ID range, client-side deduplication, gap detection, distributed routing, cache invalidation, buffer expiry, and monitoring to surface issues you can't reproduce locally.&lt;/p&gt;

&lt;p&gt;Good transport infrastructure handles duplicates and gaps automatically. Application logic shouldn't need to check for either – that's the infrastructure's job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build vs infrastructure
&lt;/h2&gt;

&lt;p&gt;Building resumable streaming yourself is a reasonable choice if you have a stable team, time to maintain it, and no multi-device or distributed requirements.&lt;/p&gt;

&lt;p&gt;It's a harder choice than the SSE documentation makes it look. One team described spending several weeks on custom session management and still not fully solving cross-device reliability. The problems weren't obvious in the design phase – they appeared under mobile network conditions, under load, and when users did things the system wasn't built to handle.&lt;/p&gt;

&lt;p&gt;The alternative is transport infrastructure that implements resume as part of the platform. You keep control of your LLM, prompts, and application logic. Session continuity, offset management, ordered delivery, and multi-device state become infrastructure concerns rather than application concerns.&lt;/p&gt;

&lt;p&gt;Both paths are defensible. The costs of building are real and most of them are invisible until the first deploy.&lt;/p&gt;




&lt;p&gt;Streaming responses between AI agents and clients? &lt;a href="https://ably.com/docs/ai-transport/token-streaming" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt; includes resumable token streaming, automatic replay, and channel-based delivery with guaranteed ordering. Docs go deeper.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Appends for AI apps: Stream into a single message with Ably AI Transport</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Thu, 26 Feb 2026 12:05:30 +0000</pubDate>
      <link>https://dev.to/ablyblog/appends-for-ai-apps-stream-into-a-single-message-with-ably-ai-transport-398a</link>
      <guid>https://dev.to/ablyblog/appends-for-ai-apps-stream-into-a-single-message-with-ably-ai-transport-398a</guid>
      <description>&lt;p&gt;Streaming tokens is easy. Resuming cleanly is not. A user refreshes mid-response, another client joins late, a mobile connection drops for 10 seconds, and suddenly your "one answer" is 600 tiny messages that your UI has to stitch back together. Message history turns into fragments. You start building a side store just to reconstruct "the response so far".&lt;/p&gt;

&lt;p&gt;This is not a model problem. It's a delivery problem.&lt;/p&gt;

&lt;p&gt;That's why we developed message appends for &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt;. Appends let you stream AI output tokens into a single message as they are produced, so you get progressive rendering for live subscribers and a clean, compact response in history.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The failure mode we're fixing&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The usual implementation is to stream each token as a single message, which is simple and works perfectly on a stable connection. In production, clients disconnect and resume mid-stream: refreshes, mobile dropouts, backgrounded tabs, and late joins.&lt;/p&gt;

&lt;p&gt;Once you have real reconnects and refreshes, you inherit work you did not plan for: ordering, dedupe, buffering, "latest wins" logic, and replay rules that make history and realtime agree. You can build it, but it is the kind of work that quietly eats weeks of engineering time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaxg5b1j6o07bcp2xkds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaxg5b1j6o07bcp2xkds.png" alt=" " width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With appends you can avoid that by changing the shape of the data. Instead of hundreds of token messages, you have one response message whose content grows over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  The pattern: create once, append many
&lt;/h3&gt;

&lt;p&gt;In Ably AI Transport, you publish an initial response message and capture its server-assigned serial. That serial is what you append to.&lt;/p&gt;

&lt;p&gt;It's a small detail that ends up doing a lot of work for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;response&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;serials&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;msgSerial&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, as your model yields tokens, you append each fragment to that same message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;appendMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;serial&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;msgSerial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;What changes for clients&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Subscribers still see progressive output, but they see it as actions on the same message serial. A response starts with a create, tokens arrive as appends, and occasionally clients may receive a full-state update to resynchronise (for example after a reconnection).&lt;/p&gt;

&lt;p&gt;Most UIs end up implementing this shape anyway. With appends, it becomes boring and predictable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message.append&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;renderAppend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message.update&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;renderReplace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important difference is that history and realtime stop disagreeing, without your client code doing any extra work. You render progressively for live users, and you still treat the response as one message for storage, retrieval, and rewind.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reconnects and refresh stop being special cases&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Short disconnects are one thing. Refresh is the painful case: local state is gone, and streaming each token as a separate message forces you into replaying fragments and hoping the client reconstructs the same response.&lt;/p&gt;

&lt;p&gt;With message-per-response, hydration is straightforward because there is always a current accumulated version of the response message. Clients joining late or reloading can fetch the latest state as a single message and continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/docs/channels/options/rewind" rel="noopener noreferrer"&gt;Rewind&lt;/a&gt; and history become useful again because you are rewinding meaningful messages, not token confetti:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;realtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ai:chat&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;rewind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2m&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Token rates without token-rate pain
&lt;/h3&gt;

&lt;p&gt;Models can emit tokens far faster than most realtime setups want to publish. If you publish a message per token, rate limits become your problem and your agent code has to handle batching.&lt;/p&gt;

&lt;p&gt;Appends are designed for high-frequency workloads and include automatic rollups. Subscribers still receive progressive updates, but Ably can roll up rapid appends under the hood so you do not have to build your own throttling layer.&lt;/p&gt;

&lt;p&gt;If you need to tune the tradeoff between smoothness and message rate, you can adjust &lt;code&gt;appendRollupWindow&lt;/code&gt;. Smaller windows feel more responsive but consume more message-rate capacity; larger windows batch more aggressively but arrive in bigger chunks.&lt;/p&gt;
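&lt;p&gt;Back-of-envelope, the window bounds the subscriber-visible message rate at roughly &lt;code&gt;1000 / windowMs&lt;/code&gt; per second. The sketch below is illustrative arithmetic only, not Ably's exact rollup semantics:&lt;/p&gt;

```javascript
// Illustrative arithmetic only: with appends rolled up over a window,
// subscribers see at most one message per window, so the visible rate
// is capped at 1000 / windowMs regardless of the model's token rate.
function rolledUpRate(tokensPerSecond, windowMs) {
  const maxRollupsPerSecond = 1000 / windowMs;
  return Math.min(tokensPerSecond, maxRollupsPerSecond);
}

// e.g. 200 tokens/s with a 50ms window publishes at most 20 msgs/s
```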

&lt;h3&gt;
  
  
  &lt;strong&gt;Enabling appends&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Appends require the "Message annotations, updates, appends, and deletes" channel rule for the namespace you're using. Enabling it also means messages are persisted, which affects usage and billing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why this is a better default for AI output&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you are shipping agentic AI apps, you eventually need three things at the same time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streaming UX&lt;/li&gt;
&lt;li&gt;history that's usable&lt;/li&gt;
&lt;li&gt;recovery that does not depend on luck&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Appends are how you get there without building your own "message reconstruction" subsystem. If you want the deeper mechanics (including the message-per-response pattern and rollup tuning), the &lt;a href="https://ably.com/docs/ai-transport" rel="noopener noreferrer"&gt;AI Transport docs&lt;/a&gt; are the best place to start.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>streaming</category>
      <category>realtime</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Realtime steering: interrupt, barge-in, redirect, and guide the AI</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 09 Feb 2026 09:58:06 +0000</pubDate>
      <link>https://dev.to/ablyblog/realtime-steering-interrupt-barge-in-redirect-and-guide-the-ai-22ai</link>
      <guid>https://dev.to/ablyblog/realtime-steering-interrupt-barge-in-redirect-and-guide-the-ai-22ai</guid>
      <description>&lt;p&gt;Start typing, change your mind, redirect the AI mid-response. It just works. That is the promise of realtime steering. Users expect to interrupt an answer, correct its direction, or inject new instructions on the fly without losing context or restarting the session. It feels simple, but delivering it requires low-latency control signals, reliable cancellation, and shared conversational state that survives disconnects and device switches. This post explores why expectations have shifted, why today's stacks struggle with these patterns, and what your infrastructure needs to support proper realtime steering.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's changing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI tools are moving beyond static, one-turn interactions. Users expect to interact dynamically, especially in chat. But most AI systems today force users to wait while the assistant responds in full, even if it's off-track or no longer relevant. That's not how human conversations work.&lt;/p&gt;

&lt;p&gt;Expectations are shifting toward something more natural. Users want to jump in mid-stream, adjust the AI's course, or stop it altogether. These patterns (barge-in, redirect, steer) are becoming table stakes for responsive, agentic assistants.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What users want, and why this enhances the experience&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Users want to stay in control of the conversation. If the AI starts drifting, they want to say "stop" or "try a different angle" and get an immediate course correction. They want to guide the assistant's direction without breaking the flow or starting over.&lt;/p&gt;

&lt;p&gt;This improves trust, keeps sessions on-topic, and avoids wasted time. It also brings AI interactions closer to how real collaboration works: iterative, reactive, fast.&lt;/p&gt;

&lt;p&gt;Users now expect a few technical behaviours as part of that experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responses can be interrupted in real time&lt;/li&gt;
&lt;li&gt;New instructions are applied mid-stream without reset&lt;/li&gt;
&lt;li&gt;The AI keeps context and adjusts without losing the thread&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why realtime steering is proving hard to build&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most AI systems treat generation as a one-way stream. Once the model starts producing tokens, the system just plays them out to the client. If the user wants to interrupt or change direction, the only real option is to cancel and resend a new prompt, often from scratch. Most systems today cannot support mid-stream redirection because their underlying communication model does not allow it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateless HTTP cannot carry steering signals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Traditional request–response models push output in one direction only. Once a long-running generation begins, there is no reliable way to send control signals back to the server. Cancelling or redirecting usually means tearing down the stream and starting again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser-held state breaks immediately&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most apps keep the state of an active generation in the browser. If the user refreshes or switches device, the in-flight response loses continuity. Any client-side steering logic tied to that state vanishes too, which forces a full reset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend models often run without shared conversational state&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the orchestration layer is not tracking what the AI is currently doing, it cannot apply corrections cleanly. The model receives a brand-new prompt instead of a context-preserving instruction layered onto an active task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The default stack was never designed for low-latency control loops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Steering requires coordinated signalling between UI, transport, orchestration, and model inference. That means ordering guarantees, durable state, and fast propagation of control messages. Without these, the AI continues generating tokens after a user says stop, causing confusion and wasted compute.&lt;/p&gt;

&lt;p&gt;Steering mid-stream looks like a simple UX gesture. It is not. It is a distributed-systems problem sitting under a conversational interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why you need a drop-in AI transport layer for steering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Delivering realtime control requires more than token streaming. It requires a transport layer that keeps context alive, supports low-latency bidirectional messaging, and ensures that user instructions and model output remain synchronised.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bi-directional, low-latency messaging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Client-side signals such as "stop" or "try this instead" must reach the backend quickly and reliably. WebSockets or similar long-lived connections make this possible by enabling client-to-server control while the &lt;a href="https://ably.com/blog/token-streaming-for-ai-ux" rel="noopener noreferrer"&gt;AI continues to stream output.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpa1kjj7ko14raf19ty0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpa1kjj7ko14raf19ty0.png" alt=" " width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliable interrupt and cancellation primitives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Stopping generation must be instant and clean. The transport must carry cancellation events with ordering guarantees so the backend halts inference exactly where intended, without corrupting state.&lt;/p&gt;
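&lt;p&gt;One common shape for this on the backend is an &lt;code&gt;AbortController&lt;/code&gt; whose &lt;code&gt;abort()&lt;/code&gt; is invoked by the handler for a transport-level stop event. The sketch below is a minimal, hedged illustration; the control-event wiring and names are hypothetical, not a specific Ably API:&lt;/p&gt;

```javascript
// Sketch: a cancellable token emitter. cancel() is what a 'stop'
// control event arriving over the transport would invoke; once
// aborted, no further tokens are emitted. Names are illustrative.
function cancellableStream(tokens, emit) {
  const controller = new AbortController();
  let i = 0;
  return {
    // Driven once per model token by the inference pipeline.
    next() {
      if (controller.signal.aborted || i >= tokens.length) return false;
      emit(tokens[i++]);
      return true;
    },
    // Wire this to the transport's stop/interrupt handler.
    cancel: () => controller.abort(),
  };
}
```

Because the abort check sits between token emissions, generation halts at a message boundary rather than mid-write, which is what keeps downstream state uncorrupted.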

&lt;p&gt;&lt;strong&gt;Session continuity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system needs persistent session identity so instructions and outputs are tied to the same conversational thread. Redirection should extend the session, not rebuild it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rwg6p1z9am78x5bck9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rwg6p1z9am78x5bck9a.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presence and focus tracking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If users have &lt;a href="https://ably.com/blog/cross-device-ai-sync" rel="noopener noreferrer"&gt;multiple tabs or devices&lt;/a&gt; open, the system needs to know where instructions are coming from. Steering messages must route to the correct active session without collisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx70zhamocl1d04pebxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx70zhamocl1d04pebxr.png" alt=" " width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Realtime steering relies on a transport layer designed for conversational control, not just message delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How the experience maps to the transport layer&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;User experience desired&lt;/th&gt;
&lt;th&gt;Required transport layer features&lt;/th&gt;
&lt;th&gt;Underlying technical implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Interrupt and redirect responses in real time&lt;/td&gt;
&lt;td&gt;Bi-directional messaging&lt;/td&gt;
&lt;td&gt;WebSocket-based channels enabling client-to-server signals during output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cancel generation cleanly&lt;/td&gt;
&lt;td&gt;Interrupt primitives&lt;/td&gt;
&lt;td&gt;Server-side control hooks to stop model inference and close stream pipelines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Preserve continuity after steering&lt;/td&gt;
&lt;td&gt;Session continuity&lt;/td&gt;
&lt;td&gt;Persistent session or conversation IDs with context caching&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Update response direction on the fly&lt;/td&gt;
&lt;td&gt;Dynamic state sync&lt;/td&gt;
&lt;td&gt;Shared state model where new input is merged into active conversational context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Steer across devices&lt;/td&gt;
&lt;td&gt;Identity-aware multiplexing&lt;/td&gt;
&lt;td&gt;Fan-out model updates across all user sessions in sync&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Realtime steering for AI you can ship today&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You don't need a new architecture to support real-time steering, cancellation, or recovery. You need a transport layer that can keep the session alive, deliver messages in order, and preserve state across disconnects. &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt; provides those foundations out of the box, so you can build controllable, resilient AI interactions without rebuilding your entire stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/sign-up" rel="noopener noreferrer"&gt;Sign-up for a free account&lt;/a&gt; and try today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>systemdesign</category>
      <category>ux</category>
    </item>
    <item>
      <title>Why orchestrators become a bottleneck in multi-agent AI</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Tue, 03 Feb 2026 12:42:23 +0000</pubDate>
      <link>https://dev.to/ablyblog/why-orchestrators-become-a-bottleneck-in-multi-agent-ai-published-4mgf</link>
      <guid>https://dev.to/ablyblog/why-orchestrators-become-a-bottleneck-in-multi-agent-ai-published-4mgf</guid>
      <description>&lt;p&gt;Complex user tasks often need multiple AI agents working together, not just a single assistant. That's what agent collaboration enables. Each agent has its own specialism - planning, fetching, checking, summarising - and they work in tandem to get the job done. The experience feels intelligent and joined-up, not monolithic or linear. But making that work means more than prompt chaining or orchestration logic. It requires shared state, reliable coordination, and user-visible progress as agents branch out and converge again. This post explores what users now expect, why traditional infrastructure falls short, and how to support truly collaborative AI systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's changing?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The shift from simple question-response to collaborative AI experiences goes beyond continuity or conversation. It's about delegation. Users are starting to expect AI systems that can take a complex request and break it down behind the scenes. That means not one big model doing everything, but a network of agents, each focused on a part of the task, coordinating to deliver a coherent outcome. We've seen this in tools like travel planners, research assistants, and document generators. You don't just want answers, you want progress, structure, and coordination you can see. The AI system shouldn't just feel like a chat thread, it should feel like a team quietly getting on with things while keeping you informed.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What users want, and why this enhances the experience&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When users interact with a system powered by multiple agents, they want to feel the benefits of parallelism without the overhead of managing complexity. If one agent is fetching flight data, another handling hotel options, and a third reviewing visa requirements, the user doesn't care about the internal plumbing. They care that their travel plan is evolving visibly and coherently. They want to see that agents are working, understand what's happening in realtime, and be able to intervene or revise things if needed.&lt;/p&gt;

&lt;p&gt;Crucially, users expect the state of their task to reflect reality, not just the conversation. If they change a hotel selection manually, the system should adapt. If an agent crashes or stalls, the UI should show it. The value isn't just in faster results, it's in reliability, transparency, and the sense that multiple agents are genuinely collaborating, with each other and with the user - toward a shared goal.&lt;/p&gt;

&lt;p&gt;To deliver this, agent systems need to stay in sync. State needs to be shared across agents and user sessions. Progress needs to be surfaced incrementally, not hidden behind a final answer. And context must be preserved so agents don't overwrite or duplicate each other's work. That's what turns a bunch of isolated model calls into a coordinated assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why this is proving challenging&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Multi-agent systems &lt;em&gt;can&lt;/em&gt; work today, but the default pattern most tools push you toward is an orchestration-first user experience. Even when multiple agents are running behind the scenes, their activity is typically funnelled through a single orchestrator that becomes the only "voice" the user can see. That hides useful progress, creates a single bottleneck for updates, and limits how fluid the experience can feel.&lt;/p&gt;

&lt;p&gt;That's because traditional LLM interfaces assume a single stream of input and a single stream of output. Orchestration frameworks may invoke multiple agents in parallel, but the UI still tends to expose a linear, synchronous workflow: the orchestrator collects results, then reports back. If the user changes direction mid-process, or if an agent needs to react immediately to something in shared state, you're often forced back into "wait for the orchestrator" loops.&lt;/p&gt;

&lt;p&gt;The underlying infrastructure assumptions reinforce this. HTTP request/response cycles work well when one component is responsible for coordinating everything, but they make it awkward for &lt;em&gt;multiple&lt;/em&gt; agents to maintain an ongoing, direct connection to the user and to shared context. Token streaming helps, but it usually represents one agent's output to one user - not concurrent updates from a group of agents reacting in real time to a changing state.&lt;/p&gt;

&lt;p&gt;Ultimately, the challenge isn't that orchestration fails. It's that it constrains app developers. Most systems don't give you fine-grained control over which agent communicates what, when, and how, or an easy way to reflect multi-agent activity directly in the user experience. To build confidence and responsiveness, clients need to know which agents are active, what they're doing, and how that activity relates to the shared, realtime session context - without everything having to be mediated by a heavyweight orchestrator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6e7vwnv8qc56l22wz5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6e7vwnv8qc56l22wz5h.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why you need a drop-in AI transport layer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make multi-agent collaboration work in practice, you need infrastructure that handles concurrency, coordination, and visibility - not just messaging.&lt;/p&gt;

&lt;p&gt;The transport layer must support persistent, multiplexed communication where multiple agents can publish updates independently while still participating in the same user session. That gives app developers fine-grained control over the user experience: which agents speak to the user, when they speak, and how progress is presented. Orchestrators can still exist, but they don't have to mediate every user-facing update.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrkj6qtpjfddywuyvza6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrkj6qtpjfddywuyvza6.png" alt=" " width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  State synchronisation is non-negotiable
&lt;/h3&gt;

&lt;p&gt;Structured data, like a list of selected hotels or the current trip itinerary, should live in a realtime session store that agents and UIs can both read from and write to. This creates a single source of truth, even when updates happen asynchronously, across devices, or outside the chat interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Presence adds another layer of confidence
&lt;/h3&gt;

&lt;p&gt;When users see which agents are online and working, it sets expectations and builds trust. If an agent goes offline, the system should detect it, not leave the user guessing. This becomes even more important as these systems scale up in production environments where reliability is critical.&lt;/p&gt;
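&lt;p&gt;In UI terms, this usually means folding presence events into a status map the interface renders from. The sketch below assumes Ably-style presence events (&lt;code&gt;enter&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, &lt;code&gt;leave&lt;/code&gt;); the agent names and payload shape are illustrative:&lt;/p&gt;

```javascript
// Sketch: derive an agent-status map from presence events so the UI
// shows who is online and working, and surfaces departures instead of
// leaving the user guessing. Payload shape is illustrative.
function presenceReducer(statuses, { action, clientId, data }) {
  const next = new Map(statuses);
  if (action === 'enter' || action === 'update') {
    next.set(clientId, data?.status ?? 'online');
  } else if (action === 'leave') {
    next.delete(clientId); // agent gone: reflect it, don't guess
  }
  return next;
}
```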

&lt;h3&gt;
  
  
  Interruption handling rounds it out
&lt;/h3&gt;

&lt;p&gt;Users will change their minds mid-task. Your system needs to respond without the orchestrator agent tearing down and restarting everything. That means listening for user input while processing, canceling or rerouting tasks, and updating the shared state cleanly so individual agents can pick up where they left off or switch strategies on the fly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How the experience maps to the transport layer&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;User experience desired&lt;/th&gt;
&lt;th&gt;Required transport layer features&lt;/th&gt;
&lt;th&gt;Underlying technical implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Visible, concurrent agent progress&lt;/td&gt;
&lt;td&gt;Multiplexed pub/sub channels&lt;/td&gt;
&lt;td&gt;Multiple agents publish progress updates to a shared realtime channel the UI subscribes to&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shared, up-to-date task state&lt;/td&gt;
&lt;td&gt;Structured state synchronisation&lt;/td&gt;
&lt;td&gt;Shared session state with clear schemas to reflect selections, status, and choices&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Seamless agent-to-agent coordination&lt;/td&gt;
&lt;td&gt;Out-of-band messaging support&lt;/td&gt;
&lt;td&gt;Internal HTTP APIs or RPC protocols between agents, decoupled from user-facing updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Awareness of system activity and health&lt;/td&gt;
&lt;td&gt;Presence tracking&lt;/td&gt;
&lt;td&gt;Agents register presence on connection and broadcast availability or error states&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graceful handling of mid-task changes&lt;/td&gt;
&lt;td&gt;Event-driven state updates and recovery&lt;/td&gt;
&lt;td&gt;Listen to user changes in shared state and cancel or adjust in-flight work accordingly&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Making it work today&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Multi-agent collaboration is already happening in planning tools, research systems, and internal automation workflows. The models are not the limiting factor. The hard part is the infrastructure that keeps agents in sync, shares state reliably, and exposes progress to users in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt; gives you the infrastructure needed to support this pattern. Realtime channels, shared state objects, presence, and resilient connections provide the foundations for agents that coordinate reliably and surface their work as it happens. No rebuilds, no custom multiplexing, no home-grown state machinery.&lt;/p&gt;

&lt;p&gt;Sign up for a free developer account and try it out.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Multi-agent AI systems need infrastructure that can keep up</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Fri, 30 Jan 2026 10:49:09 +0000</pubDate>
      <link>https://dev.to/ablyblog/multi-agent-ai-systems-need-infrastructure-that-can-keep-up-3aj7</link>
      <guid>https://dev.to/ablyblog/multi-agent-ai-systems-need-infrastructure-that-can-keep-up-3aj7</guid>
      <description>&lt;h2&gt;
  
  
  An Ably AI Transport demo
&lt;/h2&gt;

&lt;p&gt;When you're building agentic AI applications with multiple agents working together, the infrastructure challenges show up fast. Agents need to coordinate, users need visibility into what's happening, and the whole system needs to stay responsive even as tasks branch out across specialised workers.&lt;/p&gt;

&lt;p&gt;We built a multi-agent travel planning system to understand these problems better. What we learned applies well beyond holiday booking.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/mO53IQcHDaQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  The coordination problem
&lt;/h2&gt;

&lt;p&gt;The demo uses four agents: one orchestrator and three specialists (flights, hotels, activities). When a user asks to plan a trip, the orchestrator delegates sub-tasks to the specialists. Each specialist queries data sources, evaluates options, and reports back. The orchestrator synthesises everything and presents choices to the user.&lt;/p&gt;

&lt;p&gt;This mirrors how most teams are actually building agentic systems. You don't build one massive agent that tries to do everything. You build focused agents, give them specific tools, and coordinate between them.&lt;/p&gt;

&lt;p&gt;The infrastructure question is: how do you keep everyone (the agents and the user) synchronised as work happens?&lt;/p&gt;

&lt;h2&gt;
  
  
  Why streaming alone isn't enough
&lt;/h2&gt;

&lt;p&gt;Token streaming solves part of this. The orchestrator can stream its responses back to the user so they're not waiting for complete answers. That's table stakes now for any AI interface.&lt;/p&gt;

&lt;p&gt;But streaming tokens from the orchestrator is only part of the problem. Users want visibility into the behaviour of each specialised agent – through their own token streams, structured updates like pagination progress, or the current reasoning of an agent working through a task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep58p3x4aoo90vi4oxrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep58p3x4aoo90vi4oxrx.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prompt: Plan a weekend trip to a nearby city&lt;/p&gt;

&lt;p&gt;In our AI Transport demo, we also use &lt;a href="https://ably.com/liveobjects" rel="noopener noreferrer"&gt;Ably LiveObjects&lt;/a&gt; to publish progress updates from each specialist agent. The user sees which agent is active (&lt;a href="https://ably.com/docs/presence-occupancy/presence" rel="noopener noreferrer"&gt;tracked via presence&lt;/a&gt;), what it's querying, and how much data it's processing. These aren't logs or debug output. They're structured state updates that drive the UI. The agent even decides how to represent its progress to the user, taking raw database query parameters and turning them into natural language descriptions through a separate model call.&lt;/p&gt;

&lt;p&gt;This requires infrastructure that can handle multiple publishers updating different parts of the shared state concurrently. The flight agent publishes its progress. The hotel agent publishes its progress. The orchestrator streams tokens (and it doesn't need to care about intermediate progress updates from the specialised agents). All on the same channel, all staying in sync.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1cl9ov6pxdjh2pgvkq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1cl9ov6pxdjh2pgvkq4.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agent searches for flights and hotels based on the user's criteria&lt;/p&gt;

&lt;h2&gt;
  
  
  State that reflects reality, not just conversation
&lt;/h2&gt;

&lt;p&gt;Chat history creates a limited view of what's actually happening. If a user changes their mind, deletes a selection, or modifies something outside the conversation thread, the agent needs to know about it.&lt;/p&gt;

&lt;p&gt;We use Ably LiveObjects to maintain the user's current selections (flights, hotels, activities) and agent status. This creates a source of truth that exists independently of the conversation. The orchestrator can query this state directly through a tool call, even if nothing in the chat history explains the change.&lt;/p&gt;

&lt;p&gt;The interesting bit: agents can &lt;em&gt;subscribe&lt;/em&gt; to changes in this data, so they see updates live. While you could store this in a database and have agents query it via tool calls, the ability to subscribe means agents can react to user context in real time (what the user is doing in the app, data they're manipulating, configuration changes they're making).&lt;/p&gt;

&lt;p&gt;When the user asks "what's my current itinerary?", the agent doesn't rely on conversation history. It checks the actual state. If the user deleted their flight selection, the agent sees that immediately.&lt;/p&gt;
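&lt;p&gt;Concretely, such a tool reads live state at call time rather than reconstructing it from messages. In the sketch below, &lt;code&gt;root&lt;/code&gt; stands in for a LiveObjects-style root map; the tool name and state keys are illustrative, not the demo's actual code:&lt;/p&gt;

```javascript
// Sketch: an orchestrator tool that answers "what's my itinerary?"
// from current shared state, not chat history. 'root' stands in for
// a LiveObjects-style map; keys and names are illustrative.
function makeItineraryTool(root) {
  return {
    name: 'get_current_itinerary',
    run: () => ({
      flight: root.get('flight') ?? null, // null if the user deleted it
      hotel: root.get('hotel') ?? null,
      activities: root.get('activities') ?? [],
    }),
  };
}
```

If the user removes a selection outside the chat thread, the next tool call reflects that immediately, with no dependence on what the conversation history says.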

&lt;p&gt;This separation matters more as systems get complex. The conversation is one interface to the system. The actual state (what's selected, what's in progress, what's completed) needs to exist independently. Agents, users, and other parts of your system all need reliable access to current state, not a reconstruction from message history.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41ec36b05l4v3tg5reuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41ec36b05l4v3tg5reuh.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agent offers hotel options while remembering flight choice&lt;/p&gt;

&lt;h2&gt;
  
  
  Synchronising different types of state
&lt;/h2&gt;

&lt;p&gt;Not all state is created equal, and your infrastructure needs to handle different patterns:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured, bounded state&lt;/strong&gt; works well with LiveObjects. Progress indicators (percentage complete, items processed), agent status (online, processing, completed), user selections, and configuration settings all have predictable size limits. Clients can subscribe to changes and re-render UI efficiently. Agents can read current state without parsing through message history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unbounded state&lt;/strong&gt; like full conversation history, audit trails, or complete reasoning chains still belongs in messages on a channel. You're appending to a growing log rather than updating bounded data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bidirectional state synchronization&lt;/strong&gt; enables richer interactions. You can sync agent state to users (progress updates, ETAs, task lists), let users configure controls for agents (settings, preferences, constraints), and give agents visibility into user context (where they are in the app, what they're doing, what data they're viewing). Each of these can use structured data patterns for efficient synchronization.&lt;/p&gt;

&lt;p&gt;The key is knowing which pattern fits which data, and having infrastructure that supports both.&lt;/p&gt;
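&lt;p&gt;The distinction above can be sketched in a few lines (illustrative, not Ably's API): bounded state overwrites the latest value per key, so it stays small no matter how many updates arrive, while an unbounded log appends every event and grows without limit:&lt;/p&gt;

```javascript
// Sketch of the two state patterns described above.
const state = new Map();  // bounded: one entry per key, overwritten in place
const log = [];           // unbounded: append-only, grows with every event

function update(key, value) {
  state.set(key, value);               // latest value wins -- size stays bounded
  log.push({ key, value });            // full history -- size keeps growing
}

update('progress', 10);
update('progress', 55);
update('progress', 100);
// state holds one entry ('progress' -> 100); log holds all three events.
```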

&lt;h2&gt;
  
  
  Decoupling internal coordination from user-facing updates
&lt;/h2&gt;

&lt;p&gt;The agents in our demo communicate with each other over HTTP using agent-to-agent protocols. That's appropriate for internal coordination. It's synchronous, it's request-response, it follows established patterns.&lt;/p&gt;

&lt;p&gt;The user-facing updates go over Ably AI Transport. That's where you need state synchronization and the ability for multiple publishers to update different parts of the UI concurrently.&lt;/p&gt;

&lt;p&gt;This decoupling matters. Each agent can independently decide how to surface its progress updates and state to the user, while the user maintains a single shared view over updates from all agents.&lt;/p&gt;

&lt;p&gt;We also let specialist agents write directly to LiveObjects, bypassing the orchestrator. When the flight agent has progress to report, it writes it. The user sees it. The orchestrator never touches that data (it only needs the final result). This avoids additional coordination and keeps the architecture simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling interruptions
&lt;/h2&gt;

&lt;p&gt;Users change their minds. They interrupt. They refine requests mid-task. Your infrastructure needs to support this without rebuilding everything from scratch.&lt;/p&gt;

&lt;p&gt;In the demo, you can barge in and interrupt the agent while it's working. The system detects the new input, cancels the in-flight task, updates the state, and kicks off a new search. The UI shows the cancellation, the new request, and the new progress, all without breaking the conversation.&lt;/p&gt;

&lt;p&gt;This works because state updates are events on a channel. The agents listen for new user input even while they're processing. When they see it, they can decide whether to cancel current work, adapt it, or complete it first. The infrastructure doesn't dictate this logic (it enables it).&lt;/p&gt;
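&lt;p&gt;A stripped-down sketch of that cooperative cancellation logic (hypothetical names, not an Ably API): the agent checks for newer user input between units of work, so an interruption takes effect at the next step rather than after the whole task completes:&lt;/p&gt;

```javascript
// Sketch only: cooperative cancellation between units of work.
class InterruptibleAgent {
  constructor() {
    this.cancelled = false;
    this.pendingInput = null;
    this.results = [];
  }
  // Fired by a channel subscription when new user input arrives,
  // even while a task is mid-flight.
  onUserInput(input) {
    this.cancelled = true;
    this.pendingInput = input;
  }
  // One unit of work; returns false if the task was interrupted.
  step(item) {
    if (this.cancelled) return false;
    this.results.push(`searched:${item}`);
    return true;
  }
}

const agent = new InterruptibleAgent();
const firstStepOk = agent.step('flights-LHR-JFK');
agent.onUserInput('actually, fly from Gatwick');  // user barges in
const secondStepOk = agent.step('hotels-NYC');    // work stops here
// agent.pendingInput now seeds the replacement task.
```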

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21gdpcgxxxbski836rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21gdpcgxxxbski836rt.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agent then helps user select activities to do on trip&lt;/p&gt;

&lt;h2&gt;
  
  
  What presence actually tells you
&lt;/h2&gt;

&lt;p&gt;Before any interaction starts, the UI shows which agents are online. This comes from Presence. Each agent enters presence when it starts up and updates it as its status changes.&lt;/p&gt;

&lt;p&gt;Presence serves multiple purposes. Agents can see the online status of users and take action if a user goes offline (canceling tasks or queuing notifications – essential from a cost optimization perspective). In multi-user applications, users can see who else is online in the conversation. And for your operations team, it's observability built into the architecture. This answers a basic question for users: is this system actually working right now?&lt;/p&gt;
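&lt;p&gt;An in-memory sketch of what a presence set tracks (illustrative names, not Ably's Presence API): members enter on startup, update their status as it changes, and leave on shutdown, so anyone can answer "who is online right now?" at a glance:&lt;/p&gt;

```javascript
// Sketch only: a toy presence set, not the Ably Presence API.
class PresenceSet {
  constructor() { this.members = new Map(); }
  enter(clientId, data) { this.members.set(clientId, data); }
  update(clientId, data) {
    if (this.members.has(clientId)) this.members.set(clientId, data);
  }
  leave(clientId) { this.members.delete(clientId); }
  get(clientId) { return this.members.get(clientId); }
  online() { return [...this.members.keys()]; }  // who is here now?
}

const presence = new PresenceSet();
presence.enter('flight-agent', { status: 'idle' });
presence.enter('hotel-agent', { status: 'idle' });
presence.update('flight-agent', { status: 'processing' });
presence.leave('hotel-agent');  // agent shut down (or went offline)
// The UI can now show one agent online, currently processing.
```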

&lt;h2&gt;
  
  
  The enterprise patterns that emerge
&lt;/h2&gt;

&lt;p&gt;This travel demo is deliberately simple, but the patterns map directly to enterprise use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Research workflows&lt;/strong&gt; where multiple agents pull from different data sources (financial databases, customer records, market data) and coordinate findings. Users need to see progress across all of them, not wait for a final answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document generation&lt;/strong&gt; where one agent structures the outline, others fill in sections, another handles compliance checks. The state (which sections are complete, which are being reviewed, what's been approved) needs to stay synchronized as different agents work in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customer support routing&lt;/strong&gt; where classification agents determine issue type, specialist agents handle resolution, and orchestration agents manage escalation. Status updates need to flow to support reps, customers, and dashboards in real time.&lt;/p&gt;

&lt;p&gt;The common thread: multiple agents, concurrent work, shared state, and humans who need visibility and control. The infrastructure that makes a travel planner responsive and reliable is the same infrastructure that makes these systems work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1qbrx7ghoy0ycph3dj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1qbrx7ghoy0ycph3dj0.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Labelled screenshot of AI Travel Agent's moving parts&lt;/p&gt;

&lt;h2&gt;
  
  
  What this requires from infrastructure
&lt;/h2&gt;

&lt;p&gt;You need a reliable transport layer that allows concurrent agents and clients to communicate in realtime. This isn't just about pub/sub – it's about robust infrastructure, high availability, and &lt;a href="https://ably.com/topic/pubsub-delivery-guarantees" rel="noopener noreferrer"&gt;guaranteed delivery&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You need state synchronisation that works for both structured data and message logs. Having access to both patterns, depending on your needs, is critical – bounded state objects for UI updates and configuration, unbounded message streams for conversation history and audit trails.&lt;/p&gt;

&lt;p&gt;You need presence so you know what's actually online and available. You need &lt;a href="https://ably.com/docs/platform/architecture/connection-recovery" rel="noopener noreferrer"&gt;connection recovery&lt;/a&gt; so users don't lose context when networks flicker.&lt;/p&gt;

&lt;p&gt;Most importantly, you need this to work at the edge – in browsers and mobile apps, not just between backend services. That's where your users are. That's where responsiveness matters. The transport layer needs to be &lt;a href="https://ably.com/blog/token-streaming-for-ai-ux" rel="noopener noreferrer"&gt;robust enough to handle the reality of client connectivity&lt;/a&gt;: spotty networks, mobile handoffs, browser tabs backgrounded and resumed.&lt;/p&gt;

&lt;p&gt;The hard part of building multi-agent systems isn't the LLMs. The models are getting better every month. The hard part is the coordination, the state management, the visibility, and the reliability as these systems get more complex.&lt;/p&gt;

&lt;p&gt;This is why we built AI Transport. We saw teams struggling with these exact problems: cobbling together WebSocket libraries, building their own state synchronization, dealing with reconnection logic, and watching their systems break under the messiness of real client connectivity. &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;AI Transport gives you the infrastructure layer these systems need&lt;/a&gt;, built on Ably's proven reliability at scale, so you can focus on your agents instead of your transport layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building agentic AI experiences? You can ship it now
&lt;/h2&gt;

&lt;p&gt;This demo was built with &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;Ably AI Transport&lt;/a&gt;. Everything shown here is achievable today. You don't need to rebuild your stack to make it happen.&lt;/p&gt;

&lt;p&gt;Ably AI Transport provides everything you need to support persistent, identity-aware, streaming AI experiences across multiple clients. If you're working on agentic products and want to get this right – and improve your AI UX – we'd love to talk.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Anticipatory customer experience: How realtime infrastructure transforms CX</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Wed, 28 Jan 2026 10:27:17 +0000</pubDate>
      <link>https://dev.to/ablyblog/anticipatory-customer-experience-how-realtime-infrastructure-transforms-cx-3pn4</link>
      <guid>https://dev.to/ablyblog/anticipatory-customer-experience-how-realtime-infrastructure-transforms-cx-3pn4</guid>
      <description>&lt;p&gt;We're entering a new era of &lt;strong&gt;anticipatory customer experience&lt;/strong&gt; – one that's not just reactive, not just responsive, but truly predictive. In this new model, systems don't wait for friction to appear; they recognise signals early and step in before the user ever feels a slowdown or moment of uncertainty. The bar has shifted: customers now expect brands to predict their needs and act before friction even surfaces. It's a fundamental rewiring of the relationship between companies and the people they serve.&lt;/p&gt;

&lt;p&gt;This shift toward &lt;strong&gt;predictive customer experiences&lt;/strong&gt; isn't hypothetical. Anticipatory experiences are happening now, powered by &lt;strong&gt;realtime data infrastructure&lt;/strong&gt; that moves companies from playing catch-up to staying ahead. Think of it as the Age of Anticipation – where realtime signals, reliability, and adaptability form the core of modern CX design.&lt;/p&gt;

&lt;p&gt;Anticipatory CX isn't magic – it's realtime infrastructure done right.&lt;/p&gt;

&lt;p&gt;So, if you're building next-generation CX or AI-powered agentic systems, this article outlines the architectural groundwork required to make anticipation real.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is anticipatory customer experience?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Anticipatory customer experience&lt;/strong&gt; uses realtime data infrastructure to predict and address customer needs before friction occurs. Unlike reactive support that waits for problems, anticipatory CX leverages continuous data streams, event-driven patterns, and predictive signals to intervene proactively, turning unknowns into reassurance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why realtime infrastructure matters for CX:&lt;/strong&gt; Realtime infrastructure enables the continuous flow of customer signals needed for prediction. Without it, systems rely on stale, batch-processed data that kills foresight. Companies like Doxy.me and HubSpot use &lt;strong&gt;realtime platforms&lt;/strong&gt; to anticipate confusion, delays, and churn risk before customers experience frustration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;From reactive to anticipatory: Why realtime data infrastructure powers predictive CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Anticipation starts with having the right information at the right moment. But prediction requires fresh, &lt;strong&gt;realtime signals&lt;/strong&gt; flowing continuously through your systems.&lt;/p&gt;

&lt;p&gt;The healthcare sector illustrates this shift perfectly. &lt;a href="https://ably.com/case-studies/doxyme" rel="noopener noreferrer"&gt;Doxy.me&lt;/a&gt;, a telehealth platform trusted by hundreds of thousands of providers, faced a critical challenge: how do you anticipate patient confusion before it derails a virtual appointment? Their answer was "teleconsent" – a feature where healthcare providers walk patients through consent forms collaboratively, in real time.&lt;/p&gt;

&lt;p&gt;As the patient reads, fills in fields, and types responses, the provider sees every change as it happens. No refresh required. No lag. No wondering if the patient is stuck on question three. The system detects hesitation patterns and enables providers to intervene before confusion becomes abandonment. This is anticipatory CX in action – predicting friction points and addressing them before they escalate.&lt;/p&gt;

&lt;p&gt;But building this required infrastructure that could handle the continuous flow of patient interactions without introducing the very friction it was meant to eliminate. "The more that I can get my team to focus on healthcare business logic and less to focus on infrastructural data synchronisation, the better," explains Heath Morrison from Doxy.me. "Anything that provides higher level APIs to get us more in that space – and not be specialised in the stuff you guys should specialise in – is appealing and valuable to us."&lt;/p&gt;

&lt;p&gt;By rebuilding their realtime stack on reliable infrastructure, Doxy.me achieved a 65% cost reduction while transforming their system from a liability into a core strength. &lt;strong&gt;&lt;a href="https://ably.com/case-studies/doxyme" rel="noopener noreferrer"&gt;Read the full Doxy.me case study →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Retailers are doing similar work, spotting churn risk in realtime and intervening with targeted offers or support before the customer clicks away. Financial services companies are shifting from asking "what happened?" to "what's about to happen?" These aren't reactive fixes. They're &lt;strong&gt;anticipatory moves&lt;/strong&gt; that change outcomes – but only when the underlying data infrastructure can keep pace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Realtime infrastructure&lt;/strong&gt; like &lt;a href="https://ably.com/pubsub" rel="noopener noreferrer"&gt;Ably's&lt;/a&gt; makes this possible – it's the unseen layer that ensures systems receive the continuous stream of signals they need to predict accurately, without lag or data loss.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Industries using anticipatory CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Healthcare:&lt;/strong&gt; &lt;a href="https://ably.com/health-tech" rel="noopener noreferrer"&gt;Telehealth platforms&lt;/a&gt; use realtime infrastructure to anticipate patient needs, showing "doctor joining now" before patients wonder if something's wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Financial Services:&lt;/strong&gt; Banks predict fraud patterns and alert customers to unusual activity before money moves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retail:&lt;/strong&gt; E-commerce platforms spot abandonment signals and intervene with targeted offers before checkout failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logistics:&lt;/strong&gt; Delivery services flag delays and update ETAs before customers start refreshing tracking pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Building trust through realtime customer engagement: The infrastructure foundation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Trust is built in moments of uncertainty. And anticipation? It turns unknowns into reassurance.&lt;/p&gt;

&lt;p&gt;Think about the last time you booked a rideshare or waited for a delivery. The difference between a company that leaves you guessing and one that proactively updates you – "your driver is two minutes away," "slight delay, new ETA: 3:47pm" – is the difference between anxiety and confidence. &lt;strong&gt;Realtime anticipation&lt;/strong&gt; doesn't just inform, it reassures.&lt;/p&gt;

&lt;p&gt;Telehealth platforms have figured this out. When patients see "doctor joining now" before they've even begun to wonder if something's wrong, it changes the entire experience. Logistics companies that flag delays before customers start refreshing tracking pages are doing the same thing: reducing friction before it becomes frustration.&lt;/p&gt;

&lt;p&gt;But there's a flip side: when realtime systems fail, trust erodes faster than it built up. A phantom notification, a delayed update, an inaccurate prediction – these aren't just technical hiccups. They're credibility problems. Reliability isn't a nice-to-have, it's the foundation. When customers cite Ably's five-plus years without a global outage, they're not celebrating uptime for its own sake. They're describing the baseline that makes anticipation possible at scale. &lt;a href="https://status.ably.io/" rel="noopener noreferrer"&gt;View Ably's live uptime status&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ably exists to be that foundation. The reason trust can scale across millions of interactions, without companies needing to worry about the underlying infrastructure failing at the worst moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Core technologies behind anticipatory CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Core technology&lt;/th&gt;
&lt;th&gt;Benefit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Realtime pub/sub messaging&lt;/td&gt;
&lt;td&gt;WebSocket-based event distribution for instant signal propagation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-driven architecture&lt;/td&gt;
&lt;td&gt;Composable, adaptive systems that respond to customer signals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Predictive analytics&lt;/td&gt;
&lt;td&gt;AI-powered interpretation of continuous data streams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Continuous data streams&lt;/td&gt;
&lt;td&gt;Sub-6.5ms message delivery latency without polling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fault-tolerant infrastructure&lt;/td&gt;
&lt;td&gt;99.999% uptime requirements for maintaining trust&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Future-proofing customer experience: Event-driven architecture for anticipatory CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To anticipate effectively, your CX stack needs to evolve as fast as your customers' expectations do. Rigid, monolithic architectures can't keep up with new signals, emerging channels, or changing customer behaviors. The future belongs to composable, &lt;strong&gt;event-driven systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Doxy.me's evolution illustrates this perfectly. They built their realtime features organically – using PubNub to handle presence detection and state synchronisation, all ephemeral data that disappeared after each session. But as they planned their next phase, they hit a wall: they needed persistence. The ability to decouple patient workflows from video calls, support richer collaboration, maintain state across sessions, and plug in new capabilities without rebuilding their entire stack. They prototyped with Convex and loved the developer experience, but needed production-grade infrastructure that could slot into their Node/TypeScript/Postgres/AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/pubsub" rel="noopener noreferrer"&gt;&lt;strong&gt;Event-driven architectures&lt;/strong&gt;&lt;/a&gt; make this kind of evolution possible. You can layer in predictive capabilities, plug in new communication channels, or add analytics tools – all without tearing everything down and starting over. One enterprise CX leader described it this way: "We used to dread adding new functionality. Now we think in terms of what events we need to listen for and what actions we want to trigger. It has completely changed our velocity."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/solutions/customer-experience-tech" rel="noopener noreferrer"&gt;Ably&lt;/a&gt; enables this kind of interoperability – CRMs, chat systems, analytics tools, customer-facing applications all publishing and subscribing to customer events in real time. WebSockets and pub/sub patterns ensure consistent, low-latency communication across every channel, without developers having to reinvent transport logic for each integration. It's the connective tissue that makes anticipatory systems work at scale.&lt;/p&gt;

&lt;p&gt;But more moving parts do mean more complexity. Companies need governance frameworks and resilience planning to ensure their adaptive architectures don't become fragile ones. The ones succeeding here aren't necessarily the ones with the newest tech – they're the ones who've built systems that can absorb change without breaking.&lt;/p&gt;

&lt;p&gt;The Age of Anticipation is composable. Adaptive, event-driven architecture is what makes foresight scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to implement anticipatory CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Establish realtime data infrastructure&lt;/strong&gt; – Replace polling with streaming architecture for continuous signal flow&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Implement event-driven pub/sub patterns&lt;/strong&gt; – Enable loosely coupled systems that respond to customer signals&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Build predictive models using continuous data&lt;/strong&gt; – Layer AI/ML on top of realtime streams for pattern recognition&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Create proactive intervention workflows&lt;/strong&gt; – Design automated responses to predictive signals (offers, alerts, support)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Monitor reliability metrics rigorously&lt;/strong&gt; – Track latency, uptime, message integrity to maintain trust at scale&lt;/p&gt;
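&lt;p&gt;Step 4 above can be sketched as a subscriber reacting to a predictive signal stream (hypothetical event names and threshold – a sketch of the pattern, not a real Ably API): the intervention fires the moment a signal crosses the threshold, rather than surfacing in a later batch job:&lt;/p&gt;

```javascript
// Sketch only: a proactive intervention triggered by realtime signals.
// Event shapes, the 'churn_risk' signal, and the 0.8 threshold are all
// illustrative assumptions.
const interventions = [];

// Subscriber attached to the signal stream: reacts per event, no polling.
function onSignal(signal) {
  if (signal.type === 'churn_risk' && signal.score > 0.8) {
    interventions.push({ customer: signal.customer, action: 'offer_support' });
  }
}

// Simulated stream of predictive signals arriving in real time.
[
  { type: 'churn_risk', customer: 'c1', score: 0.3 },
  { type: 'churn_risk', customer: 'c2', score: 0.92 },
  { type: 'page_view',  customer: 'c3' },
].forEach(onSignal);
// Only the high-risk customer triggers an intervention.
```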

&lt;h2&gt;
  
  
  &lt;strong&gt;What makes this different&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most CX discussions focus on speed (faster responses, quicker resolutions). But anticipation goes deeper. It's about infrastructure that doesn't just move data quickly, but does so reliably enough to build trust and flexibly enough to adapt as expectations evolve. &lt;a href="https://ably.com/four-pillars-of-dependability" rel="noopener noreferrer"&gt;Explore Ably's four pillars of dependability&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Realtime infrastructure&lt;/strong&gt; is the hidden enabler. It's what makes customer care feel effortless, predictive, and ultimately, more human. Not because it replaces human judgment, but because it removes the friction that gets in the way of delivering exceptional care.&lt;/p&gt;

&lt;p&gt;The companies winning in the Age of Anticipation aren't the ones with the flashiest technology demos. They're the ones who've built the unglamorous, reliable, adaptive infrastructure that makes anticipation possible at scale. They've realised that foresight isn't magic – it's architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Business impact of anticipatory customer experience&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.pwc.com/us/en/services/consulting/business-transformation/library/2025-customer-experience-survey.html" rel="noopener noreferrer"&gt;&lt;strong&gt;52% of consumers&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;stopped using brands after bad experiences&lt;/strong&gt; – making proactive, anticipatory CX non-negotiable (PwC 2025 Customer Experience Survey)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ably.com/case-studies/doxyme" rel="noopener noreferrer"&gt;&lt;strong&gt;65% cost reduction&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;achieved by Doxy.me&lt;/strong&gt; through realtime infrastructure that prevents issues versus fixing them reactively&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://broadbandbreakfast.com/four-predictions-for-customer-experience-in-2025/" rel="noopener noreferrer"&gt;&lt;strong&gt;61% of CX leaders deliver proactive communications&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;using AI&lt;/strong&gt;, while only 6% of laggards do, creating a significant competitive gap (Cisco research)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://status.ably.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;5+ years without a global outage&lt;/strong&gt;&lt;/a&gt; – Ably's proven track record demonstrates the reliability required for maintaining trust at enterprise scale&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nextiva.com/blog/customer-experience-insights.html" rel="noopener noreferrer"&gt;&lt;strong&gt;40% of companies plan to increase investment&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;in predictive instant experiences&lt;/strong&gt; in 2025, signalling industry-wide shift to anticipatory models&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Three pillars of anticipatory CX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Realtime data streams&lt;/strong&gt; – Fresh, continuous signals flowing through your systems without latency or data loss&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Reliability at scale&lt;/strong&gt; – Infrastructure trusted to maintain consistency across millions of interactions, measured in years of uptime&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Adaptive architecture&lt;/strong&gt; – Event-driven systems that evolve with customer expectations without requiring rebuilds&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Ready to build anticipatory experiences?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ably's realtime platform delivers the continuous data streams and event-driven patterns your systems need to anticipate customer needs, with the reliability required to maintain trust at scale.&lt;/p&gt;

&lt;p&gt;Six-plus years of 100% uptime. Sub-6.5ms message delivery latency. Built-in message integrity guarantees.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ably.com/cx-tech" rel="noopener noreferrer"&gt;See how Ably powers anticipatory CX&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ably.com/blog/data-integrity-in-ably-pub-sub" rel="noopener noreferrer"&gt;Read more about the technicalities&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ably.com/support" rel="noopener noreferrer"&gt;Start building free or talk to our team about your use case&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
