<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ably</title>
    <description>The latest articles on DEV Community by Ably (@ably).</description>
    <link>https://dev.to/ably</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F659%2Fd39cf738-622e-4b9c-b7c5-d4fa3d85ae8c.png</url>
      <title>DEV Community: Ably</title>
      <link>https://dev.to/ably</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ably"/>
    <language>en</language>
    <item>
      <title>Ably AI Transport is now available</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Tue, 20 Jan 2026 17:49:29 +0000</pubDate>
      <link>https://dev.to/ably/ably-ai-transport-is-now-available-482p</link>
      <guid>https://dev.to/ably/ably-ai-transport-is-now-available-482p</guid>
      <description>&lt;p&gt;Today we’re launching &lt;a href="https://ably.com/ai-transport" rel="noopener noreferrer"&gt;&lt;strong&gt;Ably AI Transport&lt;/strong&gt;&lt;/a&gt;: a drop-in realtime delivery and session layer that sits between agents and devices, so AI experiences stay continuous across refreshes, reconnects, and device switches — without an architecture rewrite.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap: HTTP streaming breaks down for stateful AI UX
&lt;/h2&gt;

&lt;p&gt;AI has moved from “type and wait” requests to experiences that are long-running and stateful: responses stream, users steer mid-flight, and work needs to carry across tabs and devices. That shift changes what “working” means in production. It’s not just whether the model can generate tokens, it’s whether the experience stays continuous when real users behave like real users do.&lt;/p&gt;

&lt;p&gt;Most AI apps still start with a connection-oriented setup: the client opens a streaming connection (SSE, fetch streaming, sometimes WebSockets), the agent generates tokens, and the UI renders them as they arrive. It’s low friction and demos well.&lt;/p&gt;
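&lt;p&gt;The token-rendering loop at the heart of that setup can be sketched with a simulated stream. In a real app the async iterable would be an SSE or fetch-streaming response body; the event shape here is an assumption:&lt;/p&gt;

```javascript
// Sketch of the connection-oriented baseline: consume a token stream and
// append tokens to the UI as they arrive. The stream is simulated here.
async function* tokenStream() {            // stands in for a model's output
  for (const t of ["Hello", ", ", "world"]) yield { type: "token", text: t };
}

async function render(stream) {
  let output = "";
  for await (const event of stream) {
    if (event.type === "token") output += event.text; // append to the UI
  }
  return output;
}

render(tokenStream()).then(out => console.log(out)); // "Hello, world"
```

&lt;p&gt;The fragility comes from the fact that the stream dies with the connection: if the loop is interrupted mid-response, any tokens not yet received are lost to that client.&lt;/p&gt;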

&lt;p&gt;But HTTP streaming solves only the first part of the problem, and it’s a poor place to stop.&lt;/p&gt;

&lt;p&gt;First: &lt;strong&gt;continuity&lt;/strong&gt;. When output is tied to a specific connection, the experience becomes fragile by default. Refreshes, network changes, backgrounding, multiple tabs, device switches, agent handovers (even agent crashes) are normal behaviour. And they’re exactly where teams see partial output, missing tokens, duplicated messages, drifting state, and “start again” recovery paths. That’s where user trust gets lost.&lt;/p&gt;

&lt;p&gt;Second: &lt;strong&gt;capability&lt;/strong&gt;. A connection-first transport layer doesn’t just make UX fragile. It limits what you can build. Once you want true collaborative patterns like barge-in, live steering, copilot-style bidirectional exchange, multi-agent coordination, or a seamless human takeover with full context, you need more than “a stream.” You need a stateful conversation layer that can support multiple participants, resumable delivery, and shared session state.&lt;/p&gt;

&lt;p&gt;So teams patch it: buffering, replay, offsets, reconnection logic, session IDs, routing rules for interrupts and tool results, multi-subscriber consistency, and observability once production incidents start. It’s critical work — but it’s not differentiation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Ably AI Transport does
&lt;/h2&gt;

&lt;p&gt;AI Transport gives each AI conversation a durable bi-directional session that isn’t tied to one tab, connection or agent. Agents publish output into a session channel, clients subscribe from any device, and Ably handles the delivery guarantees you’d otherwise rebuild yourself: ordered delivery, recovery after reconnects, and fan-out to multiple subscribers.&lt;/p&gt;
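&lt;p&gt;To make those guarantees concrete, here is a toy in-memory session channel (illustrative only, not the Ably API): messages get monotonic serials for ordering, fan out to every subscriber, and a client can resume from the last serial it saw after a reconnect:&lt;/p&gt;

```javascript
// Toy session channel: ordered delivery, fan-out, and resumable catch-up.
// Purely illustrative -- not the Ably API.
class SessionChannel {
  constructor() {
    this.log = [];                  // ordered, durable message log
    this.subscribers = new Set();
  }
  publish(data) {
    const msg = { serial: this.log.length, data }; // monotonic serial => total order
    this.log.push(msg);
    for (const cb of this.subscribers) cb(msg);    // fan-out to all subscribers
    return msg.serial;
  }
  // Subscribe from a serial: replay missed messages first, then go live.
  subscribe(cb, fromSerial = 0) {
    for (const msg of this.log.slice(fromSerial)) cb(msg);
    this.subscribers.add(cb);
    return () => this.subscribers.delete(cb);      // detach (e.g. tab closed)
  }
}

const session = new SessionChannel();
const received = [];
const detach = session.subscribe(m => received.push(m.data));
session.publish("Hello");
detach();                                    // connection lost mid-response
session.publish("world");                    // the agent keeps publishing
session.subscribe(m => received.push(m.data), received.length); // resume
console.log(received.join(" "));             // "Hello world" -- nothing lost
```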

&lt;p&gt;It’s deliberately model- and framework-agnostic. You keep your agent runtime and orchestration; AI Transport handles the delivery and session layer underneath.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqaf20vuuz65xgvr49g7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqaf20vuuz65xgvr49g7.png" alt="AI Transport examples grid" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The key shift: sessions become channels
&lt;/h2&gt;

&lt;p&gt;In a connection-oriented setup, the “session” effectively lives inside the streaming pipe. When the pipe breaks, continuity becomes a headache.&lt;/p&gt;

&lt;p&gt;With AI Transport, the session is created once and represented as a durable channel. Agents and clients can join independently. Refresh becomes reattach and hydrate. Device switching becomes another subscriber joining the same session. Multi-device behaviour becomes fan-out rather than custom routing. Agents and humans become truly connected over a transport designed for bi-directional, low-latency AI conversations.&lt;/p&gt;
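&lt;p&gt;“Reattach and hydrate” then becomes a pure function over session history. The event names and shapes below are illustrative assumptions, not a documented Ably schema:&lt;/p&gt;

```javascript
// Rebuild client state from ordered session history after a refresh.
// Event shapes are illustrative, not a documented schema.
function hydrate(history) {
  const responses = new Map();     // message serial -> accumulated text
  for (const event of history) {
    if (event.type === "response.start") {
      responses.set(event.serial, "");
    } else if (event.type === "response.append") {
      responses.set(event.serial, (responses.get(event.serial) ?? "") + event.text);
    }
  }
  return responses;
}

// On reattach, the client replays history instead of restarting the agent.
const state = hydrate([
  { type: "response.start", serial: "m1" },
  { type: "response.append", serial: "m1", text: "Streaming " },
  { type: "response.append", serial: "m1", text: "resumes cleanly." },
]);
console.log(state.get("m1")); // "Streaming resumes cleanly."
```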

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkryrw054d5r2s8iie20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkryrw054d5r2s8iie20.png" alt="Before and after: HTTP streaming vs AI Transport" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Ably AI Transport ensures a resilient, stateful AI UX
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Resumable, ordered token streaming:&lt;/strong&gt; A great AI UX depends on durable streaming. Output is treated as session data, so clients can catch up cleanly after refreshes, brief dropouts, and network handoffs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-device continuity:&lt;/strong&gt; Conversations are user-scoped, not tab-scoped. Multiple clients can join the same session without split threads, duplication, or drifting state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live steering and interruption:&lt;/strong&gt; Modern AI UX needs control, not just output. Interrupts, redirects, and approvals route through the same bi-directional session fabric as the response stream, so steering works even across reconnects and devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presence-aware sessions:&lt;/strong&gt; Once agents do real work, wasted compute becomes a serious cost problem. Presence provides a reliable signal for whether the user is currently connected (or fully offline across devices), so you can throttle, defer, or resume work accordingly.&lt;/p&gt;
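&lt;p&gt;A presence signal can gate agent compute with a small policy function along these lines (the policy, roles, and member shape are hypothetical):&lt;/p&gt;

```javascript
// Decide how an agent should spend compute based on who is present.
// The policy, roles, and member shape are hypothetical.
function workPolicy(presenceMembers) {
  const users = presenceMembers.filter(m => m.role === "user");
  if (users.length === 0) return "defer";                      // nobody watching
  if (users.some(m => m.state === "active")) return "stream";  // live tokens
  return "summarize";                        // connected but backgrounded
}

console.log(workPolicy([]));                                      // "defer"
console.log(workPolicy([{ role: "user", state: "active" }]));     // "stream"
console.log(workPolicy([{ role: "user", state: "background" }])); // "summarize"
```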

&lt;p&gt;&lt;strong&gt;Agents that collaborate and act with awareness:&lt;/strong&gt; As soon as you have more than one agent (or an agent plus tools/workers), coordination becomes the product. Shared session state and routing prevent clashing replies, duplicated context, and “two brains answering at once,” so multiple agents can communicate directly with users coherently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seamless human takeover when it really matters:&lt;/strong&gt; When an agent hits a boundary (risk, uncertainty, or policy), a human should be able to step in with full context and continue the session immediately. The handoff keeps the same session history and controls, so there are no repeated questions, no “start again,” and no losing track of what happened mid-flight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity and access control:&lt;/strong&gt; Beyond toy demos, you need to know who can read, write, steer, or approve actions. Verified identity plus fine-grained permissions let multi-party sessions stay secure without inventing a bespoke access model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability and governance:&lt;/strong&gt; When AI UX breaks in production, it’s rarely obvious where. Built-in visibility into session delivery and continuity makes failures diagnosable and auditable instead of “black box streaming incidents.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqaf20vuuz65xgvr49g7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqaf20vuuz65xgvr49g7.png" alt="AI Transport capabilities" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Concrete examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Multi-device copilots:&lt;/strong&gt; A user starts a long-running answer on desktop, switches to mobile mid-response, and the session continues without restarting. Steering and approvals apply to the same session regardless of device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cr0ci7i6jmhpiacn57r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cr0ci7i6jmhpiacn57r.png" alt="Multi-device copilots architecture" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-running agents:&lt;/strong&gt; A research agent runs multi-step tool work for minutes. If the user disconnects, the work continues; when the user returns, the client hydrates from session history instead of resetting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dar8zk01rlw93zv3xk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dar8zk01rlw93zv3xk7.png" alt="Long-running agents architecture" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started (low friction)
&lt;/h2&gt;

&lt;p&gt;You can get a basic session running in minutes:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import Ably from 'ably';

// Initialize Ably Realtime client
const realtime = new Ably.Realtime({ key: 'API_KEY' });

// Create a channel for publishing streamed AI responses
const channel = realtime.channels.get('my-channel');

// Publish initial message and capture the serial for appending tokens
const { serials: [msgSerial] } = await channel.publish('response', { data: '' });

// Example: stream returns events like { type: 'token', text: 'Hello' }
for await (const event of stream) {
  // Append each token as it arrives
  if (event.type === 'token') {
    channel.appendMessage(msgSerial, event.text);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>realtime</category>
      <category>devtools</category>
      <category>architecture</category>
    </item>
    <item>
      <title>AWS us-east-1 outage: How Ably’s multi-region architecture held up</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Fri, 24 Oct 2025 12:24:17 +0000</pubDate>
      <link>https://dev.to/ably/aws-us-east-1-outage-how-ablys-multi-region-architecture-held-up-15mk</link>
      <guid>https://dev.to/ably/aws-us-east-1-outage-how-ablys-multi-region-architecture-held-up-15mk</guid>
      <description>&lt;h2&gt;
  
  
  Resilience in action: zero service disruption
&lt;/h2&gt;

&lt;p&gt;During this week’s AWS us-east-1 outage, &lt;a href="https://ably.com/" rel="noopener noreferrer"&gt;Ably&lt;/a&gt; maintained full service continuity with no customer impact. This was our multi-region architecture working exactly as designed; error rates were negligibly low and unchanged throughout. Any additional round trip latency was limited to 12ms, which is below the typical variance in any client-to-endpoint connection, and well below our 40–50ms global median; this is imperceptible to users and below monitoring thresholds. There were no user reports of issues. Taken together this means there was zero service disruption.&lt;/p&gt;

&lt;h2&gt;
  
  
  The technical sequence
&lt;/h2&gt;

&lt;p&gt;Ably provides a globally-distributed system hosted on AWS, with services provisioned in multiple regions. Each region scales independently in response to the traffic it receives, and us-east-1 is normally the busiest.&lt;/p&gt;

&lt;p&gt;From the onset of the AWS incident, the infrastructure already running in that region continued to provide error-free service. However, issues with various ancillary AWS services disrupted our control plane in the region, and it was clear that we would not be able to add capacity there as traffic levels increased during the day.&lt;/p&gt;

&lt;p&gt;As a result, at around 1200 UTC we made DNS changes so that new connections were not routed to us-east-1; traffic that would ordinarily have been routed there (based on latency) was instead handled in us-east-2. This is a routine intervention that we make in response to disruption in a region. Pre-existing connections in us-east-1 remained untouched, continuing to serve traffic without errors and with normal latency throughout the incident. Our monitoring systems, via connections established before the failover, confirmed this directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Latency impact: negligible
&lt;/h2&gt;

&lt;p&gt;We continuously test real-world performance in multiple ways. Monitors operated by Ably, in proximity to regional datacenter endpoints, indicated that the worst case impact on latency - which would have been clients directly adjacent to the us-east-1 datacenter, but which now have to connect to us-east-2 - was 12ms at p50. We also have real browser &lt;a href="https://ably.com/docs/platform/architecture/latency#round-trip-latency-measurement" rel="noopener noreferrer"&gt;round-trip latency measurements&lt;/a&gt; using Uptrends, which more closely simulate real users, with actual browser instances publishing and receiving messages between various global monitoring locations.&lt;/p&gt;

&lt;p&gt;These measurements, taken during the incident, are shown below; real-world clients experienced even lower latency impact, since from each of the cities tested there is negligible difference in distance - and therefore latency - between that location and us-east-2 versus us-east-1. Taken across all US cities used as monitoring locations, the measured latency difference averaged 3ms. That difference is substantially lower than the normal variance in client connection latencies, and is therefore imperceptible to users and well below monitoring thresholds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvptk7f2a8ecv4w4nopag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvptk7f2a8ecv4w4nopag.png" alt="Ably publish error rates for us-east-1" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwoyevnqqwx9o2b37o8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwoyevnqqwx9o2b37o8p.png" alt="Ably US browser latencies" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We restored us-east-1 routing on 21 October following validation from AWS and our own internal testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture at work
&lt;/h2&gt;

&lt;p&gt;This incident validated our multi-region architecture in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each region operates independently, isolating failures&lt;/li&gt;
&lt;li&gt;Latency-based DNS adapts routing to regional availability&lt;/li&gt;
&lt;li&gt;Existing persistent connections are unaffected if the only change is to the routing of new connections&lt;/li&gt;
&lt;li&gt;A further layer of defense, not used in this case, provides automatic client-side failover to up to five globally-distributed endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That final layer matters. Even if us-east-1 infrastructure had failed entirely (it didn’t), client SDKs would have automatically failed over to alternative regions, maintaining connectivity at the cost of increased latency. This layer didn’t activate this time, since regional operations continued normally, but it’s a core part of our defense-in-depth strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons reinforced
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The key takeaways for us from this incident:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A genuinely distributed system spanning multiple regions, not just availability zones, is essential for ultimate continuity of service&lt;/li&gt;
&lt;li&gt;Planning for, and drilling, responses to this type of event is critical to ensuring that your resilience is real and not just theoretical&lt;/li&gt;
&lt;li&gt;A multi-layered approach, with mitigations both in the infrastructure and SDKs, ensures redundancy and continuity even without active intervention. AWS continues to be an outstandingly good global service, but occasional regional failures must be expected. Well-architected systems on AWS infrastructure are capable of supporting the most critical business needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep your realtime apps running smoothly, even when the internet breaks. Try &lt;a href="https://ably.com/" rel="noopener noreferrer"&gt;Ably&lt;/a&gt; for free today!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>uptime</category>
      <category>outage</category>
    </item>
    <item>
      <title>How doxy.me turned realtime from a liability into a strategic asset</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Thu, 17 Jul 2025 16:30:49 +0000</pubDate>
      <link>https://dev.to/ably/how-doxyme-turned-realtime-from-a-liability-into-a-strategic-asset-41bh</link>
      <guid>https://dev.to/ably/how-doxyme-turned-realtime-from-a-liability-into-a-strategic-asset-41bh</guid>
      <description>&lt;h2&gt;
  
  
  When realtime breaks, virtual care breaks
&lt;/h2&gt;

&lt;p&gt;Here’s the hard reality of telehealth: when infrastructure cracks, care collapses. A missed chat ping or delayed check-in notification isn’t just a glitch, it’s a broken line of communication between a provider and a patient. That can be the difference between timely diagnosis and uncertainty, between trust and frustration.&lt;/p&gt;

&lt;p&gt;Doxy.me runs over 250,000 virtual visits every day. Their core product delivers browser-based telemedicine that works without downloads or installations. But behind the scenes, one part of the stack had become a chronic liability: realtime infrastructure.&lt;/p&gt;

&lt;p&gt;It wasn’t just fragile - it was feared. VP of Engineering Ben Anderson-Dukes recalled the engineering team viewing it as a “black box.” Over time, that black box became a bottleneck. Small issues like ghost check-in chimes and out-of-order messages became symptoms of deeper instability. Lag was unpredictable. Full-day outages were a looming threat. Worse still, this fragile infrastructure had become the second-largest operating expense after video delivery.&lt;/p&gt;

&lt;p&gt;That wasn’t sustainable. But replacing their realtime provider wasn’t a simple procurement problem. It required a mindset shift from seeing realtime as a necessary evil to treating it like a core product surface that deserved strategic investment.&lt;/p&gt;

&lt;h2&gt;
  
  
  A strategic overhaul, not just a quick fix
&lt;/h2&gt;

&lt;p&gt;doxy.me needed a partner for that overhaul - and that partner was Ably. Unlike previous vendors, Ably took a collaborative, hands-on approach from day one, offering architectural guidance and flexible support throughout the migration.&lt;/p&gt;

&lt;p&gt;“We needed a partner who could not only help us rebuild realtime into something reliable, scalable, and secure, but one that was developer-friendly and of a similar mindset to doxy.me.”&lt;/p&gt;

&lt;p&gt;With support from external dev agency Walter Code, doxy.me and Ably planned a phased migration away from their existing realtime provider to Ably’s modern WebSocket-based infrastructure. The process was methodical: time it right, plan it right, implement it right. Nothing was rushed.&lt;/p&gt;

&lt;p&gt;“Initially, we thought this would take a year. But with Ably, we went from design to 100% migrated in under six months.”&lt;/p&gt;

&lt;p&gt;Despite handling more than 250,000 calls a day, the migration was completed with zero downtime. The transition not only modernized their infrastructure but also demystified it. Engineers gained new visibility into system behavior, and the team collectively regained confidence.&lt;/p&gt;

&lt;p&gt;“They helped us not only get there, but raise the bar in internal education about realtime as well.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Real results, not just promises
&lt;/h2&gt;

&lt;p&gt;Post-migration, the transformation was visible across both technical and business metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;65% reduction in realtime infrastructure costs&lt;/li&gt;
&lt;li&gt;95% fewer patient queue issues&lt;/li&gt;
&lt;li&gt;99% drop in app crashes caused by signaling issues&lt;/li&gt;
&lt;li&gt;100% elimination of ghost check-in chimes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But beyond metrics, the real value came from stability. doxy.me’s CTO, initially skeptical, came to view realtime as a stable part of core infrastructure. The engineering team moved from firefighting to forward planning.&lt;/p&gt;

&lt;p&gt;“Support tickets dropped, realtime errors disappeared, and our Datadog logs became clean and readable.”&lt;/p&gt;

&lt;p&gt;The financial impact was just as compelling:&lt;/p&gt;

&lt;p&gt;“We saw a full ROI in under six months, despite using an external team to handle the migration. That’s unheard of.”&lt;/p&gt;

&lt;p&gt;“For me personally, it’s been a really great win. It’s bolstered Engineering’s reputation internally. There was much kudos served all-round.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking forward: realtime as innovation engine
&lt;/h2&gt;

&lt;p&gt;With Ably in place, doxy.me isn’t just maintaining a stable stack, they’re building on top of it. New use cases are now in scope, including advanced presence, session orchestration, and collaboration tooling designed to make virtual care more intuitive and human.&lt;/p&gt;

&lt;p&gt;“Even now, we’re exploring roadmap innovations together.”&lt;/p&gt;

&lt;p&gt;One of doxy.me’s largest customers, accounting for nearly 30% of realtime traffic, now operates smoothly with no degradation in performance. Internally, engineers have better observability, faster diagnostics, and more freedom to innovate.&lt;/p&gt;

&lt;p&gt;“Ably is helping us push realtime beyond the basics, into new opportunities that let providers be with their patients more reliably and securely.”&lt;/p&gt;

&lt;p&gt;At its core, doxy.me is about connection - between patient and provider, between care and access.&lt;/p&gt;

&lt;p&gt;“We believe that providers are the real heroes, and that doxy.me is their superpower.”&lt;/p&gt;

&lt;p&gt;With Ably behind the scenes, that superpower now has a reliable, resilient realtime engine built for scale.&lt;/p&gt;

&lt;p&gt;“Ably has helped us turn realtime from a liability into a strategic asset.”&lt;/p&gt;

&lt;p&gt;Doxy.me built this on top of &lt;a href="https://ably.com/pubsub" rel="noopener noreferrer"&gt;Ably Pub/Sub&lt;/a&gt;, the core messaging product in the Ably platform. Please &lt;a href="https://ably.com/docs" rel="noopener noreferrer"&gt;see our docs&lt;/a&gt; if you're interested in the technical details.&lt;/p&gt;

</description>
      <category>pubsub</category>
      <category>websocket</category>
    </item>
    <item>
      <title>Achieving low latency with pub/sub</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Wed, 22 Jan 2025 22:39:19 +0000</pubDate>
      <link>https://dev.to/ably/achieving-low-latency-with-pubsub-33gd</link>
      <guid>https://dev.to/ably/achieving-low-latency-with-pubsub-33gd</guid>
      <description>&lt;p&gt;In pub/sub messaging systems, getting messages to flow quickly from publishers to subscribers isn’t just important for general performance - it’s central to the system’s basic usability. Achieving this at scale introduces extra challenges that call for thoughtful architecture design and strategies for handling unexpected behavior (e.g. traffic spikes).&lt;/p&gt;

&lt;p&gt;To better understand the best practices we can apply in our architecture to overcome these challenges, let’s revisit how the pub/sub pattern works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is pub/sub?
&lt;/h2&gt;

&lt;p&gt;Pub/sub (or publish/subscribe) is an architectural &lt;a href="https://ably.com/topics/patterns" rel="noopener noreferrer"&gt;design pattern&lt;/a&gt; used in distributed systems for asynchronous communication between different components or services. Although publish/subscribe is based on earlier design patterns like message queuing and event brokers, it is more flexible and scalable. The key to this is the fact that pub/sub enables the movement of messages between different components of the system without the components being aware of each other’s identity (they are decoupled).&lt;/p&gt;
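&lt;p&gt;That decoupling is visible even in a minimal in-memory broker: publishers and subscribers share only a topic name, never references to each other. A sketch, not production code:&lt;/p&gt;

```javascript
// Minimal topic-based pub/sub broker. Publishers and subscribers are
// decoupled: they share only a topic name, never each other's identity.
class Broker {
  constructor() { this.topics = new Map(); }
  subscribe(topic, handler) {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set());
    this.topics.get(topic).add(handler);
  }
  publish(topic, message) {
    for (const handler of this.topics.get(topic) ?? []) handler(message);
  }
}

const broker = new Broker();
const seen = [];
broker.subscribe("orders", msg => seen.push(`A:${msg}`));
broker.subscribe("orders", msg => seen.push(`B:${msg}`)); // fan-out
broker.publish("orders", "created");
console.log(seen); // ["A:created", "B:created"]
```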

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9che68h2e4ibttiosa4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9che68h2e4ibttiosa4o.png" alt="pub-sub-pattern" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a deeper dive into pub/sub, including examples and comparisons to other messaging patterns, see our guide: &lt;a href="https://ably.com/topic/pub-sub" rel="noopener noreferrer"&gt;What is pub/sub?&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why latency is crucial to pub/sub realtime systems
&lt;/h2&gt;

&lt;p&gt;Latency is the time it takes for data to travel from the backend (such as a datacenter) to the end-user’s device. Latency below 100ms is hard to achieve in general, but in pub/sub systems those speeds must be not only consistent but also imperceptible, so that users stay engaged rather than abandoning the app. Applications such as critical broadcast updates, realtime chat, and live streaming services need to deliver seamless experiences with ultra-low latency to retain their user bases.&lt;/p&gt;

&lt;p&gt;This becomes especially important at scale for a global audience: If a pub/sub system can’t maintain these speeds as it scales up and reaches a global user base, message delays could render it unusable, even if your infrastructure has the raw capacity. Serving a single region is significantly simpler than achieving consistent low latency across a distributed global audience, where factors like inter-region data replication and network variability come into play. &lt;em&gt;Global median latency&lt;/em&gt; is a good measure of average global latency if you’re operating at scale, and it’s the metric we use to measure our speeds at Ably. &lt;/p&gt;

&lt;p&gt;Some architectural decisions you can make to achieve low latency are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Global datacenter coverage:&lt;/strong&gt; The physical proximity of datacenters or edge points of presence (PoPs) to end users significantly impacts round-trip times for messages. If you distribute datacenters and PoPs globally, you can drive down latency for your users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Protocol efficiency:&lt;/strong&gt; The choice of protocol affects how efficiently messages are transmitted. For example, WebSocket is highly efficient for realtime communication &lt;a href="https://ably.com/blog/websockets-vs-long-polling" rel="noopener noreferrer"&gt;compared to HTTP long polling&lt;/a&gt;. (&lt;a href="https://ably.com/topic/websockets" rel="noopener noreferrer"&gt;WebSockets&lt;/a&gt; are a particularly good protocol for achieving low latency in pub/sub systems since they maintain an open connection between the client and server without the need for frequent HTTP responses. For a deeper dive into how WebSockets compare to other protocols in pub/sub systems, check out our guide &lt;a href="https://ably.com/topic/pub-sub-vs-websockets" rel="noopener noreferrer"&gt;Pub/Sub vs WebSockets&lt;/a&gt;.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Network robustness:&lt;/strong&gt; A reliable, fault-tolerant underlying network infrastructure can ensure consistent low latency even under high traffic volumes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges to achieving low latency
&lt;/h2&gt;

&lt;p&gt;The most straightforward obstacle to low latency is the network itself: latency is inherently affected by the distance between clients and the server. The farther a client is from a datacenter, the longer it takes for messages to reach them. This is a critical consideration for global systems, where distances between users and datacenters can span continents. But other factors also affect end-user latency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Message routing:&lt;/strong&gt; Poorly optimized routing can lead to bottlenecks, especially in use cases with high fanout where a single message is delivered to thousands or millions of subscribers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load balancing:&lt;/strong&gt; Without a load balancer - or with an improperly configured one - traffic imbalances can overload certain nodes, resulting in delays for subscribers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System resource contention:&lt;/strong&gt; High message volumes can strain CPU, memory, and storage resources, leading to increased latency. This is particularly true during traffic spikes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Encoding:&lt;/strong&gt; Inefficient message encoding increases latency by slowing down the system’s ability to translate data into a transmittable format and back again.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best practices for achieving low latency
&lt;/h2&gt;

&lt;p&gt;Best practices for achieving low latency are, on paper, straightforward fixes to the points discussed above. However, making these changes to your architecture requires significant engineering effort and potentially an overhaul of your existing infrastructure. Here’s what we recommend:&lt;/p&gt;

&lt;h4&gt;
  
  
  Use a globally-distributed architecture
&lt;/h4&gt;

&lt;p&gt;Deploying servers in multiple regions reduces the physical distance between clients and the server, minimizing network latency. Make sure that your infrastructure includes a combination of core datacenters and edge points of presence (PoPs). This ensures fast, consistent round-trip times for users anywhere in the world.&lt;/p&gt;

&lt;h4&gt;
  
  
  Optimize message routing
&lt;/h4&gt;

&lt;p&gt;Efficient routing algorithms, such as consistent hashing, can ensure that messages are delivered to subscribers quickly and reliably. For systems with high fanout, prioritize techniques that minimize duplication and ensure messages are processed efficiently.&lt;/p&gt;
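&lt;p&gt;As a sketch of the idea (illustrative only, not Ably's implementation), consistent hashing places nodes at points on a hash ring so that each channel maps stably to a node, and adding or removing a node remaps only a small fraction of channels:&lt;/p&gt;

```javascript
// Minimal consistent-hash ring (illustrative sketch; production systems
// use stronger hash functions and many more virtual nodes).
function fnv1a(str) {
  let h = 2166136261;
  for (const ch of str) {
    h = Math.imul(h ^ ch.charCodeAt(0), 16777619) >>> 0;
  }
  return h;
}

class HashRing {
  constructor(nodes, vnodesPerNode) {
    // Virtual nodes smooth out the key distribution across physical nodes.
    this.ring = [];
    for (const node of nodes) {
      for (let i = 0; i !== vnodesPerNode; i++) {
        this.ring.push({ point: fnv1a(node + "#" + i), node });
      }
    }
    this.ring.sort((a, b) => a.point - b.point);
  }

  // Route a channel to the first ring point at or past its hash,
  // wrapping around to the start of the ring if necessary.
  nodeFor(channel) {
    const h = fnv1a(channel);
    const entry = this.ring.find(e => e.point >= h);
    return (entry || this.ring[0]).node;
  }
}
```

&lt;p&gt;Because only the ring segments adjacent to a joining or leaving node move, roughly 1/N of channels are remapped on a membership change instead of all of them.&lt;/p&gt;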

&lt;h4&gt;
  
  
  Have a load balancer
&lt;/h4&gt;

&lt;p&gt;Dynamic load balancing distributes traffic evenly across servers, preventing overloading. For pub/sub systems, load balancers must account for both connection count and message throughput.&lt;/p&gt;
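&lt;p&gt;A hypothetical weighted scorer illustrates how both dimensions can be combined when picking a node (the node fields and weights here are invented for the sketch):&lt;/p&gt;

```javascript
// Pick the node with the lowest combined load score. Each dimension is
// normalized to 0..1 and weighted; a lower score means more headroom.
// (Illustrative only - real balancers also consider CPU, memory, locality.)
function pickNode(nodes, connWeight, tputWeight) {
  let best = null;
  let bestScore = Infinity;
  for (const node of nodes) {
    const score =
      connWeight * (node.connections / node.maxConnections) +
      tputWeight * (node.msgsPerSec / node.maxMsgsPerSec);
    if (bestScore > score) {
      bestScore = score;
      best = node;
    }
  }
  return best;
}
```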

&lt;h4&gt;
  
  
  Use message delta compression
&lt;/h4&gt;

&lt;p&gt;Compressing messages reduces their size, enabling faster transmission over the network. Delta compression goes further by sending only the difference between a message and the previous one, which is especially effective for frequently updated payloads. Use lightweight, efficient algorithms to minimize processing overhead.&lt;/p&gt;
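&lt;p&gt;A minimal field-level sketch of the delta idea (real delta codecs such as VCDIFF operate at the byte level, but the principle is the same - transmit only what changed):&lt;/p&gt;

```javascript
// Compute the fields of `next` that differ from `prev`; subscribers
// rebuild the full payload by applying the delta to their last-seen copy.
// (Simplified sketch: flat objects only, no deleted-field handling.)
function computeDelta(prev, next) {
  const delta = {};
  for (const key of Object.keys(next)) {
    if (prev[key] !== next[key]) {
      delta[key] = next[key];
    }
  }
  return delta;
}

function applyDelta(base, delta) {
  return Object.assign({}, base, delta);
}
```

&lt;p&gt;For a payload where only one of a dozen fields changes per update, this cuts the transmitted size dramatically.&lt;/p&gt;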

&lt;h4&gt;
  
  
  Autoscale to reduce resource consumption
&lt;/h4&gt;

&lt;p&gt;Optimize resource usage by scaling infrastructure elastically during traffic spikes. Use dynamic autoscaling to add capacity on demand and maintain a significant resource buffer.&lt;/p&gt;
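&lt;p&gt;A hypothetical threshold rule captures the idea of scaling to demand plus headroom (the parameters here are invented for the sketch):&lt;/p&gt;

```javascript
// Desired instance count: current demand plus a buffer ratio of headroom,
// rounded up to whole instances and never below one.
// (Illustrative; real autoscalers also smooth over time to avoid flapping.)
function desiredInstances(currentMsgsPerSec, perInstanceCapacity, bufferRatio) {
  const target = currentMsgsPerSec * (1 + bufferRatio);
  return Math.max(1, Math.ceil(target / perInstanceCapacity));
}
```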

&lt;h4&gt;
  
  
  Have redundancy and failover
&lt;/h4&gt;

&lt;p&gt;Build redundancy into servers and have failover mechanisms that reroute traffic during outages. For global systems, failover strategies should account for regional redundancies to make sure that if one region experiences an outage, traffic can seamlessly shift to another without impacting users worldwide. This minimizes latency spikes during failover events and ensures uninterrupted service.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Ably can help
&lt;/h2&gt;

&lt;p&gt;For many teams, building a system with all of these components from the ground up is impractical - it's a huge investment of time and skill. That investment also tends to be more expensive than initially expected because of &lt;a href="https://ably.com/blog/building-realtime-infrastructure-costs-and-challenges" rel="noopener noreferrer"&gt;maintenance costs and other challenges&lt;/a&gt; - like scalability and &lt;a href="https://ably.com/blog/why-data-integrity-is-essential-for-delivering-realtime-updates" rel="noopener noreferrer"&gt;data integrity&lt;/a&gt; - that make maintaining a low enough latency even more difficult.&lt;/p&gt;

&lt;p&gt;At Ably, our team is very familiar with the amount of work it takes to build a low-latency pub/sub system - and all the edge cases around optimum performance. We’ve made it our mission to provide the most reliable realtime service for you - and &lt;a href="https://ably.com/pubsub" rel="noopener noreferrer"&gt;Ably Pub/Sub&lt;/a&gt; is purpose-built for pub/sub use cases.&lt;/p&gt;

&lt;p&gt;Choosing a managed pub/sub service like Ably can save you and your team the headache of managing the architectural challenges of low latency at scale. Performance is one of Ably’s core pillars, and it’s built into what we do. Here’s how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Predictable performance:&lt;/strong&gt; A low-latency and high-throughput &lt;a href="https://ably.com/network" rel="noopener noreferrer"&gt;global edge network&lt;/a&gt;, with median latencies of &amp;lt;50ms.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Guaranteed ordering &amp;amp; delivery:&lt;/strong&gt; Messages are delivered in order and exactly once, with automatic reconnections. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault-tolerant infrastructure:&lt;/strong&gt; Redundancy at regional and global levels with 99.999% uptime SLAs. 99.999999% (8x9s) message availability and survivability, even with datacenter failures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High scalability &amp;amp; availability:&lt;/strong&gt; Built and battle-tested to handle millions of concurrent connections at scale. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized build times and costs:&lt;/strong&gt; Deployments typically see a 21x lower cost and upwards of $1M saved in the first year.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Low latency is non-negotiable for any pub/sub system that aims to deliver realtime experiences at scale. If you’re looking for a solution that scales up and ensures some of the lowest latencies in the business, Ably provides a robust and reliable platform to power your pub/sub needs. &lt;a href="https://ably.com/sign-up" rel="noopener noreferrer"&gt;Sign up&lt;/a&gt; for a free account to try it for yourself.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>pubsub</category>
      <category>networking</category>
    </item>
    <item>
      <title>How to use Ably LiveSync’s MongoDB Connector for realtime and offline data sync</title>
      <dc:creator>Carolina Carriazo</dc:creator>
      <pubDate>Thu, 16 Jan 2025 13:01:19 +0000</pubDate>
      <link>https://dev.to/ably/how-to-use-ably-livesyncs-mongodb-connector-for-realtime-and-offline-data-sync-4d5o</link>
      <guid>https://dev.to/ably/how-to-use-ably-livesyncs-mongodb-connector-for-realtime-and-offline-data-sync-4d5o</guid>
<description>&lt;p&gt;In light of the recent deprecation of MongoDB Atlas Device Sync (ADS), developers are seeking alternative solutions to synchronize on-device data with cloud databases. Ably LiveSync offers a potential alternative and can replace some of ADS’s functionality, enabling realtime synchronization of database changes to devices at scale. LiveSync allows a large number of MongoDB changes to be propagated to end-user devices in realtime and stored in any number of local storage options - from an embedded database to in-memory storage.&lt;/p&gt;

&lt;p&gt;For instance, imagine an inventory app that needs to broadcast stock updates to multiple devices in realtime. Ably LiveSync allows you to automatically subscribe to inventory changes in your database and broadcast this data to millions of clients at scale, allowing them to remain synchronized with the state of your inventory in realtime.&lt;/p&gt;

&lt;p&gt;This article explains why on-device storage is critical, explores existing solutions, and demonstrates how Ably LiveSync’s MongoDB connector can help with a brief code tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why keep information on-device?
&lt;/h2&gt;

&lt;p&gt;Local storage is a must for apps that need offline access or fast performance - like e-commerce inventory apps, or news apps downloading content for offline browsing. But not every app needs it. If your app is always online or just streams read-only data, you can skip the complexity of a local database. Thankfully, with Ably, you can adapt to your use case, whether you need offline support or just realtime updates. Some of the benefits of on-device storage are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Offline access&lt;/strong&gt;: Storing data directly on the device ensures users can seamlessly access and interact with information even when they have no internet connection or are in areas with poor connectivity. This is particularly crucial for users who frequently work in offline environments or travel to locations with unreliable network coverage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Applications demonstrate significantly improved response times and reduced latency when accessing data stored locally, as opposed to making time-consuming server calls across the network. This local data access eliminates network-related delays and provides instantaneous data retrieval for critical operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt;: Users experience substantial savings on their data usage and associated costs since the application doesn't need to repeatedly download information from remote servers. This is especially beneficial for users with limited data plans or in regions where mobile data is expensive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User experience&lt;/strong&gt;: Users benefit from a consistently smooth and reliable application experience, maintaining uninterrupted access to their data regardless of their network status or connection quality. This reliability helps build user trust and satisfaction with the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Options for storing information on device
&lt;/h2&gt;

&lt;p&gt;Modern mobile operating systems provide a variety of ways to store information on device:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iOS&lt;/strong&gt;: Includes UserDefaults, CoreData, and SQLite, with flexibility for additional solutions based on specific needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Android&lt;/strong&gt;: Provides shared preferences, Room database, and file storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-platform frameworks&lt;/strong&gt;: With React Native, react-native-async-storage is a popular starting library for simple needs. However, for advanced use cases requiring NoSQL-like abilities, good choices would be RealmDB (which, unfortunately, is being deprecated), UnQLite, LevelDB, or Couchbase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regardless of your choice of an on-device database and storage methodology, you can use Ably LiveSync to synchronize data from your managed or on-premises database to mobile devices in realtime. &lt;strong&gt;&lt;em&gt;This includes MongoDB - as well as Atlas.&lt;/em&gt;&lt;/strong&gt; While we currently support only MongoDB and PostgreSQL, we are working on adding support for other database engines.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Ably LiveSync?
&lt;/h2&gt;

&lt;p&gt;Ably LiveSync lets you monitor database changes and reliably broadcast them to millions of frontend clients, keeping them up-to-date in realtime.&lt;/p&gt;

&lt;p&gt;LiveSync works with any tech stack and prevents data inconsistencies from dual-writes while avoiding scaling issues from "thundering herds" — sudden surges of traffic that can overwhelm your database.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to persist data locally with Ably
&lt;/h2&gt;

&lt;p&gt;Let’s explore how to build a simple in-store management app that tracks product inventory, using React Native and SQLite for local storage and MongoDB Atlas as our cloud database. Although MongoDB is a document store and SQLite is a relational database, the two can be used in combination. We are going to use the Ably SDK callback methods to store documents and changes inside our local SQLite database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up Ably
&lt;/h3&gt;

&lt;p&gt;For simplicity, we’ll stick to TypeScript. Before anything else, create a new React Native project using the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx @react-native-community/cli@latest init AwesomeStore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a MongoDB integration rule with Ably
&lt;/h3&gt;

&lt;p&gt;Now we need to create a new channel that streams database changes to your clients. This ensures realtime updates whenever your MongoDB data changes. To create an integration rule that will sync your MongoDB database with Ably, you’ll first have to sign up for an Ably account.&lt;/p&gt;

&lt;p&gt;Once that’s done, you should have access to your Ably dashboard. Create an app or select the app you wish to use. Navigate to the &lt;strong&gt;Integrations&lt;/strong&gt; tab &amp;gt; &lt;strong&gt;Create a new integration rule&lt;/strong&gt; &amp;gt; &lt;strong&gt;MongoDB&lt;/strong&gt;. Fill out the &lt;strong&gt;Connection URL&lt;/strong&gt; with your MongoDB connection URL; &lt;strong&gt;Database name&lt;/strong&gt; with your database name (for this example, &lt;code&gt;SQLiteDatabase&lt;/code&gt;); and &lt;strong&gt;Collection&lt;/strong&gt; with your collection name (for this example, &lt;code&gt;products&lt;/code&gt;). For more information on this process and the parameters involved, check out &lt;a href="https://ably.com/docs/livesync/mongodb#integration-rule" rel="noopener noreferrer"&gt;our docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53jc4ffecmn7161n7b06.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53jc4ffecmn7161n7b06.gif" alt="Navigating to the MongoDB integration rule" width="960" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This sets up a new channel, built on top of our core Ably Pub/Sub product, which streams changes (through MongoDB change streams) from your database to your clients. This ensures that any change that occurs in your database will be delivered to every device subscribed to the channel.&lt;/p&gt;
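&lt;p&gt;For reference, messages arriving on the channel follow MongoDB's change event format. An illustrative payload (the exact fields depend on your integration rule settings) might look like this:&lt;/p&gt;

```javascript
// Illustrative change stream event shaped per MongoDB's change event format;
// the document contents below are invented for this example.
const exampleChangeEvent = {
  operationType: "update",
  ns: { db: "SQLiteDatabase", coll: "products" },
  documentKey: { _id: "abc123" },
  fullDocument: {
    id: 1,
    name: "Espresso beans",
    description: "1kg bag",
    quantity: 42,
  },
};

// Clients typically branch on the collection name before persisting locally.
function collectionOf(event) {
  return event.ns.coll;
}
```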

&lt;h3&gt;
  
  
  Creating the local datastore
&lt;/h3&gt;

&lt;p&gt;We’ll create a new file in our project called &lt;code&gt;datastore.js&lt;/code&gt; and initialize SQLite:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createTables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SQLiteDatabase&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`CREATE TABLE IF NOT EXISTS products(
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        description TEXT,
        quantity INTEGER
    );`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;executeSql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the tables are created, we need a way to retrieve store  products and update their stock:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getProducts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SQLiteDatabase&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;StoreProduct&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;products&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;StoreProduct&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;executeSql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`SELECT id, name, description, quantity FROM &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to get products!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;saveOrUpdateProducts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SQLiteDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;StoreProduct&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;insertQuery&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="s2"&gt;`INSERT OR REPLACE INTO &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;(id, name, description, quantity) values`&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`(&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;')`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;executeSql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;insertQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Receiving database changes from MongoDB over Ably
&lt;/h3&gt;

&lt;p&gt;Let’s take a look at how we can receive changes from the configured Ably channel. More information can be found in our documentation, but this is the important snippet we need - setting up the Ably Realtime SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Ably&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ably&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Instantiate the Realtime SDK&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ably&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Ably&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Realtime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[your API key]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Get the channel to subscribe to&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ably&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;store:1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Subscribe to messages on the 'store:1' channel&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Print every change detected in the channel&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Received a change event in realtime: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to write a new function that will take the payload of &lt;code&gt;message.data&lt;/code&gt; and store it in our database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;addOrUpdateProduct&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;product&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;StoreProduct&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getDBConnection&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;saveOrUpdateProducts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can call our new function in our message subscription:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Subscribe to messages on the 'store:1' channel&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Print every change detected in the channel&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Received a change event in realtime: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;coll&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;products&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;addOrUpdateProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;coll&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;store&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Another function which updates changes for the `store` collection&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Unknown collection&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The full workflow
&lt;/h3&gt;

&lt;p&gt;With this setup, the app listens for realtime updates from your MongoDB collection and persists changes locally, ensuring an up-to-date inventory system even when offline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts: What makes Ably different
&lt;/h2&gt;

&lt;p&gt;I hope that gives you a good overview of what Ably LiveSync’s MongoDB Connector can do! Besides providing a potential alternative to Atlas Device Sync, Ably, as a realtime communications platform, is built for scalability and reliability. Here are some features of Ably Pub/Sub, the backbone upon which LiveSync’s database connector is built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Predictable performance&lt;/strong&gt;: A low-latency and high-throughput global edge network, with median latencies of &amp;lt;50ms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guaranteed ordering &amp;amp; delivery&lt;/strong&gt;: Messages are delivered in order and exactly once, even after disconnections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fault-tolerant infrastructure&lt;/strong&gt;: Redundancy at regional and global levels with 99.999% uptime SLAs. 99.999999% (8x9s) message availability and survivability, even with datacenter failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High scalability &amp;amp; availability&lt;/strong&gt;: Built and battle-tested to handle millions of concurrent connections at scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized build times and costs&lt;/strong&gt;: Deployments typically see a 21x lower cost and upwards of $1M saved in the first year.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try Ably today and explore our MongoDB connector.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>mongodb</category>
      <category>database</category>
      <category>realtime</category>
    </item>
    <item>
      <title>Low latency at scale: Gaining the competitive edge in sports betting</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Mon, 06 Jan 2025 09:27:04 +0000</pubDate>
      <link>https://dev.to/ably/low-latency-at-scale-gaining-the-competitive-edge-in-sports-betting-2ncc</link>
      <guid>https://dev.to/ably/low-latency-at-scale-gaining-the-competitive-edge-in-sports-betting-2ncc</guid>
      <description>&lt;p&gt;The sports betting industry has grown rapidly in recent years, fueled by changing regulations, advancements in technology, and a rising demand for realtime interactions from consumers. For many fans, in-play betting adds another dimension to how they can engage, making them feel closer to the action. When you then consider the increasing number of global followers many teams now have, it’s easy to understand why global revenues are projected to continue expanding at a compound annual growth rate (CAGR) exceeding 10%. To sustain this growth, data providers and betting companies face a key challenge: consistently ensuring fast, low latency delivery of data and services, even as they scale operations to meet growing demand – every fan needs to receive the same data at the same time.&lt;/p&gt;

&lt;p&gt;Low latency – the rapid transmission of data with minimal delay – plays a crucial role in the sports betting ecosystem. A single delay in odds updates or live bet placement can impact user retention and create financial exposure for operators. This challenge becomes even more pressing as companies expand into new geographies, requiring infrastructure that can deliver consistent, low latency experiences worldwide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why low latency matters in sports betting
&lt;/h2&gt;

&lt;p&gt;In sports betting, every second matters. For bettors, much of the appeal of sports betting lies in the immediacy of their interaction with live events. Odds must be delivered and displayed in realtime, and delays can result in missed opportunities for bettors and revenue loss for operators.&lt;/p&gt;

&lt;p&gt;If odds are not updated instantly after a critical game event, operators risk users betting on outdated odds. For data providers, failure to deliver realtime data can strain relationships with clients, harm reputation and even have legal implications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The importance of a consistent user experience
&lt;/h2&gt;

&lt;p&gt;Scaling betting operations isn’t just about onboarding new users – it’s about ensuring these users have the same experience, regardless of location or time.&lt;/p&gt;

&lt;p&gt;There are two key infrastructure requirements involved in making this possible: Consistent low latency, and the ability to handle disconnections.&lt;/p&gt;

&lt;p&gt;When it comes to latency, users demand the same experience whether they’re betting from Europe, Asia, or the Americas. A lack of consistency can create an uneven playing field – or even cause users to abandon platforms as they develop a mistrust of the data. For both data providers and betting companies, this means achieving similarly low latency for clients in diverse geographies. From an infrastructure perspective, it means having a global network of points of presence that devices can connect to for low latency, wherever your users are.&lt;/p&gt;

&lt;p&gt;Handling disconnections and subsequent reconnections is particularly key when serving users in markets with unstable network conditions, but it also matters when a user changes data networks or is travelling. Betting operators need a strategy that defines how a bet is treated if, for example, it is placed just as a user loses connectivity and the odds change while they are offline – should the original odds be ‘retained’, or should the odds be refreshed upon reconnection? Operators need to ensure that users with worse network connectivity don’t become ‘worse off’ as a result.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Genius Sports delivers critical data at a global scale
&lt;/h3&gt;

&lt;p&gt;Genius Sports serves betting companies that demand instantaneous data delivery for live sporting events. Before adopting their current realtime infrastructure, they relied on a traditional centralised system to deliver live data to their betting clients. Maintaining live data performance at low latency and on a global scale meant locating ever larger and more costly on-premise facilities close to customers. The system struggled with scalability and latency consistency, especially during high-demand events like global tournaments. As costs and latency demands increased, Genius Sports needed a new realtime solution.&lt;/p&gt;

&lt;p&gt;To ensure a consistent user experience, the company switched to a cloud-native distributed infrastructure that could handle their need to reliably serve customers – regardless of their location. By leveraging a WebSocket-based realtime data streaming solution, Genius Sports is now able to deliver on the high expectations of both its B2B clients and the end-users who rely on their ultra-low latency data delivery without the headache of having to manage and scale the realtime infrastructure​.&lt;/p&gt;

&lt;h2&gt;
  
  
  Responding to the growing competition for fan engagement
&lt;/h2&gt;

&lt;p&gt;As the sports betting market becomes more competitive, betting companies are incorporating innovative ways to deliver realtime experiences to keep their customers ‘on-platform’. This means investing in infrastructure that can not only handle the fast-paced nature of betting and provide instant, low latency, updates but also enable the creation of interactive experiences.&lt;/p&gt;

&lt;p&gt;Sportsbet faced a unique challenge with its innovative “Bet With Mates” feature, a product allowing friends to pool resources and bet together. A critical component was a chat feature that mirrored the functionality and speed of modern messaging applications like WhatsApp. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74et90oa8icr03knli30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74et90oa8icr03knli30.png" alt="Bet with Mates from Sportsbet" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sportsbet’s existing infrastructure could not deliver the rich user experience their customers expected, especially for features like realtime updates, reactions, and comments. As well as latency needs, Sportsbet also had very stringent security and data handling requirements. For Sportsbet, opting for a cloud-native realtime platform that not only handled the underlying infrastructure and latency complexities but also made building their chat solution easy was game changing. The solution saved Sportsbet valuable time and effort compared to building even a basic implementation from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low latency, high stakes
&lt;/h2&gt;

&lt;p&gt;It is clear that the sports betting industry is at an inflection point. As regulatory changes continue to unlock new opportunities, companies must rise to the challenge of delivering low latency experiences at scale. Whether it’s B2B providers like Genius Sports ensuring data consistency or B2C platforms like Sportsbet creating seamless user experiences, the ability to operate in realtime, no matter where and when, will define success.&lt;/p&gt;

</description>
      <category>performance</category>
      <category>latency</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How Sportsbet handles 4.5M daily chat messages on its 'Bet With Mates' platform</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Fri, 20 Dec 2024 12:35:55 +0000</pubDate>
      <link>https://dev.to/ably/how-sportsbet-handles-45m-daily-chat-messages-on-its-bet-with-mates-platform-328j</link>
      <guid>https://dev.to/ably/how-sportsbet-handles-45m-daily-chat-messages-on-its-bet-with-mates-platform-328j</guid>
      <description>&lt;p&gt;Sportsbet is a leader in the Australian wagering market. Through their best-in-class platform, which includes ‘Bet With Mates’, they bring excitement to life for sports and racing enthusiasts - replicating the experience of punting with friends in the pub, no matter where they are.&lt;/p&gt;

&lt;p&gt;But after launching 'Bet With Mates', Sportsbet customers were still off the platform (second-screening) to talk about their bets and banter with each other in WhatsApp and other chat apps. Sportsbet wanted to introduce that functionality into ‘Bet With Mates’ and provide everything their customers needed without ever having to leave the platform.&lt;/p&gt;

&lt;p&gt;Sportsbet needed a chat feature that met their customers’ high expectations of messaging applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Selecting the right chat solution for 'Bet With Mates'
&lt;/h2&gt;

&lt;p&gt;When deciding which solution to use for the 'Bet With Mates' chat, there were a few key criteria.&lt;/p&gt;

&lt;p&gt;It had to be feature-rich including reaction and reply functionality and also update in realtime. &lt;/p&gt;

&lt;p&gt;As an extremely event-driven business with huge traffic spikes during major events like the Australian Football League, National Rugby League finals, and the Melbourne Cup, the solution needed to be highly performant and scalable. &lt;/p&gt;

&lt;p&gt;It had to demonstrate great frontend performance figures, integrate well into Sportsbet’s build pipeline, and be future-proofed for other realtime use cases that developers were planning. &lt;/p&gt;

&lt;p&gt;As well as latency needs, Sportsbet also had very stringent security and data handling requirements, so the solution needed to be hosted within Australia on a dedicated cluster.&lt;/p&gt;

&lt;p&gt;And finally, the team decided they needed to deliver the new product in less than four months so that it would be live and in customer hands before the next AFL season launch! &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Sportsbet chose Ably
&lt;/h2&gt;

&lt;p&gt;Based on their requirements, Sportsbet decided to move ahead with Ably after comparing them to other providers. Some key benefits for Sportsbet included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Great customer support:&lt;/strong&gt; Early assessments of Ably’s documentation encouraged Sportsbet to quickly move to build a proof of concept. This involved dedicated support from the Ably team to consult on requirements, providing access to SDKs for their chosen tech stack, and creating a sandbox environment to conduct some integration testing and analysis. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy to get started:&lt;/strong&gt; Sportsbet started out building a prototype using Ably React Hooks with their existing react client and were impressed with how quickly they could get a basic chat feature going without having to build services. They then added some components that published events including bet placements and group activity as well as features like reactions and comments in the same message stream.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support for data security:&lt;/strong&gt; Ably rapidly spun up a new dedicated cluster within Australia specifically for Sportsbet, which removed another potential barrier to being ready for the AFL season. Ably’s SAML integration also enabled Sportsbet to plug into their existing SSO system in record time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sportsbet + Ably: The results
&lt;/h2&gt;

&lt;p&gt;The ‘Bet With Mates’ chat feature has proven a hit with fans, contributing to the organic growth of the overall ‘Bet with Mates’ platform. It has also proven sticky – customers who use ‘Bet With Mates’ Chat use it regularly. &lt;/p&gt;

&lt;p&gt;Sportsbet put this success down to Ably’s unwavering reliability when it comes to performance and message delivery. They also reported that autoscaling has performed flawlessly without any incidents of concern in over a year, even on high traffic days. Peak figures for these high traffic periods have reached around 4.5 million published messages a day. &lt;/p&gt;

&lt;p&gt;Reflecting on the success of the project and relationship with Ably, Andy commented:&lt;/p&gt;

&lt;p&gt;“By choosing to partner with Ably, we were able to deliver a high quality outcome in a frankly impressive timeframe, and free up our delivery teams earlier to focus on other initiatives. It’s a testament to the strength of Ably’s offering how much of our time with them is spent discussing other potential use cases rather than the current implementation.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Ably: The definitive realtime experience platform. Built for scale.
&lt;/h2&gt;

&lt;p&gt;Sportsbet is one of the thousands of companies that depend on Ably to power realtime experiences for billions of people - including live updates, chat, collaboration, notifications and fan engagement. Reliably, securely and at serious scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why choose Ably?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;99.999% uptime SLA:&lt;/strong&gt; We guarantee 5x9s of uptime, but consistently do better. We've had 100% uptime for 5+ years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No scale ceiling:&lt;/strong&gt; Ably handles massive amounts of data throughput and concurrent connections without SREs breaking into a sweat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong data integrity:&lt;/strong&gt; Guaranteed data ordering, delivery, and exactly-once semantics. Even under unreliable network conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Almost-infinite elasticity:&lt;/strong&gt; Bursty connection traffic? Ably seamlessly and automatically absorbs millions of concurrent connections arriving at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composable realtime:&lt;/strong&gt; Our range of application building blocks and integrations enable developers to create the live experiences users and businesses demand. From live chat to data broadcast, and collaborative UXs to notifications, our SDKs unlock innovation - with no infrastructure to build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer-first pricing, affordable at scale:&lt;/strong&gt; Ably's pricing offers per-minute billing, consumption-based pricing, and volume-based discounts to keep you ROI positive, as you scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information, &lt;a href="https://hubs.la/Q0301ltG0" rel="noopener noreferrer"&gt;read our docs&lt;/a&gt;, or &lt;a href="https://hubs.la/Q0301lx50" rel="noopener noreferrer"&gt;sign up for free&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>webperf</category>
      <category>performance</category>
      <category>chat</category>
    </item>
    <item>
      <title>How Mentimeter deliver reliable live experiences at scale</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Thu, 19 Dec 2024 16:51:00 +0000</pubDate>
      <link>https://dev.to/ably/how-mentimeter-deliver-reliable-live-experiences-at-scale-25fg</link>
      <guid>https://dev.to/ably/how-mentimeter-deliver-reliable-live-experiences-at-scale-25fg</guid>
      <description>&lt;p&gt;There are no second chances when it comes to live events: Mentimeter's solutions have to work perfectly every time. Their audience engagement features must be accessible via mobile devices without user sign-in. They must also be fast. And they need to scale effortlessly to cope with huge spikes in demand; a single event can drive connections from zero to 70,000+ participants in a matter of seconds.&lt;/p&gt;

&lt;p&gt;Mentimeter's engineers designed their systems to cope with those exacting demands, but as the business experienced rapid growth, its realtime infrastructure provider struggled to keep pace. The platform's performance at scale started to suffer.&lt;/p&gt;

&lt;p&gt;Eventually, a tipping point came when a spike of a relatively small number of concurrent connections – ~35,000 – caused part of their realtime system provider's network to crash. Luckily, Mentimeter had fallback solutions in place for realtime communication so their services continued to operate, but with a degraded user experience for several hours. They knew it was time to find an alternative solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Selecting the right solution for growth-ready realtime
&lt;/h2&gt;

&lt;p&gt;Johan Bengtsson, Mentimeter CTO, swiftly dismissed the possibility of building realtime infrastructure in-house due to the time and cost to build, along with significant ongoing operating costs and engineering burden. His team set out to identify a new partner that could meet Mentimeter's exacting demands while supporting all Mentimeter's current and future use cases.&lt;/p&gt;

&lt;p&gt;Any new realtime provider needed seamless integration with Mentimeter's React front end, Ruby and Node.js backend, and AWS Kinesis for analytics. Performance, scalability, and reliability were essential requirements, particularly as Mentimeter had ambitions to raise its concurrent connections limit and cater for events of up to 150,000 participants or more. What's more, a new provider needed to provide Bengtsson with scope to innovate in bringing new features to the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Mentimeter chose Ably
&lt;/h2&gt;

&lt;p&gt;Mentimeter identified Ably as the ideal solution based on a market comparison exercise and a recommendation from &lt;a href="https://hubs.la/Q0301k_10" rel="noopener noreferrer"&gt;existing Ably customer Split.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;According to Bengtsson: "Ably is very transparent about engineering for reliability and scalability with its Four Pillars of Dependability. Along with a solid five-nines SLA and clear pricing, that gave me a lot of confidence. I also felt that Ably wanted to be more than a supplier. There was a real sense that it would be a true partner that would work closely with us, listening and responding to our needs and supporting our innovation roadmap."&lt;/p&gt;

&lt;p&gt;Bengtsson was able to get a proof of concept up and running within two hours. Even though Mentimeter's developer team was stretched at the time, migration to Ably was complete inside a month, thanks to expert and fast support and a wealth of documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mentimeter + Ably: The results
&lt;/h2&gt;

&lt;p&gt;The decision to move to Ably has been crucial for Mentimeter to deliver a consistent, high-quality customer experience. Bengtsson and his team are confident they can rely on its services to help them deliver business value through improved communication and audience engagement.&lt;/p&gt;

&lt;p&gt;First and foremost, Ably's meticulous design for elastic scale and high availability has solved the scalability and reliability issues that had previously hampered the platform.&lt;/p&gt;

&lt;p&gt;Bengtsson explains: "Ably has been super reliable, a part of our stack I know I can depend on. It copes easily with big loads generated by multiple presentations happening simultaneously across the globe. We put the reliability of presentation experience front and center. If Ably had issues, we'd know about them, but now we don't have to worry about stability, even when we get huge traffic spikes. Ably is a key partner for us, one that we refer to as 'Enably' because it allows us to innovate at pace to elevate our core proposition. In fact, we're so confident now, we're looking to triple our concurrent connections limit to 150,000."&lt;/p&gt;

&lt;p&gt;Mentimeter has used Ably to innovate as it seeks to enhance and extend the value its platform delivers to customers. For instance, the Mentimeter team builds upon the Ably presence feature to ensure customers collaborating on interactive presentations can see who is online and carrying out edits. Before adopting Ably, this was functionality the team was going to build in-house.&lt;/p&gt;

&lt;p&gt;Bengtsson said: "We're delighted with the range of capabilities Ably gives us, while the support and documentation have helped to spark ideas at our end. The limitations with leaderboard scaling were hurting us in terms of our ability to innovate the core product. With Ably, those limits are a thing of the past."&lt;/p&gt;

&lt;p&gt;Finally, moving to Ably has helped Mentimeter find efficiencies, spend less time on maintenance and realtime incidents, and allow its engineers to develop the core platform.&lt;/p&gt;

&lt;p&gt;"It's part of our ethos that if there is a service providing the capabilities we need, we'll use it rather than trying to build and maintain ourselves," Bengtsson explained. "With Ably in place, our engineers can focus on what they are good at. They don't have to worry about realtime any more, which makes for a happier, more productive team."&lt;/p&gt;

&lt;h2&gt;
  
  
  Ably: The definitive realtime experience platform. Built for scale.
&lt;/h2&gt;

&lt;p&gt;Mentimeter is one of the thousands of companies that depend on Ably to power realtime experiences for billions of people - including live updates, chat, collaboration, notifications and fan engagement. Reliably, securely and at serious scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why choose Ably?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;99.999% uptime SLA:&lt;/strong&gt; We guarantee 5x9s of uptime, but consistently do better. We've had 100% uptime for 5+ years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No scale ceiling:&lt;/strong&gt; Ably handles massive amounts of data throughput and concurrent connections without SREs breaking into a sweat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong data integrity:&lt;/strong&gt; Guaranteed data ordering, delivery, and exactly-once semantics. Even under unreliable network conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Almost-infinite elasticity:&lt;/strong&gt; Bursty connection traffic? Ably seamlessly and automatically absorbs millions of concurrent connections arriving at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composable realtime:&lt;/strong&gt; Our range of application building blocks and integrations enable developers to create the live experiences users and businesses demand. From live chat to data broadcast, and collaborative UXs to notifications, our SDKs unlock innovation - with no infrastructure to build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer-first pricing, affordable at scale:&lt;/strong&gt; Ably's pricing offers per-minute billing, consumption-based pricing, and volume-based discounts to keep you ROI positive, as you scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information, &lt;a href="https://hubs.la/Q0301ltG0" rel="noopener noreferrer"&gt;read our docs&lt;/a&gt;, or &lt;a href="https://hubs.la/Q0301lx50" rel="noopener noreferrer"&gt;sign up for free&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>architecture</category>
      <category>webperf</category>
      <category>performance</category>
    </item>
    <item>
      <title>Scaling PubSub with WebSockets and Redis</title>
      <dc:creator>Steven Lindsay</dc:creator>
      <pubDate>Thu, 19 Dec 2024 14:24:24 +0000</pubDate>
      <link>https://dev.to/ably/scaling-pubsub-with-websockets-and-redis-5b2c</link>
      <guid>https://dev.to/ably/scaling-pubsub-with-websockets-and-redis-5b2c</guid>
<description>&lt;p&gt;There is increasing demand for realtime data delivery as users expect faster experiences and instantaneous transactions. This means not only is lower latency required per message, but providers must also handle far greater capacity in a more globally-distributed way.&lt;/p&gt;

&lt;p&gt;When building realtime applications, the right tools for the job address these requirements at scale. WebSocket has become the dominant protocol for realtime web applications and has widespread browser support. It offers a persistent, full-duplex connection to meet low latency requirements. Publish/subscribe (pub/sub) has been around even longer, and enables providers to scale data transmission systems dynamically, something that is crucial to meeting these new capacity needs. While pub/sub itself is just an architectural pattern, tools like Redis have built out pub/sub functionality to make it easier to develop and deploy a scalable pub/sub system.&lt;/p&gt;

&lt;p&gt;This article details how to build a simple pub/sub service with these components (WebSockets and Redis). It also discusses the particular technical challenges of ensuring a smooth realtime service at scale, the architectural and maintenance considerations needed to keep it running, and how a realtime service provider could simplify these problems and your code.&lt;/p&gt;

&lt;p&gt;First, let’s briefly define the fundamental components at play.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is pub/sub?
&lt;/h2&gt;

&lt;p&gt;The core of pub/sub is messaging. Messages are discrete packets of data sent between systems. On top of this sit channels (or topics), which apply filtering semantics to messages; publishers, the clients that create messages; and subscribers, the clients that consume them. A client can publish and consume messages on the same channel, or across many channels.&lt;/p&gt;

&lt;p&gt;Because of this, pub/sub can be used to support a number of different patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One-to-one:&lt;/strong&gt; Two clients that both need to subscribe and publish messages to the same channel. This is common in support chat use cases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-to-many:&lt;/strong&gt; Many clients may receive information from a centralized source, as with dashboard notifications.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Many-to-one:&lt;/strong&gt; Many clients publish messages to one channel; for example, in a centralized logging system, all logs of a certain tag are routed to one repository.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Many-to-many:&lt;/strong&gt; Multiple clients send messages to all members of the group – such as a group chat or presence feature.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pub/sub pattern allows clients to send and receive messages via a message broker, without a direct connection between senders and recipients. It promotes an async and decoupled design, which lends itself to building scalable, distributed systems that can handle concurrent traffic for millions of connected clients. However, in practice, building a pub/sub system like this requires a reliable and performant data store.&lt;/p&gt;
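&lt;p&gt;To make the pattern concrete, here is a minimal in-memory sketch in TypeScript (the language the tutorial below uses). It is illustrative only – the &lt;code&gt;Broker&lt;/code&gt; class and its method names are our own invention, and a real broker like Redis adds network transport, persistence, and clustering on top of this idea.&lt;/p&gt;

```typescript
// Minimal in-memory pub/sub broker: each channel maps to a set of subscriber callbacks.
type Handler = (message: string) => void;

class Broker {
  private channels = new Map<string, Set<Handler>>();

  // Register a handler on a channel; returns an unsubscribe function.
  subscribe(channel: string, handler: Handler): () => void {
    if (!this.channels.has(channel)) this.channels.set(channel, new Set());
    this.channels.get(channel)!.add(handler);
    return () => this.channels.get(channel)?.delete(handler);
  }

  // Fan a message out to every subscriber of the channel.
  publish(channel: string, message: string): number {
    const subs = this.channels.get(channel) ?? new Set<Handler>();
    for (const handler of subs) handler(message);
    return subs.size; // how many subscribers were reached
  }
}

// One-to-many: two dashboards receive the same notification.
const broker = new Broker();
const received: string[] = [];
broker.subscribe("notifications", (m) => received.push(`dash1: ${m}`));
const unsub = broker.subscribe("notifications", (m) => received.push(`dash2: ${m}`));
broker.publish("notifications", "odds updated");
unsub(); // dash2 disconnects
broker.publish("notifications", "match ended");
console.log(received);
// ["dash1: odds updated", "dash2: odds updated", "dash1: match ended"]
```

&lt;p&gt;Note that publishers know nothing about who (if anyone) is subscribed – that decoupling is what lets the two sides scale independently.&lt;/p&gt;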

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl45ziev8s70bs84xau6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl45ziev8s70bs84xau6s.png" alt="pubsub-diagram" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Redis?
&lt;/h2&gt;

&lt;p&gt;Redis is a highly performant in-memory key-value store, though it can also persist data to non-volatile storage (disk). It is capable of processing millions of operations per second, making it ideal for systems with high throughput. Its feature set includes pub/sub, allowing clients to publish messages to a channel, which are then broadcast to all subscribers of that channel in realtime.&lt;/p&gt;
&lt;h2&gt;
  
  
  What are WebSockets?
&lt;/h2&gt;

&lt;p&gt;WebSockets are a protocol for bidirectional communication, and are a great choice where low latency and persistent connectivity are required. In contrast to something like HTTP long polling, which repeatedly queries the server for updates, WebSockets maintain a continuous connection between the client and server, so that messages flow in both directions as and when they occur. Because of this, WebSockets can be ideal for applications like chat systems, live notifications, or collaborative tools. Having a persistent connection removes the need to poll a server for updates, which in turn reduces latency, a key metric in realtime applications.&lt;/p&gt;

&lt;p&gt;They are commonly used for communication between a web client (typically a browser) and a backend server. The persistent connection is formed over &lt;a href="https://www.cloudflare.com/en-gb/learning/ddos/glossary/tcp-ip/" rel="noopener noreferrer"&gt;TCP&lt;/a&gt;.&lt;/p&gt;
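&lt;p&gt;Under the hood, that persistent connection starts life as an HTTP request that asks to upgrade. The server accepts by hashing the client’s &lt;code&gt;Sec-WebSocket-Key&lt;/code&gt; with a GUID fixed by RFC 6455 and echoing the result back in the &lt;code&gt;Sec-WebSocket-Accept&lt;/code&gt; header. As a sketch, here is that accept-key derivation in TypeScript using Node’s built-in crypto – in practice a library such as Gorilla-WebSocket or Express-ws handles this for you:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// RFC 6455: the server concatenates the client's Sec-WebSocket-Key with this
// fixed GUID, SHA-1 hashes the result, and returns it base64-encoded in the
// Sec-WebSocket-Accept header to complete the upgrade handshake.
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

function acceptKey(clientKey: string): string {
  return createHash("sha1").update(clientKey + WS_GUID).digest("base64");
}

// Example key taken from RFC 6455 itself:
const accept = acceptKey("dGhlIHNhbXBsZSBub25jZQ==");
console.log(accept); // "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

&lt;p&gt;Once the handshake completes, the TCP connection is handed over to the WebSocket framing protocol and both sides can send messages at any time.&lt;/p&gt;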
&lt;h2&gt;
  
  
  A tutorial: A simple pub/sub service
&lt;/h2&gt;

&lt;p&gt;Now, let’s see how a simple realtime pub/sub service comes together with Redis and WebSockets, where multiple clients can subscribe to a channel and receive channel messages.  &lt;/p&gt;

&lt;p&gt;A typical architecture consists of a &lt;strong&gt;WebSocket server&lt;/strong&gt; for handling client connections, backed by &lt;strong&gt;Redis&lt;/strong&gt; as the pub/sub layer for distributing new messages. A &lt;strong&gt;load balancer&lt;/strong&gt; like &lt;a href="https://nginx.org/" rel="noopener noreferrer"&gt;NGINX&lt;/a&gt; or &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;AWS ALB&lt;/a&gt; is used to handle incoming WebSocket connections and route them across multiple server instances; this is key to distributing load on our service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Autoscaling&lt;/strong&gt; can be used to dynamically adjust our servers to match demand, maintaining performance while reducing cost during quiet periods. To do this, we would need our WebSocket servers to remain stateless.&lt;/p&gt;
&lt;h3&gt;
  
  
  Breakdown of components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket server&lt;/strong&gt; - Clients connect to this server to receive updates from a channel. The server subscribes to Redis channels and forwards messages it receives through the pub/sub subscription. We can build this with something like &lt;a href="https://github.com/gorilla/websocket" rel="noopener noreferrer"&gt;&lt;strong&gt;Gorilla-WebSocket&lt;/strong&gt;&lt;/a&gt; or &lt;a href="https://github.com/HenningM/express-ws" rel="noopener noreferrer"&gt;&lt;strong&gt;Express-ws&lt;/strong&gt;&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis as a pub/sub layer&lt;/strong&gt; - The distribution service between publishers (backend services) and subscribers (our WebSocket servers). For simplicity, we will run a single instance of Redis. In production this will likely need to be clustered to avoid becoming a bottleneck.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load balancer (NGINX)&lt;/strong&gt; -  Distributes incoming WebSocket connections across multiple WebSocket server instances. It could do this naively in a round-robin fashion, or use some other method like IP hashing, though care should be taken to avoid creating hotspots.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autoscaler&lt;/strong&gt; - Traffic will almost certainly fluctuate, so an autoscaler adds or removes WebSocket servers in response to load. We can implement this with Kubernetes, which can monitor metrics and respond accordingly.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and logging -&lt;/strong&gt;  Monitoring tools like Prometheus and Grafana track system performance, latency, and message delivery success rates. This will allow us to see how our system is operating and catch any errors that occur. It’s a good idea to configure some alerting on these metrics too.&lt;/li&gt;
&lt;/ul&gt;
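&lt;p&gt;As a concrete example of the routing choice above, NGINX load balances round-robin by default; switching to IP hashing for sticky routing is a single directive in the upstream block. A minimal sketch (the server addresses are placeholders):&lt;/p&gt;

```nginx
upstream websocket_backend {
    ip_hash;                  # route each client IP to the same backend server
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}
```

&lt;p&gt;Note that IP hashing can create hotspots when many clients sit behind the same NAT, which is exactly the care the list above calls for.&lt;/p&gt;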

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrkr4go12sqm7gkbdwm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrkr4go12sqm7gkbdwm1.png" alt="Redis-pubsub-architecture" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will write the server in &lt;a href="https://www.typescriptlang.org/" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt; using &lt;a href="https://nodejs.org/en" rel="noopener noreferrer"&gt;Node&lt;/a&gt; as our runtime environment. We won’t cover the setup and usage of autoscaling, but more information on this topic can be found &lt;a href="https://ably.com/topic/when-and-how-to-load-balance-websockets-at-scale" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Set up NGINX
&lt;/h3&gt;

&lt;p&gt;Install NGINX and get it running. On macOS or Linux, we can do this with a package manager like &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;Homebrew&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install nginx &amp;amp;&amp;amp; brew services start nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to configure it to route connections, forward requests to the server, and ensure NGINX &lt;a href="https://nginx.org/en/docs/http/websocket.html" rel="noopener noreferrer"&gt;handles the WebSocket requests correctly&lt;/a&gt;, something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Define the upstream WebSocket server
   upstream websocket_backend {
       server 127.0.0.1:8080; # Your WebSocket server address
   }
location /ws {
           # Proxy WebSocket traffic to the backend
           proxy_pass http://websocket_backend;
           # Required for WebSocket
           proxy_http_version 1.1;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection "Upgrade";
           # Pass the original Host header
           proxy_set_header Host $host;
           # Timeouts for WebSocket
           proxy_read_timeout 300s;
           proxy_send_timeout 300s;
       }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Set up Redis
&lt;/h3&gt;

&lt;p&gt;Install &lt;a href="https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt; and get it running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
brew install redis &amp;amp;&amp;amp; brew services start redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should now have a local instance available at &lt;code&gt;localhost&lt;/code&gt;. &lt;strong&gt;Note&lt;/strong&gt;: this runs Redis with no authentication, which is not an advisable setup for production.&lt;/p&gt;
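&lt;p&gt;As a first hardening step for anything beyond local testing, Redis can require clients to authenticate via &lt;code&gt;requirepass&lt;/code&gt; in &lt;code&gt;redis.conf&lt;/code&gt;. A minimal sketch (the password is a placeholder, and a full production setup needs more than this):&lt;/p&gt;

```
# redis.conf: require clients to AUTH before issuing commands
requirepass change-me-to-a-strong-secret
```

&lt;p&gt;With ioredis, the client then supplies the password via the &lt;code&gt;password&lt;/code&gt; option when constructing the &lt;code&gt;Redis&lt;/code&gt; client.&lt;/p&gt;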

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Set up the server&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now we need to write the server code to handle client requests, integrate with our Redis instance, and accept WebSocket connections. The Node ecosystem has a vast range of packages that we can use, such as &lt;a href="https://www.npmjs.com/package/ws" rel="noopener noreferrer"&gt;ws&lt;/a&gt;, which provides WebSocket client and server capabilities. ws implements the WebSocket protocol and abstracts away a lot of the details like handshakes, framing and connection management, so we get a nice high-level API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import WebSocket, { WebSocketServer } from 'ws';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also need the &lt;a href="https://www.npmjs.com/package/ioredis" rel="noopener noreferrer"&gt;ioredis&lt;/a&gt; package, which provides a fully-featured client to manage our connections to Redis.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import Redis from 'ioredis';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s create a basic application that will do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accept client requests and upgrade them to WebSockets
&lt;/li&gt;
&lt;li&gt;Provide simple handling of the WebSocket and Channel lifecycle
&lt;/li&gt;
&lt;li&gt;Store the new connections so we can send messages as needed
&lt;/li&gt;
&lt;li&gt;Connect to Redis and subscribe to a specified channel
&lt;/li&gt;
&lt;li&gt;When a new message is received on the channel, iterate through all subscribed clients and forward the message
&lt;/li&gt;
&lt;li&gt;Remove unused Redis connections&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// Maps to track channel subscriptions
const redisClients: Map&amp;lt;string, Redis&amp;gt; = new Map(); // Redis client per channel


// Maps to track WebSocket connections
const channelSubscribers: Map&amp;lt;string, Set&amp;lt;WebSocket&amp;gt;&amp;gt; = new Map(); // WebSocket clients per channel


// WebSocket Server
const wss = new WebSocketServer({ port: 3000 });
console.log('WebSocket server is listening on ws://localhost:3000');


wss.on('connection', (ws: WebSocket) =&amp;gt; {
   console.log('New WebSocket connection established');


   ws.on('message', (message: string) =&amp;gt; {
       try {
           const data = JSON.parse(message);


           // Validate the message format
           if (data.action === 'subscribe' &amp;amp;&amp;amp; typeof data.channel === 'string') {
               const channel = data.channel;


               // Ensure a Redis client exists for this channel
               if (!redisClients.has(channel)) {
                   const redisClient = new Redis();
                   redisClients.set(channel, redisClient);


                   // Attempt to subscribe to the Redis channel
                   redisClient.subscribe(channel, (err) =&amp;gt; {
                       if (err) {
                           console.error(`Failed to subscribe to Redis channel ${channel}:`, err);
                           ws.send(JSON.stringify({ error: `Failed to subscribe to channel ${channel}` }));
                       } else {
                           console.log(`Subscribed to Redis channel: ${channel}`);
                       }
                   });


                   // Handle incoming messages from the Redis channel
                   redisClient.on('message', (chan, message) =&amp;gt; {
                       if (channelSubscribers.has(chan)) {
                           channelSubscribers.get(chan)?.forEach((client) =&amp;gt; {
                               if (client.readyState === WebSocket.OPEN) {
                                   client.send(JSON.stringify({ channel: chan, message }));
                               }
                           });
                       }
                   });
               }


               // Add this WebSocket client to the channel's subscribers
               if (!channelSubscribers.has(channel)) {
                   channelSubscribers.set(channel, new Set());
               }
               channelSubscribers.get(channel)?.add(ws);
               console.log(`WebSocket subscribed to channel: ${channel}`);
           } else {
               ws.send(JSON.stringify({ error: 'Invalid action or channel name' }));
           }
       } catch (err) {
           console.error('Error processing WebSocket message:', err);
           ws.send(JSON.stringify({ error: 'Invalid message format' }));
       }
   });


   ws.on('close', () =&amp;gt; {
       console.log('WebSocket connection closed');


       // Remove this WebSocket from all subscribed channels
       channelSubscribers.forEach((subscribers, channel) =&amp;gt; {
           subscribers.delete(ws);


           // If no more subscribers for the channel, clean up Redis client
           if (subscribers.size === 0) {
               redisClients.get(channel)?.quit();
               redisClients.delete(channel);
               channelSubscribers.delete(channel);
               console.log(`No more subscribers; unsubscribed and cleaned up Redis client for channel: ${channel}`);
           }
       });
   });


   ws.on('error', (err) =&amp;gt; {
       console.error('WebSocket error:', err);
   });
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
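&lt;p&gt;The message validation buried in the handler above is a good candidate for a small pure function, which makes it unit-testable in isolation. A minimal sketch (&lt;code&gt;parseSubscribe&lt;/code&gt; is our own helper name, not part of ws):&lt;/p&gt;

```typescript
// Parse a raw client message and return the channel name if it is a
// well-formed subscribe request, or null otherwise.
function parseSubscribe(raw: string): string | null {
  try {
    const data = JSON.parse(raw);
    if (data && data.action === "subscribe" && typeof data.channel === "string" && data.channel.length > 0) {
      return data.channel;
    }
    return null;
  } catch {
    return null; // not valid JSON
  }
}
```

&lt;p&gt;The handler can then reduce to: parse, reply with an error on &lt;code&gt;null&lt;/code&gt;, otherwise subscribe.&lt;/p&gt;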



&lt;h3&gt;
  
  
  &lt;strong&gt;4. Send some messages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Produce some messages on the channel so that the server can begin sending the data on to connected clients. Here, as an example, we'll query for telemetry data of the International Space Station (ISS) each second, then publish this to our channel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import axios from 'axios';
import Redis from 'ioredis';

const telemetryUrl = 'http://api.open-notify.org/iss-now.json'; 

const redisChannel = 'iss.telemetry'; // Redis channel name
const publishInterval = 1000; // Interval in milliseconds

// Create a Redis client
const redis = new Redis();

// Function to query URL and publish to Redis
async function queryAndPublish() {
   try {
       console.log(`Fetching data from ${telemetryUrl}...`);

       // Query the URL
       const response = await axios.get(telemetryUrl);

       // Get the response payload
       const payload = response.data;

       // Publish the payload to the Redis channel
       const payloadString = JSON.stringify(payload);
       await redis.publish(redisChannel, payloadString);

       console.log(`Published data to Redis channel "${redisChannel}":`, payloadString);
   } catch (error) {
       console.error('Error publishing data:', error);
   } finally {
       // Schedule the next query and publish task
       setTimeout(queryAndPublish, publishInterval);
   }
}

// Start the query and publish loop
queryAndPublish()

// Handle graceful shutdown
process.on('SIGINT', () =&amp;gt; {
   console.log('Shutting down...');
   redis.disconnect();
   process.exit();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;5. Create the client&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Construct a simple client that will point to our NGINX server and establish a WebSocket connection. It should begin receiving messages published to the channel. We have kept things simple here, but a production solution would need to handle things like heartbeats, retry mechanics, failover, authentication and more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import WebSocket from 'ws';

const serverUrl = 'ws://localhost/ws';
const channelName = 'iss.telemetry';

// Create a new WebSocket connection
const ws = new WebSocket(serverUrl);

// Handle the connection open event
ws.on('open', () =&amp;gt; {
   console.log('Connected to WebSocket server');

   // Send a subscription request for the specified channel
   const subscriptionMessage = JSON.stringify({ action: 'subscribe', channel: channelName });
   ws.send(subscriptionMessage);
   console.log(`Subscribed to channel: ${channelName}`);
});

// Handle incoming messages
ws.on('message', (message) =&amp;gt; {
   console.log(`Message received on channel ${channelName}:`, message.toString());
});

// Handle connection close event
ws.on('close', () =&amp;gt; {
   console.log('WebSocket connection closed');
});

// Handle errors
ws.on('error', (error) =&amp;gt; {
   console.error('WebSocket error:', error);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
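&lt;p&gt;The retry mechanics mentioned above usually start with exponential backoff: each failed reconnect attempt waits roughly twice as long as the last, up to a cap. A minimal sketch (the defaults are illustrative, not recommendations):&lt;/p&gt;

```typescript
// Delay in milliseconds before reconnect attempt `attempt` (0-based),
// doubling from baseMs up to capMs.
function backoffDelay(attempt: number, baseMs: number = 1000, capMs: number = 30000): number {
  return Math.min(capMs, baseMs * Math.pow(2, attempt));
}
```

&lt;p&gt;Production clients typically also add random jitter to each delay, so that many clients don’t reconnect in lockstep after an outage.&lt;/p&gt;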



&lt;p&gt;After starting up the server, publisher, and client, we should see data being printed out to the standard output (stdout).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Challenges of a Redis + WebSockets build at scale&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The code we’ve written is a basic implementation, but at scale in production, we will have to account for additional complexities. There are some issues with using Redis at scale, including:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Message persistence&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In Redis pub/sub, messages are not persisted and are lost if no subscriber is listening (fire-and-forget). For some realtime use cases, like chat, this is likely not acceptable.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Horizontal scaling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A single instance of Redis is insufficient at scale, and there is a limit to how far it can be vertically scaled, so a horizontally-scaled infrastructure is going to be your best option.&lt;br&gt;&lt;br&gt;
Redis &lt;em&gt;can&lt;/em&gt; be deployed in a clustered mode for high availability and scalability. This essentially turns your single Redis instance into a network of interconnected Redis nodes across which data is distributed. It allows for potentially much higher throughput and greater numbers of subscribers: each node takes a portion of the data (through sharding), and this can be managed automatically to distribute the workload. But clustering comes with its own complexities, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Needing active management&lt;/strong&gt; to ensure even data distribution and cluster health - not specific to Redis, but general distributed systems issues, including:

&lt;ul&gt;
&lt;li&gt;Failover mechanisms when a node goes down
&lt;/li&gt;
&lt;li&gt;Manual node provisioning in case there are no replica sets
&lt;/li&gt;
&lt;li&gt;Robust monitoring to catch network issues
&lt;/li&gt;
&lt;li&gt;Increased networking overheads for cross-node communication
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node discovery&lt;/strong&gt;, leader election and split brain (where a network partition leaves multiple nodes each believing they are the leader, and the cluster must work out which nodes to still consider active). Redis Cluster uses a gossip protocol to share cluster state, and quorum-based voting for leader election, which helps prevent split brain.

&lt;ul&gt;
&lt;li&gt;There should be an odd number of nodes in a Redis Cluster setup, so that if a network partition occurs, a quorum can still be reached.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of strong consistency guarantees&lt;/strong&gt; - Redis Cluster prioritises throughput and offers only eventual consistency, which means you can face data loss in some cases. You could use something like &lt;a href="https://github.com/redislabs/redisraft" rel="noopener noreferrer"&gt;redisraft&lt;/a&gt; (experimental), which implements the &lt;a href="https://raft.github.io/" rel="noopener noreferrer"&gt;Raft&lt;/a&gt; consensus algorithm, but this writes to only a single leader at a time, creating a bottleneck for write-heavy workloads. (And Raft does not account for &lt;a href="https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/" rel="noopener noreferrer"&gt;Byzantine failures&lt;/a&gt;, though these are rare in practice.)&lt;/li&gt;
&lt;/ul&gt;
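&lt;p&gt;The sharding idea behind clustering can be illustrated with a toy slot-assignment function. Redis Cluster actually hashes keys with CRC16 into 16384 slots; this sketch only shows the principle of deterministic placement:&lt;/p&gt;

```typescript
// Toy sharding: map a channel key deterministically to one of N nodes.
function slotFor(key: string, nodeCount: number): number {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) >>> 0; // simple rolling hash
  }
  return h % nodeCount;
}
```

&lt;p&gt;Because every node computes the same answer for the same key, publishers and subscribers for a channel always meet at the same node, with no central lookup table.&lt;/p&gt;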

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F210oci82bb9z2feazbdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F210oci82bb9z2feazbdl.png" alt="leader-follower-nodes" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Disaster recovery and geo distribution&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For realtime use cases at scale, a globally-distributed system is non-negotiable. For example, what happens if our single datacenter fails? Or what if customers on the other side of the world require the lowest latency possible? Forcing a request between geographic regions would incur unacceptable latency penalties. Both problems would require us to deploy our cluster across multiple regions.&lt;/p&gt;

&lt;p&gt;A Redis cluster can be globally distributed - in the case of AWS, this would mean running across multiple regions and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html" rel="noopener noreferrer"&gt;availability zones&lt;/a&gt;. But this introduces more management and maintenance complexities on top of those we’ve already accumulated for horizontal scaling, like the need to ensure correct sharding, manage reconciliation of concurrent writes across regions, route traffic in the case of a network partition, and rebalance to avoid hot nodes.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Challenges of WebSockets as a protocol&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;WebSockets on their own have potential issues to consider in the context of building out a realtime service:&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;WebSocket libraries provide the foundations to build a WebSocket service, but they aren’t a solution we can use straight out of the box. For example, how do you handle routing requests when your server is becoming overloaded? What monitoring do you put in place? What scaling policies are in place, and what capacity is there to deal with large traffic spikes? Home-brewed implementations often fall short for large-scale, realtime apps, a gap that usually requires a huge amount of additional development to fill.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Authentication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;WebSockets also complicate authentication: browser WebSocket clients cannot attach custom headers to the connection request, unlike ordinary HTTP requests. Also, many client-side WebSocket libraries don’t provide much support for authentication, so developers have to implement custom solutions themselves. These gaps can complicate the process of securing WebSocket connections, so it’s paramount to select an approach that gives a good balance of security and usability.&lt;/p&gt;

&lt;p&gt;Passing API keys as URL parameters is a simple but insecure method and should not be used for client-side applications. A more secure and flexible alternative is token-based authentication, such as JSON Web Tokens (JWT).&lt;br&gt;&lt;br&gt;
How do we manage these keys, though, and how do we implement token revocation? How do we ensure a token is refreshed before a client loses access to resources?&lt;/p&gt;
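&lt;p&gt;Whatever the token scheme, clients typically refresh proactively, a short safety margin before expiry, rather than waiting for a request to fail. A minimal sketch (&lt;code&gt;shouldRefreshToken&lt;/code&gt; is a hypothetical helper, and a JWT’s &lt;code&gt;exp&lt;/code&gt; claim, which is in seconds, would need converting to milliseconds):&lt;/p&gt;

```typescript
// True if the token should be refreshed now: we are within marginMs
// of the expiry time. All times are milliseconds since the epoch.
function shouldRefreshToken(expiresAtMs: number, nowMs: number, marginMs: number = 60000): boolean {
  return nowMs >= expiresAtMs - marginMs;
}
```

&lt;p&gt;A client would run this check on a timer and request a fresh token from its own auth endpoint before the current one lapses.&lt;/p&gt;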
&lt;h3&gt;
  
  
  &lt;strong&gt;Handling failover&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;WebSocket connections can encounter issues that often require &lt;a href="https://ably.com/blog/websocket-compatibility" rel="noopener noreferrer"&gt;fallback mechanisms&lt;/a&gt;. Secure ports like 443 are preferred for reliability and security, but fallback transports such as XHR polling or SockJS are needed for networks with stricter restrictions. If you’re building your own WebSocket solution, it’s likely you’ll need to implement these fallbacks too.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Power consumption and optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;WebSocket connections are persistent, so on battery-powered devices like mobiles, the drain on battery life must be well managed. Heartbeat mechanisms like &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#pings_and_pongs_the_heartbeat_of_websockets" rel="noopener noreferrer"&gt;Ping/Pong frames&lt;/a&gt; can help maintain connection status but may increase power consumption. We could use push notifications, waking up apps without maintaining an active connection, but they don’t provide the same kind of reliability or ordering guarantees as WebSocket streams.&lt;/p&gt;
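&lt;p&gt;On the server side, ping/pong typically works by recording the time of each pong and periodically sweeping for connections that have gone quiet. The staleness check itself is simple; a minimal sketch (the 30-second timeout is illustrative):&lt;/p&gt;

```typescript
// True if a connection should be considered dead: no pong has been
// seen within timeoutMs. Times are milliseconds since the epoch.
function isStale(lastPongMs: number, nowMs: number, timeoutMs: number = 30000): boolean {
  return nowMs - lastPongMs > timeoutMs;
}
```

&lt;p&gt;With ws, a sweep like this would call &lt;code&gt;ws.ping()&lt;/code&gt; on an interval, record pongs in a &lt;code&gt;pong&lt;/code&gt; listener, and &lt;code&gt;terminate()&lt;/code&gt; stale connections.&lt;/p&gt;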
&lt;h3&gt;
  
  
  &lt;strong&gt;Handling discontinuity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Connections will inevitably drop and reconnect, and any messages published in the interim are lost unless the system can detect the discontinuity and recover. Because Redis pub/sub is fire-and-forget, a home-built solution needs its own recovery mechanism: tracking the last message each client received, persisting recent messages somewhere queryable, and replaying anything missed on reconnect, all while preserving ordering.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;A simpler way: dedicated services&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As we can see, developing and maintaining WebSockets means &lt;a href="https://ably.com/topic/the-challenge-of-scaling-websockets%23challenges-of-horizontal-scaling-for-web-sockets" rel="noopener noreferrer"&gt;managing complexities&lt;/a&gt; like authentication, reconnections, and fallback mechanisms if we want to provide a robust realtime experience.&lt;br&gt;&lt;br&gt;
The considerations needed across Redis and WebSockets for an at-scale realtime service need an entire team dedicated to them. If this sounds like a headache, that’s because it is! We should avoid doing this unless we absolutely have to - whole companies have been set up to solve these problems for us.&lt;br&gt;&lt;br&gt;
&lt;a href="https://ably.com/four-pillars-of-dependability" rel="noopener noreferrer"&gt;Ably&lt;/a&gt; provides a globally distributed pub/sub service, with high availability and a scalable system able to meet any demand while maintaining low latencies that are crucial for realtime experiences.&lt;br&gt;&lt;br&gt;
There is a way we can achieve the same results with Ably while alleviating all of the issues discussed in previous sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket management&lt;/strong&gt; - Ably provides pre-built WebSocket support in multiple SDKs, with SLAs that guarantee availability and reliability, and infrastructure currently handling over 1.4 billion connections daily.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection reliability&lt;/strong&gt; - Ably’s SDKs are designed to maintain consistent connectivity under all circumstances. They have &lt;strong&gt;automatic failover&lt;/strong&gt;, selecting the best transport mechanism and ensuring connections remain stable even during network issues. If a temporary disconnection occurs, the SDKs &lt;strong&gt;automatically resume&lt;/strong&gt; connections and retrieve any missed messages. Additionally, &lt;strong&gt;message persistence&lt;/strong&gt; is supported through REST APIs, allowing retrieval of messages missed due to connection failures or extended offline periods.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and capacity&lt;/strong&gt; - Ably uses AWS &lt;strong&gt;load balancers&lt;/strong&gt; to distribute WebSocket traffic efficiently across stateless servers, enabling limitless scaling to handle connections. With &lt;strong&gt;autoscaling&lt;/strong&gt; policies in place, the infrastructure dynamically adjusts to meet changes in demand while maintaining a 50% capacity buffer to handle sudden spikes. Ably’s high-availability SLA guarantees &lt;strong&gt;99.999% uptime&lt;/strong&gt;, relying on redundant architecture and globally distributed routing centers.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global distribution&lt;/strong&gt; - Ably operates core routing &lt;strong&gt;datacenters in seven regions&lt;/strong&gt;, with &lt;strong&gt;data persistence&lt;/strong&gt; across two availability zones per region. This setup ensures message survivability, achieving a 99.999999% guarantee by accounting for the probability of availability zone failures and enabling quick replication. Connections are routed to the closest datacenter to minimize latency, achieving a &lt;strong&gt;P99 latency of less than 50ms globally&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data distribution&lt;/strong&gt; - Ably uses consistent hashing for channel processing, but also has dynamically scaling resources and will distribute the load as needed. This includes things like connection shedding.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operations and maintenance&lt;/strong&gt; - Since Ably is a fully-managed service, it has dedicated monitoring, alerting and support engineers with expertise in operating a distributed system at scale.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDKs&lt;/strong&gt; - 25+ SDKs for different environments.&lt;/li&gt;
&lt;/ul&gt;
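&lt;p&gt;For a flavour of the consistent hashing mentioned above, here is a toy hash ring: nodes sit at hashed positions on a ring, and a channel belongs to the first node clockwise from the channel’s own position. This is an illustrative sketch of the general technique, not Ably’s implementation:&lt;/p&gt;

```typescript
// FNV-1a hash, giving a 32-bit unsigned ring position for a string.
function ringHash(s: string): number {
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// Owner of a key: first node at or past the key's position, wrapping
// around to the start of the ring if none is found.
function ownerOf(key: string, nodes: string[]): string {
  const ring = nodes
    .map((node) => ({ node, pos: ringHash(node) }))
    .sort((a, b) => a.pos - b.pos);
  const keyPos = ringHash(key);
  const hit = ring.find((entry) => entry.pos >= keyPos) ?? ring[0];
  return hit.node;
}
```

&lt;p&gt;The property that matters: when a node is added or removed, only the keys adjacent to it on the ring move, rather than nearly all keys, as with naive modulo hashing. Real implementations also place multiple virtual points per node to even out the distribution.&lt;/p&gt;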

&lt;p&gt;In essence, Ably gives a comprehensive, managed Pub/Sub solution that handles the difficulties of building a distributed system with high availability, reliability, and survivability guarantees. This means developers can focus on their core application without the overhead of managing complex infrastructure like this.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Ably in practice: a code example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s try using Ably’s ably-js SDK.&lt;/p&gt;

&lt;p&gt;We can condense all of the code in the previous example into a few lines, for subscribing to our data channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as Ably from 'ably';

let client: Ably.Realtime;

async function subscribeClient() {
 client = new Ably.Realtime('xxx.yyy');

 const channel = client.channels.get('iss.data');
 try {
   await channel.subscribe((message) =&amp;gt; {
     console.log(`Received message: ${message.data}`);
   });
 } catch (error) {
   console.error('Error subscribing:', error);
 }
}

// Handler for SIGINT
function exitHandler() {
 if (client) {
   client.close();
   console.log('The client connection with the Ably server is closed');
 }
 console.log('Process interrupted');
 process.exit();
}

// Handle SIGINT event
process.on('SIGINT', exitHandler);


(async () =&amp;gt; {
 await subscribeClient();
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And publishing to our data channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as Ably from 'ably';
import axios from 'axios';

let client: Ably.Realtime;

async function publishData() {
 client = new Ably.Realtime('xxx.yyy');
 const channel = client.channels.get('iss.data');

 try {
   // Query the endpoint
   const response = await axios.get('http://api.open-notify.org/iss-now.json');

   // Get the response
   const payload = response.data;

   // Publish the payload to the Ably channel
   const payloadString = JSON.stringify(payload);
   await channel.publish({ data: payloadString });
 } catch (error) {
   console.error('Error publishing:', error);
 }
}

// Handler for SIGINT
function exitHandler() {
 if (client) {
   client.close();
   console.log('The client connection with the Ably server is closed');
 }
 console.log('Process interrupted');
 process.exit();
}

// Handle SIGINT event
process.on('SIGINT', exitHandler);

(async () =&amp;gt; {
 await publishData();
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in just a few lines of code, we have removed the need to handle the infrastructural complexities we mentioned earlier. If you’re interested in trying Ably, &lt;a href="https://ably.com/signup" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; for a free account today.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How Genius Sports slashed costs and lowered latencies for last-mile data delivery</title>
      <dc:creator>Ably Blog</dc:creator>
      <pubDate>Wed, 18 Dec 2024 10:18:46 +0000</pubDate>
      <link>https://dev.to/ably/how-genius-sports-slashed-costs-and-lowered-latencies-for-last-mile-data-delivery-30og</link>
      <guid>https://dev.to/ably/how-genius-sports-slashed-costs-and-lowered-latencies-for-last-mile-data-delivery-30og</guid>
      <description>&lt;p&gt;Genius Sports provides its customers with live sports and betting data across 240,000 events worldwide – each generating 10,000+ realtime messages. &lt;/p&gt;

&lt;p&gt;Its value proposition is built on reliable, live data and on-premises infrastructure, but rapid growth brought challenges when it came to maintaining live data performance at low latency and global scale.&lt;/p&gt;

&lt;p&gt;Genius Sports had been forced to locate ever larger, on-premises RabbitMQ clusters physically close to customers to deliver the reliable, predictable realtime performance at low latency that in-play betting demands. If data arrives late, players can bet on events that have already happened, costing Genius Sports’ customers money and damaging its reputation.&lt;/p&gt;

&lt;p&gt;As a result, hardware capital and support costs were increasing exponentially, while infrastructure teams were constantly firefighting hardware issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wondering how they overcame this challenge?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hubs.la/Q0300B590" rel="noopener noreferrer"&gt;Genius Sports chose to migrate to Ably&lt;/a&gt; to benefit from our elastically scalable realtime infrastructure for the edge. Using Ably enabled Genius Sports to essentially hand off the heavy-lifting associated with live data processing and delivery, and gave developers the insight and flexibility they needed to innovate the core service and to offer customers more granular control of the data they subscribe to. &lt;/p&gt;

&lt;p&gt;We spoke to Gary Williams, IT Infrastructure Team Lead at Genius Sports in our webinar - and you can now &lt;a href="https://www.youtube.com/watch?v=MNheNTVUEJM&amp;amp;t=15s" rel="noopener noreferrer"&gt;watch it on demand&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key takeaways were:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shifting to the cloud was key:&lt;/strong&gt; Genius Sports chose Ably to provide realtime messaging infrastructure as a service in an early implementation of an ongoing shift to a cloud-first strategy – safe in the knowledge that the migration from RabbitMQ would require no significant refactoring of its wider architecture, coding or developer work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The migration was straightforward:&lt;/strong&gt; According to Gary Williams, "Overall, even with a super-cautious approach we had Ably live on our production system in less than two months. For a system handling business critical data, that is quite an achievement."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The move has freed up time and budget:&lt;/strong&gt; The migration to Ably has eliminated annual hardware costs and freed up around 30 hours per month in RabbitMQ maintenance that Genius Sports’ developers and engineers can now refocus on service optimisation and innovation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://hubs.la/Q0300B7X0" rel="noopener noreferrer"&gt;Sign up&lt;/a&gt; and see how you could benefit from moving to Ably.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>dataengineering</category>
      <category>latency</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Chat API pricing: Comparing MAU and per-minute consumption models</title>
      <dc:creator>Carolina Carriazo</dc:creator>
      <pubDate>Tue, 10 Dec 2024 15:48:47 +0000</pubDate>
      <link>https://dev.to/ably/chat-api-pricing-comparing-mau-and-per-minute-consumption-models-3nd5</link>
      <guid>https://dev.to/ably/chat-api-pricing-comparing-mau-and-per-minute-consumption-models-3nd5</guid>
      <description>&lt;p&gt;Pricing is critical to deciding which chat API you will use - however, it can often feel like there are limited options. Whether you are looking to gradually scale a chat app or anticipate large and sudden spikes in traffic, pricing models can make or break the bank depending on your usage - and most vendors will expect you to accept the one or two industry standards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ably.com/blog/best-chat-api" rel="noopener noreferrer"&gt;Chat API&lt;/a&gt; providers predictably fall into a handful of pricing model categories. We’ll explore them in this article by explaining them, comparing them, and ultimately concluding which is best for each particular use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chat APIs: Common pricing models
&lt;/h2&gt;

&lt;p&gt;Chat API pricing models are designed to align with different usage patterns - like a steady user base and usage, or periodic spikes - but they also introduce trade-offs depending on an application’s scale and messaging demands. These models are generally categorized as forms of consumption-based pricing, where costs are tied to how the service is used. Let’s look at the most common pricing models in use today:&lt;/p&gt;

&lt;h3&gt;
  
  
  Monthly active users (MAU)
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Monthly Active Users (MAU) model&lt;/strong&gt; is one of the most widely used pricing models in the industry. Providers like &lt;a href="https://www.cometchat.com/" rel="noopener noreferrer"&gt;CometChat&lt;/a&gt;, &lt;a href="https://sendbird.com/" rel="noopener noreferrer"&gt;Sendbird&lt;/a&gt;, &lt;a href="https://www.twilio.com/" rel="noopener noreferrer"&gt;Twilio&lt;/a&gt;, and &lt;a href="https://getstream.io/" rel="noopener noreferrer"&gt;Stream&lt;/a&gt; charge based on the number of unique active users per month.&lt;/p&gt;

&lt;p&gt;You pay for each user who interacts with the chat API within a given month, regardless of the number of messages they send or receive. While this can simplify billing, it comes with the tradeoff of assuming the “typical usage” of a monthly active user. For example, an individual MAU may actually use much less active connection time or send far fewer messages than is assumed for an average MAU. Simply put, this method is not granular.&lt;/p&gt;

&lt;p&gt;This model is predictable for applications with small and steady user bases, since, if you’re not expecting much user volatility, it’s easy to estimate costs. But any volatility in workloads, like experiencing a brief viral period and dipping back down, can result in overpaying for peak costs (peak MAUs) in a monthly period.&lt;/p&gt;

&lt;p&gt;For chat services operating at scale, the monthly amount spent on peak MAUs often grossly exceeds the bill for actual usage; it wastes allocated resources and money.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx5ayuhmplg3xd43tchi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx5ayuhmplg3xd43tchi.png" alt="peak usage" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a pricing model designed to tackle these pricing issues at scale, however - and we use it at Ably.&lt;/p&gt;

&lt;h3&gt;
  
  
  Per-minute consumption
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;per-minute consumption model&lt;/strong&gt; goes beyond traditional consumption-based pricing by billing customers based on their actual usage of service resources: connection time, channels, and messages. This approach directly addresses the inefficiencies inherent in MAU pricing models. This isn’t a common model in the industry, but we’ve adopted it here at &lt;a href="https://ably.com/" rel="noopener noreferrer"&gt;Ably&lt;/a&gt; to meet the usage needs of our customers at scale.&lt;/p&gt;

&lt;p&gt;Per-minute consumption measures actual usage in fine-grained units, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection minutes&lt;/strong&gt;: The total time devices are connected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channel minutes&lt;/strong&gt;: The time channels remain active.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message events&lt;/strong&gt;: Each message sent or received by users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tracking usage at this granular level ensures that customers only pay for what they consume, without overpaying for resources they don’t use. Traffic spikes don’t necessarily lead to hugely increased costs either - the pricing is distributed across these dimensions, smoothing the overall impact. For example, livestreaming events, which may have a huge number of messages at their peak but a low number of channels, would see a more modest increase in cost than if they were billed by user count. Instead of penalizing a single metric, this approach provides greater predictability and reflects resource utilization more holistically.&lt;/p&gt;

&lt;p&gt;Per-minute consumption also incentivizes resource optimization, such as reducing idle connections or batching messages, which can further mitigate cost surges during spikes. (Batching comes in handy when many-to-many chat interactions lead to an exponential increase in delivered messages, which &lt;a href="https://ably.com/blog/making-fan-experiences-economically-viable" rel="noopener noreferrer"&gt;we're implementing soon at Ably on the server side&lt;/a&gt;).&lt;/p&gt;
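&lt;p&gt;As a rough sketch of how these billing dimensions combine, the snippet below prices a hypothetical month of usage. The per-unit rates here are invented purely for illustration and are not Ably’s real rates, which are published on the pricing page.&lt;/p&gt;

```python
# Hypothetical illustration of per-minute consumption billing.
# The rates below are made up for this example; real Ably rates
# are listed at ably.com/pricing.

RATES = {
    "connection_minutes": 0.00001,   # per device-minute connected
    "channel_minutes": 0.00001,      # per minute a channel is active
    "messages": 0.000025,            # per message sent or received
}

def per_minute_bill(usage: dict) -> float:
    """Sum the cost of each dimension actually consumed."""
    return sum(RATES[dim] * qty for dim, qty in usage.items())

# A livestream-style burst: many messages, very few channels.
usage = {
    "connection_minutes": 12_000_000,
    "channel_minutes": 1_200,
    "messages": 10_000_000,
}
print(f"${per_minute_bill(usage):,.2f}")
```

Because each dimension is priced independently, a spike in one metric (messages, say) raises only that line item rather than multiplying the whole bill, which is the smoothing effect described above.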

&lt;h2&gt;
  
  
  Popular pricing models compared
&lt;/h2&gt;

&lt;p&gt;Deciding whether an MAU, message throughput, or per-minute consumption pricing model works for you depends on your use case - but if you are looking to scale a chat application to any considerable degree, as a general rule, &lt;em&gt;per-minute consumption will be the best option&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;MAU pricing assumes a “typical user” for billing purposes. This involves bundling resources such as connection time, message throughput, and storage into a fixed monthly fee per active user, which doesn’t accurately reflect the actual usage of the user.&lt;/p&gt;

&lt;p&gt;Now imagine a customer operating a live event platform. They’re running a live event for &lt;strong&gt;two hours in the month&lt;/strong&gt; that peaks at &lt;strong&gt;50,000 users&lt;/strong&gt;. What would the monthly prices look like between an MAU model and a per-minute consumption model?&lt;/p&gt;

&lt;p&gt;Let’s say that the MAU model assumes that each “average” user will send 1k messages per month. While the bill comes in based on user count only, built into the cost per user is an assumption about how much each would use (in this case, a total of 50 million messages for 50k users). The MAU model then bills the whole month &lt;strong&gt;based on the peak of 50,000 users&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k0o1fj3xe1nsdr71xj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k0o1fj3xe1nsdr71xj4.png" alt="mau utilization" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With per-minute consumption, costs reflect the actual connection time and messages used - we’ll estimate generously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection time&lt;/strong&gt;: 50,000 users × 240 minutes (accounting for pre- and post-event activity) = 12 million connection minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message volume&lt;/strong&gt;: 50,000 users sending an average of 200 messages = 10 million messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channels and channel time&lt;/strong&gt;: let’s say 5 channels × 240 minutes = 1,200 channel minutes.&lt;/li&gt;
&lt;/ul&gt;
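&lt;p&gt;Plugging the scenario’s numbers into a few lines of arithmetic makes the gap concrete. All figures come from the example above, including the assumption of 1k messages per “average” MAU:&lt;/p&gt;

```python
# The live-event scenario above, in numbers.
peak_users = 50_000
event_minutes = 240       # two-hour event plus pre/post activity
channels = 5

# What the MAU model bakes into its price: an assumed "typical" month.
mau_assumed_messages = peak_users * 1_000        # 50 million messages

# What was actually consumed during the event:
connection_minutes = peak_users * event_minutes  # 12,000,000
actual_messages = peak_users * 200               # 10,000,000
channel_minutes = channels * event_minutes       # 1,200

overestimate = mau_assumed_messages / actual_messages
print(f"MAU model assumes {overestimate:.0f}x the messages actually sent")
```

Even before attaching prices, the MAU model is billing for five times the message volume this audience actually generated, for the entire month, at the two-hour peak user count.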

&lt;p&gt;Even without specific prices to hand, we can see that billing for a typical user is inefficient in this scenario. Per-minute billing focuses on ensuring fairness and transparency for highly volatile traffic situations like these (for more information on this, Matt O’Riordan, Ably’s CEO, talks about pricing model issues in &lt;a href="https://ably.com/blog/consumption-based-pricing" rel="noopener noreferrer"&gt;his blog post&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  What does this mean in practice?
&lt;/h3&gt;

&lt;p&gt;This table breaks it down:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Model&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Best suited to&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Challenges&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monthly Active Users (MAU)&lt;/td&gt;
&lt;td&gt;Stream, Sendbird, Twilio&lt;/td&gt;
&lt;td&gt;Apps with steady or low user activity&lt;/td&gt;
&lt;td&gt;Paying for peak costs during volatile usage periods&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-minute consumption&lt;/td&gt;
&lt;td&gt;Ably&lt;/td&gt;
&lt;td&gt;Apps with scalable, high-volume messaging&lt;/td&gt;
&lt;td&gt;Requires tracking of usage metrics&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Ably’s per-minute consumption model
&lt;/h2&gt;

&lt;p&gt;If the per-minute consumption model we discussed above sounds promising to you, here’s some more information on how this works specifically with Ably.&lt;/p&gt;

&lt;p&gt;At Ably, we’ve developed a pricing model designed to align more closely with the needs of realtime chat applications. Unlike traditional MAU or throughput-based models, Ably offers &lt;a href="https://ably.com/pricing" rel="noopener noreferrer"&gt;per-minute pricing&lt;/a&gt; that scales predictably and transparently with your application.&lt;/p&gt;

&lt;p&gt;Here’s how Ably stands out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Pay only for what you use, with no penalties for growing user bases or unexpected spikes in message throughput.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Ably’s infrastructure supports billions of messages daily, with costs optimized for applications of any scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency&lt;/strong&gt;: Ably’s pricing eliminates the hidden costs often associated with rigid MAU or throughput models, giving you full visibility into your expenses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ably’s platform is built on a globally-distributed infrastructure designed for high-performance, scalable, and dependable messaging. With support for exactly-once delivery, message ordering, and &amp;lt;50ms global average latency, Ably ensures a seamless chat experience for users anywhere in the world.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://ably.com/blog/best-chat-api#what-should-you-look-for-in-a-live-chat-solution" rel="noopener noreferrer"&gt;Chat SDK&lt;/a&gt; in private beta offers fully-fledged chat features, like chat rooms at any scale; typing indicators; read receipts; presence tracking; and more. And of course, our per-minute pricing means that your consumption is as cost-effective as possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forms.gle/UmvASDpVtzg6yncCA" rel="noopener noreferrer"&gt;Sign up for private beta&lt;/a&gt; today to try out Ably Chat.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>realtime</category>
    </item>
    <item>
      <title>Data integrity in Ably Pub/Sub</title>
      <dc:creator>Carolina Carriazo</dc:creator>
      <pubDate>Thu, 21 Nov 2024 14:40:26 +0000</pubDate>
      <link>https://dev.to/ably/data-integrity-in-ably-pubsub-1nol</link>
      <guid>https://dev.to/ably/data-integrity-in-ably-pubsub-1nol</guid>
      <description>&lt;p&gt;When you publish a message to &lt;a href="https://ably.com/pubsub" rel="noopener noreferrer"&gt;Ably Pub/Sub&lt;/a&gt;, you can be confident that the message will be delivered to subscribing clients, wherever they are in the world.&lt;/p&gt;

&lt;p&gt;Ably is fast: we have a 99th percentile &lt;strong&gt;transmit latency of &amp;lt;50ms&lt;/strong&gt; from any of our 635 global PoPs that receive at least 1% of our global traffic. But being fast isn’t enough; Ably is also dependable and scalable. Ably doesn’t sacrifice data integrity for speed or scale; it’s fast &lt;em&gt;and&lt;/em&gt; safe.&lt;/p&gt;

&lt;p&gt;This post describes the Ably Pub/Sub architecture and features that guarantee your Pub/Sub message is delivered, in order, exactly once, to clients globally, while protecting against regional data center failures, individual instance failures, and cross-region network partitions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Ably regional Pub/Sub architecture and persistence works
&lt;/h2&gt;

&lt;p&gt;Each region in Ably is capable of operating entirely independently, but regions also coordinate with each other to share and replicate messages globally.&lt;/p&gt;

&lt;p&gt;In each region, a single Pub/Sub channel has exactly one primary location across a fleet of Ably servers. When a message is published by a client attached to that region, the message is processed and stored by that single Pub/Sub channel location before the message is ACKed. Once the publishing client receives the &lt;code&gt;ACK&lt;/code&gt;, they can be confident that the message will not be lost, and will be delivered to all subscribing clients.&lt;/p&gt;

&lt;p&gt;As well as exactly one &lt;em&gt;primary&lt;/em&gt; location, an Ably Pub/Sub channel also has exactly one &lt;em&gt;secondary&lt;/em&gt; location. Both the primary and secondary location durably store a copy of the message before the &lt;code&gt;ACK&lt;/code&gt; is sent to the client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpre8kbf2ywb4yho1ne8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpre8kbf2ywb4yho1ne8r.png" alt="How Ably's global pub/sub architecture persists messages across primary and secondary locations in each region for durability." width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the primary location of a Pub/Sub channel fails, the secondary location is ready to take over. The secondary location already has a copy of all the required message data and becomes the primary. A new secondary location is created and the message data is replicated to that new location. This means that clients are isolated from individual instance failure.&lt;/p&gt;
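&lt;p&gt;The promotion logic can be sketched roughly as follows. This is a simplified, single-process illustration of the behaviour described above, not Ably’s implementation; the class and method names are invented.&lt;/p&gt;

```python
# Sketch: a channel keeps a primary and a secondary replica. Both durably
# store a message before the ACK, so if the primary fails, the secondary
# already holds every ACKed message and can be promoted safely.

class ChannelReplicas:
    def __init__(self, fleet):
        self.fleet = fleet                 # available server instances
        self.primary = fleet[0]
        self.secondary = fleet[1]
        self.stored = {self.primary: [], self.secondary: []}

    def publish(self, msg) -> str:
        # Both replicas store the message BEFORE the ACK is returned.
        self.stored[self.primary].append(msg)
        self.stored[self.secondary].append(msg)
        return "ACK"

    def primary_failed(self):
        # Promote the secondary: it already has all ACKed data.
        failed, self.primary = self.primary, self.secondary
        new_secondary = next(s for s in self.fleet
                             if s not in (failed, self.primary))
        # Seed the fresh secondary by replicating the primary's data.
        self.stored[new_secondary] = list(self.stored[self.primary])
        self.secondary = new_secondary
```

The key property the sketch captures is that an ACKed message always exists on two instances, so a single instance failure never loses acknowledged data.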

&lt;p&gt;The message is immediately replicated to the primary and secondary locations in other regions globally that have subscribing clients. Ably will store up to 14 copies of each message, globally, to mitigate against the failure of entire regions of the Ably service.&lt;/p&gt;

&lt;p&gt;All messages are persisted durably for two minutes, but Pub/Sub channels can be configured to persist messages for longer periods of time using the &lt;a href="https://ably.com/docs/storage-history/storage#all-message-persistence" rel="noopener noreferrer"&gt;persisted messages feature&lt;/a&gt;. Persisted messages are additionally written to &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Cassandra&lt;/a&gt;. Multiple copies of the message are stored in a quorum of globally-distributed Cassandra nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Ably SDK clients interact with the Pub/Sub architecture
&lt;/h2&gt;

&lt;p&gt;Each Ably region is capable of operating independently, so Ably SDK clients can connect to any region. By default, clients connect to the region providing the lowest latency, but if an issue with that region is detected (perhaps the region is erroring or is slow to respond), the SDKs will connect to another fallback region, and continue operating as normal.&lt;/p&gt;

&lt;p&gt;Clients are isolated from region failures. There is no single point of failure in the regional Ably architecture. Regional failures will not affect the global availability of Ably, because clients will fall back to another region and continue operating.&lt;/p&gt;
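&lt;p&gt;The fallback behaviour can be illustrated with a small sketch: try the lowest-latency region first, then work through the remaining regions until one accepts the connection. The region names and the &lt;code&gt;connect()&lt;/code&gt; stub are hypothetical, not the real Ably SDK API.&lt;/p&gt;

```python
# Sketch of client-side region fallback (illustrative, not the SDK API).

def connect_with_fallback(regions, connect):
    """regions: ordered by latency; connect: callable that raises on failure."""
    last_error = None
    for region in regions:
        try:
            return connect(region)
        except ConnectionError as err:
            last_error = err    # region erroring or slow: try the next one
    raise last_error

# Example: the preferred region is down, so the fallback succeeds.
def fake_connect(region):
    if region == "us-east":
        raise ConnectionError("region unavailable")
    return f"connected:{region}"

print(connect_with_fallback(["us-east", "eu-west"], fake_connect))
```

Because every region holds (or can fetch) the replicated channel state, a client that lands on a fallback region can carry on as if nothing happened.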

&lt;h3&gt;
  
  
  Exactly-once delivery
&lt;/h3&gt;

&lt;p&gt;Ably Pub/Sub messages support &lt;a href="https://ably.com/blog/achieving-exactly-once-message-processing-with-ably" rel="noopener noreferrer"&gt;exactly-once delivery&lt;/a&gt;, which is achieved through two mechanisms: idempotent publishing, and message delivery on the SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Idempotent publishing&lt;/strong&gt;: Pub/Sub messages have a unique ID field which is used to deduplicate messages. When a message arrives in a region – either because it was published by an Ably SDK connected to that region, or replicated from another region – the primary Pub/Sub channel location verifies the message’s uniqueness by checking its ID. This idempotency check is performed against 2 minutes of message history, and protects against a client accidentally publishing the same message twice. The message is persisted at the primary location and checked for uniqueness in a single atomic operation, which guards against a race between checking uniqueness and durably storing the message.&lt;/p&gt;

&lt;p&gt;The primary location in each region performs the idempotency check both for messages published to it directly and for messages replicated from another region. This means the SDKs can retry a publish against another region, and the message will still be checked for uniqueness rather than delivered to subscribing clients twice.&lt;/p&gt;
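&lt;p&gt;A toy version of the windowed idempotency check might look like the following. It is in-memory and single-threaded, whereas the real check is an atomic store-and-verify at the primary channel location; the class name and window handling are illustrative only.&lt;/p&gt;

```python
# Sketch: deduplicate by message ID against a two-minute history window,
# treating "check and persist" as one step.

import time

WINDOW_SECONDS = 120  # the 2 minutes of message history described above

class PrimaryLocation:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.seen = {}            # message ID -> time first stored

    def store_if_unique(self, msg_id, now=None) -> bool:
        """Check the window and persist in one step; True if newly stored."""
        now = self.clock() if now is None else now
        # Expire IDs that have fallen outside the history window.
        expired = [m for m, t in self.seen.items()
                   if now - t >= WINDOW_SECONDS]
        for m in expired:
            del self.seen[m]
        if msg_id in self.seen:
            return False          # duplicate: e.g. a retry of an ACKed publish
        self.seen[msg_id] = now
        return True
```

A retried publish that carries the same message ID is recognized and dropped, so subscribers never see it twice.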

&lt;p&gt;&lt;strong&gt;Message delivery on the SDK&lt;/strong&gt;: On the subscribing client, messages are delivered with a series ID. If the client disconnects, it can provide the last-seen series ID when reconnecting with the &lt;code&gt;resume&lt;/code&gt; operation. This resume operation allows the client to pick back up from the exact point it left off in the stream of Pub/Sub messages, ensuring no duplicates or gaps in the message stream. By default, the SDKs will retry failed message publishes.&lt;/p&gt;
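&lt;p&gt;In outline, subscriber-side resume behaves like this sketch. The integer series IDs and the list-shaped stream are simplifications for illustration, not the real wire format.&lt;/p&gt;

```python
# Sketch: the client remembers the last series ID it processed and,
# on reconnect, resumes from that point, so it sees no gaps and no
# duplicates in the message stream.

class Subscriber:
    def __init__(self):
        self.last_series_id = None
        self.received = []

    def deliver(self, series_id, msg):
        self.received.append(msg)
        self.last_series_id = series_id

    def resume(self, stream):
        """stream: list of (series_id, msg) in order; replay only what's new."""
        for series_id, msg in stream:
            if self.last_series_id is None or series_id > self.last_series_id:
                self.deliver(series_id, msg)

sub = Subscriber()
sub.deliver(1, "Hello")
# Connection drops; on reconnect the recent stream is offered again,
# but resume skips everything up to the last-seen series ID.
sub.resume([(1, "Hello"), (2, "World")])
print(sub.received)   # ['Hello', 'World']
```

The duplicate of <code>Hello</code> offered on reconnect is filtered by the series ID comparison, while the previously unseen <code>World</code> is delivered exactly once.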

&lt;h3&gt;
  
  
  Message ordering
&lt;/h3&gt;

&lt;p&gt;Messages published by a Pub/Sub client over a single WebSocket connection remain fully ordered. The regional location of the Pub/Sub channel shares those messages, in order, with all the other regional locations. Those Pub/Sub messages are delivered in the same relative order to all subscribing clients, regardless of the region each client is connected to.&lt;/p&gt;

&lt;p&gt;To make sure that regions in Ably Pub/Sub can operate independently, there’s no guaranteed order between clients connected and publishing to different regions. This allows two clients, connected to different regions, to publish concurrently at high throughput and low latency without needing to coordinate globally with each other. The messages published by each client will always retain the same order relative to other messages &lt;em&gt;on that same WebSocket connection&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In classic distributed system parlance, each client retains its own causal consistency, so that &lt;code&gt;Hello&lt;/code&gt; -&amp;gt; &lt;code&gt;World&lt;/code&gt; never becomes &lt;code&gt;World&lt;/code&gt; -&amp;gt; &lt;code&gt;Hello&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4w8ycol8yvnnfjmqpd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4w8ycol8yvnnfjmqpd2.png" alt="How Ably ensures ordered message delivery across and within regions. Messages retain their order relative to other messages published to the same region." width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, based on what we know about regional Ably architecture, the message ordering that each region sees is defined by when those messages arrive in that region: either by being replicated from another region, or by a client connected to that region performing a message publish. Once the order of messages is established in a single region, all the clients connected to that region will see those messages in the exact same order, but clients in a different region might have a slightly different regional ordering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ably's data integrity: a summary
&lt;/h2&gt;

&lt;p&gt;When using a Pub/Sub product, you want it to “just work”. You don’t want to worry about whether your subscribing clients will receive the message you published, whether messages will keep their published order, or whether there’s a failure in some data center that Ably is running in. You shouldn’t have to care. We’ve spent a long time thinking about failure cases, and designing a system that is both fast and retains the integrity of the data published on Pub/Sub channels.&lt;/p&gt;

&lt;p&gt;This post has described the internals of some of the features we use to ensure that Ably Pub/Sub “just works”.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>architecture</category>
      <category>data</category>
      <category>database</category>
    </item>
  </channel>
</rss>
