<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrei</title>
    <description>The latest articles on DEV Community by Andrei (@elomarket).</description>
    <link>https://dev.to/elomarket</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3826061%2F34e581b7-0285-4911-8007-5964fb1843d9.png</url>
      <title>DEV Community: Andrei</title>
      <link>https://dev.to/elomarket</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/elomarket"/>
    <language>en</language>
    <item>
      <title>Building fair real-time odds for CS2 Twitch stream predictions</title>
      <dc:creator>Andrei</dc:creator>
      <pubDate>Thu, 19 Mar 2026 19:46:19 +0000</pubDate>
      <link>https://dev.to/elomarket/building-fair-real-time-odds-for-cs2-stream-predictions-4d3l</link>
      <guid>https://dev.to/elomarket/building-fair-real-time-odds-for-cs2-stream-predictions-4d3l</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/elomarket/how-i-synced-real-time-cs2-predictions-with-twitch-stream-delay-53lg"&gt;my last post&lt;/a&gt;, I wrote about stream delay and why realtime on Twitch is secretly a timeline problem.&lt;/p&gt;

&lt;p&gt;This time: odds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84qc05rgouqvvpxbngr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84qc05rgouqvvpxbngr1.png" alt="number of kills prediction" width="743" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am building &lt;a href="https://elo.market" rel="noopener noreferrer"&gt;elo.market&lt;/a&gt;, a live prediction system for CS2 streams. Viewers bet virtual elo on things like round winners, bomb plants, and clutches.&lt;/p&gt;

&lt;p&gt;At first, dynamic odds sounded easy.&lt;/p&gt;

&lt;p&gt;Votes come in -&amp;gt; odds move.&lt;/p&gt;

&lt;p&gt;Done, right?&lt;/p&gt;

&lt;p&gt;Not really.&lt;/p&gt;

&lt;p&gt;Because if odds move too slowly, the market feels fake.&lt;/p&gt;

&lt;p&gt;And if they move too fast, one smart whale can make the whole thing feel rigged.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem In One Line
&lt;/h2&gt;

&lt;p&gt;This was the failure mode I kept running into:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;small early vote -&amp;gt; juicy odd appears -&amp;gt; whale grabs it -&amp;gt; market feels broken
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or visually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------------+     +--------------------+     +------------------+     +------------------+
| Small early vote | --&amp;gt; | Big price appears  | --&amp;gt; | Whale takes it   | --&amp;gt; | Everyone else    |
| from normal user |     | on the other side  |     | before it settles|     | thinks: "lol ok" |
+------------------+     +--------------------+     +------------------+     +------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is technically a dynamic market.&lt;/p&gt;

&lt;p&gt;It is also terrible product design.&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Version Was Too Naive
&lt;/h2&gt;

&lt;p&gt;My first version was basically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;start from some base probability&lt;/li&gt;
&lt;li&gt;add a seed so the first bet does not nuke the market&lt;/li&gt;
&lt;li&gt;recalculate odds from the current pool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conceptually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;adjustedProb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;amountOnOutcome&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;baseProb&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;seed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;totalPool&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;seed&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That was fine as a first step.&lt;/p&gt;

&lt;p&gt;But it had two problems.&lt;/p&gt;

&lt;p&gt;First, some markets should not even start 50/50.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;round_winner&lt;/code&gt; market is not actually even if one side is broke and the other has rifles.&lt;/p&gt;

&lt;p&gt;So I started feeding real game context into the base probabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;loss streaks&lt;/li&gt;
&lt;li&gt;score momentum&lt;/li&gt;
&lt;li&gt;streamer economy as a team signal&lt;/li&gt;
&lt;/ul&gt;
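
&lt;p&gt;As a rough illustration of how those signals can feed a prior (the field names, weights, and clamps below are made up for the sketch, not the production values), it can be as simple as nudging a 50/50 baseline:&lt;/p&gt;

```typescript
// Hypothetical sketch: nudge a 50/50 prior using game-context signals.
// Signal names, weights, and the $10k economy midpoint are illustrative.
interface RoundContext {
  lossStreak: number; // consecutive rounds lost by the streamer's team
  scoreDiff: number;  // streamer score minus opponent score
  teamMoney: number;  // streamer team economy, in dollars
}

function baseProbFromContext(ctx: RoundContext): number {
  let prob = 0.5;
  prob -= 0.03 * ctx.lossStreak;            // loss streaks usually mean a weak buy
  prob += 0.01 * ctx.scoreDiff;             // score momentum
  prob += (ctx.teamMoney - 10000) / 200000; // rough economy signal
  // Clamp so no single signal can push the prior to a near-certainty.
  return Math.min(0.9, Math.max(0.1, prob));
}
```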

&lt;p&gt;That helped the opening price feel smarter.&lt;/p&gt;

&lt;p&gt;But it still did not solve the whale problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Whale Problem
&lt;/h2&gt;

&lt;p&gt;The obvious model is amount-based.&lt;/p&gt;

&lt;p&gt;More money on one side means the odds shift harder.&lt;/p&gt;

&lt;p&gt;Sounds fair, until one large bet starts overpowering the market.&lt;/p&gt;

&lt;p&gt;So I tried blending two signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how much money was bet&lt;/li&gt;
&lt;li&gt;how many people bet that side&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That was better, but my first attempt still had a conceptual bug.&lt;/p&gt;

&lt;p&gt;I used average bet size in the formula.&lt;/p&gt;

&lt;p&gt;That meant a whale did not just affect the amount part: they also inflated the "crowd consensus" part by making the average bet size bigger.&lt;/p&gt;

&lt;p&gt;That was dumb.&lt;/p&gt;

&lt;p&gt;I accidentally gave large bets influence twice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix That Actually Mattered
&lt;/h2&gt;

&lt;p&gt;The best change was surprisingly small.&lt;/p&gt;

&lt;p&gt;I stopped using average bet size for voter influence and switched to a fixed vote unit instead.&lt;/p&gt;

&lt;p&gt;So the logic became:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;effectiveAmount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;voterWeight&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;rawAmount&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
  &lt;span class="nx"&gt;voterWeight&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;voterCount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;FIXED_VOTE_UNIT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That changed the feel of the market a lot.&lt;/p&gt;

&lt;p&gt;Now a whale can still move odds because size should matter.&lt;/p&gt;

&lt;p&gt;But they cannot also pretend to be "the crowd" just because their bet was huge.&lt;/p&gt;

&lt;p&gt;That one change made the system feel much less abusable.&lt;/p&gt;
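
&lt;p&gt;A quick worked example of the formula above (the 0.4 blend weight and 100-elo vote unit are illustrative numbers, not the production config): one 10,000-elo whale versus ten 100-elo bettors.&lt;/p&gt;

```typescript
// Illustrative numbers: compare one huge bet against a small crowd
// under the blended formula from the post.
const FIXED_VOTE_UNIT = 100; // assumed fixed vote unit
const voterWeight = 0.4;     // assumed blend factor

function effectiveAmount(rawAmount: number, voterCount: number): number {
  return (1 - voterWeight) * rawAmount + voterWeight * voterCount * FIXED_VOTE_UNIT;
}

const whale = effectiveAmount(10_000, 1); // about 6,040
const crowd = effectiveAmount(1_000, 10); // about 1,000
// The whale still moves the market, but counts as one voice in the
// consensus term instead of inflating an average bet size.
```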

&lt;h2&gt;
  
  
  Then I Hit The Next Problem
&lt;/h2&gt;

&lt;p&gt;Even with a better formula, fixed seeds still broke at different economy sizes.&lt;/p&gt;

&lt;p&gt;If one streamer has average bets around 50 elo and another has average bets around 1000, the same seed value behaves completely differently.&lt;/p&gt;

&lt;p&gt;So I made the seed scale with season behavior instead of hardcoding it.&lt;/p&gt;

&lt;p&gt;Conceptually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;dynamicMinSeed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;seasonAvgBet&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;effectiveSeed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dynamicMinSeed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;totalPool&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;seedMultiplier&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That made early odds much harder to snipe across both small and large channels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not Every Market Should Behave The Same
&lt;/h2&gt;

&lt;p&gt;Another lesson: a bomb prediction and a map-wide prediction should not react with the same personality.&lt;/p&gt;

&lt;p&gt;So I ended up with templates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dynamic&lt;/code&gt; for markets that can move more&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;balanced&lt;/code&gt; for standard round predictions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stable&lt;/code&gt; for short live windows&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;anchored&lt;/code&gt; for longer markets that should respect the opening line more&lt;/li&gt;
&lt;/ul&gt;
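
&lt;p&gt;Conceptually the templates are just parameter presets feeding the same engine. The template names come from the list above; the numbers here are invented to show the shape, not the real tuning:&lt;/p&gt;

```typescript
// Hypothetical preset table: each template tunes the same two knobs.
// voterWeight blends crowd consensus vs raw amount; a larger
// seedMultiplier makes the opening line stickier.
const TEMPLATES = {
  dynamic:  { voterWeight: 0.3, seedMultiplier: 0.10 },
  balanced: { voterWeight: 0.4, seedMultiplier: 0.25 },
  stable:   { voterWeight: 0.5, seedMultiplier: 0.50 },
  anchored: { voterWeight: 0.5, seedMultiplier: 1.00 },
};
```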

&lt;p&gt;That was much better than pretending one formula could fit everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delay Came Back Again
&lt;/h2&gt;

&lt;p&gt;And because Twitch loves making every problem slightly worse, delay showed up here too.&lt;/p&gt;

&lt;p&gt;If I delayed both &lt;code&gt;PREDICTION_CREATED&lt;/code&gt; and &lt;code&gt;ODDS_UPDATE&lt;/code&gt; the same way, viewers saw stale odds.&lt;/p&gt;

&lt;p&gt;So the final model became:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;delay the prediction event&lt;/li&gt;
&lt;li&gt;send odds updates immediately&lt;/li&gt;
&lt;li&gt;buffer them on the client until the prediction is actually visible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That way, when the card appears, viewers see current odds instead of ancient history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fairness Needed One More Layer
&lt;/h2&gt;

&lt;p&gt;Even after all that, there was still one ugly edge case:&lt;/p&gt;

&lt;p&gt;the user clicks at one price, but by the time the vote reaches the backend, the odds are worse.&lt;/p&gt;

&lt;p&gt;So I added slippage protection.&lt;/p&gt;

&lt;p&gt;If the odds move too far against the user, the vote is rejected and the UI can ask them to accept the new number.&lt;/p&gt;
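
&lt;p&gt;A minimal sketch of that check (the names and the 5-point tolerance are assumptions for illustration):&lt;/p&gt;

```typescript
// Sketch of the slippage check: reject a vote when the implied probability
// moved too far against the user between click and server receipt.
const MAX_SLIPPAGE = 0.05; // assumed tolerance: 5 percentage points

function acceptVote(clickedProb: number, currentProb: number): boolean {
  // Only movement against the bettor counts; a better price is always accepted.
  const movedAgainst = currentProb - clickedProb;
  if (movedAgainst > MAX_SLIPPAGE) {
    return false; // the UI can now ask the user to accept the new number
  }
  return true;
}
```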

&lt;p&gt;That turned out to matter almost as much as the formula itself.&lt;/p&gt;

&lt;p&gt;Because fair pricing is not just math.&lt;/p&gt;

&lt;p&gt;It is also about whether users feel tricked.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Landed
&lt;/h2&gt;

&lt;p&gt;The current system is basically this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;game state priors
      +
adaptive seed
      +
voter consensus
      +
slippage protection
      +
delay-aware delivery
      =
dynamic odds that still feel fair
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;smarter base probabilities from real game state&lt;/li&gt;
&lt;li&gt;adaptive seed so early votes do not create nonsense&lt;/li&gt;
&lt;li&gt;voter weighting so one whale does not dominate instantly&lt;/li&gt;
&lt;li&gt;fixed vote units so big bets do not fake consensus&lt;/li&gt;
&lt;li&gt;different templates for different market types&lt;/li&gt;
&lt;li&gt;slippage protection for bad fills&lt;/li&gt;
&lt;li&gt;delay-aware delivery so live odds still look live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The funny part is that dynamic odds sounded like a tiny feature when I started.&lt;/p&gt;

&lt;p&gt;In reality, it turned into a product trust problem.&lt;/p&gt;

&lt;p&gt;If the odds feel fake, people stop caring.&lt;/p&gt;

&lt;p&gt;If the odds feel exploitable, people stop trusting.&lt;/p&gt;

&lt;p&gt;The goal was not "perfect market making."&lt;/p&gt;

&lt;p&gt;The goal was simpler:&lt;/p&gt;

&lt;p&gt;make odds move enough to feel alive, but not enough to become a farming strategy for whales.&lt;/p&gt;




&lt;p&gt;If you have built low-liquidity markets, game economies, or live pricing systems, I would love to hear what tradeoffs you made.&lt;/p&gt;

</description>
      <category>cs2</category>
      <category>twitch</category>
      <category>stream</category>
      <category>games</category>
    </item>
    <item>
      <title>How I synced real-time CS2 predictions with Twitch stream delay</title>
      <dc:creator>Andrei</dc:creator>
      <pubDate>Mon, 16 Mar 2026 01:14:39 +0000</pubDate>
      <link>https://dev.to/elomarket/how-i-synced-real-time-cs2-predictions-with-twitch-stream-delay-53lg</link>
      <guid>https://dev.to/elomarket/how-i-synced-real-time-cs2-predictions-with-twitch-stream-delay-53lg</guid>
      <description>&lt;p&gt;I am building &lt;a href="https://elo.market" rel="noopener noreferrer"&gt;elo.market&lt;/a&gt;, a real-time prediction system for CS2 streams.&lt;/p&gt;

&lt;p&gt;At first, this sounded simple: the game emits events, I create predictions, viewers vote.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd291c2sigov3vay9t341.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd291c2sigov3vay9t341.png" alt="elo.market prediction: How many kills streamer will make?" width="743" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I ran into the actual problem: my backend knew about round starts, bomb plants, and round ends before Twitch viewers saw them.&lt;/p&gt;

&lt;p&gt;So this stopped being a "realtime UI" problem and turned into a timeline problem.&lt;/p&gt;

&lt;p&gt;The hard part was not generating predictions.&lt;/p&gt;

&lt;p&gt;The hard part was preserving a believable timeline for each viewer.&lt;/p&gt;

&lt;p&gt;Because on Twitch, there is no single "the stream is delayed by X seconds" value.&lt;/p&gt;

&lt;p&gt;There are at least two different delays involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the broadcaster's intentional stream delay&lt;/li&gt;
&lt;li&gt;each viewer's personal playback delay caused by Twitch buffering, network conditions, device behavior, and low-latency mode&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you get this wrong, the whole product feels broken:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;predictions open too early and spoil what is about to happen&lt;/li&gt;
&lt;li&gt;predictions close before the viewer even sees the moment&lt;/li&gt;
&lt;li&gt;odds update before the actual prediction card appears&lt;/li&gt;
&lt;li&gt;late votes get rejected even though the UI still looks valid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I ended up building a two-layer delay system: part server-side, part client-side.&lt;/p&gt;

&lt;p&gt;But the bigger lesson was this: delay was only the visible problem.&lt;/p&gt;

&lt;p&gt;The deeper problems were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;event ordering&lt;/li&gt;
&lt;li&gt;stale data&lt;/li&gt;
&lt;li&gt;fairness in vote validation&lt;/li&gt;
&lt;li&gt;keeping one backend timeline compatible with many viewer timelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The event pipeline looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CS2 GSI -&amp;gt; Oracle -&amp;gt; Redis Streams -&amp;gt; Worker -&amp;gt; Redis Pub/Sub -&amp;gt; API SSE -&amp;gt; Frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important bit is that predictions are created from real game events, not from the Twitch video stream.&lt;/p&gt;

&lt;p&gt;That means the backend always knows about a round start or bomb plant before the viewer sees it on Twitch.&lt;/p&gt;

&lt;p&gt;So I had to answer a simple question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When should each viewer see each prediction event?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answer became:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apply the streamer's configured broadcast delay on the server&lt;/li&gt;
&lt;li&gt;apply the viewer's additional personal delay on the client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That split turned out to be much cleaner than trying to fully solve everything in one place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem Was Not Delay - It Was Causality
&lt;/h2&gt;

&lt;p&gt;At first I thought I just needed to "shift events by X seconds."&lt;/p&gt;

&lt;p&gt;That was naive.&lt;/p&gt;

&lt;p&gt;What I actually needed was a system where the UI still made sense after delay was applied.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a prediction card should exist before its activity feed starts filling up&lt;/li&gt;
&lt;li&gt;odds should not update for a prediction the viewer has not seen yet&lt;/li&gt;
&lt;li&gt;a LIVE prediction should not appear after it has already resolved&lt;/li&gt;
&lt;li&gt;the frontend and backend should agree on whether a vote is still valid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once I started thinking in those terms, the architecture changed a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layer 1: Streamer Delay on the Server
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir5bujqglona8822wqms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir5bujqglona8822wqms.png" alt=" " width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each streamer can configure a delay in seconds.&lt;/p&gt;

&lt;p&gt;I use that value in the SSE layer and buffer most outgoing events before they are sent to clients.&lt;/p&gt;

&lt;p&gt;Conceptually it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delayMs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;streamerDelaySeconds&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delayedEvents$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;streamerEvents$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;delayMs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This handles the obvious case: if a streamer intentionally runs a 60-second delay, predictions should also arrive roughly 60 seconds later.&lt;/p&gt;

&lt;p&gt;But this alone is not enough.&lt;/p&gt;

&lt;p&gt;Two viewers can watch the same streamer and still be several seconds apart. One is on desktop with low-latency mode, another is on mobile with buffering, another is watching inside a Twitch extension.&lt;/p&gt;

&lt;p&gt;So I needed a second layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Versions
&lt;/h2&gt;

&lt;p&gt;Initially, I added a frontend control where viewers could manually configure their own extra delay.&lt;/p&gt;

&lt;p&gt;It technically worked. It was also bad UX.&lt;/p&gt;

&lt;p&gt;Most people do not want to think in terms of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streamer delay&lt;/li&gt;
&lt;li&gt;personal delay&lt;/li&gt;
&lt;li&gt;countdown drift&lt;/li&gt;
&lt;li&gt;whether they should add 3 seconds or 7 seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even worse, negative adjustments were confusing: they mostly shifted the countdown timers without changing what the viewer actually saw.&lt;/p&gt;

&lt;p&gt;So while the manual control was useful for debugging and edge cases, it was clearly not a good primary solution.&lt;/p&gt;

&lt;p&gt;Then I tried a semi-automatic calibration flow.&lt;/p&gt;

&lt;p&gt;The system would wait for a real game event from the backend, and then ask the viewer to click when they saw that same moment on stream.&lt;/p&gt;

&lt;p&gt;In one iteration, I used round end events for calibration: the backend knew exactly when the round ended, and the viewer clicked when they saw that same moment on stream.&lt;/p&gt;

&lt;p&gt;This was much better than a raw slider because the user no longer had to guess.&lt;/p&gt;

&lt;p&gt;But it still had problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it required active user participation&lt;/li&gt;
&lt;li&gt;it was easy to miss the moment&lt;/li&gt;
&lt;li&gt;calibration quality depended on reaction time&lt;/li&gt;
&lt;li&gt;it felt like setup, which is never what you want in a live product&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was a useful bridge, but still not the end state.&lt;/p&gt;

&lt;p&gt;The funny part is that the cleaner answer was sitting in Twitch's own APIs the whole time.&lt;/p&gt;

&lt;p&gt;I just had not looked carefully enough at the docs.&lt;/p&gt;

&lt;p&gt;On the web player side, Twitch exposes &lt;code&gt;hlsLatencyBroadcaster&lt;/code&gt; through playback stats.&lt;/p&gt;

&lt;p&gt;On the Twitch Extension side, a similar value is available through the extension context API.&lt;/p&gt;

&lt;p&gt;So the final detection rule became simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;userDelay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;viewerLatencyToBroadcaster&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;streamerDelay&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That made the manual and click-to-calibrate flows much less important. They became fallback ideas, not the main path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Measuring Delay Was the Easy Part
&lt;/h2&gt;

&lt;p&gt;In the web app, I poll the Twitch player and read &lt;code&gt;hlsLatencyBroadcaster&lt;/code&gt;. In the extension, Twitch pushes similar latency through its context API.&lt;/p&gt;

&lt;p&gt;I still smooth the values with a rolling median and only apply meaningful changes, because raw samples jump around.&lt;/p&gt;
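
&lt;p&gt;A small sketch of that smoothing (window size and change threshold are assumed values, not the production tuning):&lt;/p&gt;

```typescript
// Keep a short window of latency samples, take the median, and only
// adopt the new value when it moved meaningfully. Raw samples jitter.
const WINDOW = 9;              // assumed sample window
const MIN_CHANGE_SECONDS = 1;  // assumed change threshold

class LatencySmoother {
  private samples: number[] = [];
  private applied = 0;

  push(sample: number): number {
    this.samples.push(sample);
    if (this.samples.length > WINDOW) this.samples.shift();
    const sorted = [...this.samples].sort((a, b) => a - b);
    const median = sorted[Math.floor(sorted.length / 2)];
    // Ignore jitter: only apply the median if it moved past the threshold.
    if (Math.abs(median - this.applied) > MIN_CHANGE_SECONDS) {
      this.applied = median;
    }
    return this.applied;
  }
}
```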

&lt;p&gt;But honestly, delay measurement ended up being less interesting than what came next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Did Not Bake Delay Into Stored Timestamps
&lt;/h2&gt;

&lt;p&gt;One of the most important design decisions was to keep timestamps pure.&lt;/p&gt;

&lt;p&gt;I do not store delayed timestamps in the database.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;createdAt&lt;/code&gt;, &lt;code&gt;lockedAt&lt;/code&gt;, &lt;code&gt;resolvedAt&lt;/code&gt;, and &lt;code&gt;closesAt&lt;/code&gt; all remain real server-side times.&lt;/p&gt;

&lt;p&gt;Delay is applied only at consumption and validation time.&lt;/p&gt;

&lt;p&gt;That matters because different viewers can be on different effective delays for the same prediction.&lt;/p&gt;

&lt;p&gt;If I had baked delay into stored timestamps, I would have mixed transport concerns with business state and made the whole system much harder to reason about.&lt;/p&gt;

&lt;p&gt;Instead, delay is applied in a few explicit places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSE buffering on the server for streamer delay&lt;/li&gt;
&lt;li&gt;client event queueing for viewer-specific delay&lt;/li&gt;
&lt;li&gt;vote validation for grace periods&lt;/li&gt;
&lt;li&gt;countdown rendering for display&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That separation kept the model sane.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not All Events Can Be Delayed the Same Way
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumrtvunzpzwh7ltqy2iu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumrtvunzpzwh7ltqy2iu.png" alt=" " width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first glance, you might think: just delay every SSE event by the same amount.&lt;/p&gt;

&lt;p&gt;That does not work.&lt;/p&gt;

&lt;p&gt;Some events are structural:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PREDICTION_CREATED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PREDICTION_UPDATED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GAME_STATE_UPDATED&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those can be delayed safely.&lt;/p&gt;

&lt;p&gt;But some events are highly dynamic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ODDS_UPDATE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VOTE_PLACED&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I delayed those blindly on the server, viewers would get stale odds and activity could arrive for predictions they had not seen yet.&lt;/p&gt;

&lt;p&gt;So I ended up with a hybrid approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;most game-driven events are delayed server-side by streamer delay&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ODDS_UPDATE&lt;/code&gt; and &lt;code&gt;VOTE_PLACED&lt;/code&gt; are sent immediately&lt;/li&gt;
&lt;li&gt;the client buffers them until the related prediction is actually visible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conceptually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;immediateEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ODDS_UPDATE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;VOTE_PLACED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delayedEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;everythingElse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;streamerDelayMs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For odds updates, I only keep the latest pending update per prediction.&lt;/p&gt;

&lt;p&gt;For vote activity, I queue events until the corresponding prediction card has been seen.&lt;/p&gt;

&lt;p&gt;That way the UI stays causally consistent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no odds for invisible predictions&lt;/li&gt;
&lt;li&gt;no activity for predictions that do not exist yet&lt;/li&gt;
&lt;li&gt;no stale creation-time odds for viewers watching with delay&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also enrich delayed &lt;code&gt;PREDICTION_CREATED&lt;/code&gt; events with the latest odds right before sending them. That fixes another subtle issue: by the time a delayed viewer sees a new prediction, the original odds may already be outdated.&lt;/p&gt;

&lt;p&gt;This was probably the most important architectural shift in the whole feature.&lt;/p&gt;

&lt;p&gt;I stopped thinking in terms of "delay everything" and started thinking in terms of "preserve causal order."&lt;/p&gt;

&lt;h2&gt;
  
  
  LIVE Predictions Were a Special Edge Case
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf280n9sehvsm877jl3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf280n9sehvsm877jl3e.png" alt=" " width="737" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some predictions are created in the middle of a round and have a very short countdown window.&lt;/p&gt;

&lt;p&gt;For example, a bomb-related prediction might only have a few seconds of real voting time.&lt;/p&gt;

&lt;p&gt;That creates a strange edge case: by the time the delayed &lt;code&gt;PREDICTION_CREATED&lt;/code&gt; event is ready to be delivered, the prediction may already be resolved.&lt;/p&gt;

&lt;p&gt;So before sending delayed live predictions, I check their latest status.&lt;/p&gt;

&lt;p&gt;If a prediction is already resolved or canceled, I skip sending the stale creation event altogether.&lt;/p&gt;
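
&lt;p&gt;The check itself is tiny (status names here are assumed, not the real enum):&lt;/p&gt;

```typescript
// Sketch of the pre-send status check for delayed live predictions.
function shouldDeliverCreated(latestStatus: string): boolean {
  if (latestStatus === "RESOLVED") return false; // already ended: skip the stale card
  if (latestStatus === "CANCELED") return false;
  return true;
}
```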

&lt;p&gt;That avoids showing viewers nonsense like a fresh prediction card for something that already ended.&lt;/p&gt;

&lt;p&gt;This is the kind of bug that makes a realtime product feel fake even when the backend is technically correct.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Backend Also Has to Be Delay-Aware
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forsqzb7tgyaj6pmaudmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forsqzb7tgyaj6pmaudmj.png" alt=" " width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Synchronizing the UI is only half the problem.&lt;/p&gt;

&lt;p&gt;The backend also has to accept votes in a way that matches what the viewer actually saw.&lt;/p&gt;

&lt;p&gt;So vote validation uses a delay context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;graceWindowMs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;streamerDelayMs&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;userDelayMs&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;bufferMs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;votable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;closesAt&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;graceWindowMs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That context is used for two important checks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;whether a countdown-based prediction is still votable&lt;/li&gt;
&lt;li&gt;whether a locked or even recently resolved prediction is still inside a grace window for delayed viewers&lt;/li&gt;
&lt;/ol&gt;
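&lt;p&gt;As a rough sketch, those two checks could live in one function. The state names and field names below are my own placeholders, not the actual schema; only the grace-window formula comes from the snippet above:&lt;/p&gt;

```typescript
// Hypothetical sketch of delay-aware vote validation.
// streamerDelayMs / userDelayMs / bufferMs follow the earlier snippet;
// PredictionState and the field names are illustrative placeholders.
type PredictionState = "open" | "locked" | "resolved";

interface DelayContext {
  streamerDelayMs: number;
  userDelayMs: number;
  bufferMs: number;
}

function canAcceptVote(
  state: PredictionState,
  closesAtMs: number,
  resolvedAtMs: number | null,
  nowMs: number,
  ctx: DelayContext
): boolean {
  const graceWindowMs = ctx.streamerDelayMs + ctx.userDelayMs + ctx.bufferMs;

  // Check 1: a countdown-based prediction is still votable while the
  // viewer's delayed view of the close time has not passed yet.
  // Equivalent to: now is earlier than closesAt + graceWindow.
  if (state === "open") {
    return graceWindowMs > nowMs - closesAtMs;
  }

  // Check 2: a locked or recently resolved prediction is still inside
  // the grace window for delayed viewers.
  if (state === "locked") {
    return graceWindowMs > nowMs - closesAtMs;
  }
  if (state === "resolved") {
    if (resolvedAtMs === null) return false;
    return graceWindowMs > nowMs - resolvedAtMs;
  }
  return false;
}
```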

&lt;p&gt;This is what makes the system feel fair.&lt;/p&gt;

&lt;p&gt;Without it, the UI could say "you still have time" while the API says "too late".&lt;/p&gt;

&lt;p&gt;That mismatch kills trust immediately.&lt;/p&gt;

&lt;p&gt;This part is easy to miss when talking about stream delay. People usually think about rendering, but fairness is really a validation problem too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time Sync Matters More Than It Looks
&lt;/h2&gt;

&lt;p&gt;Another subtle problem: if the client clock drifts, your countdown math is wrong.&lt;/p&gt;

&lt;p&gt;So I also added periodic &lt;code&gt;TIME_SYNC&lt;/code&gt; events from the server and calculate countdowns against server-adjusted time instead of raw &lt;code&gt;Date.now()&lt;/code&gt;.&lt;/p&gt;
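&lt;p&gt;A minimal version of that idea looks like this. The &lt;code&gt;TIME_SYNC&lt;/code&gt; payload shape and the half-RTT correction are my assumptions, not the real protocol:&lt;/p&gt;

```typescript
// Hypothetical clock-sync helper: keep an offset from periodic TIME_SYNC
// events and compute countdowns against server-adjusted time instead of
// raw Date.now(). The event shape (serverTimeMs, rttMs) is an assumption.
let serverOffsetMs = 0;

function onTimeSync(serverTimeMs: number, rttMs: number): void {
  // Assume the server timestamp was taken mid-flight: correct by half
  // the round trip, then store the offset against the local clock.
  serverOffsetMs = serverTimeMs + rttMs / 2 - Date.now();
}

function serverNow(): number {
  return Date.now() + serverOffsetMs;
}

function countdownMs(closesAtMs: number): number {
  // Countdown math runs on server-adjusted time, clamped at zero.
  return Math.max(0, closesAtMs - serverNow());
}
```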

&lt;p&gt;That sounds minor, but once you are combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backend event timestamps&lt;/li&gt;
&lt;li&gt;streamer delay&lt;/li&gt;
&lt;li&gt;viewer delay&lt;/li&gt;
&lt;li&gt;countdown windows&lt;/li&gt;
&lt;li&gt;grace periods&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...even small clock differences become visible in the UI.&lt;/p&gt;

&lt;p&gt;So there are really three layers here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the game event timeline&lt;/li&gt;
&lt;li&gt;the delayed delivery timeline&lt;/li&gt;
&lt;li&gt;the client's local clock&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those are not reconciled carefully, countdowns and vote windows drift apart.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Final Model Looks Like
&lt;/h2&gt;

&lt;p&gt;The current system is basically this:&lt;/p&gt;

&lt;h3&gt;
  
  
  Server-side
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;streamer config defines the base delay&lt;/li&gt;
&lt;li&gt;API buffers most SSE events by that amount&lt;/li&gt;
&lt;li&gt;delayed prediction creation gets refreshed with latest odds&lt;/li&gt;
&lt;li&gt;stale live creations are filtered out&lt;/li&gt;
&lt;/ul&gt;
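&lt;p&gt;The buffering step can be sketched as a simple delayed delivery. This is a simplification of what the API does (it skips the odds refresh and stale filtering); the names are mine:&lt;/p&gt;

```typescript
// Hypothetical sketch of server-side SSE buffering: hold each event for
// the streamer's configured delay before emitting it. The timestamp on
// the event stays pure; only the delivery time is shifted.
interface StreamEvent {
  type: string;
  occurredAtMs: number; // pure game-time timestamp, never delay-adjusted
  payload: unknown;
}

function deliveryDelayMs(
  occurredAtMs: number,
  streamerDelayMs: number,
  nowMs: number
): number {
  // How long to hold the event so it surfaces streamerDelayMs after
  // the moment it actually happened.
  return Math.max(0, occurredAtMs + streamerDelayMs - nowMs);
}

function bufferEvent(
  event: StreamEvent,
  streamerDelayMs: number,
  emit: (e: StreamEvent) => void
): void {
  const holdMs = deliveryDelayMs(event.occurredAtMs, streamerDelayMs, Date.now());
  setTimeout(() => emit(event), holdMs);
}
```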

&lt;h3&gt;
  
  
  Client-side
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;web reads Twitch player latency via &lt;code&gt;hlsLatencyBroadcaster&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;extension reads similar latency from Twitch extension context&lt;/li&gt;
&lt;li&gt;a median-filtered calculator derives additional viewer delay&lt;/li&gt;
&lt;li&gt;events are queued client-side by that extra amount&lt;/li&gt;
&lt;li&gt;odds and activity are buffered until their prediction is visible&lt;/li&gt;
&lt;/ul&gt;
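&lt;p&gt;The median filter is worth showing because a plain average chases spiky latency readings. This is a sketch under my own assumptions (window size, seconds-to-ms conversion), not the actual calculator:&lt;/p&gt;

```typescript
// Hypothetical median filter over player latency samples. Twitch reports
// hlsLatencyBroadcaster in seconds; the window size of 9 is an assumption.
const WINDOW = 9;
const samples: number[] = [];

function addLatencySample(latencySeconds: number): number {
  samples.push(latencySeconds);
  if (samples.length > WINDOW) samples.shift();
  // The median ignores the occasional spiky reading that a mean would
  // chase, so the derived viewer delay stays stable.
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)] * 1000; // viewer delay in ms
}
```

That stable number is what the client-side event queue uses as the "extra" delay on top of the streamer's base delay.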

&lt;h3&gt;
  
  
  Validation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;vote windows are checked against streamer delay + viewer delay + buffer&lt;/li&gt;
&lt;li&gt;countdown rendering uses server-synced time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is that two viewers watching the same stream can see the same prediction at different times, and both still get a coherent experience.&lt;/p&gt;

&lt;p&gt;That was the real goal.&lt;/p&gt;

&lt;p&gt;Not "perfect clocks".&lt;/p&gt;

&lt;p&gt;Just a system where predictions feel aligned with what each viewer is actually seeing.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Few Things I Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;stream delay is not one number; it is a layered system&lt;/li&gt;
&lt;li&gt;measuring delay is easier than preserving event ordering under delay&lt;/li&gt;
&lt;li&gt;keeping timestamps pure is much easier than storing delay-adjusted values&lt;/li&gt;
&lt;li&gt;you cannot treat all realtime events equally; some need hybrid delivery&lt;/li&gt;
&lt;li&gt;fairness lives in backend validation as much as in frontend rendering&lt;/li&gt;
&lt;li&gt;Twitch already exposed the latency data I needed - I just found it later than I should have&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I still have more technical stories here, especially around dynamic odds under delayed delivery. But the delay problem was the first one that forced me to stop thinking about "realtime" as one timeline shared by everyone.&lt;/p&gt;

&lt;p&gt;On Twitch, realtime is per viewer.&lt;/p&gt;

&lt;p&gt;And once I accepted that, the architecture got much better.&lt;/p&gt;




&lt;p&gt;If you are building anything interactive on top of live streams, I would love to hear how you handled playback delay, event ordering, or fairness windows.&lt;/p&gt;

</description>
      <category>cs2</category>
      <category>games</category>
      <category>devchallenge</category>
      <category>twitch</category>
    </item>
  </channel>
</rss>
