<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marco Messina</title>
    <description>The latest articles on DEV Community by Marco Messina (@marcotwzrd).</description>
    <link>https://dev.to/marcotwzrd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3935732%2F1b406955-d403-4841-9743-fe8beaaf3549.png</url>
      <title>DEV Community: Marco Messina</title>
      <link>https://dev.to/marcotwzrd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marcotwzrd"/>
    <language>en</language>
    <item>
      <title>Backtesting an ICT strategy at 184× speed: timezone-cache + bisect lookup</title>
      <dc:creator>Marco Messina</dc:creator>
      <pubDate>Sun, 17 May 2026 04:02:30 +0000</pubDate>
      <link>https://dev.to/marcotwzrd/backtesting-an-ict-strategy-at-184x-speed-timezone-cache-bisect-lookup-2pap</link>
      <guid>https://dev.to/marcotwzrd/backtesting-an-ict-strategy-at-184x-speed-timezone-cache-bisect-lookup-2pap</guid>
      <description>&lt;p&gt;I have been running an ICT-based reversal strategy live on US500 for a few months. The strategy itself is fine, but the bottleneck was nowhere near the strategy logic: it was in the backtest harness. A 30-day single-instrument simulation took &lt;strong&gt;27 minutes&lt;/strong&gt; when I wrote the first version. Iterating on parameters was painful, and exploring alternative setups was effectively impossible.&lt;/p&gt;

&lt;p&gt;After two evenings of profiling and one targeted change, the same 30-day backtest now runs in &lt;strong&gt;8.9 seconds&lt;/strong&gt;. That is a 184× speedup, and the change was almost embarrassingly small.&lt;/p&gt;

&lt;p&gt;This is the story of what was slow, why it was slow, and the cache-plus-bisect pattern that fixed it. If you write your own backtesting code in Python, you are very likely leaving a similar speedup on the table.&lt;/p&gt;

&lt;h2&gt;The setup&lt;/h2&gt;

&lt;p&gt;The strategy is a Smart Money Reversal-style entry with LRB (liquidity-run break) re-entries. The harness is a fairly standard event-driven loop: for each minute bar in the historical data, we evaluate signal conditions, manage open positions, check pyramid re-entries, and update P&amp;amp;L. The data is roughly 7000 minute bars per US500 trading day, which across 30 days comes to around 210k bars per simulation.&lt;/p&gt;

&lt;p&gt;210k bars in 27 minutes is 130 bars per second, which is laughable for what is essentially a tight numeric loop in Python. Even with pandas overhead I expected 10× better. Time to profile.&lt;/p&gt;

&lt;h2&gt;The profiler told a clear story&lt;/h2&gt;

&lt;p&gt;I dropped cProfile in front of the harness and got the breakdown. The top function by cumulative time was not the strategy evaluator or the order manager. It was &lt;code&gt;pandas.tslib.tz_convert&lt;/code&gt;, called from inside the bar iterator. Specifically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bar&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iterrows&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;local_ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tz_convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;America/New_York&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;is_in_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;local_ts&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
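
&lt;p&gt;For reference, the profiling run itself was nothing fancy. A minimal sketch, where &lt;code&gt;run_backtest&lt;/code&gt; is a stand-in name for the actual harness entry point:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import cProfile
import pstats

# Profile one full simulation. run_backtest(bars) is a stand-in for the
# real harness entry point, not its actual name.
profiler = cProfile.Profile()
profiler.enable()
run_backtest(bars)
profiler.disable()

# Sort by cumulative time so slow callees and their callers both surface.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;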



&lt;p&gt;The naive code converts the bar timestamp to NY time on every single iteration. pandas timestamp conversion is not free: it runs through a tzdata lookup, calculates DST offsets, and allocates new Timestamp objects. On a single conversion call that is microseconds, no problem. Called 210k times per backtest, you are suddenly spending eight or nine minutes inside pandas' internal C extensions before even hitting your own code.&lt;/p&gt;

&lt;p&gt;The second-slowest function was the lookup over a sorted list of session boundaries, which I had naively written as a linear scan instead of a &lt;code&gt;bisect_left&lt;/code&gt;. That was eating another four minutes per simulation. The third was DataFrame slicing to fetch the previous N bars, written as &lt;code&gt;df.loc[prev_ts:ts]&lt;/code&gt;, which performs a label-based index lookup on every call.&lt;/p&gt;

&lt;p&gt;So three independent issues, all rooted in the same mistake: I was doing in the hot loop what should have been done once at the start.&lt;/p&gt;

&lt;h2&gt;The fix, part one: timezone cache&lt;/h2&gt;

&lt;p&gt;Instead of converting every bar timestamp on the fly, I precomputed a single column of NY-local timestamps when loading the historical data, and dropped the conversion entirely from the hot loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before (per-iteration conversion, killing perf)
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bar&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iterrows&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;local_ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tz_convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;America/New_York&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;minute&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;local_ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hour&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;local_ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;minute&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;NY_OPEN&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;minute&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;NY_CLOSE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="c1"&gt;# After (one-shot conversion at load, then plain int comparison)
&lt;/span&gt;&lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ny_minute&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tz_convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;America/New_York&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hour&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;minute&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;NY_MINUTES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ny_minute&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# In the hot loop:
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;NY_OPEN&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;NY_MINUTES&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;NY_CLOSE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The session check becomes a single integer comparison against a numpy int. Zero pandas overhead, zero timezone object allocation, zero string lookup. The pre-computation cost is essentially free: it runs once at the start of the simulation, in under 200ms for a month of data.&lt;/p&gt;

&lt;p&gt;This change alone took the backtest from 27 minutes down to about 4 minutes. A nice 7× speedup, but I was not done.&lt;/p&gt;
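
&lt;p&gt;As an aside, the &lt;code&gt;.map(lambda ...)&lt;/code&gt; step in the snippet above can itself be vectorized, since a &lt;code&gt;DatetimeIndex&lt;/code&gt; exposes &lt;code&gt;.hour&lt;/code&gt; and &lt;code&gt;.minute&lt;/code&gt; as integer arrays. An equivalent sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Same one-shot conversion without the per-element lambda:
# DatetimeIndex.hour and .minute are already vectorized integer arrays.
ny_index = bars.index.tz_convert('America/New_York')
bars['ny_minute'] = ny_index.hour * 60 + ny_index.minute
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;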

&lt;h2&gt;The fix, part two: bisect over sorted boundaries&lt;/h2&gt;

&lt;p&gt;The strategy uses session-relative reference points (NY session open, midnight UTC, last hour of trading, etc.). My naive implementation rebuilt these references for every bar by walking back through the data. The right fix is to precompute boundary timestamps as a sorted array and bisect into them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bisect&lt;/span&gt;

&lt;span class="c1"&gt;# Precompute once
&lt;/span&gt;&lt;span class="n"&gt;ny_session_starts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ny_minute&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;NY_OPEN&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_list&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# In the hot loop, find the most recent session start
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;session_start_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bisect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bisect_right&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ny_session_starts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ny_session_starts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;bisect_right&lt;/code&gt; is O(log n), where n is the number of session starts. For 30 days that is around 22 (US500 trading days), so log2(22) gives about 4.5 comparisons per lookup, against roughly 11 on average for the original linear walk. The win per call is modest, but the constant factor is large: &lt;code&gt;bisect&lt;/code&gt; is a C-level builtin, while my original loop ran at interpreter level.&lt;/p&gt;
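
&lt;p&gt;If even one bisect call per bar bothers you, the whole bar-to-session mapping can be precomputed in a single vectorized pass. A sketch under the same setup, reusing &lt;code&gt;ny_session_starts&lt;/code&gt; from above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np
import pandas as pd

# For every bar, the index of the most recent session start,
# computed once instead of one bisect call per bar.
starts_i8 = pd.DatetimeIndex(ny_session_starts).asi8  # int64 nanoseconds
bars_i8 = bars.index.asi8
start_idx = np.searchsorted(starts_i8, bars_i8, side='right') - 1
# start_idx[i] == -1 means bar i falls before the first session start.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;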

&lt;p&gt;The bisect change brought the backtest down to about 45 seconds, a 36× total speedup. Still not done.&lt;/p&gt;

&lt;h2&gt;The fix, part three: numpy-native bar windows&lt;/h2&gt;

&lt;p&gt;The strategy needs to evaluate features over rolling windows of recent bars (last 5, last 20, last 60). My original code was doing &lt;code&gt;bars.loc[prev_ts:ts]&lt;/code&gt; for each window for each bar, which does an index lookup and returns a DataFrame slice. DataFrame slicing has noticeable per-call overhead in pandas.&lt;/p&gt;

&lt;p&gt;The fix was to precompute the entire OHLC data as numpy arrays at load time, and then slice them by integer index in the hot loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Precompute
&lt;/span&gt;&lt;span class="n"&gt;OPENS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;open&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;HIGHS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;high&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;LOWS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;low&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;CLOSES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;close&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# In the hot loop (i is the current bar index)
&lt;/span&gt;&lt;span class="n"&gt;last_20_highs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;HIGHS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;last_20_lows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LOWS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NumPy slicing is O(1) view creation with no copy. Pandas slicing on a DatetimeIndex with the same intent allocates intermediate objects on every call. For a single call the difference is small. Multiplied by 210k bars and multiple window sizes per bar, the difference is dramatic.&lt;/p&gt;
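
&lt;p&gt;The view-versus-copy point is easy to verify. A tiny illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

a = np.arange(210_000)
window = a[100:120]         # O(1): creates a view, copies no data
print(window.base is a)     # True: the slice shares memory with the parent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;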

&lt;p&gt;This last fix brought the final number to 8.9 seconds. From 27 minutes start to 8.9 seconds end, the total speedup is 182×, or 184× depending on how you round the original measurement.&lt;/p&gt;

&lt;h2&gt;What this unlocks&lt;/h2&gt;

&lt;p&gt;A 184× speedup is not just nice to have. It changes what is possible in strategy research. With a 27-minute baseline, exploring a parameter grid of 20 combinations took 9 hours. You think hard before launching the run, you wait until next morning, you batch experiments carefully. With a 9-second baseline, the same 20-combination grid finishes in 3 minutes. You explore freely, you try ideas that would have been too expensive to test before, you actually see the parameter landscape.&lt;/p&gt;
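
&lt;p&gt;Concretely, a 20-combination sweep is just a nested loop over the harness. A hypothetical sketch, where &lt;code&gt;run_backtest&lt;/code&gt; and the parameter names are illustrative stand-ins rather than the strategy's real interface:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from itertools import product

# Hypothetical 4 x 5 = 20 combination grid; parameter names are
# illustrative, not the strategy's real knobs.
results = {}
for stop_mult, lookback in product([1.0, 1.5, 2.0, 2.5], [5, 10, 20, 40, 60]):
    results[(stop_mult, lookback)] = run_backtest(
        bars, stop_mult=stop_mult, lookback=lookback
    )
# At roughly 9 seconds per run, the full grid finishes in about 3 minutes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;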

&lt;p&gt;For me, the practical consequence has been a faster cycle on the live strategy that runs at &lt;a href="https://tgsignals.com" rel="noopener noreferrer"&gt;tgsignals.com&lt;/a&gt;, the production system I run on the US500 NY session. Strategy ideas that would have taken a week of backtest babysitting now take an afternoon. That difference compounds.&lt;/p&gt;

&lt;h2&gt;The general lesson&lt;/h2&gt;

&lt;p&gt;The bigger pattern here is that Python performance bottlenecks for backtesting almost always live in the same three places: timezone handling, slow lookups inside hot loops, and pandas slicing where numpy slicing would do. None of these are exotic. Any decent Python developer profiling the code would find them. The reason they survive in real codebases is that the first version of a backtest is written to be correct, not fast, and once it is correct nobody bothers to optimize.&lt;/p&gt;

&lt;p&gt;Profile your hot loop. Convert timezones once. Bisect into sorted arrays. Use numpy slicing instead of pandas slicing when you can. None of these are hard, and any one of them might give you the 10× that turns "I will run this overnight" into "I will run it now."&lt;/p&gt;

&lt;p&gt;The 184× I got was the lucky combination of all three landing on the same codebase. Your mileage will vary, but most backtest harnesses I have seen have at least one of these wins waiting to be picked up.&lt;/p&gt;

</description>
      <category>algotrading</category>
      <category>python</category>
      <category>performance</category>
      <category>backtesting</category>
    </item>
    <item>
      <title>From paid-credit chat rooms to Telegram-direct DMs: how Italy's adult chat economy quietly restructured (2010-2026)</title>
      <dc:creator>Marco Messina</dc:creator>
      <pubDate>Sun, 17 May 2026 03:44:32 +0000</pubDate>
      <link>https://dev.to/marcotwzrd/from-paid-credit-chat-rooms-to-telegram-direct-dms-how-italys-adult-chat-economy-quietly-hcl</link>
      <guid>https://dev.to/marcotwzrd/from-paid-credit-chat-rooms-to-telegram-direct-dms-how-italys-adult-chat-economy-quietly-hcl</guid>
      <description>&lt;p&gt;Adult creator economy stories tend to focus on US-centric platforms (OnlyFans, Fansly, Patreon). Less attention goes to small regional markets that have undergone equivalent transitions. Italy's adult-chat industry is one such market, and it has a story worth dissecting if you care about how creator economies disintermediate over time.&lt;/p&gt;

&lt;h2&gt;What the old market looked like&lt;/h2&gt;

&lt;p&gt;For two decades, Italy had a visible adult-chat industry built on web platforms with paid-credit business models. Users bought credit packs, entered a multi-user web chat room, and consumed credits during conversation. The platforms employed chatters in shifts to maintain conversation availability during low-traffic hours. Brand recognition was built through late-night TV advertising, with slogans that became culturally embedded in Italian internet language.&lt;/p&gt;

&lt;p&gt;The economic structure was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform took roughly 30-40% of every credit transaction&lt;/li&gt;
&lt;li&gt;Chatters were paid hourly plus a share of credit consumption during their shifts&lt;/li&gt;
&lt;li&gt;Users had no continuity: returning users were not recognized, every session started from zero&lt;/li&gt;
&lt;li&gt;Brand-platform coupling was tight: leaving the platform meant losing all relationships and history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was a centralized marketplace model with the platform as both gatekeeper and intermediary. It worked well enough for the platforms during the era of expensive customer acquisition (TV advertising amortized over high-credit-purchase users), but it had three structural weaknesses that became fatal as conditions changed.&lt;/p&gt;

&lt;h2&gt;The three structural weaknesses&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Weakness one: high CAC dependency.&lt;/strong&gt; The model required new users to keep entering the funnel because per-user lifetime value capped out at a relatively low ceiling (credit fatigue, time fatigue, an eventual move to other entertainment). When TV advertising lost effectiveness in the late 2010s, the customer acquisition cost per high-LTV user rose faster than the platforms could compensate, compressing margins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weakness two: trust erosion via paid chatters.&lt;/strong&gt; The presence of platform-employed chatters in chat rooms was an open secret by the mid 2010s. Users gradually learned to detect them (faster response times, scripted phrases, lack of personal continuity), and the perceived authenticity of the service eroded. Trust erosion in a marketplace is a slow then sudden process: users tolerate suspicion for years, then leave en masse when an alternative becomes credible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weakness three: creator lock-in friction.&lt;/strong&gt; Chatters working on these platforms could not easily migrate clients to direct relationships, because the platform deliberately prevented external contact information sharing. This kept clients trapped in the platform's pricing model but also kept creators trapped at platform-determined earnings. As Telegram-based alternatives emerged, creators had strong incentive to move and bring clients with them.&lt;/p&gt;

&lt;h2&gt;What replaced it&lt;/h2&gt;

&lt;p&gt;Starting around 2020, a parallel creator-direct model began emerging on Telegram. The mechanics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Individual Italian creators ran their own Telegram channels or direct-DM businesses&lt;/li&gt;
&lt;li&gt;No platform intermediation, no credit system, no shift-based work&lt;/li&gt;
&lt;li&gt;Discovery happened through SEO landing pages, social posts, word of mouth across creators&lt;/li&gt;
&lt;li&gt;Payment happened directly between creator and client, using whatever method both preferred&lt;/li&gt;
&lt;li&gt;Brand entities began emerging that owned discovery layers but did not own creators&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a textbook marketplace disintermediation: the centralized platform got replaced by a discovery layer plus direct creator-client relationships. The new model has different economics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform cut on transactions: 0% (no platform in the loop)&lt;/li&gt;
&lt;li&gt;Creator hourly equivalent earnings: 2-3x the old model&lt;/li&gt;
&lt;li&gt;Client cost per equivalent service: down 30-50%&lt;/li&gt;
&lt;li&gt;User continuity: high (same creator, ongoing relationship, recurring revenue)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where did the 30-40% platform margin go? Roughly half captured by creators (higher earnings), half captured by clients (lower prices). The classic disintermediation surplus split.&lt;/p&gt;

&lt;h2&gt;Brand migration as a strategic problem&lt;/h2&gt;

&lt;p&gt;The most interesting part of this story is what happened to the old brands. Italian adult chat had developed strong consumer brand recognition during the TV advertising era. Names like &lt;em&gt;Chat Monella&lt;/em&gt; were category-defining entities, similar to how Kleenex defines tissues in English-speaking markets.&lt;/p&gt;

&lt;p&gt;When the underlying paradigm shifted to Telegram-mediated DMs, these brands faced a strategic choice: stay coupled to the old model and lose relevance as users left, or attempt to migrate the brand value into the new paradigm. The brands that handled the migration well executed a dual-property strategy: keep the legacy website running for users still on the old paradigm, and spawn a parallel new property that uses the brand name but routes traffic to Telegram-based creators.&lt;/p&gt;

&lt;p&gt;An example of this pattern is the relationship between chatmonella.it (the legacy property, still running on the credit-based web chatroom model) and &lt;a href="https://www.chatmonella.com" rel="noopener noreferrer"&gt;chatmonella.com&lt;/a&gt; (the modern property, which functions as a discovery layer pointing users to Telegram-based creators under the same brand umbrella). The two coexist, target different user segments (older users staying on legacy paradigm, newer users entering through the modern paradigm), and let the brand straddle the transition without forcing a hard cutover.&lt;/p&gt;

&lt;p&gt;This dual-property pattern is replicable in other categories undergoing similar paradigm shifts. The key insight is that brand value can outlive paradigm changes if you decouple the brand from the specific implementation it was originally built on, and you give yourself permission to run multiple implementations under the same brand simultaneously.&lt;/p&gt;

&lt;h2&gt;What generalizes to other markets&lt;/h2&gt;

&lt;p&gt;The Italian adult chat case has some lessons that generalize to creator-economy markets internationally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized marketplaces with high take rates are perpetually under disintermediation pressure&lt;/strong&gt; once a credible direct alternative exists. Once a creator can capture meaningfully more revenue by going direct, eventually most will.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trust erosion via platform-employed pseudo-users is a delayed but fatal failure mode.&lt;/strong&gt; When platforms compensate for marketplace thinness by paying participants to simulate presence, they buy short-term liquidity at the cost of long-term trust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distribution platforms beat feature-rich verticals.&lt;/strong&gt; Telegram had no features specific to adult chat. It just had distribution and a native one-to-one primitive. That was enough to displace specialized platforms that had years of feature investment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brand value is portable across paradigm shifts if you act before the brand decays.&lt;/strong&gt; Brands that migrated early (while still relevant) survived. Brands that waited for the migration to complete are now irrelevant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disintermediation surplus typically splits roughly evenly between producers and consumers&lt;/strong&gt;, with some accruing to whoever owns the new discovery layer. This is consistent with theoretical predictions and matches observations from larger markets.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Italian adult chat is a small market, but it is a clean small market: self-contained, observable, with a compressed time horizon. For anyone studying creator-economy dynamics, it is a useful case to examine, because the data is unusually legible and the lessons appear to generalize.&lt;/p&gt;

</description>
      <category>italy</category>
      <category>creatoreconomy</category>
      <category>marketplaces</category>
      <category>retrospective</category>
    </item>
  </channel>
</rss>
