<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joseph Boone</title>
    <description>The latest articles on DEV Community by Joseph Boone (@tavari).</description>
    <link>https://dev.to/tavari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3827905%2Fa1ad1a92-e5a4-4110-8b34-80c191d448f0.gif</url>
      <title>DEV Community: Joseph Boone</title>
      <link>https://dev.to/tavari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tavari"/>
    <language>en</language>
    <item>
      <title>1m Tokens (&amp; WebSocket)</title>
      <dc:creator>Joseph Boone</dc:creator>
      <pubDate>Thu, 19 Mar 2026 21:32:25 +0000</pubDate>
      <link>https://dev.to/tavari/1m-tokens-websocket-1f0c</link>
      <guid>https://dev.to/tavari/1m-tokens-websocket-1f0c</guid>
      <description>&lt;p&gt;Greetings readers, I made a threading engine with many optimizations (including ML) and WebSocket task controls per operation.  &lt;/p&gt;

&lt;p&gt;Even when computing a slowly converging series like the Leibniz series for PI across 1 million token executions, all tasks resolved as expected in ~200 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ── LAYER 0: TERM TOKENS ──────────────────────────────────────────────────────
&lt;/span&gt;&lt;span class="nd"&gt;@task_token_guard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;operation_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pi_term&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;weight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;light&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;compute_pi_term&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Compute a single Leibniz term: (-1)^n / (2n + 1)
    Returns as string to preserve Decimal precision across token boundary.
    Light weight — 1,000,000 of these fire simultaneously.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nf"&gt;getcontext&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;prec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DECIMAL_PRECISION&lt;/span&gt;
    &lt;span class="n"&gt;sign&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
    &lt;span class="n"&gt;term&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sign&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nc"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;term&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# ── LAYER 1: CHUNK TOKENS ─────────────────────────────────────────────────────
&lt;/span&gt;&lt;span class="nd"&gt;@task_token_guard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;operation_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pi_chunk&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;weight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;light&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;sum_chunk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;term_strings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Sum a batch of Leibniz terms.
    Receives resolved term strings from Layer 0 tokens.
    Light weight — 1,000 of these, each summing 1,000 terms.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nf"&gt;getcontext&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;prec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DECIMAL_PRECISION&lt;/span&gt;
    &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;term_strings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# ── LAYER 2: PARTIAL TOKENS ───────────────────────────────────────────────────
&lt;/span&gt;&lt;span class="nd"&gt;@task_token_guard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;operation_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pi_partial&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;weight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;medium&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;sum_partial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk_strings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Sum a batch of chunk sums.
    Receives resolved chunk strings from Layer 1 tokens.
    Medium weight — 10 of these, each summing 100 chunks.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nf"&gt;getcontext&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;prec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DECIMAL_PRECISION&lt;/span&gt;
    &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunk_strings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The Leibniz series was chosen deliberately for how slowly it converges: it needs ~10 million terms for 7 correct digits. That makes it a good stress test: maximum token volume, minimum mathematical payoff.&lt;/p&gt;
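&lt;p&gt;The convergence claim checks out numerically: the truncation error of the Leibniz sum shrinks roughly like 1/n, so each extra correct digit costs 10x more terms. Here's a small standalone snippet (independent of TokenGate) that makes the slow convergence visible:&lt;/p&gt;

```python
from decimal import Decimal, getcontext

def leibniz_pi(n_terms: int) -> Decimal:
    """Sum the first n_terms of the Leibniz series: pi = 4*(1 - 1/3 + 1/5 - ...)."""
    getcontext().prec = 30
    total = Decimal(0)
    sign = Decimal(1)
    for n in range(n_terms):
        total += sign / Decimal(2 * n + 1)
        sign = -sign
    return 4 * total

# Truncation error after n terms is on the order of 1/n, so even
# 100,000 terms yield only about 5 correct digits of pi.
print(leibniz_pi(100_000))
```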

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk3ei5arzovfacfrunv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk3ei5arzovfacfrunv7.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Note: 64 workers with SMT enabled is only ~7% faster on a 7800X3D; more workers don't always mean more throughput, especially for micro-ops, where execution-port contention becomes the real ceiling.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3m5s20g23ene0obdk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3m5s20g23ene0obdk6.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tokens move through async admission and resolve on pinned workers: CPU-heavy tasks stay on core 1, while light tasks distribute across the rest. Failure nets, duplication safety, and WebSocket controls stop runaway work at the process level.&lt;/p&gt;
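&lt;p&gt;For readers who want a feel for weight-based routing, here is a rough stdlib-only sketch; the &lt;code&gt;admit&lt;/code&gt; helper, the executor split, and the weight tags are illustrative stand-ins, not TokenGate's actual API (real core pinning would use something like &lt;code&gt;os.sched_setaffinity&lt;/code&gt; on Linux):&lt;/p&gt;

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch only: one dedicated worker for heavy tasks, a pool
# for light ones. Names and the weight convention are illustrative.
heavy_executor = ThreadPoolExecutor(max_workers=1)  # stands in for "core 1"
light_executor = ThreadPoolExecutor(max_workers=7)  # the remaining cores

async def admit(func, *args, weight="light"):
    """Async admission: route a task to the executor matching its weight."""
    loop = asyncio.get_running_loop()
    pool = heavy_executor if weight == "heavy" else light_executor
    return await loop.run_in_executor(pool, func, *args)

async def main():
    results = await asyncio.gather(
        admit(sum, range(1_000_000), weight="heavy"),
        *[admit(len, "token") for _ in range(4)],
    )
    print(results)  # [499999500000, 5, 5, 5, 5]

asyncio.run(main())
```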

&lt;p&gt;Take a look at the repo: &lt;/p&gt;

&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/TavariAgent" rel="noopener noreferrer"&gt;
        TavariAgent
      &lt;/a&gt; / &lt;a href="https://github.com/TavariAgent/Py-TokenGate" rel="noopener noreferrer"&gt;
        Py-TokenGate
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Experimental Python concurrency model using token-managed routing
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;TokenGate&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Welcome to the TokenGate repository.&lt;/p&gt;




&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;What it is:&lt;/h3&gt;
&lt;/div&gt;

&lt;p&gt;A small experimental system for routing decorated synchronous functions&lt;br&gt;
through a token-managed concurrency model. It is intended to operate as&lt;br&gt;
its own concurrency workflow rather than alongside normal threading patterns.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;What it is not:&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;It is not presented as production code.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Overview:&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;TokenGate is an exploration of token-managed concurrency: a&lt;br&gt;
concept for coordinating async orchestration with thread-backed&lt;br&gt;
work in a structured way.&lt;/p&gt;
&lt;p&gt;This repository is &lt;strong&gt;a proof of concept, not a finished product&lt;/strong&gt;.&lt;br&gt;
It is experimental, still evolving, and shared in the spirit of&lt;br&gt;
exploration.&lt;/p&gt;
&lt;p&gt;If you'd like the fuller overview, please start here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/TavariAgent/Py-TokenGate/./DOCS/proof-of-concept.md" rel="noopener noreferrer"&gt;Proof of Concept&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If anything here is useful, interesting, or sparks an&lt;br&gt;
idea, that already makes this project worthwhile.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;How to Use (Two Versions, Two Decorators)&lt;/h2&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Note: Do not attempt to decorate an async function.&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;&lt;em&gt;The token decorator uses asyncio, but the decorated function itself should&lt;/em&gt;&lt;/h4&gt;…&lt;/div&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/TavariAgent/Py-TokenGate" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;




</description>
      <category>programming</category>
      <category>python</category>
      <category>webdev</category>
      <category>performance</category>
    </item>
    <item>
      <title>Threading Async Together</title>
      <dc:creator>Joseph Boone</dc:creator>
      <pubDate>Mon, 16 Mar 2026 22:39:11 +0000</pubDate>
      <link>https://dev.to/tavari/threading-async-together-hf1</link>
      <guid>https://dev.to/tavari/threading-async-together-hf1</guid>
      <description>&lt;p&gt;Hello readers,&lt;/p&gt;

&lt;p&gt;I built a proof-of-concept application I call TokenGate. It’s a high-performance async/threaded event bus with deliberately minimalist control mechanisms.&lt;/p&gt;

&lt;p&gt;The core concept is to produce parallelism in concurrent operations through async token gathering and coordinated threading workers.&lt;/p&gt;
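&lt;p&gt;A stripped-down stdlib analogue of that idea (not TokenGate's internals): each submission hands back an awaitable "token", and gathering the tokens resolves the threaded work concurrently.&lt;/p&gt;

```python
import asyncio

def blocking_work(x: int) -> int:
    # Stand-in for an ordinary synchronous function.
    return x * x

async def main() -> list:
    # Each to_thread call yields an awaitable "token" for a threaded task.
    tokens = [asyncio.to_thread(blocking_work, x) for x in range(8)]
    # Gathering the tokens runs them concurrently on worker threads.
    return await asyncio.gather(*tokens)

print(asyncio.run(main()))  # [0, 1, 4, 9, 16, 25, 36, 49]
```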

&lt;p&gt;Here's what "TokenGate" uses to thread an operation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -- Python 3.12 -- #
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;token_system&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;task_token_guard&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;operations_coordinator&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OperationsCoordinator&lt;/span&gt;

&lt;span class="c1"&gt;# 1. Decorated standard synchronous function for threading
&lt;/span&gt;&lt;span class="nd"&gt;@task_token_guard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;operation_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;string_ops&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;weight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;light&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;string_operation_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# This function is now threaded
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Starts the coordinator (through a running loop)
&lt;/span&gt;&lt;span class="n"&gt;coordinator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OperationsCoordinator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;coordinator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# 3. finally or an exception stops on close
&lt;/span&gt;&lt;span class="n"&gt;coordinator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Task tokens are generated by a wrapping decorator.&lt;/p&gt;

&lt;p&gt;Here are some test results for operations in a "release mechanism" that dispatches batches of mixed tasks incrementally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONCURRENCY BURST: Medium x8 | release 1464 (8 tasks)
======================================================================
  Submit spread (barrier jitter): 0.19ms
  Overall wall-clock:             0.009045s
  Min task duration:              0.007818s
  Max task duration:              0.008432s
  Mean task duration:             0.008148s
  Stdev (clustering indicator):   0.000218s

  Duration per task (tight clustering = true concurrency):
    Task 00: 0.007928s  
    Task 01: 0.008000s  
    Task 02: 0.008136s  
    Task 03: 0.008209s  
    Task 04: 0.008432s  
    Task 05: 0.008300s  
    Task 06: 0.008362s  
    Task 07: 0.007818s  

  Serial estimate (sum):  0.065186s
  Actual wall-clock:      0.009045s
  Concurrency ratio:      7.21x  (concurrent)

CONCURRENCY BURST [Medium x8 | release 1464] PASSED
======================================================================
CONCURRENCY WINDOW: Sustained mixed releases (30s)
======================================================================
  Releases:                       1484
  Total tasks:                    11872
  Overall wall-clock:             30.070291s
  Min task duration:              0.001157s
  Max task duration:              0.105874s
  Mean task duration:             0.014970s
  Stdev (clustering indicator):   0.025983s

  Serial estimate (sum):          177.728067s
  Actual wall-clock:              30.070291s
  Sustained concurrency ratio:    5.91x  (concurrent)

CONCURRENCY WINDOW [Sustained mixed releases (30s)] PASSED

CONCURRENCY SUITE COMPLETE.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Concurrency ratios of up to 7.21x were observed on an 8-core CPU with ~32 dynamic workers &lt;em&gt;in ideal conditions&lt;/em&gt;, which is roughly 90% of the 8x ceiling on concurrent operations.)&lt;/p&gt;
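&lt;p&gt;The ratio in the burst report is simply the serial estimate divided by wall-clock time. Re-deriving it from the eight per-task durations printed above (the last digit of the serial estimate differs from the report only by display rounding):&lt;/p&gt;

```python
durations = [0.007928, 0.008000, 0.008136, 0.008209,
             0.008432, 0.008300, 0.008362, 0.007818]
wall_clock = 0.009045

# Serial estimate = what the work would cost run back-to-back;
# concurrency ratio = serial estimate / actual wall-clock.
serial_estimate = sum(durations)
ratio = serial_estimate / wall_clock
print(f"{serial_estimate:.6f}s serial, {ratio:.2f}x concurrency")
```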

&lt;p&gt;I've tested a wide variety of normally threaded operations, and results were delivered as expected.&lt;/p&gt;

&lt;p&gt;It's still just a proof of concept; however, I've used it in various side projects with good results.&lt;/p&gt;

&lt;p&gt;For anyone interested, here's the project on GitHub (with proofs):&lt;/p&gt;

&lt;p&gt;Repo link - &lt;a href="https://github.com/TavariAgent/Py-TokenGate" rel="noopener noreferrer"&gt;https://github.com/TavariAgent/Py-TokenGate&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>opensource</category>
      <category>code</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
