<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: codelluis</title>
    <description>The latest articles on DEV Community by codelluis (@codelluis).</description>
    <link>https://dev.to/codelluis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887405%2F41485020-bc4a-4b8c-835a-232e5ff013b6.jpeg</url>
      <title>DEV Community: codelluis</title>
      <link>https://dev.to/codelluis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codelluis"/>
    <language>en</language>
    <item>
      <title>Per-account task concurrency without a lock service</title>
      <dc:creator>codelluis</dc:creator>
      <pubDate>Thu, 07 May 2026 22:13:04 +0000</pubDate>
      <link>https://dev.to/codelluis/per-account-task-concurrency-without-a-lock-service-1b9k</link>
      <guid>https://dev.to/codelluis/per-account-task-concurrency-without-a-lock-service-1b9k</guid>
      <description>&lt;p&gt;Many background jobs call an external system on behalf of separate accounts,&lt;br&gt;
tenants, or installations. The external system allows parallel calls across&lt;br&gt;
different accounts, but it does not allow two calls for the &lt;em&gt;same&lt;/em&gt; account to&lt;br&gt;
run at the same time.&lt;/p&gt;

&lt;p&gt;That is not a global rate limit. It is concurrency by key: the key might be&lt;br&gt;
&lt;code&gt;account_id&lt;/code&gt;, &lt;code&gt;tenant_id&lt;/code&gt;, or another argument that identifies the shared&lt;br&gt;
quota or state boundary.&lt;/p&gt;

&lt;p&gt;You want the worker pool busy across many accounts, while each account stays&lt;br&gt;
serial. Without that guard, two workers eventually pick up work for the same&lt;br&gt;
account in parallel. The external system may throttle the account, reject the&lt;br&gt;
second call, or leave you with a partial update to reconcile.&lt;/p&gt;

&lt;p&gt;The usual fixes are external locks, one queue per account, or retry/backoff&lt;br&gt;
logic around every call. They can work, but they add another coordination&lt;br&gt;
layer to the job system.&lt;/p&gt;

&lt;p&gt;Pynenc's orchestrator already tracks running invocations and their arguments.&lt;br&gt;
With &lt;code&gt;running_concurrency=KEYS&lt;/code&gt; and &lt;code&gt;key_arguments=("account_id",)&lt;/code&gt;, it can&lt;br&gt;
enforce one in-flight invocation per account key while still running different&lt;br&gt;
accounts in parallel. &lt;code&gt;reroute_on_concurrency_control&lt;/code&gt; decides whether blocked&lt;br&gt;
work waits or is dropped, and &lt;code&gt;registration_concurrency=KEYS&lt;/code&gt; can collapse&lt;br&gt;
duplicate work before a worker sees it.&lt;/p&gt;


&lt;p&gt;Full sample: &lt;a href="https://github.com/pynenc/samples/tree/main/concurrency_demo" rel="noopener noreferrer"&gt;samples/concurrency_demo&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;The demo&lt;/h2&gt;

&lt;p&gt;Four tiny files, each doing one thing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;concurrency_demo/
├── api_server.py     # tiny HTTP app: pretends to be the external provider
├── tasks.py          # PynencBuilder app + 4 tasks (the whole story)
├── enqueue.py        # CLI: enqueue one scenario, print results
└── sample.py         # one-command demo: boots api+worker, runs all scenarios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "external provider" is a small HTTP app that holds an account in flight&lt;br&gt;
for 0.4 seconds per call and records a &lt;em&gt;collision&lt;/em&gt; whenever a second request&lt;br&gt;
arrives while the first is still in flight. In a real integration, that&lt;br&gt;
collision could be a 429, a rejected write, or an inconsistent refresh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# api_server.py — the part that matters
&lt;/span&gt;&lt;span class="nd"&gt;@app.post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/call/{account_id}/{op}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;HOLD_SECONDS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;acc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;collided&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;in_flight&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collisions&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;collided&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;in_flight&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  [&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;COLLISION&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;collided&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ok       &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;] &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flush&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hold&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;accounts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;in_flight&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outcome&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;collision&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;collided&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
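The accounting can be exercised without HTTP at all. A self-contained asyncio model of the same idea, with illustrative names rather than the demo's exact code:

```python
import asyncio
from dataclasses import dataclass

# Stand-alone model of the demo server's bookkeeping: a call "holds"
# an account for a while, and any overlapping call on the same account
# is counted as a collision.
@dataclass
class Account:
    calls: int = 0
    in_flight: int = 0
    collisions: int = 0

accounts: dict[str, Account] = {}

async def call(account_id: str, hold: float = 0.05) -> str:
    acc = accounts.setdefault(account_id, Account())
    acc.calls += 1
    collided = acc.in_flight > 0
    acc.collisions += int(collided)
    acc.in_flight += 1
    await asyncio.sleep(hold)   # the held window where overlaps collide
    acc.in_flight -= 1
    return "collision" if collided else "ok"

async def demo() -> dict[str, int]:
    # Two overlapping calls for "acme" collide; "globex" runs clean.
    results = await asyncio.gather(call("acme"), call("acme"), call("globex"))
    return {"collisions": accounts["acme"].collisions,
            "ok": results.count("ok")}
```

Because this runs on a single event loop with no `await` between the counter updates, it needs no lock; the real server takes one anyway to keep the mutations atomic.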



&lt;p&gt;The Pynenc app and the four tasks fit on one screen. The entire&lt;br&gt;
configuration (SQLite backend, in-process thread runner, logging) is declared&lt;br&gt;
fluently in &lt;code&gt;tasks.py&lt;/code&gt;, right next to the tasks that use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pynenc&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PynencBuilder&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pynenc.conf.config_task&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConcurrencyControlType&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;Mode&lt;/span&gt;

&lt;span class="n"&gt;API_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://127.0.0.1:8765&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nc"&gt;PynencBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;app_id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concurrency_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqlite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concurrency_demo.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;thread_runner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;min_threads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_threads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logging_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logging_level&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DEMO_LOG_LEVEL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;info&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max_pending_seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;3.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hold&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;hold&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;hold&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;API_URL&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/call/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;raise_for_status&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outcome&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;span class="nd"&gt;@app.task&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call_unsafe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="nd"&gt;@app.task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;running_concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Mode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KEYS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;key_arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;account_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,),&lt;/span&gt;
    &lt;span class="n"&gt;reroute_on_concurrency_control&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call_keyed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="nd"&gt;@app.task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;running_concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Mode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KEYS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;key_arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;account_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,),&lt;/span&gt;
    &lt;span class="n"&gt;reroute_on_concurrency_control&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call_keyed_drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="nd"&gt;@app.task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;running_concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Mode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KEYS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;registration_concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Mode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KEYS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;key_arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;account_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,),&lt;/span&gt;
    &lt;span class="n"&gt;reroute_on_concurrency_control&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;refresh_once&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;refresh&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;How to run it&lt;/h2&gt;

&lt;p&gt;You can launch the demo two ways. The four-terminal flow is useful when you&lt;br&gt;
want to watch the API, the worker, and the pynenc monitor at the same time.&lt;br&gt;
The one-command flow boots the API and worker for you and runs every scenario&lt;br&gt;
in sequence; it is the path used by CI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# four terminals — recommended for exploring&lt;/span&gt;
uv run uvicorn api_server:app &lt;span class="nt"&gt;--port&lt;/span&gt; 8765      &lt;span class="c"&gt;# 1. API&lt;/span&gt;
uv run pynenc &lt;span class="nt"&gt;--app&lt;/span&gt; tasks.app runner start     &lt;span class="c"&gt;# 2. worker&lt;/span&gt;
uv run pynenc monitor                          &lt;span class="c"&gt;# 3. monitor (optional) at http://127.0.0.1:8000&lt;/span&gt;
uv run python enqueue.py all                   &lt;span class="c"&gt;# 4. enqueue scenarios&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# one command — recommended for CI&lt;/span&gt;
uv run python sample.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;What the API observes&lt;/h2&gt;

&lt;p&gt;All four scenarios, end to end, on a single pynmon timeline. Read it left to&lt;br&gt;
right: scenario A starts with overlapping calls for the same accounts; B fans&lt;br&gt;
out into three account lanes that stay serial per account; C drops blocked&lt;br&gt;
work instead of rerouting it; D collapses duplicate refresh requests before a&lt;br&gt;
worker ever sees them.&lt;/p&gt;

&lt;p&gt;Two pynenc state names appear in the screenshots and logs. &lt;code&gt;REROUTED&lt;/code&gt; means&lt;br&gt;
the worker tried to start an invocation, found the account key already busy,&lt;br&gt;
and put the invocation back on the queue. &lt;code&gt;CONCURRENCY_CONTROLLED_FINAL&lt;/code&gt;&lt;br&gt;
means the invocation was blocked by the key rule and intentionally finished&lt;br&gt;
without running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbqcr75y0c3n56fsjtos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbqcr75y0c3n56fsjtos.png" alt="All four scenarios on one pynmon invocation timeline" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Four scenarios, four stories. Each one below pairs the per-scenario&lt;br&gt;
summary, the API server's collision log, and the matching pynmon timeline.&lt;/p&gt;
&lt;h3&gt;Scenario A — no concurrency control&lt;/h3&gt;

&lt;p&gt;The baseline pain. Different provider operations, same &lt;code&gt;account_id&lt;/code&gt; key.&lt;br&gt;
The runner can hold up to eight invocations in flight, and it does — most&lt;br&gt;
of the 12 invocations start essentially together. The first call per&lt;br&gt;
account reaches the provider cleanly; everything that overlaps the same&lt;br&gt;
account is recorded as &lt;code&gt;COLLISION&lt;/code&gt; — the stand-in for a real 429,&lt;br&gt;
throttle, or inconsistent response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== A. unsafe — no concurrency control ===
  12 enqueued -&amp;gt; 12 calls, 9 collisions, 1.42s
   X acme     calls=4  collisions=3
   X globex   calls=4  collisions=3
   X initech  calls=4  collisions=3

--- reset @ 11:49:40 A. unsafe — no concurrency control ---
  [ok       ] acme     fetch_profile
  [COLLISION] acme     list_invoices
  [ok       ] globex   fetch_profile
  [COLLISION] acme     update_metadata
  [COLLISION] acme     refresh_usage
  [COLLISION] globex   refresh_usage
  [COLLISION] globex   list_invoices
  [COLLISION] globex   update_metadata
  [ok       ] initech  fetch_profile
  [COLLISION] initech  list_invoices
  [COLLISION] initech  refresh_usage
  [COLLISION] initech  update_metadata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyp56mxyuq94hvqv245c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyp56mxyuq94hvqv245c.png" alt="Scenario A timeline — 12 invocations, four per account, all running in parallel, nine recorded as collisions" width="800" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Scenario B — &lt;code&gt;running_concurrency=KEYS&lt;/code&gt;, &lt;code&gt;reroute=True&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Same 12 calls as A, no collisions. The orchestrator indexes invocation&lt;br&gt;
arguments and refuses to start a second &lt;code&gt;call_keyed&lt;/code&gt; while one with the same&lt;br&gt;
&lt;code&gt;account_id&lt;/code&gt; is already running. When a worker tries to pick up a blocked&lt;br&gt;
invocation, &lt;code&gt;reroute_on_concurrency_control=True&lt;/code&gt; puts it back on the queue&lt;br&gt;
so it can run when the slot frees up. The timeline shows three clean lanes,&lt;br&gt;
one per account, with blocked invocations moving through &lt;code&gt;REROUTED&lt;/code&gt; until&lt;br&gt;
they get their turn.&lt;br&gt;
&lt;/p&gt;
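The reroute policy can be modeled without Pynenc: a shared queue of `(key, op)` items, a set of busy keys, and workers that push a blocked item back instead of waiting on it. A toy sketch of that loop, not Pynenc internals:

```python
import queue
import threading
import time

def run_keyed(jobs: list[tuple[str, str]], workers: int = 4,
              hold: float = 0.01) -> tuple[list[str], int]:
    """Each key runs serially, keys run in parallel; a worker that
    draws a busy key reroutes the item back onto the queue."""
    q: queue.Queue = queue.Queue()
    for job in jobs:
        q.put(job)
    busy: set[str] = set()
    in_flight: dict[str, int] = {}
    done: list[str] = []
    collisions = 0
    guard = threading.Lock()

    def worker() -> None:
        nonlocal collisions
        while True:
            try:
                key, op = q.get_nowait()
            except queue.Empty:
                return
            with guard:
                if key in busy:
                    q.put((key, op))      # rerouted: try again later
                    continue
                busy.add(key)
                if in_flight.get(key, 0) > 0:
                    collisions += 1       # would be a provider collision
                in_flight[key] = in_flight.get(key, 0) + 1
            time.sleep(hold)              # stands in for the provider call
            with guard:
                in_flight[key] -= 1
                busy.discard(key)
                done.append(f"{key}:{op}")

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done, collisions
```

Every job completes and the collision counter stays at zero, because the busy-set check and the reroute happen under one lock. The price is visible too: rerouted items burn queue round-trips while they wait their turn, which is why the demo's scenario B finishes later than A.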

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== B. keyed — running_concurrency=KEYS, reroute=True ===
  12 enqueued -&amp;gt; 12 calls, 0 collisions, 2.14s
  OK acme     calls=4  collisions=0
  OK globex   calls=4  collisions=0
  OK initech  calls=4  collisions=0

--- reset @ 11:49:41 B. keyed — running_concurrency=KEYS, reroute=True ---
  [ok       ] acme     fetch_profile
  [ok       ] globex   fetch_profile
  [ok       ] initech  fetch_profile
  [ok       ] initech  list_invoices
  [ok       ] acme     update_metadata
  [ok       ] globex   list_invoices
  [ok       ] initech  refresh_usage
  [ok       ] globex   update_metadata
  [ok       ] acme     refresh_usage
  [ok       ] globex   refresh_usage
  [ok       ] initech  update_metadata
  [ok       ] acme     list_invoices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6njbcz3cjoydwqm5yi8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6njbcz3cjoydwqm5yi8u.png" alt="Scenario B timeline — three serial lanes (one per account), parallel across accounts, blocked invocations rerouted until their slot opens" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Scenario C — &lt;code&gt;running_concurrency=KEYS&lt;/code&gt;, &lt;code&gt;reroute=False&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Same guard, opposite policy. &lt;code&gt;reroute_on_concurrency_control=False&lt;/code&gt; tells&lt;br&gt;
the orchestrator not to re-queue blocked invocations — they land in&lt;br&gt;
&lt;code&gt;CONCURRENCY_CONTROLLED_FINAL&lt;/code&gt; and &lt;code&gt;inv.result&lt;/code&gt; raises &lt;code&gt;KeyError&lt;/code&gt;. Only&lt;br&gt;
the first invocation per &lt;code&gt;account_id&lt;/code&gt; ever reaches the provider; the other&lt;br&gt;
nine are dropped. The timeline ends almost as soon as the first three&lt;br&gt;
invocations finish.&lt;br&gt;
&lt;/p&gt;
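The producer side has to expect that gap: under the drop policy, some invocations finish without ever producing a result. A toy harvest loop, assuming (as a stand-in for a real result backend) that dropped invocations simply never wrote an entry:

```python
def harvest(invocation_ids: list[str],
            results: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split enqueued work into completed results and dropped ids.
    `results` models the result backend: dropped invocations never
    stored anything, so lookup raises KeyError."""
    completed: list[str] = []
    dropped: list[str] = []
    for inv_id in invocation_ids:
        try:
            completed.append(results[inv_id])
        except KeyError:
            dropped.append(inv_id)   # blocked by the key rule, never ran
    return completed, dropped
```

The drop policy is the right fit when blocked work is redundant by construction (a second "sync this account now" adds nothing); it is the wrong fit when every enqueued item carries distinct work, as the nine lost operations in this scenario show.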

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== C. drop — running_concurrency=KEYS, reroute=False ===
  12 enqueued -&amp;gt; 3 calls (9 dropped), 0 collisions, 0.67s
  OK acme     calls=1  collisions=0
  OK globex   calls=1  collisions=0
  OK initech  calls=1  collisions=0

--- reset @ 11:49:43 C. drop — running_concurrency=KEYS, reroute=False ---
  [ok       ] acme     fetch_profile
  [ok       ] globex   fetch_profile
  [ok       ] initech  fetch_profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9li00gbl13r689u3yihv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9li00gbl13r689u3yihv.png" alt="Scenario C timeline — three running invocations, the other nine dropped to CONCURRENCY_CONTROLLED_FINAL" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Scenario D — &lt;code&gt;registration_concurrency=KEYS&lt;/code&gt; + &lt;code&gt;running_concurrency=KEYS&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;A different question. &lt;code&gt;registration_concurrency&lt;/code&gt; checks at &lt;em&gt;enqueue&lt;/em&gt; time:&lt;br&gt;
when refresh request number two for &lt;code&gt;acme&lt;/code&gt; arrives, there is already one&lt;br&gt;
registered, so the producer gets back a &lt;code&gt;ReusedInvocation&lt;/code&gt; pointing to the&lt;br&gt;
first. 24 logical "please refresh this account" events, eight per account,&lt;br&gt;
collapse to 3 actual API calls before a worker picks them up. The&lt;br&gt;
&lt;code&gt;running_concurrency&lt;/code&gt; guard is the safety net for the case where a worker&lt;br&gt;
picks up the first task before all duplicates have registered.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== D. dedupe — registration + running KEYS ===
  24 enqueued -&amp;gt; 3 calls (21 deduped), 0 collisions, 0.57s
  OK acme     calls=1  collisions=0
  OK globex   calls=1  collisions=0
  OK initech  calls=1  collisions=0

--- reset @ 11:49:44 D. dedupe — registration + running KEYS ---
  [ok       ] acme     refresh
  [ok       ] globex   refresh
  [ok       ] initech  refresh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0g3ga5dwxo6te21n9y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0g3ga5dwxo6te21n9y0.png" alt="Scenario D timeline — 24 enqueues collapse to 3 invocations at registration time, one per account" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;
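&lt;p&gt;Enqueue-time deduplication can be pictured as a registry keyed by the concurrency key: a duplicate enqueue for a still-pending key hands back the already-registered invocation instead of creating a new one. A plain-Python sketch of that shape (illustrative names, not pynenc's API):&lt;/p&gt;

```python
import itertools


class Registry:
    """Enqueue-time dedup: at most one pending invocation per key."""

    def __init__(self) -> None:
        self._pending = {}  # key -> invocation id
        self._ids = itertools.count(1)

    def enqueue(self, key):
        # A duplicate enqueue for a pending key reuses the first invocation,
        # analogous to the producer getting back a ReusedInvocation.
        if key in self._pending:
            return self._pending[key], True  # (invocation_id, reused)
        inv_id = next(self._ids)
        self._pending[key] = inv_id
        return inv_id, False

    def mark_done(self, key) -> None:
        self._pending.pop(key, None)


reg = Registry()
accounts = ["acme"] * 8 + ["globex"] * 8 + ["initech"] * 8
ids = [reg.enqueue(acct)[0] for acct in accounts]
# 24 enqueues while all three keys stay pending collapse to 3 invocations
```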

&lt;h2&gt;
  
  
  When to reach for which
&lt;/h2&gt;

&lt;p&gt;The pattern applies whenever an argument marks the boundary for shared state&lt;br&gt;
or quota. The external system may be a third-party API, an internal service,&lt;br&gt;
or a resource that should only be touched by one task at a time for a given&lt;br&gt;
key. The system may allow broad parallelism overall, while still requiring&lt;br&gt;
serialization for each account, tenant, installation, or resource id.&lt;/p&gt;

&lt;p&gt;These two settings cover most of the cases that usually send people&lt;br&gt;
to an external rate limiter or a per-tenant lock service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;running_concurrency=KEYS&lt;/code&gt; on &lt;code&gt;account_id&lt;/code&gt; (or &lt;code&gt;tenant_id&lt;/code&gt;, or
&lt;code&gt;oauth_installation_id&lt;/code&gt;, or &lt;code&gt;client_token&lt;/code&gt;), with &lt;code&gt;reroute=True&lt;/code&gt;&lt;/strong&gt; — when
the rule is “no two calls in flight for the same client account”, but you
still want all calls to eventually complete. Blocked calls re-queue and
retry until a slot opens. Good for distinct operations (op1, op2, op3…)
that all need to run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Same, with &lt;code&gt;reroute=False&lt;/code&gt;&lt;/strong&gt; — when the rule is “if a call for this
account is already running, drop the new one”. Queue depth stays flat;
no retry buildup. Good for “trigger a refresh, but if one is already in
flight, skip it”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;registration_concurrency=KEYS&lt;/code&gt; + &lt;code&gt;running_concurrency=KEYS&lt;/code&gt; on the
same key&lt;/strong&gt; — when "do this once per client right now" is enough,
regardless of how many places triggered it. "Refresh client dashboard",
"rebuild client index", "regenerate client report". A noisy internal
event bus firing the same refresh 50 times per second is a bug; deduping
it before it reaches a worker keeps queue depth honest. The running guard
is the safety net: if a worker is fast enough to pick up the first task
before all duplicates register, the second flag prevents a parallel run.
Together they guarantee exactly one call per account, regardless of
timing. Scenario D in the sample.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In production terms, the useful part is that this does not require a separate&lt;br&gt;
lock service. The orchestrator already tracks invocations to route work;&lt;br&gt;
checking whether a matching key is already running uses that same state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simpler scopes when you don't need keys
&lt;/h2&gt;

&lt;p&gt;This post zooms in on &lt;code&gt;KEYS&lt;/code&gt;, but it is one of four scopes. The same two&lt;br&gt;
flags (&lt;code&gt;running_concurrency&lt;/code&gt; and &lt;code&gt;registration_concurrency&lt;/code&gt;) accept any&lt;br&gt;
value of &lt;code&gt;ConcurrencyControlType&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DISABLED&lt;/code&gt;&lt;/strong&gt; — the default. No concurrency check.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;TASK&lt;/code&gt;&lt;/strong&gt; — at most one invocation of &lt;em&gt;the task itself&lt;/em&gt; in the chosen
state, regardless of arguments. “Only one nightly cleanup may run.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ARGUMENTS&lt;/code&gt;&lt;/strong&gt; — at most one invocation per &lt;em&gt;full&lt;/em&gt; argument tuple. Two
calls with identical arguments collapse; calls that differ in any
argument run in parallel. “Don't run the same export twice.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;KEYS&lt;/code&gt;&lt;/strong&gt; — at most one invocation per chosen &lt;em&gt;subset&lt;/em&gt; of arguments
(&lt;code&gt;key_arguments=(...)&lt;/code&gt;). The mode this post is about: serialise on the
account key, ignore the operation name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The scope you pick controls &lt;em&gt;what counts as a duplicate&lt;/em&gt;. The flag you put&lt;br&gt;
it on (&lt;code&gt;registration_concurrency&lt;/code&gt; vs &lt;code&gt;running_concurrency&lt;/code&gt;) controls&lt;br&gt;
&lt;em&gt;when the check happens&lt;/em&gt; — at enqueue time or at run time.&lt;/p&gt;
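&lt;p&gt;The three active scopes differ only in which tuple they compare when deciding "is a duplicate already there?". A toy key function makes the distinction concrete (the scope names mirror &lt;code&gt;ConcurrencyControlType&lt;/code&gt;; the function itself is illustrative):&lt;/p&gt;

```python
def concurrency_key(scope: str, task_name: str, args: dict, key_arguments=()):
    """Identity compared when deciding whether a duplicate already exists."""
    if scope == "TASK":
        return (task_name,)  # any two invocations of the task collide
    if scope == "ARGUMENTS":
        return (task_name, tuple(sorted(args.items())))  # full argument tuple
    if scope == "KEYS":
        subset = {k: args[k] for k in key_arguments}  # chosen subset only
        return (task_name, tuple(sorted(subset.items())))
    raise ValueError(f"unknown scope: {scope}")


a = {"account_id": "acme", "op": "fetch_profile"}
b = {"account_id": "acme", "op": "sync_orders"}
# Under KEYS on account_id the two calls collide; under ARGUMENTS they do not.
```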

&lt;p&gt;Full reference, including how &lt;code&gt;key_arguments&lt;/code&gt; interacts with each scope and&lt;br&gt;
the other concurrency knobs, is in the pynenc docs:&lt;br&gt;
&lt;a href="https://docs.pynenc.org/en/latest/usage_guide/use_case_003_concurrency_control.html" rel="noopener noreferrer"&gt;Concurrency Control use case&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's not in the box yet
&lt;/h2&gt;

&lt;p&gt;Two things people will (correctly) ask for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-slot concurrency&lt;/strong&gt; — "up to 5 in flight per key", not just 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-window rate limits&lt;/strong&gt; — "100 calls per minute per key".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are on the roadmap. The current primitive (one in-flight invocation per&lt;br&gt;
task/key) already covers a common integration problem: systems that allow&lt;br&gt;
parallelism across accounts but not overlapping work for the same account.&lt;br&gt;
The bigger controls build on the same orchestrator machinery.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/pynenc/samples
&lt;span class="nb"&gt;cd &lt;/span&gt;samples/concurrency_demo
uv &lt;span class="nb"&gt;sync
&lt;/span&gt;uv run python sample.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full sample and README are at&lt;br&gt;
&lt;a href="https://github.com/pynenc/samples/tree/main/concurrency_demo" rel="noopener noreferrer"&gt;github.com/pynenc/samples/tree/main/concurrency_demo&lt;/a&gt;.&lt;br&gt;
The pynenc framework is on PyPI as&lt;br&gt;
&lt;a href="https://pypi.org/project/pynenc/" rel="noopener noreferrer"&gt;&lt;code&gt;pynenc&lt;/code&gt;&lt;/a&gt; and the source is at&lt;br&gt;
&lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;github.com/pynenc/pynenc&lt;/a&gt;. Issues, ideas,&lt;br&gt;
and "this would be great if it also did X" comments are welcome.&lt;/p&gt;

</description>
      <category>python</category>
      <category>distributedsystems</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Distribute your Python app without rewriting it</title>
      <dc:creator>codelluis</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:00:00 +0000</pubDate>
      <link>https://dev.to/codelluis/distribute-your-python-app-without-rewriting-it-3e6i</link>
      <guid>https://dev.to/codelluis/distribute-your-python-app-without-rewriting-it-3e6i</guid>
      <description>&lt;p&gt;You have a Python function that processes one item. You call it in a loop over a list. The list grows. The loop slows down. The work is real — an LLM API call, an embedding, a scrape, a database query, a model inference — the kind of thing that does not get faster with prettier code.&lt;/p&gt;

&lt;p&gt;Distribution is the answer, but it usually means rewriting every call site to handle queues, futures, and result objects. So the loop stays slow and a progress bar gets added instead.&lt;/p&gt;

&lt;p&gt;This post is about removing the migration cost. &lt;strong&gt;One decorator. One environment variable. Five reports go from 2.51 seconds to 0.54 seconds. Zero call sites change.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The whole demo is in the &lt;a href="https://github.com/pynenc/samples/tree/main/direct_task_demo" rel="noopener noreferrer"&gt;direct_task_demo&lt;/a&gt; sample of the &lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;pynenc samples&lt;/a&gt; repository. The example happens to generate sales reports because it needs a concrete I/O-bound function with a list-shaped input — but the pattern is the same for batch LLM calls, embedding generation, RAG indexing, web scraping, ETL enrichment, or any workload of the form "slow function, list of items, want it parallel".&lt;/p&gt;

&lt;h2&gt;
  
  
  The original code
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;tasks_original.py&lt;/code&gt; is plain Python. No decorators, no imports from any framework, no infrastructure assumptions. It does what the existing codebase already does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks_original.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;md5&lt;/span&gt;

&lt;span class="n"&gt;PERIODS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q1-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q2-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q3-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q4-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q1-2026&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# simulates DB queries + aggregation
&lt;/span&gt;    &lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;md5&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()[:&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;revenue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50_000&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;950_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;9_900&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;period&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;revenue&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;revenue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orders&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;avg_order_value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;revenue&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it produces five reports in 2.51 seconds. That is the baseline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The migration
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;tasks.py&lt;/code&gt; is the same file with three additions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+ from pynenc import Pynenc
+ app = Pynenc()
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ @app.direct_task
&lt;/span&gt;  def generate_report(period: str) -&amp;gt; dict:
      return _build_report(period)
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ @app.direct_task(parallel_func=_per_period, aggregate_func=_flatten)
&lt;/span&gt;  def generate_reports(periods: list[str]) -&amp;gt; list[dict]:
      return [_build_report(p) for p in periods]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Function bodies, signatures, and return types are identical. The two helpers &lt;code&gt;_per_period&lt;/code&gt; and &lt;code&gt;_flatten&lt;/code&gt; are added to support the parallel decorator — they read the caller's actual arguments; they do not synthesize anything out of thin air:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_per_period&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]]]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[([&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;],)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;periods&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_flatten&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;_per_period&lt;/code&gt; reads the &lt;code&gt;periods&lt;/code&gt; argument the caller passed and yields one period per worker. &lt;code&gt;_flatten&lt;/code&gt; collects the per-worker results back into a single list. The decorator does the routing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sync mode: the decorators are inert
&lt;/h2&gt;

&lt;p&gt;Setting &lt;code&gt;PYNENC__DEV_MODE_FORCE_SYNC_TASKS=True&lt;/code&gt; runs every decorated call inline in the caller's thread — no runner, no broker, no database writes. Behaviour is identical to &lt;code&gt;tasks_original.py&lt;/code&gt;: 5 reports in 2.52s, same values, same order. This is the strangler-fig migration pattern: decorate one function at a time, keep the env var on so existing tests stay green, then remove it in production. No call site needs to change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ PYNENC__DEV_MODE_FORCE_SYNC_TASKS=True python sample_sync.py

Sync mode: 5 reports in 2.52s (expected ~2.5s — sequential, like the original)
  Q1-2025     revenue=$  477,381  orders=  381  AOV=$1252.97
  Q2-2025     revenue=$  798,638  orders= 7838  AOV=$101.89
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Distributed mode: the same calls, with workers
&lt;/h2&gt;

&lt;p&gt;Removing the env var and starting a &lt;code&gt;ThreadRunner&lt;/code&gt; makes the decorators distribute work over a SQLite-backed broker. The call sites do not change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python sample_distributed.py

Sequential calls on runner: 5 reports in 3.18s (each call blocks before the next starts)

Concurrent caller threads: 5 reports in 0.54s (N caller threads -&amp;gt; N workers running in parallel)
  Q1-2025     revenue=$  477,381  ...
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two patterns appear here. The sequential loop is the original code, unchanged — each &lt;code&gt;generate_report(p)&lt;/code&gt; blocks before the next call starts. That is by design: &lt;code&gt;@app.direct_task&lt;/code&gt; preserves the calling contract of a regular Python function. The caller waits, gets the value back, and exception handling works as it always did. That guarantee is what makes the migration zero-cost.&lt;/p&gt;

&lt;p&gt;For caller-side concurrency, &lt;code&gt;ThreadPoolExecutor&lt;/code&gt; is the standard Python pattern, and it composes naturally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;concurrent.futures&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ThreadPoolExecutor&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ThreadPoolExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_workers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;generate_report&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each thread blocks on its own call; the runner processes them in parallel. Five reports in 0.54 seconds — nearly five times faster than the 2.51-second sequential baseline on the same machine, with no broker change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single-call fan-out
&lt;/h2&gt;

&lt;p&gt;Sometimes the parallelism belongs inside the function rather than at the call site. The caller passes a list, expects a list back, and does not need to change a single line of code. That is what &lt;code&gt;parallel_func&lt;/code&gt; is for: a small helper that describes how to split the arguments into individual work items. Pynenc dispatches one task per item — across whatever workers are running — then reassembles the results via &lt;code&gt;aggregate_func&lt;/code&gt; before returning to the caller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks.py
&lt;/span&gt;&lt;span class="nd"&gt;@app.direct_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parallel_func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;_per_period&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;aggregate_func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;_flatten&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The caller calls it exactly as in &lt;code&gt;tasks_original.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Behind the decorator, &lt;code&gt;_per_period&lt;/code&gt; reads &lt;code&gt;args["periods"]&lt;/code&gt; and yields one argument tuple per period. Pynenc triggers one task per tuple and routes each to an available worker. &lt;code&gt;_flatten&lt;/code&gt; collects the per-worker results back into a single list. The caller receives the same shape it always did:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python sample_parallel.py

Parallel fan-out: 5 reports in 0.65s (one call, 5 workers running in parallel)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function signature is honest. Nothing is "ignored". The argument the caller passes is the argument &lt;code&gt;parallel_func&lt;/code&gt; reads.&lt;/p&gt;

&lt;p&gt;For higher throughput, pynenc's native parallel API goes further: instead of aggregating before returning, the function exposes a result group that the caller can iterate as results arrive. Each item is available as soon as the worker that produced it finishes — no waiting for the slowest one. The &lt;code&gt;parallel_func&lt;/code&gt; pattern shown here is the zero-migration-cost option: same signature, same return type, same call site, parallelism handled entirely by the decorator.&lt;/p&gt;
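&lt;p&gt;For intuition, the result-group idea has the same shape as the standard library's &lt;code&gt;concurrent.futures.as_completed&lt;/code&gt;, which yields each result as soon as the worker that produced it finishes. This is an analogy, not pynenc's API:&lt;/p&gt;

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


def build(period: str) -> dict:
    time.sleep(0.01)  # stand-in for real I/O-bound work
    return {"period": period}


periods = ["Q1-2025", "Q2-2025", "Q3-2025"]
done = []
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(build, p) for p in periods]
    for fut in as_completed(futures):  # yields futures as they finish
        done.append(fut.result())

# All three reports arrive; order follows completion, not submission.
```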

&lt;h2&gt;
  
  
  Why not just use &lt;code&gt;asyncio&lt;/code&gt; / &lt;code&gt;multiprocessing&lt;/code&gt; / Celery?
&lt;/h2&gt;

&lt;p&gt;These are the obvious alternatives and each one solves a different slice of the problem.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;asyncio.gather&lt;/code&gt;&lt;/strong&gt; parallelises async I/O on a single event loop. It works only if the function is already &lt;code&gt;async&lt;/code&gt;, only on one machine, and only for I/O-bound work. Synchronous functions need to be rewritten.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;multiprocessing.Pool.map&lt;/code&gt;&lt;/strong&gt; parallelises across CPU cores on a single host. It cannot scale beyond one machine, struggles with large arguments (everything is pickled and copied), and the call site changes from &lt;code&gt;f(x)&lt;/code&gt; to &lt;code&gt;pool.map(f, xs)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;concurrent.futures.ThreadPoolExecutor&lt;/code&gt;&lt;/strong&gt; is a clean primitive but stops at the process boundary. With &lt;code&gt;@app.direct_task&lt;/code&gt; it composes — use it on the caller side and pynenc handles the worker side, optionally on different machines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Celery / RQ / Dramatiq&lt;/strong&gt; scale across machines but break the calling contract: &lt;code&gt;f(x)&lt;/code&gt; becomes &lt;code&gt;f.delay(x).get()&lt;/code&gt; or similar. Every call site has to change, and even where an eager test mode exists (Celery's &lt;code&gt;task_always_eager&lt;/code&gt;) the rewritten call sites remain; exercising the real path still means running a worker or mocking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;@app.direct_task&lt;/code&gt; is the option that gives you all three properties at once: distributed across machines, the call site does not change, and a single environment variable runs everything inline for tests and local development.&lt;/p&gt;

&lt;h2&gt;
  
  
  When &lt;code&gt;direct_task&lt;/code&gt; is the right tool
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;@app.direct_task&lt;/code&gt; always blocks the caller. That is the point: it preserves the calling contract that the original code already relied on. Migration is a copy-the-decorator operation, not a rewrite.&lt;/p&gt;

&lt;p&gt;For fire-and-forget semantics — enqueue work and continue without blocking — &lt;code&gt;@app.task&lt;/code&gt; is the right decorator. It returns an &lt;code&gt;Invocation&lt;/code&gt; and exposes &lt;code&gt;.result&lt;/code&gt; for explicit waiting. The two decorators are complementary; the right choice is whichever one preserves the call pattern the codebase already has.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# uv: https://docs.astral.sh/uv/getting-started/installation/&lt;/span&gt;
git clone https://github.com/pynenc/samples.git
&lt;span class="nb"&gt;cd &lt;/span&gt;samples/direct_task_demo
uv &lt;span class="nb"&gt;sync

&lt;/span&gt;uv run python tasks_original.py                                       &lt;span class="c"&gt;# baseline&lt;/span&gt;
&lt;span class="nv"&gt;PYNENC__DEV_MODE_FORCE_SYNC_TASKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True uv run python sample_sync.py   &lt;span class="c"&gt;# decorators inert&lt;/span&gt;
uv run python sample_distributed.py                                   &lt;span class="c"&gt;# workers, two patterns&lt;/span&gt;
uv run python sample_parallel.py                                      &lt;span class="c"&gt;# single-call fan-out&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt; — the framework&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.pynenc.org/usage_guide/use_case_008_direct_task.html" rel="noopener noreferrer"&gt;direct_task usage guide&lt;/a&gt; — full documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;pynenc samples&lt;/a&gt; — runnable demos&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt; — open questions, feedback&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Killed a Python Worker Mid-Task. Here's What Should Have Happened.</title>
      <dc:creator>codelluis</dc:creator>
      <pubDate>Sun, 19 Apr 2026 13:59:56 +0000</pubDate>
      <link>https://dev.to/codelluis/i-killed-a-python-worker-mid-task-heres-what-should-have-happened-1kpl</link>
      <guid>https://dev.to/codelluis/i-killed-a-python-worker-mid-task-heres-what-should-have-happened-1kpl</guid>
      <description>&lt;p&gt;I ran &lt;code&gt;kill -9&lt;/code&gt; on a worker that was processing three tasks. They vanished. No error. No retry. I checked the queue: empty. I checked the results: nothing. The work was just gone.&lt;/p&gt;

&lt;p&gt;This is not a bug. This is the default behavior of many Python task frameworks. A worker dies mid-execution, and whatever it was doing disappears.&lt;/p&gt;

&lt;p&gt;So I built a framework where the system heals itself. Here is what that looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem nobody talks about
&lt;/h2&gt;

&lt;p&gt;Here is what usually happens when a worker crashes in the middle of a task:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A task starts running on Worker-1.&lt;/li&gt;
&lt;li&gt;Worker-1 gets OOM-killed (or crashes, or the host dies).&lt;/li&gt;
&lt;li&gt;The task message was already acknowledged and removed from the queue.&lt;/li&gt;
&lt;li&gt;The task is gone: no record, no detection, no recovery.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Typical workarounds teams build by hand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Late acknowledgement, which reduces task loss but increases duplicate execution risk.&lt;/li&gt;
&lt;li&gt;External monitoring, which detects failures but still requires manual re-queueing.&lt;/li&gt;
&lt;li&gt;Strict idempotency layers everywhere, which are useful but still need a recovery trigger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not complete solutions. They are patches around a missing core capability.&lt;/p&gt;
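&lt;p&gt;The idempotency workaround, for instance, looks roughly like this (a minimal in-memory sketch with hypothetical helper names): deduplication stops a redelivered task from running twice, but nothing here notices a task whose worker died, so recovery still needs its own trigger.&lt;/p&gt;

```python
# Hedged sketch of a hand-rolled idempotency layer (illustration only).
# A real system would persist this store; here a dict is enough to show
# the point: dedupe prevents double execution, not lost work.
completed = {}  # idempotency_key -> result

def run_once(idempotency_key, func, *args):
    if idempotency_key in completed:
        # Duplicate delivery: skip re-execution, return the cached result.
        return completed[idempotency_key]
    result = func(*args)
    completed[idempotency_key] = result
    return result

def charge(amount):
    return f"charged {amount}"

print(run_once("order-42", charge, 10))  # executes: charged 10
print(run_once("order-42", charge, 10))  # deduplicated: same cached result
```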

&lt;h2&gt;
  
  
  So I killed a worker. Here is what happened
&lt;/h2&gt;

&lt;p&gt;I ran the same crash scenario with &lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt;: three tasks running, then &lt;code&gt;SIGKILL&lt;/code&gt;, then a second worker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STEP 1: Starting Worker-1...
  Worker-1 started (PID 12345)

STEP 2: Submitting 3 long-running tasks...
  -&amp;gt; Submitted slow_task(0)
  -&amp;gt; Submitted slow_task(1)
  -&amp;gt; Submitted slow_task(2)

  Waiting for Worker-1 to pick up and start running tasks...

STEP 3: Simulating a worker crash!
  X Killing Worker-1 (PID 12345) with SIGKILL...
  X Worker-1 terminated (exit code -9)

  The in-progress task is now orphaned — no worker owns it.

STEP 4: Starting Worker-2 (the recovery worker)...
  Worker-2 started (PID 12346)

STEP 5: Waiting for recovery and task completion...
  OK slow_task completed: task_0_completed
  OK slow_task completed: task_1_completed
  OK slow_task completed: task_2_completed

  ALL 3 TASKS COMPLETED SUCCESSFULLY
  Tasks from the crashed worker were recovered automatically!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Worker-1 died mid-execution. Worker-2 detected the stale heartbeat, recovered orphaned tasks, and finished all three with zero manual intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring view
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxur6pdslguc8lhkmwqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxur6pdslguc8lhkmwqm.png" alt="Pynmon monitoring view during recovery demo" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Click to open the image at full size.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the same monitoring view used during the run. From here you can inspect the timeline across runners, open each invocation detail, and follow the logs around state changes to understand what happened step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  How recovery works
&lt;/h2&gt;

&lt;p&gt;Every runner sends periodic heartbeats. As long as heartbeats arrive, the runner is healthy.&lt;/p&gt;

&lt;p&gt;When heartbeats stop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The recovery service marks the runner as stale.&lt;/li&gt;
&lt;li&gt;Orphaned running invocations are claimed safely.&lt;/li&gt;
&lt;li&gt;Tasks are re-routed to the broker.&lt;/li&gt;
&lt;li&gt;Healthy runners pick them up.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is built in. No external watcher process required.&lt;/p&gt;
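&lt;p&gt;The four steps can be sketched in a few lines of plain Python. Everything below is illustrative: the names and data structures are hypothetical, not pynenc internals.&lt;/p&gt;

```python
# Minimal sketch of heartbeat-based recovery (illustration only).
import time

DEAD_AFTER_SECONDS = 6  # mirrors the demo's 0.1-minute expiry

heartbeats = {}  # runner_id -> last heartbeat timestamp
running = {}     # invocation_id -> runner_id that claimed it
queue = []       # broker queue of invocation_ids awaiting pickup

def beat(runner_id):
    heartbeats[runner_id] = time.monotonic()

def recover(now=None):
    """Re-route invocations owned by runners whose heartbeat expired."""
    now = now if now is not None else time.monotonic()
    stale = {r for r, ts in heartbeats.items() if now - ts > DEAD_AFTER_SECONDS}
    for invocation_id, runner_id in list(running.items()):
        if runner_id in stale:
            running.pop(invocation_id)   # claim the orphaned invocation...
            queue.append(invocation_id)  # ...and re-route it to the broker

# Simulate: Worker-1 claims a task, then stops heartbeating.
beat("worker-1")
running["inv-1"] = "worker-1"
recover(now=time.monotonic() + 10)  # 10s later, the heartbeat is stale
print(queue)  # ['inv-1'] -- the orphaned task is back on the queue
```

&lt;p&gt;pynenc's actual recovery service performs the claim step atomically across the cluster, so two healthy runners cannot both re-route the same orphaned invocation.&lt;/p&gt;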

&lt;p&gt;Recovery re-executes the full task, so designing tasks to be idempotent remains a best practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The code
&lt;/h2&gt;

&lt;p&gt;The task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks.py (simplified — full version in the repo)
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pynenc&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pynenc&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pynenc&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.task&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[slow_task(&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)] Starting — will run for 8 seconds&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;second&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[slow_task(&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)] progress &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;task_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_completed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The demo configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# pyproject.toml (key settings — full config in the repo)&lt;/span&gt;
&lt;span class="nn"&gt;[tool.pynenc]&lt;/span&gt;
&lt;span class="py"&gt;app_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"recovery_demo"&lt;/span&gt;
&lt;span class="py"&gt;orchestrator_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteOrchestrator"&lt;/span&gt;
&lt;span class="py"&gt;broker_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteBroker"&lt;/span&gt;
&lt;span class="py"&gt;state_backend_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteStateBackend"&lt;/span&gt;
&lt;span class="py"&gt;runner_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ThreadRunner"&lt;/span&gt;

&lt;span class="c"&gt;# Fast recovery timeouts for demo purposes.&lt;/span&gt;
&lt;span class="c"&gt;# Production systems use much higher values (defaults: 10 min heartbeat, 15 min recovery cron).&lt;/span&gt;
&lt;span class="py"&gt;runner_considered_dead_after_minutes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;          &lt;span class="c"&gt;# 6 seconds — heartbeat expiry&lt;/span&gt;
&lt;span class="py"&gt;recover_running_invocations_cron&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"* * * * *"&lt;/span&gt;      &lt;span class="c"&gt;# every minute (fastest cron resolution)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full demo is in the public &lt;a href="https://github.com/pynenc/samples/tree/main/recovery_demo" rel="noopener noreferrer"&gt;recovery_demo&lt;/a&gt; folder of the samples repository.&lt;/p&gt;

&lt;p&gt;The entrypoint script is &lt;a href="https://github.com/pynenc/samples/blob/main/recovery_demo/sample.py" rel="noopener noreferrer"&gt;recovery_demo/sample.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Requires uv — install: https://docs.astral.sh/uv/getting-started/installation/&lt;/span&gt;
git clone https://github.com/pynenc/samples.git
&lt;span class="nb"&gt;cd &lt;/span&gt;samples/recovery_demo
uv &lt;span class="nb"&gt;sync
&lt;/span&gt;uv run python sample.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Docker. No Redis. No external services. One demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What teams usually build by hand
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;The problem&lt;/th&gt;
&lt;th&gt;Typical approach&lt;/th&gt;
&lt;th&gt;What pynenc does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Worker dies mid-task&lt;/td&gt;
&lt;td&gt;Lost task or duplicate retries&lt;/td&gt;
&lt;td&gt;Automatic recovery via heartbeat detection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Detecting dead workers&lt;/td&gt;
&lt;td&gt;External monitoring stack&lt;/td&gt;
&lt;td&gt;Built-in runner heartbeat checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Re-queuing orphaned tasks&lt;/td&gt;
&lt;td&gt;Manual scripts and intervention&lt;/td&gt;
&lt;td&gt;Automatic re-routing to broker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Recovery in clusters&lt;/td&gt;
&lt;td&gt;Custom distributed locking&lt;/td&gt;
&lt;td&gt;Atomic global recovery service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Understanding incidents&lt;/td&gt;
&lt;td&gt;Log spelunking&lt;/td&gt;
&lt;td&gt;Invocation state history and timeline views&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Pynenc is open source and actively maintained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt; — core framework&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;samples&lt;/a&gt; — runnable demos&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.pynenc.org" rel="noopener noreferrer"&gt;docs&lt;/a&gt; — full documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How does your team handle crashed workers today? Join the conversation in &lt;a href="https://github.com/pynenc/pynenc/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>backend</category>
      <category>distributedsystems</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
