<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krishna Bajpai</title>
    <description>The latest articles on DEV Community by Krishna Bajpai (@krishna_bajpai_2501c97dcb).</description>
    <link>https://dev.to/krishna_bajpai_2501c97dcb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3483868%2F3e4f1d91-5174-4661-8f99-33fabe816e93.jpg</url>
      <title>DEV Community: Krishna Bajpai</title>
      <link>https://dev.to/krishna_bajpai_2501c97dcb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krishna_bajpai_2501c97dcb"/>
    <language>en</language>
    <item>
      <title>Designing for Sub-Microsecond Latency (link)</title>
      <dc:creator>Krishna Bajpai</dc:creator>
      <pubDate>Tue, 30 Dec 2025 16:03:11 +0000</pubDate>
      <link>https://dev.to/krishna_bajpai_2501c97dcb/designing-for-sub-microsecond-latency-link-1876</link>
      <guid>https://dev.to/krishna_bajpai_2501c97dcb/designing-for-sub-microsecond-latency-link-1876</guid>
      <description>&lt;p&gt;Lessons from Building a Minimal Execution Engine&lt;br&gt;
Modern systems are fast — but predictable fast is rare.&lt;/p&gt;

&lt;p&gt;Most frameworks optimize for throughput, developer velocity, or horizontal scalability. When you care about tail latency, determinism, and sub-microsecond critical paths, those abstractions often become liabilities.&lt;/p&gt;

&lt;p&gt;I built SubMicro Execution Engine to explore what happens when latency — not features — is the primary design constraint. Below are a few practical lessons that shaped the system.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Latency Lives in the Edges, Not the Core Logic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The actual “work” a system performs is rarely the bottleneck.&lt;/p&gt;

&lt;p&gt;Latency hides in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;memory allocation&lt;/li&gt;
&lt;li&gt;cache-line contention&lt;/li&gt;
&lt;li&gt;branch misprediction&lt;/li&gt;
&lt;li&gt;scheduler handoffs&lt;/li&gt;
&lt;li&gt;synchronization primitives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The engine minimizes these by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keeping hot paths allocation-free&lt;/li&gt;
&lt;li&gt;favoring flat, cache-friendly data layouts&lt;/li&gt;
&lt;li&gt;avoiding implicit synchronization&lt;/li&gt;
&lt;li&gt;designing execution flows that fit in L1/L2 cache&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can’t draw the hot path from memory, you don’t control latency.&lt;/p&gt;
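
&lt;p&gt;A minimal sketch of what “allocation-free hot path” can mean in practice (the names and sizes here are illustrative, not the engine’s actual code): preallocate flat buffers once at startup, then only write into existing slots on the hot path.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import array

# Preallocate flat, contiguous buffers once, at startup.
SLOTS = 1024                              # illustrative capacity
prices = array.array("d", [0.0] * SLOTS)  # flat doubles, cache-friendly
sizes = array.array("Q", [0] * SLOTS)     # flat unsigned ints
head = 0

def on_event(price, size):
    """Hot path: write into preallocated slots, allocate nothing."""
    global head
    idx = head % SLOTS        # reuse slots as a ring buffer
    prices[idx] = price
    sizes[idx] = size
    head = head + 1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In Python this mainly avoids per-event object churn; in a systems language the same layout also keeps the hot data in a handful of cache lines and removes allocator calls from the critical path.&lt;/p&gt;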

&lt;ol start="2"&gt;
&lt;li&gt;Determinism Beats Raw Throughput&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A system that does 1M ops/sec sometimes is less useful than one that does 200k ops/sec always.&lt;/p&gt;

&lt;p&gt;Design choices were guided by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stable execution order&lt;/li&gt;
&lt;li&gt;predictable scheduling&lt;/li&gt;
&lt;li&gt;minimal dynamic behavior in hot paths&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This trades peak throughput for tight latency distributions, which matter far more in real-time and trading-style systems.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Abstractions Have a Cost — Measure Them Ruthlessly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Abstractions aren’t bad, but unmeasured abstractions are dangerous.&lt;/p&gt;

&lt;p&gt;In low-latency systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;virtual dispatch can cost more than the logic itself&lt;/li&gt;
&lt;li&gt;generic containers hide memory access patterns&lt;/li&gt;
&lt;li&gt;“clean” interfaces often fragment the execution path&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The engine favors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;explicit control over execution&lt;/li&gt;
&lt;li&gt;visible data movement&lt;/li&gt;
&lt;li&gt;simple, inspectable components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code clarity is preserved by removing layers, not adding them.&lt;/p&gt;
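
&lt;p&gt;“Measure them ruthlessly” can be as simple as timing the same operation behind an interface and as a direct call. A rough micro-benchmark sketch (the class and function names are mine, purely for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
from abc import ABC, abstractmethod

class Handler(ABC):
    @abstractmethod
    def handle(self, x):
        ...

class AddOne(Handler):
    def handle(self, x):
        return x + 1

def add_one(x):
    return x + 1

def ns_per_op(fn, n=1_000_000):
    # Average cost per call, in nanoseconds, over n iterations.
    start = time.perf_counter_ns()
    acc = 0
    for _ in range(n):
        acc = fn(acc)
    return (time.perf_counter_ns() - start) / n

handler = AddOne()
print("through the interface: %.1f ns/op" % ns_per_op(handler.handle))
print("direct call:           %.1f ns/op" % ns_per_op(add_one))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In CPython the interpreter dominates, so the gap is modest; in C++ or Rust the same experiment makes virtual dispatch and lost inlining opportunities much more visible. The point is the habit: every layer gets a number.&lt;/p&gt;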

&lt;ol start="4"&gt;
&lt;li&gt;Scheduling Is a Latency Feature&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Schedulers decide when work happens — which is as important as what happens.&lt;/p&gt;

&lt;p&gt;Design considerations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;minimal context switching&lt;/li&gt;
&lt;li&gt;optional busy-polling strategies&lt;/li&gt;
&lt;li&gt;execution models that avoid OS interference in hot paths&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to keep execution close to the CPU, not bouncing between queues and threads.&lt;/p&gt;
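
&lt;p&gt;To show the shape of a busy-polling handoff, as opposed to blocking on a queue or condition variable, here is a deliberately simplified sketch. A real engine would pin threads to cores and use a language without a GIL, but the structure is the same: the consumer spins on a flag instead of sleeping.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import threading
import time

ready = False
payload = 0

def producer():
    global ready, payload
    time.sleep(0.001)      # simulate work arriving later
    payload = 42
    ready = True           # publish the result

def busy_poll_consumer():
    # Spin instead of blocking: no condition variable and no scheduler
    # wake-up on the critical path. Costs a core, saves a handoff.
    while not ready:
        pass
    return payload

t = threading.Thread(target=producer)
t.start()
print("got", busy_poll_consumer())
t.join()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Busy-polling only pays off when the core is dedicated to that work; otherwise it just steals cycles from everything else on the machine.&lt;/p&gt;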

&lt;ol start="5"&gt;
&lt;li&gt;Measure the Tail, Not the Average&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Average latency lies.&lt;/p&gt;

&lt;p&gt;The engine is designed with the assumption that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;p99 and p99.9 matter more than the mean&lt;/li&gt;
&lt;li&gt;occasional spikes break real-time systems&lt;/li&gt;
&lt;li&gt;instrumentation must be lightweight enough for production use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t measure the tail, you are optimizing blind.&lt;/p&gt;
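
&lt;p&gt;Measuring the tail does not require heavy tooling. A minimal sketch of the kind of measurement I mean (simple nearest-rank percentiles, not the engine’s actual instrumentation):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
import statistics

def percentile(samples, p):
    # Nearest-rank percentile over the collected samples.
    ordered = sorted(samples)
    rank = int(round(p / 100.0 * (len(ordered) - 1)))
    return ordered[rank]

def measure(op, n=100_000):
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        op()
        samples.append(time.perf_counter_ns() - t0)
    return samples

latencies = measure(lambda: sum(range(10)))
print("mean   %7.0f ns" % statistics.fmean(latencies))
print("p99    %7.0f ns" % percentile(latencies, 99))
print("p99.9  %7.0f ns" % percentile(latencies, 99.9))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The mean of a run like this usually looks fine; the p99.9 is where timer interrupts, page faults, and interpreter pauses show up.&lt;/p&gt;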

&lt;p&gt;Closing Thoughts&lt;br&gt;
Sub-microsecond systems are not built by adding optimizations — they’re built by removing uncertainty.&lt;/p&gt;

&lt;p&gt;This project is intentionally minimal. It is not a framework. It is an exploration of how far you can push latency control when every design decision answers one question:&lt;/p&gt;

&lt;p&gt;Does this reduce or increase unpredictability?&lt;/p&gt;

&lt;p&gt;Repo: submicro-execution-engine&lt;br&gt;
GitHub: &lt;a href="https://github.com/krish567366/submicro-execution-engine" rel="noopener noreferrer"&gt;https://github.com/krish567366/submicro-execution-engine&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="https://submicro.krishnabajpai.me/" rel="noopener noreferrer"&gt;https://submicro.krishnabajpai.me/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>hft</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
