<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KaykCaputo</title>
    <description>The latest articles on DEV Community by KaykCaputo (@caputokayk).</description>
    <link>https://dev.to/caputokayk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3893000%2F0d99e0c1-a670-4afa-9820-d442ad8f587a.jpeg</url>
      <title>DEV Community: KaykCaputo</title>
      <link>https://dev.to/caputokayk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/caputokayk"/>
    <language>en</language>
    <item>
      <title>Stop Merging Slow Code: Catching Python Performance Regressions Before They Hit Production with Oracletrace</title>
      <dc:creator>KaykCaputo</dc:creator>
      <pubDate>Wed, 22 Apr 2026 19:31:59 +0000</pubDate>
      <link>https://dev.to/caputokayk/stop-merging-slow-code-catching-python-performance-regressions-before-they-hit-production-with-2ajb</link>
      <guid>https://dev.to/caputokayk/stop-merging-slow-code-catching-python-performance-regressions-before-they-hit-production-with-2ajb</guid>
      <description>&lt;p&gt;We spend a significant amount of time ensuring our Python code is clean, linted, and logically sound. We write unit tests to verify correctness and integration tests to ensure systems talk to each other. Yet, there is a massive blind spot in most modern CI/CD pipelines: performance regressions.&lt;/p&gt;

&lt;p&gt;Most teams only realize a new feature has introduced a 30% latency spike after the code is deployed and the monitoring alerts start firing. By then, the damage is done. Recovering from a performance regression in production is significantly more expensive than catching it at the pull request stage.&lt;/p&gt;

&lt;p&gt;Standard unit tests aren't designed to measure execution speed, and full-scale profilers are often too heavy to run in a rapid development loop. This is why we need to shift performance testing left, into the development loop and the CI pipeline.&lt;/p&gt;

&lt;h2&gt;Table Of Contents&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introducing oracletrace&lt;/li&gt;
&lt;li&gt;Performance Tracing in 60 Seconds&lt;/li&gt;
&lt;li&gt;The Delta: Branch vs. Branch Comparison&lt;/li&gt;
&lt;li&gt;Automated Enforcement&lt;/li&gt;
&lt;li&gt;Visualizing Call Flows and Data Export&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Introducing oracletrace: The CI-First Profiler&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;oracletrace&lt;/strong&gt; is a performance-focused tool designed specifically to prevent slow code from ever reaching your main branch. Unlike traditional profilers that output overwhelming amounts of data, oracletrace is built for comparison and enforcement.&lt;/p&gt;

&lt;p&gt;Under the hood, it leverages Python’s &lt;code&gt;sys.setprofile()&lt;/code&gt; mechanism. This allows it to be remarkably lightweight while maintaining the precision required to trace function calls across your entire application.&lt;/p&gt;
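&lt;p&gt;To make the mechanism concrete, here is a minimal, illustrative profiler built on &lt;code&gt;sys.setprofile()&lt;/code&gt;. This is a sketch of the general technique only, not oracletrace's actual implementation:&lt;/p&gt;

```python
import sys
import time
from collections import defaultdict

# Illustrative sketch of the sys.setprofile() mechanism -- NOT oracletrace internals.
totals = defaultdict(float)  # function name -> cumulative wall-clock seconds
calls = defaultdict(int)     # function name -> call count
stack = []                   # (function name, start time) entries

def profiler(frame, event, arg):
    # "call"/"return" fire for every Python-level function entry and exit.
    if event == "call":
        stack.append((frame.f_code.co_name, time.perf_counter()))
    elif event == "return" and stack:
        name, start = stack.pop()
        totals[name] += time.perf_counter() - start
        calls[name] += 1

def slow_step():
    time.sleep(0.05)

def main():
    for _ in range(3):
        slow_step()

sys.setprofile(profiler)
main()
sys.setprofile(None)

for name in sorted(totals, key=totals.get, reverse=True):
    print(f"{name}: {totals[name]:.4f}s over {calls[name]} call(s)")
```

&lt;p&gt;Because &lt;code&gt;call&lt;/code&gt; and &lt;code&gt;return&lt;/code&gt; events nest strictly within a single thread, a plain stack is enough to attribute wall-clock time to each Python function.&lt;/p&gt;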

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/KaykCaputo" rel="noopener noreferrer"&gt;
        KaykCaputo
      &lt;/a&gt; / &lt;a href="https://github.com/KaykCaputo/oracletrace" rel="noopener noreferrer"&gt;
        oracletrace
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Lightweight Python tool to detect performance regressions and compare execution traces with call graph visualization.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;OracleTrace — Detect Python Performance Regressions with Execution Diff&lt;/h1&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Detect performance regressions between runs of your Python script in seconds.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/KaykCaputo/oracletrace/master/oracletracecat.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FKaykCaputo%2Foracletrace%2Fmaster%2Foracletracecat.png" alt="OracleTrace Logo" width="185"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;strong&gt;Fail your CI when performance regresses.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;OracleTrace is a &lt;strong&gt;git diff for performance.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Run your script twice and instantly see what got slower — with function-level precision.&lt;/strong&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;&lt;a href="https://pypi.org/project/oracletrace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/27a47115ce3cf4f5fdf41377d6a152f9f843e877565d13afbf35477821ef2904/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f6f7261636c6574726163653f6c6162656c3d50795049" alt="PyPI"&gt;&lt;/a&gt;
&lt;a href="https://pepy.tech/projects/oracletrace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/6b8bd9bd91e4a8e7cfb690e166f583a1c00a89a26c81634ac380afb442adc769/68747470733a2f2f7374617469632e706570792e746563682f706572736f6e616c697a65642d62616467652f6f7261636c6574726163653f706572696f643d746f74616c26756e6974733d494e5445524e4154494f4e414c5f53595354454d266c6566745f636f6c6f723d424c41434b2672696768745f636f6c6f723d475245454e266c6566745f746578743d646f776e6c6f616473" alt="PyPI Downloads"&gt;&lt;/a&gt;
&lt;a href="https://github.com/KaykCaputo/oracletrace/stargazers" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/163de5624d17d04087bcab2603d91c8514a2f4cdb1868309333bae49b1cc5f73/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f4b61796b43617075746f2f6f7261636c6574726163653f7374796c653d736f6369616c" alt="GitHub Stars"&gt;&lt;/a&gt;
&lt;a href="https://github.com/KaykCaputo/oracletrace/network/members" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/1ebebb9aecf933536681167126e32a145c80d2d6ba889425691818b39a62109e/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f666f726b732f4b61796b43617075746f2f6f7261636c6574726163653f7374796c653d736f6369616c" alt="GitHub Forks"&gt;&lt;/a&gt;
&lt;a href="https://github.com/KaykCaputo/oracletrace/actions/workflows/tests.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/KaykCaputo/oracletrace/actions/workflows/tests.yml/badge.svg" alt="CI Tests"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Documentation: &lt;a href="https://kaykcaputo.github.io/oracletrace/" rel="nofollow noopener noreferrer"&gt;https://kaykcaputo.github.io/oracletrace/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Featured in:&lt;/strong&gt; &lt;a href="https://pycoders.com/issues/729" rel="nofollow noopener noreferrer"&gt;PyCoder's Weekly #729&lt;/a&gt; • &lt;a href="https://github.com/taowen/awesome-debugger" rel="noopener noreferrer"&gt;awesome-debugger&lt;/a&gt; • &lt;a href="https://github.com/msaroufim/awesome-profiling" rel="noopener noreferrer"&gt;awesome-profiling&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Installation&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;pip install oracletrace&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick Start&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. See where your program spends time instantly:&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;oracletrace app.py&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;2. Compare runs and detect regressions:&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;oracletrace app.py --json baseline.json
oracletrace app.py --json new.json --compare baseline.json&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;See it in action&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;See exactly which functions got slower between runs:&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/KaykCaputo/oracletrace/master/oracletrace-cli-demo.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FKaykCaputo%2Foracletrace%2Fmaster%2Foracletrace-cli-demo.gif" alt="OracleTrace CLI demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Example Output&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;Starting application
Iteration 1:
  &amp;gt; Processing data...
    &amp;gt; Calculating results...

Iteration 2:
  &amp;gt; Processing data...
    &amp;gt; Calculating results...

Application finished.

Summary:
                         Top functions by Total Time
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Function                     ┃ Total Time (s) ┃ Calls ┃ Avg. Time/Call (ms) ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ my_app.py:main               │         0.6025 │&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/KaykCaputo/oracletrace" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;oracletrace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Performance Tracing in 60 Seconds&lt;/h3&gt;

&lt;p&gt;The barrier to entry for profiling should be zero. Once installed, you can profile any Python script directly from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oracletrace my_script.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is a structured view of your function calls, showing execution time and call counts. This immediate visibility allows developers to see the performance impact of their changes locally before even pushing to a remote branch.&lt;/p&gt;

&lt;h3&gt;The Delta: Branch vs. Branch Comparison&lt;/h3&gt;

&lt;p&gt;The core value proposition of oracletrace is the ability to compare execution data between two different states of your codebase. This allows you to quantify exactly how much a refactor or a new feature has impacted your performance budget.&lt;/p&gt;

&lt;p&gt;The workflow is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capture a Baseline:&lt;/strong&gt; Run your script on your stable branch (e.g., &lt;code&gt;main&lt;/code&gt;) and export the results to a JSON file.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oracletrace main_app.py &lt;span class="nt"&gt;--json&lt;/span&gt; baseline.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Compare the Feature Branch:&lt;/strong&gt; Run the same command on your feature branch using the &lt;code&gt;--compare&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oracletrace main_app.py &lt;span class="nt"&gt;--compare&lt;/span&gt; baseline.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The resulting report includes a &lt;strong&gt;Delta percentage&lt;/strong&gt; column. If a core utility function has slowed down by 20%, oracletrace highlights it immediately. This transforms performance from a vague feeling into a concrete metric that can be debated and addressed during code reviews.&lt;/p&gt;
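&lt;p&gt;The delta itself is simple arithmetic. The sketch below shows the idea with two hand-written timing snapshots; the dictionaries and the 20% threshold are illustrative stand-ins, not oracletrace's actual JSON schema or defaults:&lt;/p&gt;

```python
# Hypothetical timing snapshots (seconds per function), baseline vs. current run.
baseline = {"parse_rows": 0.120, "compute_stats": 0.300, "render": 0.050}
current  = {"parse_rows": 0.126, "compute_stats": 0.390, "render": 0.049}

def delta_percent(before, after):
    """Percentage change relative to the baseline timing."""
    return (after - before) / before * 100

report = {name: delta_percent(baseline[name], current[name]) for name in baseline}
for name, delta in sorted(report.items(), key=lambda item: item[1], reverse=True):
    flag = "REGRESSION" if delta > 20 else "ok"  # illustrative threshold
    print(f"{name:15s} {delta:+7.1f}%  {flag}")
```

&lt;p&gt;Here &lt;code&gt;compute_stats&lt;/code&gt; comes out at +30%, which is exactly the kind of line a reviewer can point at during a pull request.&lt;/p&gt;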

&lt;h3&gt;Automated Enforcement&lt;/h3&gt;

&lt;p&gt;For teams that prioritize system stability, oracletrace can act as a gatekeeper. By using the &lt;code&gt;--fail-on-regression&lt;/code&gt; flag, the tool will return a non-zero exit code if any function exceeds a specified performance threshold.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oracletrace main_app.py &lt;span class="nt"&gt;--compare&lt;/span&gt; baseline.json &lt;span class="nt"&gt;--fail-on-regression&lt;/span&gt; &lt;span class="nt"&gt;--threshold&lt;/span&gt; 15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this scenario, if your code is more than 15% slower than the baseline, the process fails. This ensures that performance standards are enforced automatically, rather than relying on manual oversight.&lt;/p&gt;
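&lt;p&gt;As a sketch of how this could be wired into CI (GitHub Actions shown; the workflow name, script path, and how &lt;code&gt;baseline.json&lt;/code&gt; is provisioned are assumptions, while the oracletrace flags are the ones introduced above):&lt;/p&gt;

```yaml
# Hypothetical perf gate for pull requests -- adapt names and paths to your repo.
name: perf-gate
on: [pull_request]
jobs:
  perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install oracletrace
      # Assumption: baseline.json was produced on main and is available here,
      # e.g. committed to the repo or restored from an artifact.
      - run: oracletrace main_app.py --compare baseline.json --fail-on-regression --threshold 15
```

&lt;p&gt;Any pull request whose traced run is more than 15% slower than the stored baseline then fails the check automatically.&lt;/p&gt;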

&lt;h3&gt;Visualizing Call Flows and Data Export&lt;/h3&gt;

&lt;p&gt;Beyond simple timing, oracletrace generates visual call graphs that represent the execution flow of your program. This is particularly useful for identifying "hot paths": functions that are called thousands of times in a loop and represent the best opportunities for optimization.&lt;/p&gt;
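&lt;p&gt;Hot paths are easy to miss when you only look at per-call time. This toy example (not tied to oracletrace) shows a function that is trivially cheap per call yet dominant in aggregate, which is precisely what a call graph with call counts surfaces:&lt;/p&gt;

```python
import time

def tiny_format(value):
    """Cheap per call: formats a single number."""
    return f"{value:.3f}"

start = time.perf_counter()
rows = [tiny_format(i * 0.5) for i in range(200_000)]  # called 200,000 times
elapsed = time.perf_counter() - start
print(f"tiny_format: {elapsed:.3f}s total across {len(rows):,} calls")
```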

&lt;p&gt;Furthermore, because oracletrace supports JSON and CSV exports, the performance data can be ingested by external tools for long-term trend analysis or historical tracking.&lt;/p&gt;

&lt;h2&gt;Conclusion: Protect Your Main Branch&lt;/h2&gt;

&lt;p&gt;Performance is a feature, not an afterthought. Integrating a lightweight profiling step into your workflow ensures that your application remains fast as it grows in complexity.&lt;/p&gt;

&lt;p&gt;It takes less than ten minutes to set up oracletrace, but it provides a safety net that protects your production environment from the silent creep of performance regressions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How are you currently catching performance drops before they hit production?&lt;/strong&gt; Share your approach in the comments below!&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/KaykCaputo/oracletrace" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Star oracletrace on GitHub ⭐&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>python</category>
      <category>performance</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
