<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: CodSpeed</title>
    <description>The latest articles on DEV Community by CodSpeed (@codspeed).</description>
    <link>https://dev.to/codspeed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6537%2F61bf1fbf-cb8f-4b85-96af-78554d8ac2ce.png</url>
      <title>DEV Community: CodSpeed</title>
      <link>https://dev.to/codspeed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codspeed"/>
    <language>en</language>
    <item>
      <title>Pinpoint performance regressions with CI-Integrated differential profiling</title>
      <dc:creator>Adrien Cacciaguerra</dc:creator>
      <pubDate>Mon, 23 Oct 2023 13:27:17 +0000</pubDate>
      <link>https://dev.to/codspeed/pinpoint-performance-regressions-with-ci-integrated-differential-profiling-546k</link>
      <guid>https://dev.to/codspeed/pinpoint-performance-regressions-with-ci-integrated-differential-profiling-546k</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Check out the &lt;a href="https://codspeed.io/blog/pinpoint-performance-regressions-with-ci-integrated-differential-profiling"&gt;original post on CodSpeed&lt;/a&gt; to use the interactive flame graph components 📊&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Flame Graphs are a visualization tool that helps you understand how your software is performing. They display the call stack of a program and the time spent in each function. They are a powerful tool to quickly identify performance bottlenecks.&lt;/p&gt;

&lt;p&gt;Differential Flame Graphs combine two flame graphs to highlight the differences between them. They allow you to easily spot performance regressions and improvements and gain invaluable insights into your software's performance.&lt;/p&gt;

&lt;p&gt;Below you can find an example of a differential flame graph following a change in a codebase. At a glance, we can understand where the code has become slower. Here, it is the &lt;code&gt;parse_issue_fixed&lt;/code&gt; function, as it has the brightest red color.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Q4LDvUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7gekm2awrtb6ou3rh4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Q4LDvUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7gekm2awrtb6ou3rh4e.png" width="800" height="190"&gt;&lt;/a&gt;&lt;br&gt;Flame Graphs component taken from the CodSpeed app.
  &lt;/p&gt;

&lt;p&gt;Let's explore how to read flame graphs and how CodSpeed automates flame graph generation in your CI pipeline.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CodSpeed supports Rust, Node.js, and Python, and generates flame graphs out of the box. More languages are coming soon!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Spotting a performance regression&lt;/h2&gt;

&lt;p&gt;Let's dive into the previous example.&lt;/p&gt;

&lt;p&gt;We have a function &lt;code&gt;parse_issue_fixed&lt;/code&gt; that parses a GitHub pull request body and extracts the issue number that it fixes. Given &lt;code&gt;body = "fixes #123"&lt;/code&gt;, the function returns &lt;code&gt;123&lt;/code&gt;. Here &lt;code&gt;body&lt;/code&gt; can be a multiline string, and the issue number can be anywhere in the string.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;On GitHub, adding the string &lt;code&gt;fixes #123&lt;/code&gt; in a pull request body will automatically close the issue #123 when the pull request is merged.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We notice that the code is quite long and not easily understandable. We refactor it to use a regular expression instead. This gives us the following diff:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+import re
&lt;/span&gt;
+FIXES_REGEX = re.compile(r"fixes #(\d+)")
&lt;span class="gi"&gt;+
+
&lt;/span&gt; def parse_issue_fixed(body: str) -&amp;gt; int | None:
&lt;span class="gd"&gt;-    prefix = "fixes #"
-    index = body.find(prefix)
-    if index == -1:
-        return None
-
-    start = index + len(prefix)
-    end = start
-    while end &amp;lt; len(body) and body[end].isdigit():
-        end += 1
-    return int(body[start:end])
&lt;/span&gt;&lt;span class="gi"&gt;+    match = FIXES_REGEX.search(body)
+    return int(match.group(1)) if match else None
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
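&lt;p&gt;Pieced together, the refactored version is a small self-contained snippet (the example bodies below are purely illustrative):&lt;/p&gt;

```python
import re

FIXES_REGEX = re.compile(r"fixes #(\d+)")

def parse_issue_fixed(body: str):
    """Return the issue number a pull request body claims to fix, or None."""
    match = FIXES_REGEX.search(body)
    return int(match.group(1)) if match else None

print(parse_issue_fixed("Refactor parsing\n\nfixes #123"))  # 123
print(parse_issue_fixed("no linked issue"))                 # None
```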



&lt;p&gt;Great, we have a much shorter and more readable function. But did we introduce a performance regression?&lt;/p&gt;

&lt;p&gt;We already have a benchmark in place for the function &lt;code&gt;parse_pr&lt;/code&gt;, a higher-level function that parses a GitHub pull request body and calls multiple functions to retrieve information, including &lt;code&gt;parse_issue_fixed&lt;/code&gt;. The input is a multiline string of approximately 50kB. We chose a large input to better understand the performance characteristics of the function and make sure the function performs well when parsing large pull request bodies.&lt;/p&gt;

&lt;p&gt;Let's check out the flame graphs for the benchmark to analyze the performance impact of our change. The &lt;strong&gt;Base&lt;/strong&gt; flame graph is the one before the change, the &lt;strong&gt;Head&lt;/strong&gt; flame graph is the one after the change, and finally, the &lt;strong&gt;Diff&lt;/strong&gt; flame graph is the difference between the two.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6witFCla--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxx3gv9ldcevr5tvge6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6witFCla--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxx3gv9ldcevr5tvge6n.png" width="800" height="315"&gt;&lt;/a&gt;&lt;br&gt;Flame Graphs component taken from the CodSpeed app.
  &lt;/p&gt;

&lt;p&gt;Here we can see that the &lt;code&gt;parse_issue_fixed&lt;/code&gt; function is bright red, thus denoting that it was slower after the change and had the biggest performance impact on the benchmark. When hovering over the &lt;code&gt;parse_issue_fixed&lt;/code&gt; frame, we can see that the performance impact is a &lt;em&gt;-21.5% regression&lt;/em&gt; on the overall benchmark.&lt;/p&gt;

&lt;p&gt;So in our use case, using a regular expression results in slower executions compared to using &lt;code&gt;str.find&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So yes, our change for the sake of readability introduced a performance regression. We can now make an informed decision on whether we want to keep the change or not.&lt;/p&gt;
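&lt;p&gt;As a rough local illustration (this is plain wall-clock timing, not CodSpeed's cycle-based measurement, and the body string is made up), the two implementations from the diff can be raced with &lt;code&gt;timeit&lt;/code&gt;:&lt;/p&gt;

```python
import re
import timeit

FIXES_REGEX = re.compile(r"fixes #(\d+)")

def parse_regex(body):
    # Refactored version: regular-expression search.
    match = FIXES_REGEX.search(body)
    return int(match.group(1)) if match else None

def parse_find(body):
    # Original version: manual scan with str.find.
    prefix = "fixes #"
    index = body.find(prefix)
    if index == -1:
        return None
    start = index + len(prefix)
    end = start
    while body[end:end + 1].isdigit():  # slicing avoids running past the end
        end += 1
    return int(body[start:end])

# A large, made-up body (~50 kB) with the issue reference near the end.
body = ("lorem ipsum dolor sit amet " * 2000) + "fixes #123"
assert parse_regex(body) == parse_find(body) == 123

print("regex:", timeit.timeit(lambda: parse_regex(body), number=500))
print("find: ", timeit.timeit(lambda: parse_find(body), number=500))
```

The absolute numbers will vary from machine to machine; only the CI-side measurement described below is consistent.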

&lt;p&gt;Now that we understand how a differential flame graph works and how useful it is, let's explore how to generate one.&lt;/p&gt;

&lt;h2&gt;Generating flame graphs manually is unreliable and time-consuming&lt;/h2&gt;

&lt;p&gt;The first thing to note is that manually generated flame graphs on a dev machine are &lt;strong&gt;not consistent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Running the same script multiple times with the same input will result in different flame graphs. Indeed, other processes running on the machine will impact the execution time of the script and its different functions. This makes it hard to spot performance regressions with confidence.&lt;/p&gt;

&lt;p&gt;The second thing to note is that generating flame graphs manually is &lt;strong&gt;tedious&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The first tool for generating flame graphs was the eponymous FlameGraph by Brendan Gregg, written in Perl. Since then, many other tools have been created in and for other languages. The general steps are the same:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run your script with a profiler and output the profile to a file&lt;/li&gt;
&lt;li&gt;Run the tool on the profile to generate a flame graph SVG&lt;/li&gt;
&lt;li&gt;Open the generated SVG in your browser and explore the data&lt;/li&gt;
&lt;/ul&gt;
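&lt;p&gt;For Python, the first step can be sketched with the standard-library profiler; turning the captured profile into an actual flame graph SVG still requires an external tool (e.g. Brendan Gregg's &lt;code&gt;flamegraph.pl&lt;/code&gt; after collapsing the stacks):&lt;/p&gt;

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for the code path you want a flame graph of.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Step 1: run the target under a profiler and capture the raw profile.
profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Steps 2 and 3 would feed this profile to a flame graph tool to render an
# SVG and open it in a browser; here we only print a textual summary.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```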

&lt;p&gt;This is already quite a heavy process: you have to write a script isolating the part of your program you want to generate a flame graph for, then run a bunch of commands, and finally open a file in your browser.&lt;/p&gt;

&lt;p&gt;This is why we decided to automate the generation of flame graphs in your CI pipeline with CodSpeed, and find a way to make sure that the flame graphs are consistent.&lt;/p&gt;

&lt;h2&gt;Automating flame graphs with CodSpeed&lt;/h2&gt;

&lt;p&gt;Since we already had wrappers of benchmarking libraries in different languages, we decided to augment them with profiling capabilities.&lt;/p&gt;

&lt;h3&gt;Steps to follow to get flame graphs in your CI pipeline&lt;/h3&gt;

&lt;p&gt;Write benchmarks with one of our wrappers in Rust, Node.js, or Python.&lt;/p&gt;

&lt;p&gt;For example, in Python, use our &lt;code&gt;pytest&lt;/code&gt; extension &lt;code&gt;pytest-codspeed&lt;/code&gt; (whose API is compatible with &lt;code&gt;pytest-benchmark&lt;/code&gt;) and add a test marked as a benchmark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pytest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;parse_pr&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;parse_pr&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;pytest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mark&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;benchmark&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_parse_pr&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;parse_pr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;very_long_body_string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pr_number&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;126&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Refactor some code"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the benchmarks in the &lt;code&gt;@codspeed/action&lt;/code&gt; GitHub Action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run benchmarks&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CodSpeedHQ/action@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pytest tests/ --codspeed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The action will run the benchmarks on a "virtualized" CPU with Valgrind. Instead of measuring the execution time, it measures the number of CPU cycles and memory accesses. Each benchmark is run only once, which is enough to get &lt;strong&gt;consistent data&lt;/strong&gt;.&lt;/p&gt;
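&lt;p&gt;The idea of counting events instead of measuring time can be illustrated in pure Python with a crude line-event counter (purely a toy stand-in for Valgrind's instruction counting): unlike wall-clock time, the count is identical on every run and every machine:&lt;/p&gt;

```python
import sys

def fibo(n):
    return n if n in (0, 1) else fibo(n - 1) + fibo(n - 2)

def count_lines(func, *args):
    # Very crude stand-in for Valgrind-style event counting: tally the
    # Python "line" trace events executed while running func.
    counter = {"lines": 0}

    def tracer(frame, event, arg):
        if event == "line":
            counter["lines"] += 1
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return counter["lines"]

# Unlike a timing, this count does not change from run to run.
print(count_lines(fibo, 10))
```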

&lt;p&gt;This data is then sent to CodSpeed's servers and flame graphs are generated. A comment is added to the GitHub Pull Request, with a link to the CodSpeed app, where you can browse the benchmarks and their flame graphs.&lt;/p&gt;

&lt;p&gt;And voilà! You now have flame graphs in your CI pipeline, with the benefits of consistency and automation. Moreover, now that you are using CodSpeed, &lt;strong&gt;performance regressions will automatically be detected and reported&lt;/strong&gt; in your pull requests 🎉&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Head over to our &lt;a href="https://docs.codspeed.io"&gt;documentation&lt;/a&gt; to view the integrations for the different languages.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Going further&lt;/h2&gt;

&lt;p&gt;Flame graphs are not only useful for spotting performance regressions: they can also be used to understand the performance impact of any change, be it an improvement or new calls to a function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HTtamq3B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lpfyfyjx999yzxcw04vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HTtamq3B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lpfyfyjx999yzxcw04vs.png" width="800" height="187"&gt;&lt;/a&gt;&lt;br&gt;A more complex flame graph with regressions, improvements, and added code
  &lt;/p&gt;

&lt;p&gt;If you enjoyed this article, you can follow us on &lt;a href="https://twitter.com/codspeedhq"&gt;Twitter&lt;/a&gt; to get notified when we publish new articles.&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.codspeed.io/features/trace-generation"&gt;Trace Generation&lt;/a&gt; in the CodSpeed documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/CodSpeedHQ/pytest-codspeed"&gt;pytest-codspeed&lt;/a&gt;, plugin for pytest&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/CodSpeedHQ/action"&gt;@codspeed/action&lt;/a&gt;, GitHub Action to run benchmarks and generate flame graphs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.brendangregg.com/flamegraphs.html"&gt;Flame Graphs&lt;/a&gt; by Brendan Gregg&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.brendangregg.com/blog/2014-11-09/differential-flame-graphs.html"&gt;Differential Flame Graphs&lt;/a&gt; by Brendan Gregg&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue"&gt;Linking a pull request to an issue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://valgrind.org/"&gt;Valgrind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ionelmc/pytest-benchmark"&gt;pytest-benchmark&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introducing CodSpeed: Continuous Performance Measurement</title>
      <dc:creator>Arthur Pastel</dc:creator>
      <pubDate>Tue, 31 Jan 2023 15:10:54 +0000</pubDate>
      <link>https://dev.to/codspeed/introducing-codspeed-continuous-performance-measurement-3g73</link>
      <guid>https://dev.to/codspeed/introducing-codspeed-continuous-performance-measurement-3g73</guid>
      <description>&lt;h2&gt;Why does Continuous Performance analysis matter?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Performance is the cornerstone of many problems in software development.&lt;/strong&gt; Application reactivity, infrastructure costs, energy consumption, and carbon footprint all depend on the performance of the underlying software. &lt;strong&gt;By performance, we often mean execution speed&lt;/strong&gt; but more generally, it refers to the efficiency or throughput of a system.&lt;/p&gt;

&lt;p&gt;Today, most of the performance monitoring is done through &lt;strong&gt;Application Performance Monitoring solutions&lt;/strong&gt; (often referred to as APM), provided by companies such as Datadog, Sentry, Blackfire, and many others. These platforms bring &lt;strong&gt;several interesting insights about the production environment's health&lt;/strong&gt;: client-side UX monitoring, endpoint latency checks, and even continuous production profiling.&lt;/p&gt;

&lt;p&gt;However, these solutions are monitoring tools; &lt;strong&gt;they are built to check that everything is okay in production,&lt;/strong&gt; not really to run experiments. They need real users, serving as guinea pigs, to experience poor performance before anomalies can be reported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgea5lwrg84lbpsvpxdb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgea5lwrg84lbpsvpxdb1.png" alt="Feedback in the current SDLC" width="800" height="164"&gt;&lt;/a&gt;&lt;br&gt;Performance feedback in the &lt;b&gt;current&lt;/b&gt; Software Development Life-Cycle
  &lt;/p&gt;

&lt;p&gt;So as a developer, to understand the performance impact of my changes, I need to wait for end users to try out my changes in production!? Then, if something is wrong, maybe I’ll try to improve it, someday…&lt;/p&gt;

&lt;p&gt;In an ideal world, &lt;strong&gt;performance checks should be included way earlier in the development lifecycle&lt;/strong&gt;; just as an additional testing flavor, &lt;strong&gt;nurturing continuous improvement&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv565hlhtgozghppe96fv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv565hlhtgozghppe96fv.png" alt="feedback in the ideal SDLC" width="800" height="165"&gt;&lt;/a&gt;&lt;br&gt;Performance feedback in an &lt;b&gt;ideal&lt;/b&gt; Software Development Life-Cycle
  &lt;/p&gt;

&lt;p&gt;This much shorter feedback loop would &lt;strong&gt;provide visibility with consistent metrics to the teams while they are building&lt;/strong&gt; and not once everything is already shipped in production environments. Besides, &lt;strong&gt;guesstimating performance is hard and often plain wrong&lt;/strong&gt;. Accurate performance reports help in those cases and can serve as an educational tool for software developers.&lt;/p&gt;

&lt;h2&gt;Building a consistent performance metric&lt;/h2&gt;

&lt;p&gt;Measuring software performance in various environments to gather reproducible results is hard. The most basic metric we can think of is to &lt;strong&gt;measure the execution time, but just changing the hardware will produce completely different data points&lt;/strong&gt;. Running a program on a toaster will be significantly slower than running it on the latest generation of cloud instances. Joking aside, &lt;strong&gt;merely using the same machine at different times will produce different results&lt;/strong&gt; because other unpredictable background tasks are eating up the CPU time.&lt;/p&gt;
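&lt;p&gt;This noise is easy to reproduce locally: timing the exact same workload several times in a row already yields a visible spread (the function and parameters here are illustrative):&lt;/p&gt;

```python
import timeit

def fibo(n):
    return n if n in (0, 1) else fibo(n - 1) + fibo(n - 2)

# Five timings of the exact same workload, back to back, on the same machine.
samples = [timeit.timeit(lambda: fibo(15), number=200) for _ in range(5)]

spread = (max(samples) - min(samples)) / min(samples)
print(f"spread across identical runs: {spread:.1%}")
```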

&lt;p&gt;One obvious solution to bring consistency to the results is to run the program in a controlled cloud environment where background processes are very limited. Despite a significant improvement in the quality of the results, &lt;strong&gt;this doesn’t give repeatable measurements either because of the &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/noisy-neighbor-cloud-computing-performance" rel="noopener noreferrer"&gt;noisy neighbor issue&lt;/a&gt;&lt;/strong&gt;. Basically, physical cloud machines are shared among customers by splitting them into multiple Virtual Machines. At a software level, those are perfectly isolated but in the end, they still share some common hardware and &lt;strong&gt;isolation can’t be perfect&lt;/strong&gt; (e.g. memory, high-level CPU caches, network interfaces). Since it’s not possible to predict the workload running alongside a measurement task, finding the “truth” in this noise becomes a statistical challenge, and &lt;strong&gt;building an extremely accurate metric is nearly impossible&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdss42e88134e0u9n6lzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdss42e88134e0u9n6lzt.png" alt="Time measurement for fibo10" width="800" height="371"&gt;&lt;/a&gt;&lt;br&gt;Time measurement for a Fibonacci sequence computation(Python runs from GitHub Action)
  &lt;/p&gt;

&lt;p&gt;When it comes to consistency, working with execution time measurements seems doomed to failure. So what if we instead decided to dissect exactly what a virtual machine does to run our program? In this case, running the same measurement again and again on various machines produces the exact same results, since &lt;strong&gt;the hosted virtual machine always starts with a predefined state and emulates the same hardware&lt;/strong&gt;. We don’t mind the noisy neighbor issue either, because &lt;strong&gt;whatever is running alongside our measurement doesn’t change the instructions executed to run our program&lt;/strong&gt; in its sandboxed environment. Furthermore, based on those micro execution steps, it is possible to aggregate &lt;strong&gt;a time equivalent metric that will be consistent, accurate (detecting performance changes of less than 1%), and hardware agnostic&lt;/strong&gt;: &lt;strong&gt;this is how CodSpeed works!&lt;/strong&gt;&lt;/p&gt;
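&lt;p&gt;Conceptually, such an aggregation boils down to weighting each counted event class with a fixed cost model (the event names and weights below are illustrative assumptions, not CodSpeed's actual model):&lt;/p&gt;

```python
# Illustrative cost model: each low-level event class gets a fixed cycle cost.
# These names and weights are made-up assumptions, not CodSpeed's real model.
EVENT_COSTS = {"cpu_instruction": 1, "l1_hit": 1, "ram_access": 35}

def time_equivalent(counts, cycles_per_second=1_000_000_000):
    # Aggregate deterministic event counts into a hardware-agnostic
    # "time equivalent" on a hypothetical 1 GHz CPU.
    cycles = sum(EVENT_COSTS[event] * n for event, n in counts.items())
    return cycles / cycles_per_second

measured = {"cpu_instruction": 12_000_000, "l1_hit": 3_000_000, "ram_access": 40_000}
print(f"time equivalent: {time_equivalent(measured) * 1000:.3f} ms")  # 16.400 ms
```

Because the event counts are deterministic, the derived metric is too, regardless of what else runs on the machine.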

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jnku6pdeh9c7un55iz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jnku6pdeh9c7un55iz8.png" alt="CodSpeed measurement for fibo10" width="800" height="414"&gt;&lt;/a&gt;&lt;br&gt;CodSpeed Measurement for a Fibonacci sequence computation(Python runs from GitHub Action)
  &lt;/p&gt;

&lt;h2&gt;Shifting left with CodSpeed&lt;/h2&gt;

&lt;p&gt;CodSpeed brings this consistent measurement to the Continuous Integration environments, enabling performance checks to be included in the software development lifecycle as early as possible.&lt;/p&gt;

&lt;p&gt;On every new feature, the performance is measured and reported directly in the repository provider as a Pull Request comment. Optionally, status checks can also be enabled to enforce performance requirements to be satisfied before merging the new delivery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyz4j0mmrb31ofaz2aa01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyz4j0mmrb31ofaz2aa01.png" alt="Github Integration" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, there is a whole platform available, breaking down performance per branch, commit, or benchmark and giving you an overview of the upcoming performance changes. We will give you more details about this in a future blog post!&lt;/p&gt;

&lt;p&gt;And the beta is already open! 🎉 Here are some of the open-source repositories already using CodSpeed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pydantic/pydantic-core" rel="noopener noreferrer"&gt;&lt;code&gt;pydantic-core&lt;/code&gt;&lt;/a&gt;: The core validation logic for pydantic, a Python data parsing and validation library.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/samueltardieu/pathfinding" rel="noopener noreferrer"&gt;&lt;code&gt;pathfinding&lt;/code&gt;&lt;/a&gt;: A pathfinding library for Rust.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/swarmion/swarmion" rel="noopener noreferrer"&gt;&lt;code&gt;swarmion&lt;/code&gt;&lt;/a&gt;: A set of tools to build and deploy type-safe serverless microservices with Typescript.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to try it out, &lt;a href="https://codspeed.io" rel="noopener noreferrer"&gt;check out the product&lt;/a&gt; and don't hesitate to &lt;a href="https://twitter.com/codspeedhq" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay updated!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
