<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: T3ns0r</title>
    <description>The latest articles on DEV Community by T3ns0r (@t3ns0r).</description>
    <link>https://dev.to/t3ns0r</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1678695%2F4c02b211-1a2d-4dd4-93e4-0856f6719236.jpg</url>
      <title>DEV Community: T3ns0r</title>
      <link>https://dev.to/t3ns0r</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/t3ns0r"/>
    <language>en</language>
    <item>
      <title>Why your FastAPI (or Flask) App performs poorly with high loads</title>
      <dc:creator>T3ns0r</dc:creator>
      <pubDate>Sun, 20 Oct 2024 18:19:33 +0000</pubDate>
      <link>https://dev.to/t3ns0r/why-your-fastapi-or-flask-app-performs-poorly-with-high-loads-4l48</link>
      <guid>https://dev.to/t3ns0r/why-your-fastapi-or-flask-app-performs-poorly-with-high-loads-4l48</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a4mep2czl2jaf8nam6m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a4mep2czl2jaf8nam6m.jpg" alt="Credits to https://medium.com/geekculture" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The initial results of load tests in a simple FastAPI App that performs CRUD operations to a Postgres DB did not look good, but after some analysis, the problem was identified.&lt;/p&gt;

&lt;p&gt;This text is intended for entry-level developers and data scientists (not senior Python software engineers). I will write it as a narrative, that is, the chronological sequence of events as they happened, instead of a "technical paper" (structured as problem, solution, discussion). I like this approach because it shows how things happen in real life.&lt;/p&gt;

&lt;h2&gt;Initial Considerations&lt;/h2&gt;

&lt;p&gt;These tests were done on GCP Cloud Run with a single vCPU and 512MB of RAM, using &lt;a href="https://locust.io/" rel="noopener noreferrer"&gt;Locust&lt;/a&gt;, an incredible tool (for Python, LoL).&lt;/p&gt;

&lt;p&gt;Also, if you are already having performance issues on single requests in Postman, I strongly suggest you take a look at this repo dedicated to increasing FastAPI performance &lt;a href="https://kisspeter.github.io/fastapi-performance-optimization/" rel="noopener noreferrer"&gt;from kisspeter&lt;/a&gt; and this guide &lt;a href="https://loadforge.com/guides/fastapi-performance-tuning-tricks-to-enhance-speed-and-scalability" rel="noopener noreferrer"&gt;from LoadForge&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;First Test Round&lt;/h2&gt;

&lt;p&gt;Using a single request in Postman, after Cloud Run started, I was getting around 400ms response time. Not the best, but totally within an acceptable range.&lt;/p&gt;

&lt;p&gt;The load test is quite simple: reads, writes, and deletes in one table (or GETs, POSTs, and DELETEs to the API endpoints), split into 75% reads, 20% writes, and 5% deletes. We ran it with 100 concurrent users for 10 minutes.&lt;/p&gt;
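For reference, a traffic mix like the one above can be sketched as a minimal Locust file. The endpoint paths and payload here are assumptions for illustration, not the actual API:

```python
# Hypothetical locustfile sketch for a 75/20/5 read/write/delete mix.
# Task weights 15:4:1 reproduce the 75%/20%/5% split.
from locust import HttpUser, task, between

class CrudUser(HttpUser):
    wait_time = between(0.5, 2)  # simulated user think time, in seconds

    @task(15)  # 75% of requests: reads
    def read_items(self):
        self.client.get("/items")

    @task(4)   # 20% of requests: writes
    def create_item(self):
        self.client.post("/items", json={"name": "test"})

    @task(1)   # 5% of requests: deletes
    def delete_item(self):
        self.client.delete("/items/1")
```

A run roughly matching the test described would be `locust -f locustfile.py --users 100 --run-time 10m` against the target host.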

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzkk4tfbkt5fuif5t6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzkk4tfbkt5fuif5t6u.png" alt="Response time in GCP - 2s average" width="745" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, we got a 2s average response time, but the most disturbing part is that the average was still increasing when the test ended, so the number would very likely have kept growing before (and if) it stabilized.&lt;/p&gt;

&lt;p&gt;I tried running it locally on my machine, and to my surprise, the response time in Postman was only 14ms. However, when running the load test with 500 concurrent users, the problem appeared again 😱 ...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa709me3rx851fwgch7i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa709me3rx851fwgch7i0.png" alt="Response time locally - 1.6s average" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By the end of the test, the response time was about 1.6s and still increasing, but some glitch occurred and the 95th percentile skyrocketed (and ruined the graph =( ). Here are the stats:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhns8lru9fets1qll872.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhns8lru9fets1qll872.png" alt="Response time locally stats table " width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, why does a server that responds in 14ms suddenly go up to 1.6 seconds with only 500 concurrent users?&lt;/p&gt;

&lt;p&gt;My machine is a Core i7 (6 cores, 2.6GHz) with 16GB RAM and an SSD. This should not happen.&lt;/p&gt;

&lt;p&gt;What gave me a good hint was my CPU and memory monitoring: usage on both was extremely low!&lt;/p&gt;

&lt;p&gt;This probably meant my server was not using all the resources of my machine. And guess what? It was not. Let me present to you a concept the vast majority of developers forget when deploying FastAPI or Flask applications to prod: the process worker.&lt;/p&gt;

&lt;p&gt;As per &lt;code&gt;getorchestra.io&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;Understanding Server Workers&lt;/h3&gt;

&lt;p&gt;Server workers are essentially processes that run your application code. Each worker can handle one request at a time. If you have multiple workers, you can process multiple requests simultaneously, enhancing the throughput of your application.&lt;/p&gt;
&lt;h3&gt;Why Server Workers are Important&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Concurrency: They allow concurrent handling of requests, leading to better utilization of server resources and faster response times.&lt;/li&gt;
&lt;li&gt;Isolation: Each worker is an independent process. If one worker fails, it doesn't affect the others, ensuring better stability.&lt;/li&gt;
&lt;li&gt;Scalability: Adjusting the number of workers can easily scale your application to handle varying loads.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;In practice, all you need to do is add the optional &lt;code&gt;--workers&lt;/code&gt; param to your server initialization line. How many workers you need depends a lot on the server running your application and on your application's behavior, especially its memory consumption.&lt;/p&gt;
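As a sketch, assuming a typical uvicorn setup (the module name "main:app" is an assumption about your project layout), the same thing can be done from a small launch script instead of the CLI:

```python
# Hypothetical launch script: run uvicorn with multiple worker processes.
# Note: the app must be passed as an import string ("main:app"), not as
# an object, for the workers option to take effect.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=4)
```

The CLI equivalent is `uvicorn main:app --workers 4`; with gunicorn managing the processes it would be `gunicorn -k uvicorn.workers.UvicornWorker -w 4 main:app`.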

&lt;p&gt;After doing that, I got much better results locally with 16 workers, converging to 90ms (for 500 concurrent users) after 10 minutes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjha25esqzaxvw54zrzn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjha25esqzaxvw54zrzn7.png" alt="Response time locally - 90ms avg " width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Final Test Round&lt;/h2&gt;

&lt;p&gt;After configuring the microservices with an appropriate number of workers (I used 4 for my single-vCPU Cloud Run instance), my results in GCP were dramatically better:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ocru6em3e2lsk5uoszq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ocru6em3e2lsk5uoszq.png" alt="Response time in GCP - 300 ms avg " width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final value converged to 300ms by the end of the test on the GCP server, which is at least acceptable. 😅&lt;/p&gt;
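If you are wondering where to start when picking a worker count, a common rule of thumb (borrowed from gunicorn's documentation) is (2 x cores) + 1, to be tuned afterwards against your memory budget under load:

```python
# Rule-of-thumb worker count: (2 x number_of_cores) + 1.
# Treat this as a starting point only; memory-heavy apps may need fewer.
import os

def suggested_workers(cores=None):
    # Fall back to the machine's CPU count when cores is not given.
    cores = cores or os.cpu_count() or 1
    return 2 * cores + 1

print(suggested_workers(1))  # single-vCPU instance: 3
print(suggested_workers(6))  # 6-core laptop: 13
```

This formula gives 3 for a single-vCPU Cloud Run instance, in the same ballpark as the 4 workers used in the test above.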

</description>
      <category>fastapi</category>
      <category>flask</category>
      <category>python</category>
      <category>gcp</category>
    </item>
  </channel>
</rss>
