<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hermógenes Ferreira</title>
    <description>The latest articles on DEV Community by Hermógenes Ferreira (@hermogenes).</description>
    <link>https://dev.to/hermogenes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F665079%2F1bf0c511-2fcd-4e04-8505-fa0b14cee735.jpeg</url>
      <title>DEV Community: Hermógenes Ferreira</title>
      <link>https://dev.to/hermogenes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hermogenes"/>
    <language>en</language>
    <item>
      <title>When Benchmarks Lie: Saturation, Queues, and the Cost of Assuming Linearity</title>
      <dc:creator>Hermógenes Ferreira</dc:creator>
      <pubDate>Mon, 05 Jan 2026 10:31:14 +0000</pubDate>
      <link>https://dev.to/hermogenes/when-benchmarks-lie-saturation-queues-and-the-cost-of-assuming-linearity-45b6</link>
      <guid>https://dev.to/hermogenes/when-benchmarks-lie-saturation-queues-and-the-cost-of-assuming-linearity-45b6</guid>
      <description>&lt;p&gt;&lt;strong&gt;If you didn't run your system at 1M requests per second, it doesn't support 1M requests per second.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running it at 100k and extrapolating the rest assumes linearity, and linearity is the most dangerous assumption in performance engineering.&lt;/p&gt;

&lt;p&gt;Most performance benchmarks don't fail because the numbers are wrong.&lt;br&gt;
They fail because we ask them the wrong question.&lt;/p&gt;

&lt;p&gt;Low latency at low load doesn't predict high throughput at high load.&lt;br&gt;
It predicts only one thing: the system wasn't busy yet.&lt;/p&gt;

&lt;p&gt;This post explains why that assumption breaks, why queues quietly dominate system behavior, and why throughput and latency are inseparable once you leave the comfort of idle benchmarks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The intuition trap: "fast = scalable"
&lt;/h2&gt;

&lt;p&gt;You run a benchmark and see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;p99 latency: 1 microsecond&lt;/li&gt;
&lt;li&gt;test duration: a few minutes&lt;/li&gt;
&lt;li&gt;CPU looks idle&lt;/li&gt;
&lt;li&gt;no errors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The natural conclusion: "At 1 microsecond per request, this system can do ~1M requests per second."&lt;/p&gt;

&lt;p&gt;This assumes linearity: double the load, double the work, latency stays flat, resources scale smoothly.&lt;/p&gt;

&lt;p&gt;Real systems don't work like that.&lt;/p&gt;

&lt;h2&gt;
  
  
  A real-life analogy: the coffee shop
&lt;/h2&gt;

&lt;p&gt;A coffee machine makes a coffee in 1 second. Does that mean the shop can serve 60 people per minute?&lt;/p&gt;

&lt;p&gt;Only if customers arrive one at a time, nobody queues, and nobody orders at the same moment.&lt;/p&gt;

&lt;p&gt;The moment two people arrive together, one waits. The coffee machine is still "fast", but latency increases because of waiting.&lt;/p&gt;

&lt;p&gt;Your server works the same way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one rule you actually need
&lt;/h2&gt;

&lt;p&gt;There's a fundamental relationship that applies to coffee shops, highways, and servers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Average concurrency = throughput × latency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is &lt;a href="https://en.wikipedia.org/wiki/Little%27s_law" rel="noopener noreferrer"&gt;Little's Law&lt;/a&gt;.&lt;br&gt;
The implication: more throughput requires more requests in flight, and once the requests in flight exceed what the hardware can truly run in parallel, the extras wait in queues and latency rises.&lt;/p&gt;

&lt;p&gt;You don't get infinite throughput just because one request is fast.&lt;/p&gt;
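
&lt;p&gt;As a sanity check, the law is a single multiplication. A minimal Python sketch, using nothing beyond the numbers in this post:&lt;/p&gt;

```python
# Little's Law: average concurrency L = throughput (lambda) x latency (W).
def average_concurrency(throughput_rps, latency_seconds):
    return throughput_rps * latency_seconds

# The benchmark numbers from this post: 1M RPS at 1 microsecond each.
print(average_concurrency(1_000_000, 0.000001))  # 1.0 request in flight

# The same throughput target with a more realistic 1 ms latency:
print(average_concurrency(1_000_000, 0.001))     # 1000.0 requests in flight
```

&lt;p&gt;One request in flight versus a thousand: the latency you sustain under load, not the latency of one idle request, decides how much concurrency your target throughput demands.&lt;/p&gt;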

&lt;h2&gt;
  
  
  Applying this to the "1M RPS" claim
&lt;/h2&gt;

&lt;p&gt;Simple math:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average latency: 1 microsecond&lt;/li&gt;
&lt;li&gt;Target throughput: 1,000,000 requests/second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Little's Law says: on average, the system has &lt;strong&gt;exactly 1 request in flight&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One. No overlap. No waiting. No queue.&lt;/p&gt;

&lt;p&gt;That's physically impossible for a real server.&lt;br&gt;
Requests arrive simultaneously, the OS schedules work, network packets queue, memory is shared.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So what really happens?&lt;/strong&gt;&lt;br&gt;
Requests wait. Queues form. Latency grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why linear extrapolation fails
&lt;/h2&gt;

&lt;p&gt;The core mistake: "It works at 100k RPS, so it should work at 1M RPS."&lt;/p&gt;

&lt;p&gt;This ignores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queue growth&lt;/li&gt;
&lt;li&gt;Contention&lt;/li&gt;
&lt;li&gt;OS limits&lt;/li&gt;
&lt;li&gt;Network saturation&lt;/li&gt;
&lt;li&gt;Scheduling delays&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These effects are non-linear. They appear near the limit, not at half load. Systems often look fine right up until they collapse.&lt;/p&gt;
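
&lt;p&gt;How sharp is "near the limit"? The simplest textbook queueing model, M/M/1 (random arrivals, one server), makes the shape concrete. It is a deliberate oversimplification of a real server, but it captures the non-linearity: average latency is 1 / (service rate minus arrival rate), flat at low load and explosive near saturation.&lt;/p&gt;

```python
# M/M/1 average time in system: W = 1 / (mu - lambda).
# service_rps = 1,000,000 models the "1 microsecond per request" benchmark.
def mm1_latency_us(arrival_rps, service_rps=1_000_000):
    assert arrival_rps < service_rps, "at saturation the queue grows without bound"
    return 1e6 / (service_rps - arrival_rps)  # result in microseconds

for load in (100_000, 500_000, 900_000, 990_000, 999_000):
    print(f"{load:>9} RPS -> {mm1_latency_us(load):8.2f} us")
# At 10% load: ~1.1 us. At 90%: 10 us. At 99.9%: 1000 us. Same code, same hardware.
```

&lt;p&gt;A real server is not M/M/1, but every queueing model shares this hockey-stick shape: the last 10% of capacity costs orders of magnitude more latency than the first 90%.&lt;/p&gt;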

&lt;h2&gt;
  
  
  What short benchmarks actually measure
&lt;/h2&gt;

&lt;p&gt;Short benchmarks typically run for a few minutes with light traffic, no queues, and warm caches.&lt;/p&gt;

&lt;p&gt;What they really measure: "How fast is my code when nothing is waiting?"&lt;/p&gt;

&lt;p&gt;That's useful, but it's not capacity. It's like timing how fast a cashier scans one item and assuming the store can handle Black Friday.&lt;/p&gt;

&lt;h2&gt;
  
  
  The invisible enemy: queues
&lt;/h2&gt;

&lt;p&gt;As load increases, requests start waiting, queues grow, latency increases, and throughput stops scaling.&lt;/p&gt;

&lt;p&gt;Nothing crashes. No error appears. No bug is introduced. The system just gets slower.&lt;/p&gt;

&lt;h2&gt;
  
  
  Another analogy: highways
&lt;/h2&gt;

&lt;p&gt;A highway at night is empty, fast, and smooth. The same highway at rush hour (same road, same speed limit) develops traffic jams.&lt;/p&gt;

&lt;p&gt;Why? Once the road is almost full, small increases in traffic cause huge delays. Servers behave exactly like this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network limits show up before your code
&lt;/h2&gt;

&lt;p&gt;At high request rates with realistic payloads (roughly 1 KB per request):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network cards queue packets&lt;/li&gt;
&lt;li&gt;Kernel buffers fill&lt;/li&gt;
&lt;li&gt;Interrupts pile up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Latency increases before your application code even runs. A microbenchmark won't show this. A real load test will.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ports and connections also saturate
&lt;/h2&gt;

&lt;p&gt;High request rates hit limits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TCP port exhaustion&lt;/li&gt;
&lt;li&gt;Accept queue overflows&lt;/li&gt;
&lt;li&gt;File descriptor limits&lt;/li&gt;
&lt;li&gt;Sockets stuck in TIME_WAIT&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are operating system constraints, not application bugs. Short tests rarely reach them. Production traffic does.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why nanosecond benchmarks are misleading
&lt;/h2&gt;

&lt;p&gt;Nanosecond benchmarks usually prove one thing: the data was already in cache.&lt;/p&gt;

&lt;p&gt;They measure best-case conditions: hot CPU caches, no contention, no waiting. They don't measure queue buildup, cache eviction, real concurrency, or sustained pressure.&lt;/p&gt;

&lt;p&gt;They're useful for understanding code-level performance, but dangerous if treated as capacity numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to test honestly
&lt;/h2&gt;

&lt;p&gt;If you want to know whether your system handles 1M requests per second:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run it at 1M RPS&lt;/li&gt;
&lt;li&gt;Run it long enough for queues to form (minutes, not seconds)&lt;/li&gt;
&lt;li&gt;Watch latency as load increases&lt;/li&gt;
&lt;li&gt;Expect non-linear behavior&lt;/li&gt;
&lt;li&gt;Stop assuming linearity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Anything less is guessing.&lt;/p&gt;
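
&lt;p&gt;The idea behind steps 2 through 4 can be sketched with a toy single-server simulation (illustrative only, not a real load test): step the arrival rate up and observe latency at each level instead of extrapolating it.&lt;/p&gt;

```python
import random

def simulate_avg_latency(arrival_rps, service_us=1.0, n_requests=50_000, seed=42):
    """Toy single-server queue: random (exponential) arrival gaps, fixed 1 us service.

    Requests that find the server busy wait their turn; returns average
    latency (waiting + service) in microseconds.
    """
    rng = random.Random(seed)
    mean_gap_us = 1e6 / arrival_rps         # average gap between arrivals
    clock = 0.0                             # arrival time of the current request
    server_free_at = 0.0                    # when the server clears its backlog
    total_latency = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(1.0) * mean_gap_us
        start = max(clock, server_free_at)  # wait if the server is busy
        server_free_at = start + service_us
        total_latency += server_free_at - clock
    return total_latency / n_requests

# Step the load up and watch latency instead of assuming it stays flat:
for rps in (100_000, 500_000, 900_000, 990_000):
    print(f"{rps:>9} RPS -> avg latency {simulate_avg_latency(rps):7.2f} us")
```

&lt;p&gt;Nothing in the simulated server changes between the first and last line of output; only the load does. That is the measurement a short, light benchmark never makes.&lt;/p&gt;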

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;Latency isn't a property of a request. It's a property of a system under load.&lt;/p&gt;

&lt;p&gt;If you push more traffic, concurrency must grow; once concurrency outgrows the hardware's real parallelism, latency grows; and past that point, the system saturates. There is no fourth option.&lt;/p&gt;

&lt;p&gt;Once you internalize this, benchmarks stop lying — because you stop asking them the wrong questions.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>systemdesign</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Como um Regex pode derrubar o seu servidor</title>
      <dc:creator>Hermógenes Ferreira</dc:creator>
      <pubDate>Tue, 11 Feb 2025 15:16:02 +0000</pubDate>
      <link>https://dev.to/hermogenes/como-um-regex-pode-derrubar-o-seu-servidor-4pj6</link>
      <guid>https://dev.to/hermogenes/como-um-regex-pode-derrubar-o-seu-servidor-4pj6</guid>
      <description>&lt;p&gt;Há uns dias eu fiz o seguinte post no X.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1887545470857191474-322" src="https://platform.twitter.com/embed/Tweet.html?id=1887545470857191474"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;Yes, it sounds like an exaggeration, but it's true. A badly written regex can take down an entire server.&lt;/p&gt;

&lt;p&gt;Besides being a problem that can cause serious damage, it is also quite hard to identify.&lt;/p&gt;

&lt;p&gt;This class of problem is known as &lt;a href="https://www.regular-expressions.info/catastrophic.html" rel="noopener noreferrer"&gt;Catastrophic Backtracking&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Malicious actors can exploit this vulnerability to take down a server in an attack known as &lt;a href="https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS" rel="noopener noreferrer"&gt;ReDoS&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Catastrophic Backtracking?
&lt;/h2&gt;

&lt;p&gt;Catastrophic Backtracking happens when an ambiguous regex forces the engine to try every possible way of splitting the input before it can conclude there is no match.&lt;/p&gt;

&lt;p&gt;For example, consider the regex &lt;code&gt;^(a+)+$&lt;/code&gt; and the string &lt;code&gt;aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Because the inner &lt;code&gt;a+&lt;/code&gt; and the outer &lt;code&gt;+&lt;/code&gt; can each absorb any number of &lt;code&gt;a&lt;/code&gt;s, the engine tries every way of partitioning the run of &lt;code&gt;a&lt;/code&gt;s between them. Since the final &lt;code&gt;!&lt;/code&gt; makes every attempt fail, it really does try all of them, and the number of partitions grows exponentially with the length of the string.&lt;/p&gt;

&lt;p&gt;You might be asking yourself: "But how can this take down a server?"&lt;/p&gt;

&lt;p&gt;When the input is long enough, exploring all those combinations can consume an enormous amount of server resources.&lt;/p&gt;

&lt;p&gt;Since regex matching is CPU-bound, a ReDoS attack can take down a server if no protection mechanisms are in place.&lt;/p&gt;
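
&lt;p&gt;Python's built-in &lt;code&gt;re&lt;/code&gt; module is also a backtracking engine, so the blow-up is easy to reproduce locally. A sketch (exact timings depend on your machine):&lt;/p&gt;

```python
import re
import time

VULNERABLE = re.compile(r"^(a+)+$")

def time_to_fail(n):
    """Time how long ^(a+)+$ takes to reject n 'a's followed by '!'."""
    text = "a" * n + "!"
    start = time.perf_counter()
    assert VULNERABLE.match(text) is None  # can never match: '!' is not an 'a'
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the number of partitions the engine tries:
for n in (16, 18, 20, 22):
    print(f"n={n}: {time_to_fail(n):.4f}s")
```

&lt;p&gt;A handful of additional characters pushes this from milliseconds into minutes; that is the entire attack.&lt;/p&gt;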

&lt;h2&gt;
  
  
  Simulating a ReDoS
&lt;/h2&gt;

&lt;p&gt;To demonstrate a ReDoS, I created a Node.js application that validates a URL slug.&lt;/p&gt;

&lt;p&gt;The validation is done with the regex &lt;code&gt;^([a-zA-Z-_0-9]+\/?)*$&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Breaking the regex down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;^&lt;/code&gt;: Start of the string&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;([a-zA-Z-_0-9]+\/?)*&lt;/code&gt;: A group that accepts letters, digits, hyphens, and underscores, followed by an optional slash. The group can repeat zero or more times.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$&lt;/code&gt;: End of the string&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means the URL slug may contain letters, digits, hyphens, and underscores, followed by an optional slash, and that sequence can repeat zero or more times.&lt;/p&gt;

&lt;p&gt;The application is very simple. It receives a URL slug via query string and checks whether the slug is valid.&lt;/p&gt;

&lt;p&gt;The app's code is available &lt;a href="https://github.com/hermogenes/redos-simulation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, let's simulate a ReDoS.&lt;/p&gt;

&lt;p&gt;First, let's test the application with a valid URL slug:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"http://localhost:3000/slugs?url=hello-world"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This request should return &lt;code&gt;{"valid":true}&lt;/code&gt; without any trouble.&lt;/p&gt;

&lt;p&gt;Now, let's test the application with an invalid URL slug:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"http://localhost:3000/slugs?url=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa%21"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, the request above will most likely never finish.&lt;/p&gt;

&lt;p&gt;If you watch the CPU usage of the Node.js process, you will see it pinned at 100%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Checking the simulation results
&lt;/h2&gt;

&lt;p&gt;To simplify the simulation, I created a docker compose setup that runs two instances of the app, one with the safe regex and the other with the vulnerable regex.&lt;/p&gt;

&lt;p&gt;I also created a script that runs a load test against both instances.&lt;/p&gt;

&lt;p&gt;To run the simulation, you need Docker and Docker Compose installed.&lt;/p&gt;

&lt;p&gt;First, clone the &lt;a href="https://github.com/hermogenes/redos-simulation" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above will spin up the two app instances and run the load test script.&lt;/p&gt;

&lt;p&gt;After a few seconds, you will see the instance with the vulnerable regex consuming 100% of the CPU, while the instance with the safe regex keeps working normally.&lt;/p&gt;

&lt;p&gt;By the end of the script, not a single request to the vulnerable instance will have completed, while the safe instance will have completed all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqm5vqnlo3dr9taqyi52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqm5vqnlo3dr9taqyi52.png" alt="Load test results" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to protect yourself from ReDoS
&lt;/h2&gt;

&lt;p&gt;To protect yourself from ReDoS, you can follow a few practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid complex regexes&lt;/li&gt;
&lt;li&gt;Use static analysis tools that flag regexes prone to ReDoS&lt;/li&gt;
&lt;li&gt;Check whether the regex engine you use has ReDoS protection, and explore alternatives&lt;/li&gt;
&lt;/ul&gt;
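
&lt;p&gt;As an example of the first practice, here is a hypothetical unambiguous rewrite of the slug regex (an assumption of mine; the repository's actual safe variant may differ): every segment after the first must start with a literal slash, so the engine has exactly one way to consume any input. A Python sketch:&lt;/p&gt;

```python
import re
import time

# Vulnerable: the inner + and the outer * can split the same run of
# characters in exponentially many ways.
vulnerable = re.compile(r"^([a-zA-Z-_0-9]+/?)*$")

# Hypothetical safe equivalent: each extra segment must begin with '/',
# so there is only one way to match any input and backtracking stays cheap.
safe = re.compile(r"^(?:[a-zA-Z0-9_-]+/)*[a-zA-Z0-9_-]*$")

for slug in ("hello-world", "blog/2025/redos", "trailing/slash/", "no//good"):
    print(repr(slug), bool(safe.match(slug)))

# The attack payload is now rejected immediately instead of pinning a CPU:
start = time.perf_counter()
assert safe.match("a" * 79 + "!") is None
print(f"rejected in {time.perf_counter() - start:.6f}s")
```

&lt;p&gt;The two patterns accept the same slugs; the difference is purely that the rewritten one leaves the engine no ambiguity to backtrack over.&lt;/p&gt;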

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Regex is a powerful tool, but it can be dangerous when used incorrectly.&lt;/p&gt;

&lt;p&gt;The big problem with ReDoS is that it is hard to identify, and it doesn't take many requests to bring a server down.&lt;/p&gt;

&lt;p&gt;That's why it is important to be careful with regexes and always check whether the ones you use are safe.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>You should try .NET + libSQL, and here’s why</title>
      <dc:creator>Hermógenes Ferreira</dc:creator>
      <pubDate>Mon, 15 Jul 2024 11:27:34 +0000</pubDate>
      <link>https://dev.to/hermogenes/you-should-try-net-libsql-and-heres-why-173h</link>
      <guid>https://dev.to/hermogenes/you-should-try-net-libsql-and-heres-why-173h</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; libSQL is a SQLite fork that is a game changer for global distributed databases. .NET 8 Native AOT performance is incredible. But together, they're unbelievably fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is libSQL?
&lt;/h2&gt;

&lt;p&gt;First, let me introduce you to libSQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://turso.tech/libsql" rel="noopener noreferrer"&gt;libSQL&lt;/a&gt; is a fork of SQLite made by Turso. The goal is to optimize SQLite for low latency and replication, making it a perfect fit for global distributed databases.&lt;/p&gt;

&lt;p&gt;From now on, when I mention libSQL, I’m referring to Turso's offering.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQLite? Really?
&lt;/h2&gt;

&lt;p&gt;Yeah, I thought the same when I first heard about it.&lt;/p&gt;

&lt;p&gt;SQLite is mostly used for embedded systems and mobile apps; I never thought it could be used for server-side applications. But libSQL changes everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes libSQL so Special?
&lt;/h2&gt;

&lt;p&gt;First, it's fast. Really fast. But I’ll delve into that later.&lt;/p&gt;

&lt;p&gt;One of the biggest challenges of global systems is latency. It doesn't matter how fast your system is if the data has to travel around the world. Even if a microservice can process a request in 1 ms, the request still has to travel from the client to your server and back, and that is not negligible: a round trip between Sao Paulo and Europe takes around 200 ms even in the best case.&lt;/p&gt;

&lt;p&gt;We’ve seen a surge in Edge Computing offerings in recent years, but they all suffer from the same problem: the data is still far from the code.&lt;/p&gt;

&lt;p&gt;Turso solves this problem by replicating data to multiple read-only servers around the world. This way, the data is always close to the code that needs it.&lt;/p&gt;

&lt;p&gt;Another advantage of libSQL is that it offers a simple HTTP API to access the data. This is very useful for serverless applications, as you can access the data without the need for a database driver, completely stateless.&lt;/p&gt;
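
&lt;p&gt;To make that concrete, a stateless call of this kind might look roughly like the sketch below. The host, endpoint path, and JSON layout are illustrative assumptions of mine, not the authoritative libSQL API; check Turso's documentation before relying on them.&lt;/p&gt;

```python
import json

# Illustrative only: placeholder host, path, token, and JSON shape sketching
# the idea of "SQL over stateless HTTP", not the official libSQL schema.
url = "https://your-database.turso.io/v2/pipeline"  # placeholder URL
headers = {
    "Authorization": "Bearer YOUR_TOKEN",           # placeholder token
    "Content-Type": "application/json",
}
payload = {
    "requests": [
        {
            "type": "execute",
            "stmt": {
                "sql": "SELECT id, title FROM todos WHERE id = ?",
                "args": [{"type": "integer", "value": "1"}],
            },
        },
        {"type": "close"},
    ]
}
body = json.dumps(payload)

# One HTTP POST of `body` to `url` would carry the whole interaction:
# no driver, no connection pool, no server-side session to keep open.
print(body)
```

&lt;p&gt;That request/response shape is what makes the model a good fit for serverless: every invocation is self-contained.&lt;/p&gt;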

&lt;h2&gt;
  
  
  What About .NET?
&lt;/h2&gt;

&lt;p&gt;.NET is a great platform. It's fast, secure, cross-platform, and open-source.&lt;/p&gt;

&lt;p&gt;However, it’s not known for fast startup. Cold starts can take seconds, which is a big problem for serverless applications.&lt;/p&gt;

&lt;p&gt;But things have been changing since .NET 6: with trimming and, later, Native AOT, startup time is reduced to milliseconds.&lt;/p&gt;

&lt;p&gt;And with .NET 8, the performance is even better.&lt;/p&gt;

&lt;p&gt;With the official support of AWS Lambda, creating a serverless application with .NET is easier than ever, and the performance is great.&lt;/p&gt;

&lt;h2&gt;
  
  
  OK! But How Fast is .NET + libSQL?
&lt;/h2&gt;

&lt;p&gt;I’m glad you asked.&lt;/p&gt;

&lt;p&gt;And instead of telling you, I’ll show you.&lt;/p&gt;

&lt;p&gt;I made a simple benchmark inspired by &lt;a href="https://github.com/aws-samples/serverless-dotnet-demo" rel="noopener noreferrer"&gt;AWS .NET samples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can find the code of this benchmark on &lt;a href="https://github.com/hermogenes/dotnet-lambda-libsql-demo" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s a simple function that performs CRUD operations on a table.&lt;/p&gt;

&lt;p&gt;The function is configured with 1 GB of memory (though it uses far less than that and could be fine-tuned later).&lt;/p&gt;

&lt;p&gt;The test is split into two parts: Write and Read-only.&lt;/p&gt;

&lt;p&gt;For the write test, I made around 65 requests per second for 3 minutes.&lt;/p&gt;

&lt;p&gt;For the read-only test, I made around 100 requests per second for 2 minutes.&lt;/p&gt;

&lt;p&gt;The results are impressive, if I may say so.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Write and Read Test
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cold Start&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Billed&lt;/th&gt;
&lt;th&gt;Min&lt;/th&gt;
&lt;th&gt;Avg&lt;/th&gt;
&lt;th&gt;P50&lt;/th&gt;
&lt;th&gt;P75&lt;/th&gt;
&lt;th&gt;P90&lt;/th&gt;
&lt;th&gt;P95&lt;/th&gt;
&lt;th&gt;P99&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;11622&lt;/td&gt;
&lt;td&gt;164773&lt;/td&gt;
&lt;td&gt;7.586&lt;/td&gt;
&lt;td&gt;13.6814&lt;/td&gt;
&lt;td&gt;11.0058&lt;/td&gt;
&lt;td&gt;12.1101&lt;/td&gt;
&lt;td&gt;17.4715&lt;/td&gt;
&lt;td&gt;20.006&lt;/td&gt;
&lt;td&gt;29.798&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;515&lt;/td&gt;
&lt;td&gt;345.349&lt;/td&gt;
&lt;td&gt;405.552&lt;/td&gt;
&lt;td&gt;385.861&lt;/td&gt;
&lt;td&gt;441.672&lt;/td&gt;
&lt;td&gt;445.478&lt;/td&gt;
&lt;td&gt;477&lt;/td&gt;
&lt;td&gt;477&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In total: 11,632 requests and 165,288 ms of billed duration.&lt;/p&gt;

&lt;p&gt;What’s interesting is that, since we are using an AWS managed runtime, we don’t pay for cold start initialization, even though its latency is high.&lt;/p&gt;

&lt;p&gt;At a similar request rate sustained over a month, this Lambda function would cost around $72. Not bad at all: 65 requests per second is a lot of traffic, almost 170 million requests per month.&lt;/p&gt;
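
&lt;p&gt;A back-of-the-envelope check of that estimate, assuming x86 Lambda pricing of $0.20 per million requests and $0.0000166667 per GB-second (my assumption; the post doesn't state a region or architecture):&lt;/p&gt;

```python
# Pricing figures are assumptions (x86 Lambda in a US region):
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

rps = 65
monthly_requests = rps * 60 * 60 * 24 * 30       # 168,480,000: "almost 170M"

# From the Phase 1 table: 165,288 ms billed across 11,632 requests.
avg_billed_seconds = (165_288 / 11_632) / 1000   # about 14.2 ms per request
gb_seconds = monthly_requests * avg_billed_seconds * 1.0  # 1 GB of memory

compute_cost = gb_seconds * PRICE_PER_GB_SECOND
request_cost = (monthly_requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
print(f"~${compute_cost + request_cost:.0f}/month")  # lands in the low $70s
```

&lt;p&gt;The arithmetic lands within a few dollars of the estimate above; exact pricing varies by region and architecture (arm64 is cheaper).&lt;/p&gt;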

&lt;h3&gt;
  
  
  Phase 2: Read-only Test
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cold Start&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Billed&lt;/th&gt;
&lt;th&gt;Min&lt;/th&gt;
&lt;th&gt;Avg&lt;/th&gt;
&lt;th&gt;P50&lt;/th&gt;
&lt;th&gt;P75&lt;/th&gt;
&lt;th&gt;P90&lt;/th&gt;
&lt;th&gt;P95&lt;/th&gt;
&lt;th&gt;P99&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;11889&lt;/td&gt;
&lt;td&gt;147747&lt;/td&gt;
&lt;td&gt;7.581&lt;/td&gt;
&lt;td&gt;11.9277&lt;/td&gt;
&lt;td&gt;10.5759&lt;/td&gt;
&lt;td&gt;11.4531&lt;/td&gt;
&lt;td&gt;13.1145&lt;/td&gt;
&lt;td&gt;15.2582&lt;/td&gt;
&lt;td&gt;24.1165&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In total: 11,889 requests and 147,747 ms of billed duration in 2 minutes.&lt;/p&gt;

&lt;p&gt;P99 is 24ms. That’s crazy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results Combined
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cold Start&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Billed&lt;/th&gt;
&lt;th&gt;Min&lt;/th&gt;
&lt;th&gt;Avg&lt;/th&gt;
&lt;th&gt;P50&lt;/th&gt;
&lt;th&gt;P75&lt;/th&gt;
&lt;th&gt;P90&lt;/th&gt;
&lt;th&gt;P95&lt;/th&gt;
&lt;th&gt;P99&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;23,511&lt;/td&gt;
&lt;td&gt;312,520&lt;/td&gt;
&lt;td&gt;7.581&lt;/td&gt;
&lt;td&gt;12.7946&lt;/td&gt;
&lt;td&gt;10.7458&lt;/td&gt;
&lt;td&gt;11.824&lt;/td&gt;
&lt;td&gt;14.4304&lt;/td&gt;
&lt;td&gt;18.7705&lt;/td&gt;
&lt;td&gt;27.9578&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;515&lt;/td&gt;
&lt;td&gt;345.349&lt;/td&gt;
&lt;td&gt;405.552&lt;/td&gt;
&lt;td&gt;385.861&lt;/td&gt;
&lt;td&gt;441.672&lt;/td&gt;
&lt;td&gt;445.478&lt;/td&gt;
&lt;td&gt;477&lt;/td&gt;
&lt;td&gt;477&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;.NET + libSQL is a great combination for serverless applications.&lt;/p&gt;

&lt;p&gt;The performance is great, the cost is low, and the development experience is awesome if you are already familiar with .NET.&lt;/p&gt;

&lt;p&gt;If you are looking for a global distributed database, libSQL is a great option.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://turso.tech/libsql" rel="noopener noreferrer"&gt;libSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/hermogenes/dotnet-lambda-libsql-demo" rel="noopener noreferrer"&gt;Benchmark Code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
