<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ibrohim syarif</title>
    <description>The latest articles on DEV Community by ibrohim syarif (@ibrohhm).</description>
    <link>https://dev.to/ibrohhm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F306251%2F1f2b50c6-3c55-4e8c-848a-8d24245ef573.jpeg</url>
      <title>DEV Community: ibrohim syarif</title>
      <link>https://dev.to/ibrohhm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ibrohhm"/>
    <language>en</language>
    <item>
      <title>The Dangers of High-Cardinality Labels in Prometheus</title>
      <dc:creator>ibrohim syarif</dc:creator>
      <pubDate>Sun, 22 Feb 2026 04:51:57 +0000</pubDate>
      <link>https://dev.to/ibrohhm/the-dangers-of-high-cardinality-labels-in-prometheus-poi</link>
      <guid>https://dev.to/ibrohhm/the-dangers-of-high-cardinality-labels-in-prometheus-poi</guid>
      <description>&lt;p&gt;We're all familiar with the warnings: "&lt;em&gt;Don't use user_id as a Prometheus label&lt;/em&gt;" or "&lt;em&gt;Don't use transaction codes as labels — they can crash Prometheus&lt;/em&gt;". But do we really understand why these are so dangerous?&lt;/p&gt;

&lt;p&gt;Before that, we need to know how Prometheus works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Prometheus Works
&lt;/h2&gt;

&lt;p&gt;Prometheus is an open-source systems monitoring and alerting tool that collects and stores its metrics as time-series data. It periodically scrapes metrics from your services based on the configured interval.&lt;/p&gt;

&lt;p&gt;This is an example of the config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
  - job_name: 'golang-app'
    static_configs:
      - targets: ['localhost:8080']
    scrape_interval: 5s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This config will tell Prometheus to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Target&lt;/strong&gt;: send an HTTP &lt;code&gt;GET&lt;/code&gt; request to &lt;code&gt;http://localhost:8080/metrics&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Periodically&lt;/strong&gt;: every 5 seconds&lt;br&gt;
&lt;strong&gt;Label&lt;/strong&gt;: attaching &lt;code&gt;job=golang-app&lt;/code&gt; to every scraped series&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe42h41ov5e38ed2hpzvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe42h41ov5e38ed2hpzvj.png" alt="how_prometheus_works" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus has three metric types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gauges&lt;/strong&gt; represent current measurements and reflect the current state of a system, such as CPU usage and memory usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Counters&lt;/strong&gt; measure discrete events that continuously increase over time. Common examples are the number of HTTP requests received, CPU seconds spent, and bytes sent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Histograms&lt;/strong&gt; track the distribution of observed values. For a base metric name &lt;code&gt;&amp;lt;basename&amp;gt;&lt;/code&gt;, a histogram exposes multiple related time series:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;basename&amp;gt;_bucket{le="..."}&lt;/code&gt; — Cumulative counters representing the number of observations that fall within each bucket boundary&lt;br&gt;
&lt;code&gt;&amp;lt;basename&amp;gt;_sum&lt;/code&gt; — The total sum of all observed values&lt;br&gt;
&lt;code&gt;&amp;lt;basename&amp;gt;_count&lt;/code&gt; — The count of events that have been observed&lt;/p&gt;
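&lt;p&gt;To make the bucket mechanics concrete, this hedged Go sketch (the bucket bounds and observations are made up) renders the series a histogram would expose; note that the buckets are cumulative:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// histogramSeries renders the time series a Prometheus histogram named base
// exposes: cumulative _bucket counters per upper bound (le), plus _sum and _count.
func histogramSeries(base string, bounds, observations []float64) []string {
	lines := []string{}
	sum := 0.0
	for _, v := range observations {
		sum += v
	}
	for _, le := range bounds {
		count := 0
		for _, v := range observations {
			if le >= v { // cumulative: every observation at or below the bound
				count++
			}
		}
		lines = append(lines, fmt.Sprintf("%s_bucket{le=%q} %d", base, fmt.Sprint(le), count))
	}
	lines = append(lines, fmt.Sprintf("%s_bucket{le=\"+Inf\"} %d", base, len(observations)))
	lines = append(lines, fmt.Sprintf("%s_sum %g", base, sum))
	lines = append(lines, fmt.Sprintf("%s_count %d", base, len(observations)))
	return lines
}

func main() {
	// Hypothetical request durations in seconds.
	obs := []float64{0.05, 0.3, 0.7, 2}
	fmt.Println(strings.Join(histogramSeries("request_duration_seconds", []float64{0.1, 0.5, 1}, obs), "\n"))
}
```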
&lt;h2&gt;
  
  
  Time Series Database (TSDB)
&lt;/h2&gt;

&lt;p&gt;Prometheus collects and stores metrics as time series. Each time series is uniquely identified by a metric name and a set of labels, while each sample within the series contains a timestamp and a value. Each unique combination of labels (method, path, and status) represents a separate time series whose value increases as more requests are processed, for a total of &lt;code&gt;method x path x status&lt;/code&gt; series.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5fmkmn7auzhkgka8rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5fmkmn7auzhkgka8rn.png" alt="methodxpathxstatus" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Counter example, suppose we have two endpoints, &lt;code&gt;GET /api/data&lt;/code&gt; and &lt;code&gt;GET /api/users&lt;/code&gt;, each of which can return either a &lt;code&gt;200&lt;/code&gt; or &lt;code&gt;500&lt;/code&gt; status code. This results in the following metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_requests_total{method="GET", path="/api/data",  status="200"} 17
http_requests_total{method="GET", path="/api/data",  status="500"} 0
http_requests_total{method="GET", path="/api/users", status="200"} 10
http_requests_total{method="GET", path="/api/users", status="500"} 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because each time series represents a &lt;em&gt;unique combination of labels&lt;/em&gt;, these four label combinations produce four distinct time series. In the time-series database (TSDB), each of these time series is stored independently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// time series 1
2026-02-19 09:00:00 | {__name__="http_requests_total", method="GET", path="/api/data", status="200"} | 15
2026-02-19 09:00:05 | {__name__="http_requests_total", method="GET", path="/api/data", status="200"} | 16
2026-02-19 09:00:10 | {__name__="http_requests_total", method="GET", path="/api/data", status="200"} | 17

// time series 2
2026-02-19 09:00:00 | {__name__="http_requests_total", method="GET", path="/api/data", status="500"} | 0
2026-02-19 09:00:05 | {__name__="http_requests_total", method="GET", path="/api/data", status="500"} | 0
2026-02-19 09:00:10 | {__name__="http_requests_total", method="GET", path="/api/data", status="500"} | 0

// time series 3
2026-02-19 09:00:00 | {__name__="http_requests_total", method="GET", path="/api/users", status="200"} | 8
2026-02-19 09:00:05 | {__name__="http_requests_total", method="GET", path="/api/users", status="200"} | 9
2026-02-19 09:00:10 | {__name__="http_requests_total", method="GET", path="/api/users", status="200"} | 10

// time series 4
2026-02-19 09:00:00 | {__name__="http_requests_total", method="GET", path="/api/users", status="500"} | 1
2026-02-19 09:00:05 | {__name__="http_requests_total", method="GET", path="/api/users", status="500"} | 2
2026-02-19 09:00:10 | {__name__="http_requests_total", method="GET", path="/api/users", status="500"} | 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Dangers
&lt;/h2&gt;

&lt;p&gt;Let's go back to the warning: "&lt;em&gt;Don't use user_id as a Prometheus label&lt;/em&gt;" or "&lt;em&gt;Don't use transaction codes as labels — they can crash Prometheus.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;Imagine you want to record transaction latency using metric labels such as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;status&lt;/code&gt;: &lt;code&gt;pending&lt;/code&gt;, &lt;code&gt;paid&lt;/code&gt;, &lt;code&gt;success&lt;/code&gt;, &lt;code&gt;failed&lt;/code&gt;&lt;br&gt;
&lt;code&gt;payment_type&lt;/code&gt;: &lt;code&gt;wallet&lt;/code&gt;, &lt;code&gt;cash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here, &lt;code&gt;status&lt;/code&gt; has 4 possible values and &lt;code&gt;payment_type&lt;/code&gt; has 2, producing &lt;code&gt;status (4) x payment_type (2) = 8 time series&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s1s5xzzvem70wkw12qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s1s5xzzvem70wkw12qj.png" alt="statusxpayment_type" width="771" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example of the resulting metrics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F100krkandp6x0r5wsj3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F100krkandp6x0r5wsj3s.png" alt="total processing time rate" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are exactly 8 label combinations for this metric.&lt;/p&gt;

&lt;p&gt;Then you adjust the metrics by adding a &lt;code&gt;code&lt;/code&gt; label, allowing request rates, error rates, and traffic patterns to be broken down per transaction:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;code&lt;/code&gt;: a unique identifier for each transaction&lt;/p&gt;

&lt;p&gt;However, &lt;code&gt;code&lt;/code&gt; is unique for every transaction and grows continuously with request volume. As a result, the number of possible values for &lt;code&gt;code&lt;/code&gt; is &lt;strong&gt;unbounded&lt;/strong&gt; and &lt;strong&gt;increases over time&lt;/strong&gt;. &lt;code&gt;status (4) × payment_type (2) × code (∞) = ∞ time series&lt;/code&gt;&lt;/p&gt;
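&lt;p&gt;A quick sketch makes the explosion concrete. Counting distinct label combinations over simulated transactions (the label values are invented for illustration) shows the series count jump from 8 to one series per transaction once &lt;code&gt;code&lt;/code&gt; becomes a label:&lt;/p&gt;

```go
package main

import "fmt"

// countSeries simulates n transactions and returns the number of distinct
// time series with and without the unbounded code label.
func countSeries(n int) (withoutCode, withCode int) {
	statuses := []string{"pending", "paid", "success", "failed"}
	payments := []string{"wallet", "cash"}
	a := map[string]bool{}
	b := map[string]bool{}
	for i := 0; n > i; i++ {
		status := statuses[i%4]
		payment := payments[(i/4)%2]
		code := fmt.Sprintf("TRX-%06d", i) // unique per transaction: unbounded
		a[status+"|"+payment] = true
		b[status+"|"+payment+"|"+code] = true
	}
	return len(a), len(b)
}

func main() {
	withoutCode, withCode := countSeries(10000)
	fmt.Println("series without code label:", withoutCode) // bounded at 8
	fmt.Println("series with code label:   ", withCode)    // one per transaction
}
```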

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg78isnri0arom4r2vxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg78isnri0arom4r2vxs.png" alt="statusxpayment_typexcode" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example of the resulting metrics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxpfskghz8w5v40ofxxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxpfskghz8w5v40ofxxr.png" alt="total processing time rate with code" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This single unbounded label is enough to turn an otherwise manageable metric into a high-cardinality time-series explosion that can cause memory exhaustion and degraded query performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyxl4vuasgyc9gya0vfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyxl4vuasgyc9gya0vfj.png" alt="nuke" width="559" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Adding labels whose values grow unbounded over time (such as UUIDs, timestamps, user IDs, or transaction codes) is strongly discouraged. These labels rarely add meaningful value at the metrics level and introduce high cardinality. High-cardinality data belongs in logs, not metrics.&lt;/p&gt;

&lt;p&gt;High-cardinality labels can lead to serious issues, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Huge memory usage&lt;/strong&gt; — each unique label set creates a new time series&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rapid disk growth&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slow queries&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scrape performance issues&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's better to use labels that have semantic meaning, and it is strongly recommended to keep the number of labels to a minimum.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A wise man says, "Never use a label whose value grows with users, requests, or time."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Good labels describe what something is, not who or which exact instance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;code: &lt;a href="https://github.com/ibrohhm/prometheus-grafana-golang" rel="noopener noreferrer"&gt;https://github.com/ibrohhm/prometheus-grafana-golang&lt;/a&gt;&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Circuit Breaker Pattern</title>
      <dc:creator>ibrohim syarif</dc:creator>
      <pubDate>Wed, 04 Dec 2024 12:00:00 +0000</pubDate>
      <link>https://dev.to/ibrohhm/circuit-breaker-pattern-1775</link>
      <guid>https://dev.to/ibrohhm/circuit-breaker-pattern-1775</guid>
      <description>&lt;p&gt;Integrating with partners often got unexpected behavior due some isssue on their server that impact to our service performance. lets say the integration flow look like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ludyh9alen5a1jd7f7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ludyh9alen5a1jd7f7b.png" alt="simple partner integration" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the partner responds successfully, our service forwards the response data to the client. Otherwise, if the partner returns an error, our service relays the error message to the client. Just like our own server, the partner can have maintenance windows or unexpected issues that make it inaccessible. When their server fails to respond, every request to it hits a "not responding" error and wastes time waiting; under heavy traffic this can very plausibly crash our server. So what should we do to prevent that from happening?&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The issue in this article isn't the persistent errors from the partner but rather the additional response time those errors cause, which could crash our server (see &lt;a href="https://dev.to/ibrohhm/crash-and-timeout-simulation-jbp"&gt;https://dev.to/ibrohhm/crash-and-timeout-simulation-jbp&lt;/a&gt;). To solve this, we add another layer that manages the partner connection and acts as a circuit breaker: if the connection goes bad, it breaks the connection and returns immediately without waiting for the partner's response.&lt;/p&gt;

&lt;p&gt;The circuit breaker pattern has three states:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;closed&lt;/strong&gt; means the service is allowed to make connections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;half-open&lt;/strong&gt; means the service is allowed to make a limited number of connections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;open&lt;/strong&gt; means the service is not allowed to make connections and returns an error immediately&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the detailed circuit breaker flow:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubm3qled4qjt4ujuycio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubm3qled4qjt4ujuycio.png" alt="circuit breaker flow" width="499" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A circuit breaker lets us control the partner connection effectively. By implementing one in our integration flow, we no longer need to worry about unexpected partner failures: the breaker cuts the connection automatically and protects our service from potential crashes caused by unnecessary waiting time.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Crash and Timeout Simulation</title>
      <dc:creator>ibrohim syarif</dc:creator>
      <pubDate>Sun, 21 Jul 2024 08:35:21 +0000</pubDate>
      <link>https://dev.to/ibrohhm/crash-and-timeout-simulation-jbp</link>
      <guid>https://dev.to/ibrohhm/crash-and-timeout-simulation-jbp</guid>
      <description>&lt;p&gt;Image you have apps that required called partner to served your data, the partner sometimes got unexpected behavior that we cannot control, let say it's random delay everytime you request from the partner. It's very tiny detail but if we are not handle the partner request well, it will causing our server down. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mh96w9l6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media1.giphy.com/media/GGyEfuIWI43zq/200.webp%3Fcid%3D790b76117uyc65rzr9vkylyz4c2io08uj7brbqj1a98hs4xb%26ep%3Dv1_gifs_search%26rid%3D200.webp%26ct%3Dg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mh96w9l6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media1.giphy.com/media/GGyEfuIWI43zq/200.webp%3Fcid%3D790b76117uyc65rzr9vkylyz4c2io08uj7brbqj1a98hs4xb%26ep%3Dv1_gifs_search%26rid%3D200.webp%26ct%3Dg" alt="crash" width="300" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article focuses on simulating how your server handles this partner behavior.&lt;/p&gt;

&lt;p&gt;To simulate this, we will create three services (client, server, partner) using Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client --&amp;gt; server --&amp;gt; partner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;client: calls the server using goroutines&lt;/li&gt;
&lt;li&gt;server: forwards requests from the client to the partner, acting as middleware&lt;/li&gt;
&lt;li&gt;partner: a simple hello-world Go app with a random delay&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Partner
&lt;/h3&gt;

&lt;p&gt;The partner service is a simple HTTP server with a random delay.&lt;/p&gt;

&lt;p&gt;The partner service has only a &lt;code&gt;GET /data&lt;/code&gt; endpoint that responds with &lt;em&gt;Hello from Partner Service&lt;/em&gt; after a random delay (1-10 seconds) generated on every request. The partner also logs the &lt;em&gt;delay_set&lt;/em&gt; and the &lt;em&gt;time&lt;/em&gt; of each request, so we can monitor requests easily.&lt;/p&gt;

&lt;p&gt;See the implementation: (&lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation/blob/master/partner/partner.go" rel="noopener noreferrer"&gt;partner service&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;How to run: &lt;code&gt;go run partner.go&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Server
&lt;/h3&gt;

&lt;p&gt;The server service is your internal service that handles requests from the client. To simulate the crash, we set a memory allocation limit (&lt;code&gt;MemoryLimit&lt;/code&gt;) so we can simulate it without crashing your laptop. While running, the server checks memory usage every second (&lt;code&gt;getMemoryUsage&lt;/code&gt;); if usage exceeds &lt;code&gt;MemoryLimit&lt;/code&gt;, we stop the server. The service also logs the &lt;em&gt;method&lt;/em&gt;, &lt;em&gt;url&lt;/em&gt;, &lt;em&gt;latency&lt;/em&gt;, &lt;em&gt;status&lt;/em&gt;, &lt;em&gt;error&lt;/em&gt;, and &lt;em&gt;memory_usage&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;See the implementation: &lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation/blob/master/server/server.go" rel="noopener noreferrer"&gt;server service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to run: &lt;code&gt;go run server.go&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Client
&lt;/h3&gt;

&lt;p&gt;The client service is a simple Go app that makes 100 requests per second to the server using goroutines. I chose to build a client service instead of using a load-testing tool like &lt;code&gt;JMeter&lt;/code&gt; so we can see the log for every request.&lt;/p&gt;

&lt;p&gt;See the implementation: &lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation/blob/master/client/client.go" rel="noopener noreferrer"&gt;client service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to run: &lt;code&gt;go run client.go&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simulation
&lt;/h2&gt;

&lt;p&gt;In this section we will run three simulations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;partner with no delay&lt;/li&gt;
&lt;li&gt;partner with random delay but no timeout set on the server&lt;/li&gt;
&lt;li&gt;partner with random delay and a timeout set on the server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_2Mubwzf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media0.giphy.com/media/eMB8ru08jqn8wbjmgM/giphy.webp%3Fcid%3Decf05e47rfcnxhben1oyox4g5b3fyazmb682a2skuos5blyh%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_2Mubwzf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media0.giphy.com/media/eMB8ru08jqn8wbjmgM/giphy.webp%3Fcid%3Decf05e47rfcnxhben1oyox4g5b3fyazmb682a2skuos5blyh%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" alt="simulation" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 1
&lt;/h3&gt;

&lt;p&gt;Every scenario has a happy case, and this is it: our partner service has good specs and never delays a request. To make this possible, change the delay in the partner code from &lt;code&gt;delay := time.Duration(rand.Intn(11))&lt;/code&gt; to &lt;code&gt;delay := time.Duration(0)&lt;/code&gt; (&lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation/blob/master/partner/partner.go#L14" rel="noopener noreferrer"&gt;ref&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client --&amp;gt; server --&amp;gt; partner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpqtzb4rn8si4hfkapro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpqtzb4rn8si4hfkapro.png" alt="case 1" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The partner, the server, and the client are all good. Everyone is happy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3aVMesuI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media3.giphy.com/media/xSM46ernAUN3y/giphy.webp%3Fcid%3D790b7611j8zbdbqkmkc8ijz491n7ea1h060b1s3ukkaw3niw%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3aVMesuI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media3.giphy.com/media/xSM46ernAUN3y/giphy.webp%3Fcid%3D790b7611j8zbdbqkmkc8ijz491n7ea1h060b1s3ukkaw3niw%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" alt="happy" width="245" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 2
&lt;/h3&gt;

&lt;p&gt;Our partner service has a random delay (&lt;code&gt;delay := time.Duration(rand.Intn(11))&lt;/code&gt;) and our server service does not set a timeout when requesting the partner service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client --&amp;gt; server --&amp;gt; partner (random delay)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sdbpsnmsby0z7insh9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sdbpsnmsby0z7insh9y.png" alt="case 2" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our server got killed within 24 seconds because its memory usage exceeded the MemoryLimit. The client service continuously spawns new requests using goroutines, and since there is no limit on the number of goroutines spawned, the server hits the partner service with a huge number of requests. Because of the partner's delay, most of these requests run at the same time, consume all available memory, and finally crash the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pMVDgnIp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExZmVveWE4dGEzZXI3NzRvbmpneWo2Y3dmOWFjd2V5eXA0YTVoOWF0byZlcD12MV9naWZzX3NlYXJjaCZjdD1n/9M5jK4GXmD5o1irGrF/giphy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pMVDgnIp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExZmVveWE4dGEzZXI3NzRvbmpneWo2Y3dmOWFjd2V5eXA0YTVoOWF0byZlcD12MV9naWZzX3NlYXJjaCZjdD1n/9M5jK4GXmD5o1irGrF/giphy.webp" alt="this is fine" width="436" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What happens if we set a request timeout on the server service?&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 3
&lt;/h3&gt;

&lt;p&gt;Our partner has a random delay, but this time our server sets a request timeout. We need to change the &lt;code&gt;Timeout&lt;/code&gt; variable in the server to some number, say 3 seconds: &lt;code&gt;const Timeout = 3&lt;/code&gt; (&lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation/blob/master/server/server.go#L16" rel="noopener noreferrer"&gt;ref&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client --&amp;gt; server --[with timeout]--&amp;gt; partner (random delay)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the result:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrcmdmq5qohk7h9yxmvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrcmdmq5qohk7h9yxmvw.png" alt="case 3" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run the simulation, you'll see that our server service does not get killed by exceeding the memory allocation. If you look closely at the log, the server's memory_usage stays around 6MB-12MB (never exceeding the 20MB limit) because the timeout kills each long-running request and releases its memory allocation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1as5q0w0go5zccmjzco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1as5q0w0go5zccmjzco.png" alt="time out logger" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jbx9RfOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media3.giphy.com/media/qwXFFwQATRG4o/giphy.webp%3Fcid%3D790b7611rf3qii0dabti82nahvuohf6iwi9yrsk2nlcpvd4n%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jbx9RfOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media3.giphy.com/media/qwXFFwQATRG4o/giphy.webp%3Fcid%3D790b7611rf3qii0dabti82nahvuohf6iwi9yrsk2nlcpvd4n%26ep%3Dv1_gifs_search%26rid%3Dgiphy.webp%26ct%3Dg" alt="better" width="220" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The partner service's behavior is an external factor we cannot control, and we cannot trust the partner to always behave well. Sometimes it is delayed; sometimes we cannot access it due to their internal errors or something else. Even a small delay may bring our server down (as in the simulation), so we should prevent that, and one way is to add a timeout when requesting the partner.&lt;/p&gt;

&lt;p&gt;source code and simulation videos: &lt;a href="https://github.com/ibrohhm/crash_and_timeout_simulation" rel="noopener noreferrer"&gt;https://github.com/ibrohhm/crash_and_timeout_simulation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>simulation</category>
      <category>timeout</category>
      <category>crash</category>
      <category>go</category>
    </item>
    <item>
      <title>Know Better About N+1 Queries Problem</title>
      <dc:creator>ibrohim syarif</dc:creator>
      <pubDate>Wed, 29 Nov 2023 16:41:42 +0000</pubDate>
      <link>https://dev.to/ibrohhm/know-better-about-n1-queries-problem-gpc</link>
      <guid>https://dev.to/ibrohhm/know-better-about-n1-queries-problem-gpc</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In the engineering process we often face the need to query all of a parent record's children for later use. For example, say there is a &lt;code&gt;users&lt;/code&gt; table with a one-to-many relation to the &lt;code&gt;transactions&lt;/code&gt; table, and you need to get all the users and their transactions for the &lt;code&gt;user_ids&lt;/code&gt; given as an argument. The simple approach is to fetch all users whose id is in &lt;code&gt;user_ids&lt;/code&gt;, then fetch each user's transactions one by one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def load_data(user_ids)
    result = []
    users = User.where(id: user_ids)
    users.each do |user|
        result &amp;lt;&amp;lt; { user: user, transactions: user.transactions }
    end

    result
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's really simple logic, but is it good enough? Can our service endure high throughput with it? Is there a way to make it more efficient?&lt;/p&gt;

&lt;h2&gt;
  
  
  Look Inside the Query
&lt;/h2&gt;

&lt;p&gt;Let's say we have these models&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class User &amp;lt; ApplicationRecord
  has_many :transactions
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Transaction &amp;lt; ApplicationRecord
  belongs_to :user
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're going to simulate the query in the Rails console (run: &lt;code&gt;rails console&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3s669swxvh01ezrqb7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3s669swxvh01ezrqb7l.png" alt="load_data method"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see in the image, the &lt;code&gt;load_data&lt;/code&gt; method issued 4 queries to the database. The first fetches all the users matching the user_ids; the other 3 fetch the transactions for each user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT "users".* FROM "users" WHERE "users"."id" IN (?, ?, ?)  [["id", 1], ["id", 2], ["id", 3]]
SELECT "transactions".* FROM "transactions" WHERE "transactions"."user_id" = ? /* loading for inspect */ LIMIT ?  [["user_id", 1], ["LIMIT", 11]]
SELECT "transactions".* FROM "transactions" WHERE "transactions"."user_id" = ? /* loading for inspect */ LIMIT ?  [["user_id", 2], ["LIMIT", 11]]
SELECT "transactions".* FROM "transactions" WHERE "transactions"."user_id" = ? /* loading for inspect */ LIMIT ?  [["user_id", 3], ["LIMIT", 11]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens if &lt;code&gt;user_ids&lt;/code&gt; is very large? Will we run one transactions query for every user we have? Now we are facing the N+1 queries problem&lt;/p&gt;

&lt;h2&gt;
  
  
  The N+1 Query Problem
&lt;/h2&gt;

&lt;p&gt;This is a common problem in database querying: the query is executed one by one for every instance instead of in 1 or 2 queries. In the example above we fetched the three users, then queried the transactions for each user, which counts as 4 queries (1+3). If there are N users, we first fetch the N users and then run one transactions query per user, so it is called N+1 queries.&lt;/p&gt;
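&lt;p&gt;The arithmetic can be made explicit with a tiny sketch (no database involved, just counting the queries; the helper name is made up for illustration):&lt;/p&gt;

```ruby
# Query count for the naive approach with N users:
# 1 query for the users + 1 query per user for their transactions.
def n_plus_one_query_count(n_users)
  1 + n_users
end

n_plus_one_query_count(3)    # => 4, matching the example above
n_plus_one_query_count(1000) # => 1001, which is where the pain starts
```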

&lt;p&gt;The problem with N+1 queries is that each query takes some amount of time: the more data we fetch, the more time we need, and we may face timeout issues. N+1 queries are bad for performance, so we need to find a solution&lt;/p&gt;

&lt;p&gt;*we can ignore the N+1 queries if the data is small or the throughput is low&lt;/p&gt;

&lt;h2&gt;
  
  
  Eager Load
&lt;/h2&gt;

&lt;p&gt;In Rails, we have the eager loading mechanism to load records and their associations in a fixed number of queries. One of the methods to trigger eager loading is &lt;code&gt;.includes&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def load_data_with_eager_load(user_ids)
    result = []
    users = User.includes(:transactions).where(id: user_ids)
    users.each do |user|
        result &amp;lt;&amp;lt; { user: user, transactions: user.transactions }
    end

    result
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The method above gives this result&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcxfvyxruf9z5yfzw5s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcxfvyxruf9z5yfzw5s3.png" alt="load_data_with_eager_load method"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we look closely, the &lt;code&gt;load_data_with_eager_load&lt;/code&gt; method triggers only two queries. The first query gets all the users, and the second gets all the transactions for those users&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT "users".* FROM "users" WHERE "users"."id" IN (?, ?, ?)  [["id", 1], ["id", 2], ["id", 3]]
SELECT "transactions".* FROM "transactions" WHERE "transactions"."user_id" IN (?, ?, ?)  [["user_id", 1], ["user_id", 2], ["user_id", 3]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It reduces the number of database queries significantly: eager loading costs only 2 queries no matter how much data we have&lt;/p&gt;
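&lt;p&gt;Conceptually, after its two queries the eager load stitches the results together in memory, roughly like the following sketch (plain hashes stand in for ActiveRecord rows, and the helper name is made up for illustration):&lt;/p&gt;

```ruby
# After eager loading's two queries, associate each user with its
# transactions in memory by grouping the transaction rows on user_id.
def attach_transactions(users, transactions)
  by_user = transactions.group_by { |t| t[:user_id] }
  users.map { |u| { user: u, transactions: by_user.fetch(u[:id], []) } }
end
```

&lt;p&gt;Users without transactions get an empty array, just like &lt;code&gt;user.transactions&lt;/code&gt; would return an empty relation.&lt;/p&gt;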

</description>
      <category>query</category>
    </item>
  </channel>
</rss>
