<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ramon Perez</title>
    <description>The latest articles on DEV Community by Ramon Perez (@sondar4).</description>
    <link>https://dev.to/sondar4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1540536%2Ffea1184f-95d6-4d35-bdae-1f25c1fa870e.jpg</url>
      <title>DEV Community: Ramon Perez</title>
      <link>https://dev.to/sondar4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sondar4"/>
    <language>en</language>
    <item>
      <title>Your memory leak might be standard Python behavior</title>
      <dc:creator>Ramon Perez</dc:creator>
      <pubDate>Mon, 27 Apr 2026 09:38:57 +0000</pubDate>
      <link>https://dev.to/sondar4/your-memory-leak-might-be-standard-python-behavior-484o</link>
      <guid>https://dev.to/sondar4/your-memory-leak-might-be-standard-python-behavior-484o</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; If your garbage collector shows nothing wrong but memory keeps growing, you might not have a leak at all, you might have memory fragmentation. It happens because glibc creates separate memory arenas per thread, and a handful of live objects can prevent entire arenas from being returned to the OS. Switching to jemalloc fixed it for us.&lt;/p&gt;

&lt;h2&gt;Scenario&lt;/h2&gt;

&lt;p&gt;I recently spent time debugging a memory leak in a Python web application built with Flask and SQLAlchemy, deployed in a Linux container on Kubernetes. Each deployment lived for about two weeks, with memory growing steadily until the application crashed.&lt;/p&gt;

&lt;p&gt;We had some asynchronous tasks that were resource-intensive, in both CPU and memory, and our first approach was to use the garbage collector to inspect object counts over the application's lifecycle. To our surprise, we could not find any suspicious increase in instances of any class: all objects seemed to be properly collected and disposed of after each task. The pattern of memory growth was also curious: when we ran a memory-intensive task, there was a spike in memory, but after the spike a few dozen megabytes were never returned to the system (and after a few days, that unreturned memory was already in the hundreds of megabytes).&lt;/p&gt;
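
&lt;p&gt;For illustration, here is a minimal sketch of the kind of gc-based inspection we ran (the helper is hypothetical, not our production code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import gc
from collections import Counter

def count_objects_by_type():
    # Count live objects tracked by the garbage collector, grouped by type.
    return Counter(type(obj).__name__ for obj in gc.get_objects())

before = count_objects_by_type()
# ... run the memory-intensive task here ...
gc.collect()
after = count_objects_by_type()

# Print every type whose instance count changed across the task.
for name, count in after.items():
    delta = count - before.get(name, 0)
    if delta != 0:
        print(name, delta)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;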

&lt;p&gt;At this point we knew that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Objects were created and memory was taken from the system.&lt;/li&gt;
&lt;li&gt;The garbage collector disposed of these objects when they were not referenced anymore.&lt;/li&gt;
&lt;li&gt;Memory that had been taken from the system was not completely returned.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But Python is a garbage-collected language used by millions of people daily; did this mean there was a bug in it? That seemed unlikely. So the next step was to delve into something I had never needed to think about: how does Python manage memory?&lt;/p&gt;

&lt;p&gt;There are different levels to this topic, but what I was interested in was memory allocation, something I hadn't touched since my university days, when we were taught to code in C.&lt;/p&gt;

&lt;h2&gt;Short intro to Python memory management&lt;/h2&gt;

&lt;p&gt;Python differentiates between small objects (up to 512 bytes), like integers, and large objects, like HTTP response bodies or database query results. Small objects are handled by Python's own allocator, PyMalloc. Large objects are handled directly by the system memory allocator, in our case glibc.&lt;/p&gt;
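
&lt;p&gt;You can get a feel for which side of the 512-byte threshold an object falls on with &lt;code&gt;sys.getsizeof&lt;/code&gt; (the sizes below are approximate and vary by platform and Python version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sys

# A small int comfortably fits in a PyMalloc block.
print(sys.getsizeof(42))           # roughly 28 bytes on 64-bit CPython

# A ~1 KiB string exceeds the 512-byte threshold, so its allocation
# is routed to the system allocator (glibc in our case).
print(sys.getsizeof("x" * 1000))   # roughly 1049 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;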

&lt;p&gt;PyMalloc organises memory in a strict three-level hierarchy: each 1 MiB arena contains 4 KiB pools, and each pool is divided into fixed-size blocks. When all blocks in an arena are free, that arena is returned to the OS, so a single live small object can pin a full 1 MiB arena in memory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Arena (1 MiB on 64-bit systems)
 └── Pool (4 KiB)
      └── Block (8–512 bytes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
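
&lt;p&gt;If you want to see this hierarchy live, CPython exposes a private helper that dumps PyMalloc's arena, pool, and block statistics (private API, so its output format can change between versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sys

# Number of memory blocks currently allocated by the interpreter.
print(sys.getallocatedblocks())

# CPython-only: prints PyMalloc arena/pool/block statistics to stderr,
# including how many arenas are currently held.
sys._debugmallocstats()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;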



&lt;p&gt;On the other hand, large objects are handled by glibc's &lt;code&gt;malloc&lt;/code&gt;, whose internal structure is completely different, but whose reclaim logic is similar in spirit: memory is returned to the OS when a region has no live objects. glibc adds one complication, though: to reduce contention when Python runs multiple threads, it creates additional per-thread arenas, up to &lt;code&gt;8 × CPU cores&lt;/code&gt; of them on 64-bit systems. Each thread can accumulate its own arenas, and if a small fraction of objects remains live across many of them, none can be reclaimed.&lt;/p&gt;

&lt;p&gt;One thing worth noting: Python's GIL prevents true parallel CPU execution across threads, but it doesn't affect how glibc assigns arenas. Arena assignment happens at the &lt;code&gt;malloc&lt;/code&gt; call level, so each OS thread still acquires its own arena on first allocation, regardless of the GIL.&lt;/p&gt;

&lt;p&gt;So, imagine an application that spawns multiple tasks, each on a different thread, that make a lot of web requests and create a lot of objects. This means a lot of small and large objects, which means a lot of arenas allocated by both PyMalloc and glibc for each thread. That's a lot of arenas. And if 99% of these objects are released but the remaining 1% are still referenced and scattered across arenas, little memory can be released. This issue is called memory fragmentation, and it's common in long-lived (not only Python) applications that spawn multiple threads.&lt;/p&gt;
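
&lt;p&gt;A toy reproduction of that scenario might look like the sketch below. It is illustrative only: how much memory actually stays resident depends on your allocator, object sizes, and platform.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import threading

KEEP = []  # the unlucky 1% of objects that stays referenced

def task():
    # Allocate many ~1 KiB objects, as a request handler might.
    chunks = [bytes(1024) for _ in range(50_000)]
    # Keep every 100th object alive; they end up scattered across
    # the arenas this thread allocated from.
    KEEP.extend(chunks[::100])
    # The other 99% become garbage when the function returns.

threads = [threading.Thread(target=task) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# On Linux, resident memory (VmRSS) typically stays far above what
# the surviving 1% of objects actually needs.
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("VmRSS"):
            print(line.strip())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;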

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Before switching allocators, we tried two lighter interventions. First, we set &lt;code&gt;MALLOC_ARENA_MAX=2&lt;/code&gt; to cap the number of glibc arenas across all threads, a common first recommendation for this class of problem. Second, we tried calling &lt;code&gt;malloc_trim(0)&lt;/code&gt; periodically to prompt glibc to release free memory at the top of the heap. Neither had a meaningful impact in our case.&lt;/p&gt;
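
&lt;p&gt;For reference, &lt;code&gt;malloc_trim&lt;/code&gt; can be invoked from Python through &lt;code&gt;ctypes&lt;/code&gt;, with something like this sketch (glibc-only: the symbol does not exist in other allocators):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import ctypes
import ctypes.util

# glibc-only: malloc_trim(0) asks the allocator to release free memory
# at the top of the heap back to the OS. Returns 1 if any was released.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
released = libc.malloc_trim(0)
print("memory released:", bool(released))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;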

&lt;p&gt;That led us to replace glibc with jemalloc.&lt;/p&gt;

&lt;p&gt;Indeed, glibc is known to produce memory fragmentation in some scenarios - that's one of the reasons why Jason Evans created jemalloc, a memory allocator designed to address memory fragmentation and scalability issues in heavily multi-threaded environments.&lt;/p&gt;

&lt;p&gt;And surprisingly (at least to me), it's very easy to replace glibc with jemalloc as the memory allocator used by Python (in a Linux container): just install jemalloc with &lt;code&gt;apt&lt;/code&gt; (or your package manager) and set the &lt;code&gt;LD_PRELOAD&lt;/code&gt; environment variable to its path on the system. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; libjemalloc2
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LD_PRELOAD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
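
&lt;p&gt;It's worth verifying the preload actually took effect. On Linux, one way is to check whether jemalloc shows up among the libraries mapped into the running process:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Run from inside the Python process that should be using jemalloc.
with open("/proc/self/maps") as f:
    loaded = any("jemalloc" in line for line in f)
print("jemalloc loaded:", loaded)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;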



&lt;p&gt;Proper configuration of jemalloc is also needed. Usually, &lt;code&gt;narenas&lt;/code&gt; is set to a low number to force threads to share arenas and reduce memory fragmentation, at the possible cost of some performance (negligible in our case). I'll just leave here a config that might work in the scenario I described:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;MALLOC_CONF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"narenas:1,tcache:false"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
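
&lt;p&gt;To confirm the configuration was picked up, jemalloc's stats dump can also be called from Python via &lt;code&gt;ctypes&lt;/code&gt; once the library is preloaded (if jemalloc isn't actually loaded, the symbol lookup simply fails):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import ctypes

# Look up symbols already loaded into this process (jemalloc, if
# LD_PRELOAD worked). Raises AttributeError if the symbol is missing.
proc = ctypes.CDLL(None)
proc.malloc_stats_print(None, None, None)  # dumps jemalloc stats to stderr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;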



&lt;p&gt;As with almost everything in software, nothing is free, and glibc is so widely used because it generally works well. With this configuration we are reducing memory fragmentation at the cost of more CPU usage, which in our case was negligible, and if you are reading these lines, it may well be in yours too.&lt;/p&gt;

&lt;p&gt;So, our fragmentation issue was solved with 3 new short lines in our Dockerfile. As a final word, the best jemalloc configuration depends heavily on your specific case. There are other options you can set, like &lt;code&gt;dirty_decay_ms&lt;/code&gt; or &lt;code&gt;background_thread&lt;/code&gt;, that may or may not be useful, so always try different configurations and find the one that yields the best results!&lt;/p&gt;
&lt;h2&gt;Bonus&lt;/h2&gt;

&lt;p&gt;While investigating, I also found out that it is possible to disable PyMalloc entirely with &lt;code&gt;PYTHONMALLOC=malloc&lt;/code&gt;. This routes all memory management (small objects included) through the system allocator (glibc or jemalloc, depending on what is configured).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PYTHONMALLOC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;malloc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm sharing this as a curiosity: in my case, performance noticeably degraded. PyMalloc is specifically optimised for Python's pattern of many small, short-lived objects, and is estimated to give a 15–20% performance improvement for typical workloads. Bypassing it pushes significantly more allocations into the system allocator, which is slower for that use case.&lt;/p&gt;
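
&lt;p&gt;If you want to measure the impact yourself, a crude micro-benchmark like the one below, run once normally and once with &lt;code&gt;PYTHONMALLOC=malloc&lt;/code&gt;, should make the difference visible (exact numbers will vary with your workload and platform):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os
import timeit

# Allocation churn dominated by small, short-lived objects: exactly
# the pattern PyMalloc is optimised for.
def churn():
    return [(i, str(i)) for i in range(10_000)]

elapsed = timeit.timeit(churn, number=1_000)
allocator = os.environ.get("PYTHONMALLOC", "pymalloc (default)")
print(allocator, round(elapsed, 2), "seconds")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;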

&lt;p&gt;Unless you have strong evidence that PyMalloc itself is contributing to your problem, I would not recommend this.&lt;/p&gt;

&lt;h2&gt;Credits&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://rushter.com/blog/python-memory-managment/" rel="noopener noreferrer"&gt;Python memory management internals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.finbox.in/blog/beyond-python-heap-fixing-memory-retention-with-jemalloc" rel="noopener noreferrer"&gt;Beyond the Python heap: fixing memory retention with jemalloc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>performance</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
