DEV Community

TildAlice

Posted on • Originally published at tildalice.io

tracemalloc vs memray vs Py-Spy: Profiler Overhead Cost

Running a memory profiler in production shouldn't crash your app

I learned this the hard way when memray dropped a Flask API from 200 req/s to 12 req/s in production. The memory leak I was hunting? A slow 50MB growth over 6 hours. The profiler overhead? 1.6GB of extra allocations and a 94% CPU spike.

Most profiling guides skip the critical question: what does the profiler itself cost? You can't fix a memory leak if the profiler consumes more resources than the leak. Here's what actually happens when you run tracemalloc, memray, and Py-Spy on the same workload — with specific numbers for CPU, memory, and disk overhead.


Photo by Christina Morillo on Pexels

The test setup (and why it matters)

I ran three scenarios on Python 3.11.7, Ubuntu 22.04, 4-core VM with 8GB RAM:

  1. Baseline web server: FastAPI app processing 10,000 POST requests with JSON payloads (2KB each), Pandas DataFrame manipulation, and SQLAlchemy queries. Peak memory ~340MB, avg response time 18ms.
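The measurement harness itself isn't shown in this excerpt. A minimal stdlib-only sketch of how such a baseline could be captured, with the handler standing in for the FastAPI + Pandas + SQLAlchemy work (all names here are placeholders, not the article's actual code):

```python
import json
import time
import tracemalloc

def handle_request(payload: dict) -> dict:
    # Placeholder for the real endpoint: JSON parsing, DataFrame work, DB queries.
    return {"rows": len(payload["items"])}

payload = {"items": list(range(64))}       # rough stand-in for a ~2KB JSON body
body = json.loads(json.dumps(payload))     # round-trip to mimic request parsing

tracemalloc.start()
t0 = time.perf_counter()
for _ in range(10_000):                    # 10,000 requests, as in the scenario
    handle_request(body)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"avg latency: {elapsed / 10_000 * 1000:.3f} ms, peak traced: {peak / 1024:.0f} KiB")
```

Running the same loop once bare and once under each profiler is what yields comparable CPU, memory, and latency deltas.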

Continue reading the full article on TildAlice
