Last week my customer had a problem. Their custom webpack build was working fine on a spiffy developer Mac, but fell over when running on a deployment server. I could see an out of memory error, but how much RAM did this thing really need? Adding more "iron" in the form of a bigger AWS instance was a possible solution, but how much iron exactly?
I knew I could watch the output of the top command, and sort it by "resident set size," a fancy technical term for "how much memory this process really needs, not counting shared stuff." But I also knew the webpack build would "fork" a lot of child processes with their own memory requirements, and besides, who wants to stare at the screen for several minutes? Don't blink, you might miss the magic moment!
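For the record, sorting top by memory looks something like this. The flags below are from memory, so check your local man page; the BSD top on macOS and the procps top on Linux disagree on almost everything:
~ $ top -o mem    # macOS
~ $ top -o RES    # Linux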
This made me think of the time command. time is a classic Unix command that tells you how much time another command takes to run:
~ $ time sleep 5
real 0m5.012s
user 0m0.001s
sys 0m0.004s
time is great, but what I really needed was max-mem, a command to show me the peak resident set size of my command... including all child processes.
I couldn't find it, so I whipped something up. Here's the max-mem command.
Now I can type:
max-mem npm run build
[Regular output appears here]
Max memory usage: 800MB
With the help of max-mem, I was able to tell my customer exactly how much it would cost to upgrade the deployment server to ensure enough RAM for the webpack builds. His response: "yes please, upgrade the server! My developers can optimize RAM usage later." Problem solved.
It's written in Node.js, and it samples the memory usage every 100 milliseconds by automating the ps command. Is there a better way? Maybe! Pull requests are welcome; just remember it should work on both macOS and Linux.
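If you're curious what that sampling approach looks like, here's a minimal sketch in Node.js. It's my own illustration of the idea, not the actual max-mem source; the helper name treeRssKb and the exact ps invocation are mine, though ps -axo pid=,ppid=,rss= should behave the same on macOS and Linux:

const { spawn, execSync } = require('child_process');

// Sum the resident set size (in KB) of a process and all its descendants.
// (Illustrative sketch, not the real max-mem implementation.)
function treeRssKb(rootPid) {
  // One ps call lists every process with its pid, parent pid, and RSS in KB.
  const rows = execSync('ps -axo pid=,ppid=,rss=')
    .toString()
    .trim()
    .split('\n')
    .map(line => line.trim().split(/\s+/).map(Number));
  const childrenOf = new Map();
  const rssOf = new Map();
  for (const [pid, ppid, rss] of rows) {
    rssOf.set(pid, rss);
    if (!childrenOf.has(ppid)) childrenOf.set(ppid, []);
    childrenOf.get(ppid).push(pid);
  }
  // Walk the tree from the root, summing RSS for it and all descendants.
  let totalKb = 0;
  const stack = [rootPid];
  while (stack.length > 0) {
    const pid = stack.pop();
    totalKb += rssOf.get(pid) || 0;
    stack.push(...(childrenOf.get(pid) || []));
  }
  return totalKb;
}

// Run the wrapped command, e.g. `node sketch.js npm run build`.
const child = spawn(process.argv[2], process.argv.slice(3), { stdio: 'inherit' });
let maxKb = 0;
const timer = setInterval(() => {
  try {
    maxKb = Math.max(maxKb, treeRssKb(child.pid));
  } catch (err) {
    // ps can fail transiently; skip this sample and try again next tick.
  }
}, 100);
child.on('exit', () => {
  clearInterval(timer);
  console.log(`Max memory usage: ${Math.round(maxKb / 1024)}MB`);
});

Polling is a bit crude (a short memory spike between samples can slip past unnoticed), but as far as I know the kernel-level alternatives like getrusage only report on children after they exit, so sampling the whole process tree is a pragmatic cross-platform compromise.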
Check it out and let me know what you think.