I have a bunch of Docker apps running on one server, and things got tricky when Docker builds decided to grab all the CPU, causing all my other apps to run slowly. So, how did I fix it? The answer is the --cpu-quota flag!
Whenever I run docker build ..., my CPU utilization goes through the roof 🤯
This would be fine if there weren't anything else on the server, but some of my apps are super sensitive to this kind of CPU hogging, so I needed to find a solution!
The --cpu-quota flag is described as "Limit the CPU CFS (Completely Fair Scheduler) quota", which I honestly didn't understand the first time I read it.
The TL;DR version is that every core you have equals 100000. If you have a server with 1 core and you want to give docker build 80% of the available CPU, you use --cpu-quota 80000. And if you have more cores, you simply scale it up: with 4 cores, it would be 4 times 80000, or 320000.
That's it! If you simply append --cpu-quota xyz to your docker build command, your problems are solved. Of course, you still need to figure out your own magic number! For me, capping builds at 70% of the available CPU solved all my issues.
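As a rough sketch, the quota for 70% of all cores can be computed from the core count (this assumes a Linux box where nproc is available; the myapp tag is just a placeholder):

```shell
# Each core corresponds to 100000 (the CFS default period of 100ms),
# so 70% of CORES cores is CORES * 70000.
CORES=$(nproc)                 # number of cores on this machine
QUOTA=$((CORES * 70 * 1000))   # e.g. 4 cores -> 280000
echo "$QUOTA"

# Then pass it to the build (placeholder image tag):
#   docker build --cpu-quota "$QUOTA" -t myapp .
```

On a 4-core machine this prints 280000, which matches the scaling rule above.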
If you don't want to have the same issue while deploying Docker on your Server, check out Sliplane!
Top comments (18)
Why do you build images on the server that is running some important programs?
He never mentioned they were important programs, just CPU sensitive. For a private server, setting up an extra build server can introduce unnecessary complexity.
I used GitHub actions before, but they were incredibly slow so I just did it on my server
I think it depends on the server and the type of build you had implemented on GitHub Actions. I use GitHub Actions for all my image builds and distribute them to clouds.
So that they have an excuse to write this click bait article and link to Sliplane which author founded. Hopefully their one server isn't the one running Sliplane.
I was genuinely happy when I found out about this. What’s wrong with sharing some useful info??
I created an account just to point out the bullshit.
Seif's comment is marked "Comment marked as low quality/non-constructive by the community."
But he is absolutely correct. This guy is making stupid decisions (building containers on the same server it's running on) just to promote his company that he co-founded. This isn't a tech article, it's advertising...
This is my first comment in 4+ years, maybe - same reason.
Whatever the underlying reason, adding a very specific flag to literally every build command that ever runs on this box isn't a solution. It's a workaround at best.
It's a neat trick to know, I guess, but even at that it's nothing docker build --help wouldn't solve.

Edit to add: oof dev.to/code42cate/comment/29jl5
This saved my ass ❤️
Really something to consider adding. Thanks! 🫡
What about limiting the number of cores and/or memory via compatibility mode?
Add to your docker-compose.yml:

deploy:
  resources:
    limits:
      cpus: '1'
      memory: 2048M

Then run docker-compose --compatibility up
That's what I'm doing, but I will try this out too, thanks!
Yeah! I only skimmed the docs, but this seems to be the same, just for Docker Compose? I rarely use Docker Compose, nice to see that you can do it there as well :)
Advertisement alert!!!!!!!
Absolute bullshit. Bitten by an antipattern. My ass got flamed so badly I even created an account.
Odd, given that Docker is premised on jails / LXC, how is this an afterthought? With Solaris and BSD, quotas and resources are always top of mind, especially with production workloads. How is this such an obscure idea?
If you don't want to have the same issue while deploying Docker on your Server, check out ... using a different server like you should be doing.
Of course, but that's not always an option :)
wow nice one