DEV Community

kination
How to setup resources for k8s pod

Resources in k8s

In Kubernetes, managing resources efficiently is essential for maintaining service performance and cluster stability. One of the main aspects of resource management is setting up resource requests and limits for your containers.

---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Containers inside k8s pods run based on their assigned CPU/memory resources. When resources are configured in the pod spec, the k8s scheduler checks the available resources on each node before placing the pod.

The scheduler refuses to place a pod on a node whose remaining capacity is insufficient, and tries to find another node. If no node has enough capacity, the pod stays in the "Pending" state, waiting until enough resources have been released by other terminated pods.

How requests/limits settings affect the system and pod

requests are the guaranteed resources for a container, so you can think of them as the minimum amount the container requires. Likewise, limits are the maximum amount the container can use. k8s uses these values to make scheduling decisions, ensuring that there are enough resources available in the cluster to run the container.
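As a side note, if you set requests equal to limits for every resource in every container, the pod gets the "Guaranteed" QoS class, which makes it the last candidate for eviction under node pressure. A sketch (pod name is hypothetical):

```yaml
# Sketch: requests == limits for all resources in all containers
# gives the pod the "Guaranteed" QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "128Mi"
        cpu: "500m"
      limits:
        memory: "128Mi"   # same as request
        cpu: "500m"       # same as request
```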

As you can see in the following, CPU and memory can be defined as resources for a container.

...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "1Gi"
        cpu: "1"

But there are some differences in how k8s treats CPU and memory limits.

If CPU usage gets close to limits.cpu, the container gets 'throttled'. Just like on a laptop, when the CPU throttles, its usage is restricted, which means performance can degrade. However, the container won't be terminated or evicted.

But when memory usage goes over the limit, instead of reducing usage, the container gets terminated with an OOM (out-of-memory) error, because there is no way to throttle memory usage. So it can lead to an even worse situation.

So, what should we do?

It's not clear at what level of CPU usage 'throttling' starts. It seems to depend on the type of application and its logic. Intuitively we might expect it to start around "80%~90%", but some research shows it can start as low as "30%~40%". This appears to be due to the particular way CPU limits are implemented at the Linux kernel level.

Because of this, several articles recommend not setting limits.cpu at all, to avoid performance issues, and this seems pretty reasonable. Nevertheless, setting limits is critical to keep Kubernetes clusters stable and efficient over time.
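Following that advice, a container spec with a CPU request for scheduling but no CPU limit (while still limiting memory) could look like this sketch:

```yaml
# Sketch: cpu request is set so the scheduler can place the pod,
# but there is no cpu limit, so the container is never throttled.
# Memory is still limited to protect the node.
...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        # no cpu limit: container can burst into spare node CPU
```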

As for memory, the limit needs to be set using proper data based on peak usage, to avoid OOM kills.

How did I do?

This is what I did when running "Flink + k8s".

For cpu

  • Check the average source/sink throughput under a specific limit. Average CPU usage was around 50%.
  • Reduce the limit, and check whether I/O throughput decreased.
  • If not, reduce it again until performance starts to be restricted.

For memory

  • Measure the peak value over several days, and set the limit to "peak value * 1.25".
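As a sketch with hypothetical numbers: if the observed peak over several days is around 800Mi, the limit becomes 800 * 1.25 = 1000Mi:

```yaml
# Sketch with hypothetical numbers:
# observed peak ≈ 800Mi  →  limit = 800 * 1.25 = 1000Mi
...
    resources:
      requests:
        memory: "800Mi"   # around the observed steady peak
      limits:
        memory: "1000Mi"  # peak * 1.25 headroom against OOM
```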

I defined the performance level based on data input/output. For a common server application, it could instead be average request/response latency, or something else. Likewise, the appropriate memory limit differs depending on the application's memory usage pattern.

So do your own research, and keep improving it.
