Kubernetes: release requested CPU after startup

11/19/2019

We have a Java application distributed over multiple pods on Google Cloud Platform. We also set memory requests to give each pod a certain share of the memory available on the node for heap and non-heap space.

The application is very resource-intensive in terms of CPU while the pod is starting, but it barely uses the CPU once the pod is ready (only about 0.5% is used). If we use the container resource "requests", the pod does not release these resources after startup has finished.

Does Kubernetes allow us to specify that a pod may use (nearly) all the CPU power available during startup and release those resources afterwards? Thanks to rolling updates, we can prevent two pods from starting at the same time.
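
For illustration, the kind of container resources block we use looks roughly like this (the names and values here are placeholders, not our real configuration):

    # Simplified deployment snippet; names and values are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: java-app                       # placeholder name
    spec:
      template:
        spec:
          containers:
          - name: java-app
            image: example/java-app:latest # placeholder image
            resources:
              requests:
                memory: "2Gi"   # reserved for heap and non-heap space
                cpu: "2"        # stays reserved even after startup has finished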

Thanks for your help.

-- Jochen Zimmermann
cpu
google-cloud-platform
kubernetes
limit
request

3 Answers

11/21/2019

One factor for scheduling pods onto nodes is resource availability, and the Kubernetes scheduler calculates used resources from the request value of each pod. If you do not assign any value to the request parameter, the request for that deployment will be zero. The request parameter does not ensure that the pod will actually use that much CPU or RAM; you can check the current usage of resources with "kubectl top pods" or "kubectl top nodes". The request parameter reserves resources for a pod, whereas the limit puts a cap on a pod's resource usage. You can get more information here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/. This will give you a rough idea of requests and limits.
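
A minimal sketch of how request and limit show up in a pod spec (the names and numbers below are just examples):

    # Hypothetical pod spec illustrating requests vs. limits.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod            # placeholder name
    spec:
      containers:
      - name: app
        image: example/app:latest  # placeholder image
        resources:
          requests:                # used by the scheduler to reserve capacity on a node
            cpu: "500m"
            memory: "512Mi"
          limits:                  # hard cap enforced at runtime
            cpu: "1"
            memory: "1Gi"

You can then compare these values with the actual consumption reported by "kubectl top pod example-pod" or "kubectl top nodes".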

-- shubham_asati
Source: StackOverflow

11/19/2019

"Does Kubernetes allow to specify that a pod is allowed to use (nearly) all the cpu power available during start and release those resources after that?"

A key word here is "available". The answer is "yes", and it can be achieved by using the Burstable QoS (Quality of Service) class. Configure the CPU request to the value you expect the container to need after starting up, and either:

  • configure the CPU limit higher than the CPU request, or
  • don't configure a CPU limit at all, in which case either the namespace's default CPU limit applies if one is defined, or the container "...could use all of the CPU resources available on the Node where it is running".

If there is no spare CPU available on the Node for bursting, the container won't get anything beyond the requested value, and as a result the application may start up more slowly.
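
A minimal sketch of the first variant (CPU limit higher than the CPU request; the values are placeholders):

    # Hypothetical container resources: low request, higher limit -> Burstable QoS class.
    resources:
      requests:
        cpu: "250m"   # what the application needs once it is up and running
      limits:
        cpu: "2"      # allows bursting during startup if the Node has spare CPU

Dropping the limits block entirely (while keeping the request) also results in the Burstable class and lets the container use whatever CPU is free on the Node, subject to a namespace default limit if one is defined.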

It is worth mentioning what the docs explain for Pods with multiple Containers:

The CPU request for a Pod is the sum of the CPU requests for all the Containers in the Pod. Likewise, the CPU limit for a Pod is the sum of the CPU limits for all the Containers in the Pod.
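
As an illustration, in a hypothetical two-container Pod the Pod-level values are simply the totals:

    # Hypothetical Pod: Pod CPU request = 250m + 250m = 500m, Pod CPU limit = 1 + 1 = 2.
    apiVersion: v1
    kind: Pod
    metadata:
      name: two-container-pod          # placeholder name
    spec:
      containers:
      - name: app                      # placeholder
        image: example/app:latest
        resources:
          requests: { cpu: "250m" }
          limits:   { cpu: "1" }
      - name: sidecar                  # placeholder
        image: example/sidecar:latest
        resources:
          requests: { cpu: "250m" }
          limits:   { cpu: "1" }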

If you are running Kubernetes v1.12+ and have access to configure the kubelet, the Node CPU Management Policies could also be of interest.

-- apisim
Source: StackOverflow

11/19/2019

If you specify requests without a limit, the value is used for scheduling the pod onto a node that satisfies the requested available CPU bandwidth. The kernel scheduler assumes that the requests match the actual resource consumption, but it does not prevent usage beyond them; the excess will be 'stolen' from other containers. If you specify a limit as well, your container will be throttled if it tries to exceed that value. You can combine both to allow bursting CPU usage: exceeding the usual request without allocating everything on the node and slowing down other processes.
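
A sketch of that combination (all values are made up for illustration):

    # Hypothetical container resources: the request drives scheduling,
    # the limit triggers throttling once usage exceeds it.
    resources:
      requests:
        cpu: "500m"    # what the scheduler reserves on the node
      limits:
        cpu: "1500m"   # burst ceiling; the container cannot take the whole node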

-- Thomas
Source: StackOverflow