How to change a k8s pod's limits without killing the original pod?

7/20/2019

Request: The limits of a pod may be set too low at the beginning; to make full use of the node's resources, we need to raise the limits. However, when the node's resources are insufficient, we need to lower the limits so the node keeps working well. It is better not to kill the pod, because doing so may affect the cluster.

Background: I am currently a beginner in k8s and Docker, and my mentor gave me this request. Can this request be fulfilled normally? Or is there a better way to solve this kind of problem? Thanks for your help! All I tried: I tried editing the cgroups directly, but I can only do this from inside a container, so the container would probably have to run in privileged mode.
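To illustrate what "editing the cgroups" means here, this is a minimal sketch of the idea, simulated with a temporary file so it runs anywhere; the real cgroup v1 file would live under a path like /sys/fs/cgroup/memory/<slice>/memory.limit_in_bytes on the node, and the paths and helper names below are hypothetical:

```python
import os
import tempfile

# Simulate the cgroup v1 memory limit file that a privileged container
# could edit directly on the node. We use a temp directory here so the
# sketch is runnable without root or a real cgroup hierarchy.
cgroup_dir = tempfile.mkdtemp()
limit_file = os.path.join(cgroup_dir, "memory.limit_in_bytes")

def set_memory_limit(path, limit_bytes):
    """Write a new limit, as `echo N > memory.limit_in_bytes` would."""
    with open(path, "w") as f:
        f.write(str(limit_bytes))

def get_memory_limit(path):
    """Read the current limit back as an integer."""
    with open(path) as f:
        return int(f.read())

set_memory_limit(limit_file, 256 * 1024 * 1024)  # 256 MiB
print(get_memory_limit(limit_file))              # 268435456
```

Note that even if this write succeeds on a real node, Kubernetes is unaware of the change, which is the core of the problem discussed in the answers below.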

I expect a resonable plan for this requests. Thanks...

-- scofieldmao
docker
go
kubernetes

3 Answers

7/22/2019

The clue is you want to change limits without killing the pod.

This is not the way Kubernetes works, as Markus W Mahlberg explained in his comment above. Kubernetes has no "hot plug CPU/memory" or "live migration" facilities like those conventional hypervisors provide. Kubernetes treats pods as ephemeral instances and does not try to keep a particular pod running. Whether you need to change resource limits for the application, change the app configuration, install app updates, or repair a misbehaving application, the "kill-and-recreate" approach is applied to pods.
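For concreteness, this is where limits live in a workload spec; the Deployment and container names are hypothetical, and editing the `limits` values here causes the Deployment controller to replace the running pods rather than mutate them in place:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        resources:
          requests:
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "500m"      # changing these values triggers
            memory: "256Mi"  # a pod kill-and-recreate rollout
```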

Unfortunately, the solutions suggested here will not work for you:

  • Increasing limits for the running container within the pod (the docker update command) will breach the pod's limits, and Kubernetes will kill the pod.
  • The Vertical Pod Autoscaler is part of the Kubernetes project and relies on the "kill-and-recreate" approach as well.

If you really need to keep containers running while managing their resource limits "on the fly", Kubernetes may not be a suitable solution in this particular case. Consider using plain Docker or a VM-based solution instead.

-- mebius99
Source: StackOverflow

7/20/2019

I do not think this is possible; there is an old issue tracking this on the Kubernetes GitHub (https://github.com/kubernetes/kubernetes/issues/9043) that has been open since 2015.

Also, you should not rely on pods not being recreated when using Kubernetes. Applications should be stateless to the point where, if one dies in the middle of a process, it can handle the failure and start over from the beginning once it is restarted.

I understand the idea of optimizing resource usage to its maximum, but you should also be concerned about having a reliable process.

I think you should check out Kubernetes' Vertical Pod Autoscaler, as it automatically adjusts a pod's resources based on its usage. Maybe that could be an alternative: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
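A minimal VPA manifest looks roughly like the sketch below; the object names are hypothetical, and note that with `updateMode: "Auto"` the VPA applies new requests by evicting and recreating pods, so it does not avoid restarts either:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # the workload whose pods VPA manages
  updatePolicy:
    updateMode: "Auto"    # VPA evicts pods to apply new resource values
```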

-- Gustavo Paiva
Source: StackOverflow

7/20/2019

You have to find the IDs of the containers running inside the pod and run the command below to increase their resources.

docker update --cpu-shares NewValue -m NewValue DockerContainerID
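As a sketch of the full workflow on the node (the pod name is hypothetical), with the caveat from the other answers that Kubernetes may still kill and restart the container if its usage exceeds the limits recorded in the Pod spec:

```bash
# On the node where the pod is scheduled, find the container ID
# (pod name below is hypothetical):
docker ps | grep my-pod-name

# Raise CPU shares and the memory limit of the running container in place.
# Kubernetes is not informed of this change, so the kubelet may still
# enforce the original Pod spec limits.
docker update --cpu-shares 1024 -m 512m DockerContainerID
```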
-- Subramanian Manickam
Source: StackOverflow