k8s cluster pod terminates process

11/29/2019

In a Kubernetes cluster we can define resource limits, and once an application reaches its limit, Kubernetes automatically terminates the container and brings up a new one. There can be other reasons for a pod to get terminated as well. I wanted to know: 1) what happens to transactions that are in the middle of processing when the container is terminated, and 2) if there is only one replica, are all incoming requests stopped before the new pod starts up?
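For reference, a minimal Deployment sketch showing where the relevant knobs live (all names, images, and values here are illustrative, not from the original post). Note that the two termination paths behave differently: a container that exceeds its memory limit is OOM-killed immediately (SIGKILL, no grace period), while graceful terminations (rollouts, node drains) send SIGTERM and wait `terminationGracePeriodSeconds` for in-flight work to finish. With a single replica, `maxUnavailable: 0` keeps the old pod serving until the replacement passes its readiness probe:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                         # illustrative name
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                # old pod keeps serving until the new one is Ready
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 30  # time to finish in-flight work after SIGTERM
      containers:
      - name: app
        image: example/app:latest        # placeholder image
        resources:
          limits:
            memory: "512Mi"              # exceeding this gets the container OOM-killed (no grace period)
        readinessProbe:                  # new pod only receives traffic once this passes
          httpGet:
            path: /healthz               # assumed health endpoint
            port: 8080
```

This only covers rollout-driven restarts; an OOM kill or node failure still drops whatever requests that pod was processing, so clients need retries or the work needs to be idempotent.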

Can anyone help me understand this in more detail?

Thanks Baharul Islam

-- Baharul
kubernetes
kubernetes-pod

1 Answer

11/29/2019

Try adding Grafana to your Kubernetes cluster and check how much memory the pod requires while you run the load test (or just the case you are looking at now). If it reaches the limits, try increasing the memory and CPU limits assigned to it.
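As a sketch, raising the limits might look like this (the pod name, image, and values are placeholders; pick limits based on what the Grafana dashboards actually show):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                # placeholder name
spec:
  containers:
  - name: app
    image: example/app:latest  # placeholder image
    resources:
      requests:
        memory: "256Mi"        # what the scheduler reserves for the pod
        cpu: "250m"
      limits:
        memory: "512Mi"        # raise this if observed usage sits near the limit
        cpu: "500m"
```

Setting requests close to observed steady-state usage and limits above the observed peak avoids both OOM kills and over-reserving the node.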

I faced one scenario where a pod was supposed to fetch some data from a database, and the resources were not appropriate when the data volume was high. The calls used to fail with 500-type errors, and even though multiple pods were alive at the time, there was still a drop in responses.

Then I checked Grafana, and it showed that larger memory limits should be set; after doing that, things were sorted.

You should try debugging this way; maybe it helps.

-- Tushar Mahajan
Source: StackOverflow