Kubernetes deployment high memory usage

12/2/2019


I am running a Python Flask app in a GKE container, and memory usage keeps increasing inside the pod. I have set a memory limit on the pod, but it keeps getting killed.

I suspect a memory leak. Can anybody suggest something after looking at this? As disk usage increases, memory also increases, and there are some page faults as well.

Could this be something on the container's Linux side (I am using a python-slim base image)? Is memory not being returned to the OS, or is this a Python/Flask memory-management issue?

To check for a memory leak, I have added StackImpact to the application.


Please help! Thanks in advance.

-- Harsh Manvar
docker
google-kubernetes-engine
kubernetes
python

1 Answer

12/3/2019

If you added a memory resource limit to your GKE Deployment, then when the limit is hit the pod is killed and rescheduled; it should restart, and the other pods on the node should be unaffected.
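As a minimal sketch, a Deployment with memory requests and limits might look like the following. The names, image, and values here are placeholders for illustration, not taken from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app                     # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app
        image: my-flask-image:latest  # placeholder image
        resources:
          requests:
            memory: "256Mi"           # amount the scheduler reserves on a node
          limits:
            memory: "512Mi"           # container is OOM-killed above this
```

With only a limit set and no request, Kubernetes defaults the request to the limit, so setting both explicitly makes the scheduling behavior predictable.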

You can find more information by running these commands:

kubectl describe pod <YOUR_POD_NAME>

kubectl top pods

Please note that if you set a memory request larger than the amount of memory available on your nodes, the pod will never be scheduled.

If a Pod cannot be scheduled because of insufficient resources or a configuration error, you might see an error indicating a lack of memory or some other resource. A Pod stuck in Pending cannot be scheduled onto any node. In that case you need to delete Pods, adjust resource requests, or add new nodes to your cluster. You can find more information here.

Additionally, as per this document, Horizontal Pod Autoscaling (HPA) scales the number of replicas in your Deployment based on metrics such as memory or CPU usage.
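For illustration, an HPA targeting memory utilization might be declared like this. The names are placeholders, and depending on your cluster version the API group may be autoscaling/v2beta2 rather than autoscaling/v2:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flask-app-hpa       # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask-app         # placeholder Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% of the memory request
```

Note that if the app genuinely leaks memory, HPA only spreads the problem across more replicas; it does not fix the leak itself.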

-- Milad
Source: StackOverflow