Google Kubernetes logs

12/20/2019
Memory cgroup out of memory: Kill process 545486 (python3) score 2016 or sacrifice child Killed process 545486 (python3) total-vm:579096kB, anon-rss:518892kB, file-rss:16952kB

The node logs this message and my container keeps restarting at random. I'm running a Python container with 4 replicas.

The Python application uses a socket together with Flask. The Docker image is based on python3.5:slim.

kubectl top nodes

NAME                                                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
gke-XXXXXXX-cluster-highmem-pool-gen2-f2743e02-msv2   682m         17%    11959Mi         89%    

This morning the log shows: 0/1 nodes are available: 1 Insufficient cpu.

But the node's CPU usage is only 17%.

There is not much running inside the pod.

-- chagan
docker
google-kubernetes-engine
kubernetes
kubernetes-pod

1 Answer

12/20/2019

Have a look at the best practices and try to adjust resource requests and limits for CPU and memory. If your app starts hitting its CPU limit, Kubernetes throttles the container. Memory cannot be throttled, so if a container goes past its memory limit it is terminated (and restarted). Setting suitable requests and limits should therefore resolve the random restarts you are seeing.
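As a minimal sketch, a Deployment with explicit requests and limits could look like the following. The name, image, and all of the numbers here are assumptions; you would need to replace them with your own image and with values based on what your application actually uses:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app                  # hypothetical name, adjust to your Deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
      - name: python-app
        image: your-registry/your-python-image:tag   # placeholder for your image
        resources:
          requests:
            cpu: "250m"             # assumed starting values; measure your app first
            memory: "512Mi"
          limits:
            cpu: "500m"             # exceeding this only throttles the container
            memory: "1Gi"           # exceeding this gets the container OOM-killed

With an explicit memory limit the container is still OOM-killed when it crosses that limit (that is the "Memory cgroup out of memory" message in your node log, where the process was at roughly 500 MB of anon-rss), but the threshold is one you have chosen deliberately and can raise if the application legitimately needs more memory.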

If the resources your containers request exceed what is allocatable on the node, Kubernetes will report an error similar to the one you have (Insufficient cpu) and won't schedule the container. Keep in mind that the scheduler works with requested resources, not actual usage, which is why a pod can be unschedulable even though the node's measured CPU usage is only 17%.
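To verify this, you could check how much CPU and memory is already requested on the node; the node name below is just the one from your output:

kubectl describe node gke-XXXXXXX-cluster-highmem-pool-gen2-f2743e02-msv2

The "Allocated resources" section at the end of that output lists the total CPU and memory requests on the node against its allocatable capacity, which is what the scheduler compares against.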

After adjusting the limits, you could use a monitoring system (such as Stackdriver) to track down the cause of a potential memory leak.
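In the meantime, a quick way to watch per-pod memory is the metrics API (assuming metrics are enabled on your GKE cluster):

kubectl top pods

If one replica's memory climbs steadily towards the limit between restarts, that points to a leak in the application rather than a limit that is simply set too low.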

-- Serhii Rohoza
Source: StackOverflow