Kubernetes pod auto-restarts with exit code 137

11/30/2019

These are the logs I got from an exited container on one of the Kubernetes nodes.

Can anyone please help? I think it's a memory issue, but I have set sufficient resources for the pod.

Memory is gradually increasing over time, so there may be a memory leak. Please help with this, thanks.

It works perfectly on staging, but on production it restarts. I was also thinking that, because of the python-slim image I am using in Docker, the kernel or Linux itself might be killing my Python process.

Thanks in advance
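
For reference, the kernel log below shows a memory cgroup limit of 1048576kB (1Gi) being hit, and the cgroup path (/kubepods/burstable/...) together with oom_score_adj=993 indicates the pod is in the Burstable QoS class. The container's resources section is assumed to look roughly like the sketch below; the request value is illustrative, not taken from the real manifest:

resources:
  requests:
    memory: "512Mi"   # illustrative; a request below the limit is what makes the pod Burstable
  limits:
    memory: "1Gi"     # 1Gi = 1048576kB, the cgroup limit at which the OOM killer fires in the log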

Nov 26 00:24:03 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: python3 invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null),  order=0, oom_score_adj=993
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  oom_kill_process+0x23e/0x490
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  out_of_memory+0x100/0x4c0
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  mem_cgroup_out_of_memory+0x3f/0x60
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  mem_cgroup_oom_synchronize+0x2dd/0x300
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  pagefault_out_of_memory+0x25/0x56
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: memory: usage 1048576kB, limit 1048576kB, failcnt 2106
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: memory+swap: usage 1048576kB, limit 9007199254740988kB, failcnt 0
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod2ef4b832-1101-11ea-9b9a-42010a8000a9: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod2ef4b832-1101-11ea-9b9a-42010a8000a9/4a728f33240d29d15761e3224c1c08a41943c233e8d2970b5068a19c95f1f3e1: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:48KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod2ef4b832-1101-11ea-9b9a-42010a8000a9/7dd87463773c32fbffad267b50f3986cdb969bd9915ab32cc371a50c9e2dc16f: cache:128KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod2ef4b832-1101-11ea-9b9a-42010a8000a9/6006c2b3ae7dcc7e6ddf41e765c747db71ed3b09c49e83cec281501ff848419e: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:132KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod2ef4b832-1101-11ea-9b9a-42010a8000a9/7f54c48546807d9430b82469e1968da2e83772b60c2c6b65a308d78b50eefc56: cache:0KB rss:1041756KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:1042088KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup out of memory: Kill process 3951244 (python3) score 2004 or sacrifice child
Nov 29 06:45:02 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: oom_reaper: reaped process 3951244 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Nov 29 06:45:24 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: python3 invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null),  order=0, oom_score_adj=993
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  oom_kill_process+0x23e/0x490
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  out_of_memory+0x100/0x4c0
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  mem_cgroup_out_of_memory+0x3f/0x60
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  mem_cgroup_oom_synchronize+0x2dd/0x300
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel:  pagefault_out_of_memory+0x25/0x56
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: memory: usage 1048576kB, limit 1048576kB, failcnt 795
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: memory+swap: usage 1048576kB, limit 9007199254740988kB, failcnt 0
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod06691003-1101-11ea-9b9a-42010a8000a9: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod06691003-1101-11ea-9b9a-42010a8000a9/c1265c7dc67ee140d0033c3527adcb4e47fded0e8ac27822701d2e56acbb528f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod06691003-1101-11ea-9b9a-42010a8000a9/471908fde52e37475d1e454fd23755ac0066fd16f324aa1b8dcdae70ae3ee4db: cache:128KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod06691003-1101-11ea-9b9a-42010a8000a9/d29c34fa96c3350d5b5caf09f19be16d68d07bbd54dd80c7bb709f7d55937ae7: cache:0KB rss:44KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup stats for /kubepods/burstable/pod06691003-1101-11ea-9b9a-42010a8000a9/10686aeb2ec1f3054f2d6da37b75a74076c3c1ad61d0fda16601bcca8f66f8c2: cache:12KB rss:1042092KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:132KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:1042244KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: Memory cgroup out of memory: Kill process 3956354 (python3) score 2004 or sacrifice child
Nov 29 06:45:25 gke-cluster-highmem-pool-gen2-f2743e02-msv2 kernel: oom_reaper: reaped process 3956354 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
-- Harsh Manvar
docker
kubernetes
python
python-3.x

1 Answer

12/2/2019

I am posting David's answer from the comments (community wiki), as it was confirmed by the OP:

If you're seeing that message it's the kernel OOM killer: your node is out of memory. Increasing your pod's resource requests to be closer to or equal to the resource limits can help a little bit (by keeping other processes from getting scheduled on the same node), but if you have a memory leak, you just need to fix that, and that's not something that can really be diagnosed from the Kubernetes level.
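
A minimal sketch of that suggestion as a pod spec, assuming the 1Gi memory limit visible in the kernel log; the pod name, container name, image, and CPU values are placeholders, not the OP's actual manifest:

apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
spec:
  containers:
  - name: app                  # placeholder container name
    image: python:3.8-slim     # placeholder; a python-slim image as mentioned in the question
    resources:
      requests:
        memory: "1Gi"          # request equal to the limit gives this container Guaranteed QoS
        cpu: "500m"            # illustrative; set from observed usage
      limits:
        memory: "1Gi"          # the cgroup limit from the log (1048576kB); exceeding it triggers the OOM killer
        cpu: "500m"

Exit code 137 is 128 + 9 (SIGKILL), which is what the container runtime reports when the OOM killer terminates the process. Note that the log above shows the 1Gi cgroup limit itself being reached, so setting requests equal to limits only removes node-level memory pressure; the memory growth inside the Python process still has to be fixed, or the limit raised.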

-- OhHiMark
Source: StackOverflow