Kubernetes MySQL pod getting killed due to memory issue

9/13/2018

In my Kubernetes 1.11 cluster, a MySQL pod is getting killed due to an out-of-memory (OOM) condition:

> kernel: Out of memory: Kill process 8514 (mysqld) score 1011 or sacrifice child
> kernel: Killed process 8514 (mysqld) total-vm:2019624kB, anon-rss:392216kB, file-rss:0kB, shmem-rss:0kB
> kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=828
> kernel: java cpuset=dab20a22eebc2a23577c05d07fcb90116a4afa789050eb91f0b8c2747267d18e mems_allowed=0
> kernel: CPU: 1 PID: 28667 Comm: java Kdump: loaded Not tainted 3.10.0-862.3.3.el7.x86_64 #1 kernel

My questions:

  1. How can I prevent my pod from being OOM-killed? Is there a Deployment setting I need to enable?
  2. What configuration prevents a new pod from being scheduled on a node that does not have enough free memory?
  3. We have disabled swap space. Do we also need to disable memory overcommit at the host level by setting /proc/sys/vm/overcommit_memory to 0?

Thanks SR

-- sfgroups
kubernetes

1 Answer

9/13/2018

When defining a Pod manifest, it is best practice to include a resources section with requests and limits for CPU and memory:

resources:
    limits:
      cpu: "1"
      memory: 512Mi
    requests:
      cpu: 500m
      memory: 256Mi
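
For question 1, this resources section goes under each container in the Deployment's Pod template. A minimal sketch of where it sits in a full Deployment, assuming an illustrative image tag, name, and environment value that are not from the original post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql                       # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7            # illustrative tag
        env:
        - name: MYSQL_ROOT_PASSWORD # required by the mysql image; use a Secret in practice
          value: changeme
        resources:
          limits:
            cpu: "1"
            memory: 512Mi           # hard cap; exceeding it OOM-kills only this container's cgroup
          requests:
            cpu: 500m
            memory: 256Mi           # what the scheduler reserves on the node

Note that a container exceeding its memory limit is still OOM-killed, but the kill is confined to that container's cgroup rather than being triggered by node-wide memory pressure.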

These requests and limits determine which of the three Quality of Service (QoS) classes the Pod is assigned to:

  • Guaranteed
  • Burstable
  • BestEffort

Pods in the last class, BestEffort, are the most expendable and are the first to be killed when a node runs out of memory; setting requests equal to limits places a Pod in the Guaranteed class, which is killed last.
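
Regarding question 2: the scheduler only places a Pod on a node whose allocatable memory can still cover the Pod's memory request, so memory requests are the configuration that prevents a node from being over-packed. Pods that omit a resources section land in the BestEffort class; a LimitRange can give such Pods default requests and limits. A minimal sketch, assuming the namespace name and values are illustrative:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem                 # illustrative name
  namespace: default                # assumption: the namespace running MySQL
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 256Mi                 # request applied to containers that set none
    default:
      memory: 512Mi                 # limit applied to containers that set none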

-- Nicola Ben
Source: StackOverflow