Kubernetes does not evict nodes despite limit being set

6/27/2018

I changed the default eviction policy set by kops to include the condition memory.available<1Gi. The --eviction-hard flag is now set as:

memory.available<1Gi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5%
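Note that hard eviction signals are OR'ed: crossing any single threshold is enough to put the node under pressure. A minimal sketch of that evaluation, assuming illustrative function and variable names (this is not kubelet source code):

```python
# Sketch of how --eviction-hard thresholds are evaluated.
# Signals are OR'ed: one crossed threshold triggers eviction.

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_quantity(q):
    """Parse a quantity like '1Gi' or '10%' into (value, is_percent)."""
    if q.endswith("%"):
        return float(q[:-1]) / 100.0, True
    for suffix, mult in UNITS.items():
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * mult, False
    return float(q), False

def under_pressure(thresholds, observed, capacity):
    """thresholds: the --eviction-hard string.
    observed/capacity: dicts mapping signal name -> bytes."""
    for clause in thresholds.split(","):
        signal, limit = clause.split("<")
        value, is_percent = parse_quantity(limit)
        limit_bytes = value * capacity[signal] if is_percent else value
        if observed[signal] < limit_bytes:
            return True  # OR semantics: one crossed signal is enough
    return False

flags = "memory.available<1Gi,nodefs.available<10%"
cap = {"memory.available": 8 * 1024**3, "nodefs.available": 100 * 1024**3}
obs = {"memory.available": 400 * 1024**2, "nodefs.available": 50 * 1024**3}
print(under_pressure(flags, obs, cap))  # 400Mi < 1Gi -> True
```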

The available memory on one node has been at roughly 400MB for quite a while, yet no pod eviction is happening.

Why isn't the kubelet evicting pods to make room? There's plenty of room on other nodes.

Is there an AND between eviction conditions? How can I see what the kubelet sees for memory usage?
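On the second question: the kubelet derives memory.available from cAdvisor's working-set measurement, not from free, so tools on the node itself can disagree with it. Two ways to inspect the kubelet's view (replace <node-name> with an actual node; this is a diagnostic sketch, not the only way):

```shell
# Node condition the kubelet reports; MemoryPressure becomes True
# once a hard memory threshold is crossed.
kubectl describe node <node-name> | grep -A 2 MemoryPressure

# The kubelet summary API is the source of the memory.available signal.
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
```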

-- Alex
kubernetes

1 Answer

3/18/2019

Your pods might have their quality of service class set to Guaranteed; that is a possible reason they are not being evicted.

See: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes
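For context, a pod gets the Guaranteed class only when every container's CPU and memory requests equal its limits, and such pods are evicted last under node pressure. A minimal illustration (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo          # placeholder name
spec:
  containers:
  - name: app
    image: nginx          # any image; irrelevant to QoS
    resources:
      limits:
        memory: "200Mi"
        cpu: "500m"
      requests:
        memory: "200Mi"   # requests == limits on every resource
        cpu: "500m"       # -> qosClass: Guaranteed
```

You can confirm the assigned class with `kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'`.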

-- Karol Flis
Source: StackOverflow