I'm trying to get my Kubernetes cluster to trigger a MemoryPressure state for testing purposes, and I can't seem to make it happen. Even when I'm getting an "Out of Memory" warning, running kubectl describe pod still shows MemoryPressure == false.
I did this by creating a deployment whose containers run: stress-ng -m 4 --vm-bytes 800M --vm-keep
And I keep scaling until I see Out of Memory -- but still no MemoryPressure flag!
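The deployment looks roughly like this (the image and exact settings here are placeholders, not my real manifest):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-stress
spec:
  replicas: 4
  selector:
    matchLabels:
      app: memory-stress
  template:
    metadata:
      labels:
        app: memory-stress
    spec:
      containers:
      - name: stress
        # placeholder image; any image that ships stress-ng will do
        image: alexeiled/stress-ng
        command: ["stress-ng"]
        args: ["-m", "4", "--vm-bytes", "800M", "--vm-keep"]
EOF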
I have tried this on KIND and Minikube with no success. Any ideas?
MemoryPressure is a node condition, not a pod one. You could run the stress command on the node directly to create artificial memory pressure there; the kubelet will notice this (with a caveat) and should report that the node is under MemoryPressure.
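For example, on kind or Minikube you can open a shell on the node itself, run stress-ng there, and watch the node's conditions from another terminal. The node name and memory figures below are assumptions; check yours with kind get nodes / kubectl get nodes.

# Get a shell on the node. On kind, the "node" is a Docker container
# (the name is a guess; list them with `kind get nodes`):
docker exec -it kind-control-plane bash
# On Minikube:
# minikube ssh

# Inside the node, consume most of its memory. Install stress-ng first if the
# node image doesn't have it (e.g. apt-get update && apt-get install -y stress-ng):
stress-ng --vm 4 --vm-bytes 75% --vm-keep

# From another terminal, watch the node condition flip:
kubectl describe node kind-control-plane | grep -A2 MemoryPressure
kubectl get node kind-control-plane -o jsonpath='{.status.conditions[?(@.type=="MemoryPressure")].status}'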
The caveat, as per the docs, is:
The kubelet currently polls cAdvisor to collect memory usage stats at a regular interval. If memory usage increases within that window rapidly, the kubelet may not observe MemoryPressure fast enough, and the OOMKiller will still be invoked. We intend to integrate with the memcg notification API in a future release to reduce this latency, and instead have the kernel tell us when a threshold has been crossed immediately.
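One way to make the condition easier to catch for testing (a sketch, assuming a kind cluster; the 1Gi threshold and file name are arbitrary) is to raise the kubelet's hard eviction threshold so that memory.available drops below it well before the kernel OOM killer gets involved:

# Create the cluster with a generous hard eviction threshold for memory.
cat > kind-memorypressure.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "1Gi"
EOF
kind create cluster --config kind-memorypressure.yaml

On Minikube, something similar should be possible by passing --extra-config=kubelet.eviction-hard='memory.available<1Gi' to minikube start.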