I'm running Kubernetes 1.2.0 on a number of lab machines. The machines have swap enabled. As the machines are used for other purposes, too, I cannot disable swap globally.
I'm observing the following problem: if I start a pod with a memory limit, the container starts swapping once it reaches the memory limit. I would expect the container to be killed instead.
According to this issue, this problem has supposedly been fixed, but it still occurs with Kubernetes 1.2.0. If I check the running container with docker inspect, I can see that MemorySwap = -1 and MemorySwappiness = -1. If I start a pod with low memory limits, it starts swapping almost immediately.
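For reference, a minimal pod manifest with a memory limit looks roughly like this (the image name is just a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: swap-test
spec:
  containers:
  - name: swap-test
    image: some-memory-hungry-image   # placeholder image
    resources:
      limits:
        memory: 64Mi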
I had some ideas, but I couldn't figure out how to apply any of them to Kubernetes:
--memory-swappiness=0
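(In plain Docker that would look something like the following, but I don't see how to make Kubernetes pass it to the containers it starts:)

docker run --memory="64m" --memory-swappiness=0 some-memory-hungry-image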
How can I prevent the containers from swapping?
If you are just playing around, there is no need to bother with turning swap off. Stuff will still run, but resource isolation won't work as well. If you are using Kubernetes seriously enough to need resource isolation, then you should not be running other things on the machines.
Since version 1.8, Kubernetes (specifically the kubelet) fails to start if swap is enabled on Linux (the flag --fail-swap-on defaults to true), because Kubernetes can't handle swap. That means you can be sure that swap is disabled by default on a Kubernetes node.
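On your own nodes this means you either disable swap before the kubelet starts, or explicitly override the check (not recommended). For example:

# disable swap on the node (also remove or comment the swap entry in /etc/fstab to persist across reboots)
sudo swapoff -a

# or, if swap really must stay enabled, let the kubelet tolerate it (not recommended)
kubelet --fail-swap-on=false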
To test it in a local Docker container, set --memory-swap equal to --memory, e.g.:
docker run --memory="10m" --memory-swap="10m" dominikk/swap-test
My test image is based on this small program, with one addition to flush stdout so the output shows up immediately in the Docker logs:
setvbuf(stdout, NULL, _IONBF, 0); // flush stdout buffer every time
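The linked program itself isn't reproduced here; a minimal sketch of such a memory hog (allocate memory in a loop, touch it, and report progress) might look like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    setvbuf(stdout, NULL, _IONBF, 0); // flush stdout buffer every time

    size_t chunk = 1 << 20; // allocate 1 MiB per iteration
    size_t total = 0;

    for (;;) {
        char *p = malloc(chunk);
        if (p == NULL) {
            printf("malloc failed after %zu MiB\n", total >> 20);
            return 1;
        }
        memset(p, 0xA5, chunk); // touch the memory so it is actually committed
        total += chunk;
        printf("allocated %zu MiB\n", total >> 20);
        sleep(1);
    }
}

With --memory-swap equal to --memory, the container should be OOM-killed shortly after it crosses the limit instead of starting to swap.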
You can also test it with docker-compose up (this only works with compose file format version <= 2.x):
version: '2'
services:
  swap-test:
    image: dominikk/swap-test
    mem_limit: 10m
    # memswap_limit:
    #   -1: unlimited swap
    #    0: field unset
    #   >0: mem_limit + swap
    #   == mem_limit: swap disabled
    memswap_limit: 10m
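Either way, you can verify the effective settings on the running container with docker inspect (replace <container-id> with your container); equal values for Memory and MemorySwap mean swap is disabled for that container:

docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' <container-id>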