The container's resource limits are set as follows:
resources:
  limits:
    cpu: "1"
    memory: 1G
  requests:
    cpu: "1"
    memory: 1G
The cgroup memory limit is:
cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-podaace5b66_c7d0_11e9_ba2a_dcf401d01e81.slice/memory.limit_in_bytes
999997440
I expected 1GB = 1*1024*1024*1024 = 1,073,741,824B, so why does the cgroup report 999997440?
k8s version: 1.14.4
docker version: docker-ce-18.09.6
OS: ubuntu 18.04
Because you declared a gigabyte (decimal, 10^9 bytes) using the G suffix: if you expect a gibibyte (2^30 = 1,073,741,824 bytes), you should use Gi:
# k get deployments.apps memory -o yaml | grep -i limits -C 1
  resources:
    limits:
      memory: 1Gi
$ cat /sys/fs/cgroup/memory/kubepods/burstable/pod15dff9ec-7815-48c0-bfce-453be77413ad/memory.limit_in_bytes
1073741824
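For reference, the two scales differ by about 7%. A quick shell sanity check of both (a minimal sketch, independent of any pod):
$ echo $((1000 * 1000 * 1000))   # 1G  -- decimal (SI) scale
1000000000
$ echo $((1024 * 1024 * 1024))   # 1Gi -- binary (IEC) scale
1073741824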
I performed some tests.
For values between 999997440 B (976560 KB) and 1000000000 B (as in your example), you get the same result: memory.limit_in_bytes = 999997440 B. This holds until you reach the next integer number of bytes divisible by your page size (4096 by default); in my test that was 1000001536 B (976564 KB).
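You can reproduce the arithmetic in the shell (assuming the default 4096-byte page size; getconf will report yours):
$ getconf PAGE_SIZE
4096
$ echo $(( 1000000000 / 4096 * 4096 ))         # 1G rounded down to a page boundary
999997440
$ echo $(( (1000000000 / 4096 + 1) * 4096 ))   # the next page boundary
1000001536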
I am not a Linux expert, but according to the kernel documentation:
A successful write to this file does not guarantee a successful setting of this limit to the value written into the file. This can be due to a number of factors, such as rounding up to page boundaries or the total availability of memory on the system. The user is required to re-read this file after a write to guarantee the value committed by the kernel.
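A hypothetical way to observe this directly (the slice path below is a placeholder for your pod's cgroup directory; requires root):
# echo 1000000000 > /sys/fs/cgroup/memory/<pod-slice>/memory.limit_in_bytes
# cat /sys/fs/cgroup/memory/<pod-slice>/memory.limit_in_bytes
999997440
The write succeeds, but the kernel commits a page-aligned value, which is why re-reading the file is required.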
I would suggest using the Gi notation instead, as mentioned by prometherion, to have more control over resource limits.
Hope this helps.