kubernetes: Difference between what's requested and what's allocated by Docker

10/13/2019

Created a deployment with a memory request of 200M:

Request=200Mi

Limit=not defined

kubectl get po -n qos-example mem-req-56b889c948-79ptc -o yaml|grep -i memory -A 4 -B 4
imagePullPolicy: Always
name: mem-req
resources:
  requests:
    memory: 200M

But Docker doesn't show any allocated memory.

Shouldn't it set the memory to what's requested, i.e. 200M?

docker inspect f8a7f26528fe|grep -i memory

        "Memory": 0,
        "KernelMemory": 0,
        "MemoryReservation": 0,
        "MemorySwap": 0,
        "MemorySwappiness": null,

Why is Kubernetes not able to pass this information to Docker, even though Kubernetes knows it should allocate 200M?

kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

With the same value for limit and request, I can see the expected values in Docker.

Request=200Mi

Limit=200Mi

  kubectl get po -n qos-example  mem-check-re-limit-844b4bc5cb-nn98d -o yaml |grep memory -A 4 -B 4
imagePullPolicy: Always
name: mem-check-re-limit
resources:
  limits:
    memory: 200Mi
  requests:
    memory: 200Mi




 docker inspect d2711e340b94|grep -i memory

        "Memory": 209715200,
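(For reference, the value Docker reports here is exactly 200Mi expressed in bytes; a quick sanity check:)

```shell
# 200Mi = 200 * 1024 * 1024 bytes -- the value docker inspect reports above
echo $((200 * 1024 * 1024))
# prints 209715200
```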
-- green
docker
kubernetes
kubernetes-pod
linux

1 Answer

10/13/2019

Resource requests only affect the Kubernetes scheduler; they have no effect on the running pod. If a node has 4 GB of RAM and there are four pods that each request 1 GB, they "fit". A pod can get OOM-killed if it individually exceeds its declared memory limit (if it has one), or if the total set of things running on the node no longer fits in memory.

So, setting resource requests lower than limits allows more pods to fit on a node at low utilization, but brings a higher risk of them being killed off at random; setting the two numbers equal potentially allows fewer pods to fit and can leave real memory unused, but also means a pod is more likely to survive as long as it stays within its own limit.
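As an illustration of that trade-off (the 400Mi limit here is a made-up value, not from the question), a container with a request lower than its limit would be declared like this:

```yaml
# Illustrative sketch only: the scheduler places this pod based on the
# 200Mi request, but the kernel will OOM-kill it if it exceeds the 400Mi limit.
resources:
  requests:
    memory: 200Mi
  limits:
    memory: 400Mi
```

With request equal to limit (as in the second example above), the pod instead gets the Guaranteed QoS class and the cgroup memory limit matches what the scheduler reserved.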

In general, if you're using Kubernetes, you should let it manage all interactions with the container system; you generally shouldn't run docker commands directly on Kubernetes-managed nodes.
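If you want to see how much memory Kubernetes considers reserved, ask Kubernetes itself rather than Docker; for example (the node name is a placeholder, the pod name is from the question):

```shell
# "Allocated resources" in this output shows per-pod requests/limits
# and node-wide totals as the scheduler sees them
kubectl describe node <node-name>

# Print just the resources block of the pod from the question
kubectl get pod mem-req-56b889c948-79ptc -n qos-example \
  -o jsonpath='{.spec.containers[0].resources}'
```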

-- David Maze
Source: StackOverflow