"Limits" property ignored when deploying a container in a Kubernetes cluster

9/14/2018

I am deploying a container in Google Kubernetes Engine with this YAML fragment:

    spec:
      containers:
      - name: service
        image: registry/service-go:latest
        resources:
          requests:
            memory: "20Mi"
            cpu: "20m"
          limits:
            memory: "100Mi"
            cpu: "50m"

But it keeps taking 120m. Why is the "limits" property being ignored? Everything else works correctly: if I request 200m, 200m are reserved, but the limit keeps being ignored.
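For reference, the resources that were actually applied to the pod can be confirmed with kubectl (the pod name here is a placeholder):

    kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].resources}'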

[Screenshot of the GKE console showing the reserved CPU for the workload]

My Kubernetes version is 1.10.7-gke.1

I only have the default namespace and when executing

kubectl describe namespace default

Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  cpu       -    -    100m             -              -
-- rsan
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

9/15/2018

You can log into the node running your pod and run:

ps -Af | grep docker

You'll see the full command line that the kubelet sends to Docker. The memory limit should show up as something like --memory. Note that the memory request value is only used by the Kubernetes scheduler to determine whether placing a pod would exceed the capacity already used by the pods/containers running on a node.
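If the ps output is hard to read, the same value can be pulled from Docker directly. This is a minimal sketch, assuming the node runs the Docker runtime and you have the container ID at hand:

    # HostConfig.Memory is reported in bytes; 100Mi should show up as 104857600
    docker inspect -f '{{.HostConfig.Memory}}' <container-id>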

For the CPU request you'll see the --cpu-shares flag. In this case it is not a hard limit; rather, it is again a way for the Kubernetes scheduler not to allocate containers/pods past that amount when running multiple containers/pods on a specific node. You can learn more about cpu-shares here and from the Kubernetes side here. So in essence, if there aren't enough other workloads on the node, a container will always go over its CPU share if it needs to, and that's probably what you are seeing.
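As a rough sketch of how a request maps to shares, assuming the usual conversion of 1 CPU = 1024 shares, the 20m request from the question works out to about 20 shares, which you can confirm on the node:

    # 20m * 1024 / 1000 ≈ 20; shares are a relative weight, not a cap
    docker inspect -f '{{.HostConfig.CpuShares}}' <container-id>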

Docker has other ways of restricting CPU, such as cpu-period/cpu-quota and cpuset-cpus, but those are not used by Kubernetes as of this writing. In this respect, I believe Mesos does a somewhat better job with CPU/memory reservations and quotas.
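For completeness, this is roughly what those Docker flags look like when used directly, outside of Kubernetes (the values are only illustrative):

    # cap the container at 50% of one CPU and pin it to cores 0 and 1
    docker run --cpu-period=100000 --cpu-quota=50000 --cpuset-cpus="0-1" registry/service-go:latest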

Hope it helps.

-- Rico
Source: StackOverflow

9/19/2018

Considering Resources Request Only

The Google Cloud console is working correctly. I think you have multiple containers in your pod, and that is why: the value shown above is the sum of the resource requests declared for all containers, not just the one in your truncated YAML file. You can easily verify this with kubectl.

First, verify the number of containers in your pod.

kubectl describe pod service-85cc4df46d-t6wc9
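If there is more than one container (an injected sidecar, for example), a jsonpath query makes the per-container requests easy to add up; the pod name is the one from the question:

    kubectl get pod service-85cc4df46d-t6wc9 \
      -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests.cpu}{"\n"}{end}'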

Then look at the description of the node via kubectl; you should see the same information the console shows.

kubectl describe node gke-default-pool-abcdefgh...

What is the difference between resource requests and limits?

You can imagine your cluster as a big square box. This is the total of your allocatable resources. When you drop a pod into the big box, Kubernetes checks whether there is empty space for the pod's requested resources (does the small box fit in the big box?). If there is enough space available, it schedules your workload on the selected node.

Resource limits are not taken into account by the scheduler. They are enforced at the kernel level with cgroups. The goal is to prevent a workload from taking all the CPU or memory of the node it is scheduled on.

If your resource requests == resource limits, then workloads cannot escape their "box" and cannot use the spare CPU/memory next to them. In other words, the resources are guaranteed for the pod.
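As a sketch, setting requests equal to limits for the container in the question would look like this (the values are only an example); Kubernetes then classifies the pod as Guaranteed QoS:

    resources:
      requests:
        memory: "100Mi"
        cpu: "50m"
      limits:
        memory: "100Mi"
        cpu: "50m"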

But if the limits are greater than your requests, this is called overcommitting resources. You are betting that the workloads on the same node will not all be fully loaded at the same time (which is generally the case).
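You can check which situation you are in from the pod's QoS class (Guaranteed when requests == limits, Burstable when limits exceed requests); the pod name is a placeholder:

    kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'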

I recommend not overcommitting the memory resource: do not let the pod escape its "box" in terms of memory, as it can lead to OOM kills.
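If a container does get killed that way, the reason shows up in the pod status, for example (pod name is a placeholder):

    kubectl get pod <pod-name> \
      -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'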

-- Yann C.
Source: StackOverflow