Removing default CPU request and limits on GCP Kubernetes

9/19/2019

Kubernetes on Google Cloud Platform configures a default CPU request and limit.

I make use of DaemonSets, and DaemonSet pods should use as much CPU as possible.

Manually increasing the upper limit is possible, but the upper bound has to be reconfigured whenever new nodes are added, and it has to be set well below what is actually available on the node so that rolling updates can still schedule pods.

This requires a lot of manual work, and some resources simply sit unused most of the time. Is there a way to completely remove the default CPU limit so that pods can use all available CPUs?

-- Laurent
google-cloud-platform
google-kubernetes-engine
kubernetes

4 Answers

9/20/2019

GKE, by default, creates a LimitRange object named limits in the default namespace that looks like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - defaultRequest:
      cpu: 100m
    type: Container

So, if you want to change this, you can either edit it:

kubectl edit limitrange limits

Or you can delete it altogether:

kubectl delete limitrange limits

Note: the policies in the LimitRange objects are enforced by the LimitRanger admission controller which is enabled by default in GKE.
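If you want to confirm that the default is gone after editing or deleting the LimitRange, one quick way is to create a throwaway pod without any resources and look at what was injected; the pod name and image below are just placeholders:

# Create a test pod without specifying resources
kubectl run limits-test --image=nginx --restart=Never

# Show the resources section that was injected at admission time
kubectl get pod limits-test -o jsonpath='{.spec.containers[0].resources}'

# An empty result ({}) means no default CPU request was applied
kubectl delete pod limits-test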

-- weibeld
Source: StackOverflow

9/19/2019

A LimitRange is a policy to constrain resources by Pod or Container in a namespace.

A limit range, defined by a LimitRange object, provides constraints that can:

  • Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.

  • Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.

  • Enforce a ratio between request and limit for a resource in a namespace.

  • Set default request/limit for compute resources in a namespace and automatically inject them into Containers at runtime.

You need to find the LimitRange resource in your namespace and remove the spec.limits.default.cpu and spec.limits.defaultRequest.cpu entries that are defined (or simply delete the LimitRange to remove all constraints), as sketched below.
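For example, a quick way to find and inspect the LimitRange (the object name and namespace vary per cluster; on GKE the default one is called limits in the default namespace):

# List LimitRange objects across all namespaces
kubectl get limitrange --all-namespaces

# Show the defaults it injects into containers
kubectl describe limitrange limits -n default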

-- Eduardo Baitello
Source: StackOverflow

9/20/2019

Resource limits can be configured in two ways.

At object level:

kubectl edit limitrange limits

This object is created by default and sets a default CPU request of 100m (1/10 of a CPU). Note that a container exceeding a CPU limit is throttled, not killed; only exceeding a memory limit gets the container OOM-killed.


At manifest level: using a StatefulSet, DaemonSet, etc., through a YAML file, resources are configured under

spec.containers.resources

It looks like this:

spec:
  containers:
  - resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi

As mentioned, you can modify the configuration or simply delete it to remove the limitations.
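Since values set explicitly in the pod spec take precedence over the namespace defaults, another option for the DaemonSet case is to declare the CPU request yourself and simply omit a CPU limit. A rough sketch (the names, image, and request value below are only placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds
spec:
  selector:
    matchLabels:
      app: example-ds
  template:
    metadata:
      labels:
        app: example-ds
    spec:
      containers:
      - name: worker
        image: nginx
        resources:
          requests:
            cpu: 100m   # explicit request overrides the namespace default
          # no cpu limit is set, so the container can use idle CPU on the node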


However, there are reasons why these limits have been put in place. I found a video from a Googler talking about it, take a look! [1]

-- Bruno
Source: StackOverflow

9/19/2019

On top of the LimitRange mentioned by Eduardo Baitello, you should also look out for admission controllers, which can intercept requests to the Kubernetes API and modify them (e.g. by adding limits and other defaults).

-- Alassane Ndiaye
Source: StackOverflow