Kubernetes: Exclude Node from default scheduling

3/23/2017

Is it possible to create a node-pool that the scheduler will ignore by default but that can be targeted by node-selector?

-- tback
google-kubernetes-engine
kubernetes

3 Answers

5/17/2017

For those on Kubernetes 1.6 without alpha support enabled, you'll need to use the new beta-level fields instead of the alpha annotations. The equivalent of the accepted answer is the Pod spec below, based on this article in the docs: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values: ["my-pool"]
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "my-pool"
    effect: "NoSchedule"
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0
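
To try this out, apply the manifest and confirm which node the Pod landed on (the file name here is a placeholder):

```shell
# Save the manifest above as with-node-affinity.yaml, then create the Pod
kubectl apply -f with-node-affinity.yaml

# The NODE column should show a node from the dedicated pool
kubectl get pod with-node-affinity -o wide
```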
-- Aaron
Source: StackOverflow

3/23/2017

If your node pool has a static size, or at least is not autoscaling, then this is easy to accomplish.

First, taint the nodes in that pool:

kubectl taint node \
  `kubectl get node -l cloud.google.com/gke-nodepool=my-pool -o name` \
  dedicated=my-pool:NoSchedule
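
You can verify the taint was applied (the label selector matches the GKE node pool from the command above):

```shell
# List each node in the pool together with its taints;
# every node should show dedicated=my-pool:NoSchedule
kubectl get node -l cloud.google.com/gke-nodepool=my-pool \
  -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```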

Kubernetes version >= 1.6

Then add affinity and tolerations values under spec: in your Pod(templates) that need to be able to run on these nodes:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values: ["my-pool"]
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "my-pool"
    effect: "NoSchedule"
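
If the Pods come from a Deployment rather than being created directly, note that these fields belong under the Pod template, i.e. spec.template.spec. A sketch (the Deployment name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1beta1   # Deployment API group on Kubernetes 1.6
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Scheduling constraints go on the Pod template, not the Deployment itself
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: dedicated
                operator: In
                values: ["my-pool"]
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "my-pool"
        effect: "NoSchedule"
      containers:
      - name: my-app
        image: gcr.io/google_containers/pause:2.0
```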

Pre 1.6

Then add these annotations to your Pod(templates) that need to be able to run on these nodes:

annotations:
  scheduler.alpha.kubernetes.io/tolerations: >
    [{"key":"dedicated", "value":"my-pool"}]
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "dedicated",
                  "operator": "In",
                  "values": ["my-pool"]
                }
              ]
            }
          ]
        }
      }
    }

See the design doc for more information.

Autoscaling group of nodes

You need to add the --register-with-taints parameter to kubelet:

Register the node with the given list of taints (comma separated <key>=<value>:<effect>). No-op if register-node is false.

In another answer I gave some examples of how to persist that setting. GKE now also has specific support for tainting node pools.
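
A minimal sketch of the flag, using the same taint as above (how flags are passed to kubelet, e.g. via a systemd unit or instance startup script, depends on your environment; the remaining flags are elided):

```shell
# Nodes register with the taint already in place, so autoscaled
# replacements stay excluded from default scheduling too
kubelet --register-with-taints=dedicated=my-pool:NoSchedule
```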

-- Janos Lenart
Source: StackOverflow

11/3/2017

GKE now supports node taints natively. Taints are applied to every node in the pool at creation time and are persisted, so you no longer need to run the kubectl taint command yourself. See https://cloud.google.com/container-engine/docs/node-taints for more information.
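
For example, a new pool can be created with the taint applied by GKE itself (cluster and pool names are placeholders; at the time of writing the --node-taints flag may require the beta gcloud track depending on your SDK version):

```shell
gcloud beta container node-pools create my-pool \
  --cluster=my-cluster \
  --node-taints=dedicated=my-pool:NoSchedule
```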

-- Ajit Kumar
Source: StackOverflow