How to reserve certain worker nodes for a namespace

10/31/2019

I would like to reserve some worker nodes for a namespace. I have seen these notes on Stack Overflow and Medium:

How to assign a namespace to certain nodes?

https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076

I understand we can use taints and a nodeSelector to achieve this. My question is: if people get to know the details of the nodeSelector or taint, how can we prevent them from deploying pods onto these dedicated worker nodes?

thank you

-- Honord
kubernetes
taint

2 Answers

10/31/2019

To accomplish what you need, you basically have to use taints. Let's suppose you have a Kubernetes cluster with one master and two worker nodes:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
knode01      Ready    <none>   8d    v1.16.2
knode02      Ready    <none>   8d    v1.16.2
kubemaster   Ready    master   8d    v1.16.2

As an example, I'll set up knode01 as prod and knode02 as dev.

$ kubectl taint nodes knode01 key=prod:NoSchedule
$ kubectl taint nodes knode02 key=dev:NoSchedule
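
To confirm the taints were applied, you can check the Taints field on each node:

$ kubectl describe node knode01 | grep Taints
$ kubectl describe node knode02 | grep Taints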

To run a pod on these nodes, we have to specify a toleration in the spec section of the YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
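
Assuming the manifest above is saved as pod1.yaml (the file name is just an example), it can be applied and the scheduling result checked like this:

$ kubectl apply -f pod1.yaml
$ kubectl get pod pod1 -o wide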

This pod (pod1) will only be able to run on knode02, because it tolerates only the dev taint. If we want to run it on prod, the tolerations should look like this:

  tolerations:
  - key: "key"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"

Since we only have two worker nodes and both are tainted for prod or dev, a pod created without any toleration will stay in the Pending state:

$ kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
pod0         1/1     Running   0          21m   192.168.25.156   knode01   <none>           <none>
pod1         1/1     Running   0          20m   192.168.32.83    knode02   <none>           <none>
pod2         1/1     Running   0          18m   192.168.25.157   knode01   <none>           <none>
pod3         1/1     Running   0          17m   192.168.32.84    knode02   <none>           <none>
shell-demo   0/1     Pending   0          16m   <none>           <none>    <none>           <none>
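
To see why a pod such as shell-demo is stuck in Pending, inspect its events; they will indicate that no node's taints were tolerated (the exact wording depends on the Kubernetes version):

$ kubectl describe pod shell-demo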

To remove a taint:

$ kubectl taint nodes knode02 key:NoSchedule-
-- mWatney
Source: StackOverflow

10/31/2019

This is how it can be done:

  1. Add a label, say ns=reserved, to a specific worker node.
  2. Add a taint to that node and matching tolerations to your pods so that only the intended pods are scheduled onto this worker node.
  3. Define RBAC roles and role bindings in that namespace to control what other users can do (a sketch of all three steps follows below).
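
A minimal sketch of those three steps, assuming a namespace called team-a, the label/taint key ns with value reserved, and a user named alice (all of these names are illustrative):

$ kubectl create namespace team-a
$ kubectl label nodes knode01 ns=reserved
$ kubectl taint nodes knode01 ns=reserved:NoSchedule

A pod in that namespace then needs both a nodeSelector (so it only lands on the reserved node) and a toleration (so the taint does not repel it):

apiVersion: v1
kind: Pod
metadata:
  name: reserved-pod
  namespace: team-a
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    ns: reserved
  tolerations:
  - key: "ns"
    operator: "Equal"
    value: "reserved"
    effect: "NoSchedule"

Finally, a Role and RoleBinding scoped to the namespace restrict what its users may do there, for example:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io

The nodeSelector/toleration pair keeps the namespace's workloads on the reserved node, while the Role/RoleBinding controls who can create resources in that namespace.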
-- P Ekambaram
Source: StackOverflow