Cluster created with kops - deploying one pod per node with a DaemonSet while avoiding the master node

2/9/2017

I am trying to deploy one pod per node. This works fine with the DaemonSet kind when the cluster is created with kube-up, but we migrated cluster creation to kops, and with kops the master node is part of the cluster.

I noticed the master node is defined with a specific label: kubernetes.io/role=master

and with a taint: scheduler.alpha.kubernetes.io/taints: [{"key":"dedicated","value":"master","effect":"NoSchedule"}]
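For reference, both can be checked on the node object itself (the node name is a placeholder):

kubectl get node nameofmaster -o yaml

The label shows up under metadata.labels and, since taints are still alpha here, the taint under metadata.annotations.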

But that does not stop a pod from being deployed on it by the DaemonSet.

So I tried to add a scheduler.alpha.kubernetes.io/affinity annotation:

- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
    annotations:
      scheduler.alpha.kubernetes.io/affinity: >
        {
          "nodeAffinity": {
            "requiredDuringSchedulingRequiredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "kubernetes.io/role",
                      "operator": "NotIn",
                      "values": ["master"]
                    }
                  ]
                }
              ]
            }
          }
        }
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
          - env:
              - name: "SERVICE_DNS"
                value: "elasticsearch-cluster"
              - name: "NODE_MASTER"
                value: "false"
            image: "essearch/ess-elasticsearch:1.7.6"
            name: elasticsearch
            imagePullPolicy: Always
            ports:
              - containerPort: 9300
                name: transport
            volumeMounts:
              - mountPath: "/usr/share/elasticsearch/data"
                name: task-pv-storage
        volumes:
          - name: task-pv-storage
            persistentVolumeClaim:
              claimName: task-pv-claim
        nodeSelector:
          minion: "true"

But it does not work. Does anyone know why? The workaround I have for now is to use a nodeSelector and add a label to the nodes that are minions only, but I would like to avoid adding a label during cluster creation because it is an extra step; if I could avoid it, that would be for the best :)
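For reference, that workaround amounts to labelling each worker node by hand (the node name is a placeholder) and matching the label with the nodeSelector shown above:

kubectl label nodes nameofminion minion=true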

EDIT:

I changed it to the following (given the answer), and I think it is right, but it does not help; I still have a pod deployed on the master:

- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "nodeAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": {
                  "nodeSelectorTerms": [
                    {
                      "matchExpressions": [
                        {
                          "key": "kubernetes.io/role",
                          "operator": "NotIn",
                          "values": ["master"]
                        }
                      ]
                    }
                  ]
                }
              }
            }
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
          - env:
              - name: "SERVICE_DNS"
                value: "elasticsearch-cluster"
              - name: "NODE_MASTER"
                value: "false"
            image: "essearch/ess-elasticsearch:1.7.6"
            name: elasticsearch
            imagePullPolicy: Always
            ports:
              - containerPort: 9300
                name: transport
            volumeMounts:
              - mountPath: "/usr/share/elasticsearch/data"
                name: task-pv-storage
        volumes:
          - name: task-pv-storage
            persistentVolumeClaim:
              claimName: task-pv-claim
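To check where the DaemonSet pods actually land, something like this lists each pod together with its node:

kubectl get pods -n ess -o wide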
-- Emilien Brigand
kubernetes

1 Answer

2/9/2017

Just move the annotation into the pod template: section (under its metadata:). The scheduler reads that annotation from the pod, not from the DaemonSet object, so it only takes effect on the pods the DaemonSet creates.

Alternatively, taint the master node (then you can remove the annotation):

kubectl taint nodes nameofmaster dedicated=master:NoSchedule

I suggest you read up on taints and tolerations.
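For example, with the alpha taints used here, a pod that must run on the tainted master would opt back in with a tolerations annotation on its pod template; a sketch, assuming the dedicated=master taint from above:

scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'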

-- Janos Lenart
Source: StackOverflow