Kubernetes application running on master - DaemonSet

7/5/2017

Is there any way to avoid running an application deployed as a DaemonSet on the master node?
I have seen that this is the expected behavior, but I would like to prevent it somehow.

Regular pods are not scheduled on the master, but DaemonSet pods are.

If so, is it possible to set this in the YAML file (via some parameter, etc.)?

 kubectl create -f mydaemon.yml

NAME                       READY     STATUS            RESTARTS   AGE         IP        NODE
logspri-4zwl4              1/1       Running           0          <invalid>   X.X.X.X   k8s-master-e7c355e2-0
logspri-kld2w              1/1       Running           0          <invalid>   X.X.X.X   k8s-agent-e7c355e2-0
logspri-lksrh              1/1       Running           0          <invalid>   X.X.X.X   k8s-agent-e7c355e2-1

I would like to prevent my pod from running on k8s-master-e7c355e2-0.

I have tried :

annotations:
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "nodeAffinity": {
        "requiredDuringSchedulingRequiredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "kubernetes.io/role",
                  "operator": "NotIn",
                  "values": ["master"]
                }
              ]
            }
          ]
        }
      }
    }

I also tried applying the following (as suggested), but it doesn't work:

kubectl get nodes
NAME                    STATUS                     AGE       VERSION
k8s-agent-e7c355e2-0    Ready                      49d       v1.5.3
k8s-agent-e7c355e2-1    Ready                      49d       v1.5.3
k8s-master-e7c355e2-0   Ready,SchedulingDisabled   49d       v1.5.3

Shall I perform:

VirtualBox:~/elk/logspout$ kubectl taint node k8s-master-e7c355e2-0 k8s-master-e7c355e2-0/ismaster=:NoSchedule
node "k8s-master-e7c355e2-0" tainted

Even though the master seems to have been tainted, I see that the application still runs on it. kubectl describe node k8s-master-e7c355e2-0 shows:

Role:           
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/instance-type=Standard_D2
            beta.kubernetes.io/os=linux
            failure-domain.beta.kubernetes.io/region=northeurope
            failure-domain.beta.kubernetes.io/zone=0
            kubernetes.io/hostname=k8s-master-e7c355e2-0
Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:         <none>
CreationTimestamp:  Wed, 17 May 2017 14:38:06 +0200
Phase:          
Conditions:

What is wrong? Could you give me the right command to run?

The same problem is reported here without an apparent solution:

kubectl taint nodes nameofmaster dedicated=master:NoSchedule

Thanks Prisco

-- Prisco
configuration
daemon
docker
kubectl
kubernetes

2 Answers

7/5/2017

From https://github.com/kubernetes/kubernetes/issues/29108, you can add a taint flag to the kubelet on your master node so that even DaemonSet pods are not scheduled there:

   --register-with-taints=node.alpha.kubernetes.io/ismaster=:NoSchedule

You will need to restart the kubelet on that node.
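For example, on a systemd-managed node you could apply the flag with a drop-in file. This is only a sketch: the drop-in path is hypothetical, and it assumes your kubelet unit file already expands a $KUBELET_EXTRA_ARGS variable in its ExecStart line.

```shell
# Hypothetical drop-in path; adjust for your distribution.
# Assumes the kubelet unit references $KUBELET_EXTRA_ARGS in ExecStart.
mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/20-register-taints.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--register-with-taints=node.alpha.kubernetes.io/ismaster=:NoSchedule"
EOF

# Reload systemd and restart the kubelet so it re-registers with the taint.
systemctl daemon-reload
systemctl restart kubelet
```

Note that --register-with-taints only applies the taint when the node registers, so it must be set before the kubelet first registers (or the node must re-register).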

-- Javier Salmeron
Source: StackOverflow

7/6/2017

From "Even if it seems that the master is tainted I see that the application is always on master", I'm not certain whether the DaemonSet was created before or after the taint.

If you tainted first and then created the DaemonSet, the pods should be repelled from the tainted node without further configuration. Otherwise, the existing DaemonSet pod will not terminate automatically, because a NoSchedule taint only blocks new scheduling. To evict existing pods immediately, a NoExecute taint is needed.
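A command sketch for this case (the key dedicated=master is just an illustration; any key/value pair works as long as your own pods don't tolerate it):

```shell
# NoExecute both repels new pods and evicts already-running pods
# that do not tolerate the taint.
kubectl taint nodes k8s-master-e7c355e2-0 dedicated=master:NoExecute
```

You can verify the taint took effect with kubectl describe node k8s-master-e7c355e2-0 (the Taints field should no longer read <none>).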

From here:

Normally, if a taint with effect NoExecute is added to a node, then any pods that do not tolerate the taint will be evicted immediately, and any pods that do tolerate the taint will never be evicted. However, a toleration with NoExecute effect can specify an optional tolerationSeconds field that dictates how long the pod will stay bound to the node after the taint is added.
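As an illustration of that tolerationSeconds behavior, here is a hypothetical pod-spec fragment (key and value match the example taint above, not anything from the question):

```yaml
# Pod spec fragment: this pod may stay on a node carrying the
# dedicated=master:NoExecute taint for 60 seconds, then it is evicted.
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "master"
  effect: "NoExecute"
  tolerationSeconds: 60
```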

-- Eugene Chow
Source: StackOverflow