My Kubernetes cluster looks as follows:
k get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
As you can see, the cluster consists of 4 masters and 3 worker nodes.
These are the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
Why are the pods greeter-service-v1-8d97f9bcd-gnsvp and helloweb-77c9476f6d-7f76v running on master nodes?
By default, there is no restriction preventing a Pod from being scheduled on a master node unless the node carries a taint such as node-role.kubernetes.io/master:NoSchedule.
You can verify whether there is a taint on a master node with kubectl describe node k8s-1,
or with kubectl get node k8s-1 -o jsonpath='{.spec.taints[]}'
If you want to add such a taint, run:
kubectl taint node k8s-1 node-role.kubernetes.io/master="":NoSchedule
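If you later want the node to accept regular workloads again, the taint can be removed with kubectl's trailing-dash syntax; a sketch, assuming the taint key and effect above:

```shell
# Remove the NoSchedule taint from k8s-1 (note the trailing "-")
kubectl taint node k8s-1 node-role.kubernetes.io/master:NoSchedule-
```
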
After adding the taint, no new Pods will be scheduled on this node unless they carry a matching toleration in their spec.
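If you do want a specific Pod to keep running on a tainted master, its spec needs a matching toleration. A minimal sketch (the Pod name and image are placeholders, not from the original cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-on-master      # hypothetical name
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"         # tolerate the taint regardless of its value
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx               # placeholder image
```

Note that a toleration only permits scheduling on the tainted node; it does not force it. To pin a Pod to masters you would additionally need a nodeSelector or node affinity.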
Read more about Taints and Tolerations in the Kubernetes documentation.