Kubernetes Cluster Behaviour

6/6/2020

Today I set up a Kubernetes cluster with 3 VMs running Red Hat Linux 8.0. I am able to deploy services and pods, but I fail to understand the role of the master, considering that pods are hosted on the worker nodes and not on the master. Is this correct?

My Cluster:

Serverone: Master

Serverthree: Worker Node

Serverfour: Worker Node

I deployed a sample service (with 3 replicas) from here.
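Roughly, the manifest I applied looks like this (a sketch from memory; the paulbouwer/hello-kubernetes image and port are assumptions based on the generated pod names, not copied from the linked page):

# Sketch of the sample deployment (image name/tag assumed)
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
EOF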

I see all three pods are running, but one of them is running on serverone, which is my master node.

hello-kubernetes-594f6f475f-hkn85            Running   serverone

hello-kubernetes-594f6f475f-mct2r            Running   serverfour

hello-kubernetes-594f6f475f-vjchd            Running   serverfour

Any idea why I am seeing a pod running on the master node?

My cluster details:

NAME          STATUS     ROLES     AGE    VERSION

serverfour    Ready      worker4   40m    v1.18.3

serverone     Ready      master    3d3h   v1.18.3

serverthree   NotReady   worker3   20h    v1.18.3


-- Solutions Architect
cluster-computing
kubernetes
master
nodes

1 Answer

6/6/2020

Generally, Kubernetes cluster deployment tools mark the master nodes as unschedulable or, more commonly, taint them with a NoSchedule taint, which prevents the scheduler from placing pods onto the master. It is still possible for a pod to get scheduled onto a master node when condition 1 below holds together with either condition 2 or condition 3:

  1. The master node has Unschedulable: false (i.e. it is not cordoned). You can check this by describing the node with kubectl describe node serverone.
  2. It does not have any taints with the NoSchedule effect, or
  3. It does have such taints, but the pod spec declares tolerations for those taints (see the commands sketched below).
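For example, you can check all three conditions from the command line (a sketch; serverone is the master node name from your output, and the pod name is one of yours):

# 1. Is the master cordoned? (empty output means spec.unschedulable is not set)
kubectl get node serverone -o jsonpath='{.spec.unschedulable}{"\n"}'

# 2. What taints does the master carry?
kubectl describe node serverone | grep -i taints

# 3. Does the pod tolerate those taints?
kubectl get pod hello-kubernetes-594f6f475f-hkn85 -o jsonpath='{.spec.tolerations}{"\n"}'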

More about taints and tolerations here
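If you want to keep workloads off the master going forward, one approach is to re-apply the standard master taint or cordon the node (a sketch; adjust the node and pod names to yours):

# Re-apply the standard control-plane taint so new pods avoid the master
kubectl taint nodes serverone node-role.kubernetes.io/master=:NoSchedule

# Or simply cordon it (sets Unschedulable: true)
kubectl cordon serverone

# NoSchedule does not evict running pods; delete the pod so its replacement
# gets scheduled onto a worker node
kubectl delete pod hello-kubernetes-594f6f475f-hkn85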

-- Arghya Sadhu
Source: StackOverflow