Today I set up a Kubernetes cluster with 3 VMs running Red Hat Linux 8.0. I am able to deploy services and pods, but I fail to understand the role of the master, considering the pods are hosted on the worker nodes and not on the master. Is this correct?
My Cluster:
Serverone: Master
Serverthree: Worker Node
Serverfour: Worker Node
I deployed a sample service (with 3 replicas) from here.
I see all three pods are running, but one of them is running on serverone, which is my master node.
hello-kubernetes-594f6f475f-hkn85 Running serverone
hello-kubernetes-594f6f475f-mct2r Running serverfour
hello-kubernetes-594f6f475f-vjchd Running serverfour
Any idea why I am seeing POD running on master node?
My cluster's technical details:
NAME          STATUS     ROLES     AGE    VERSION
serverfour    Ready      worker4   40m    v1.18.3
serverone     Ready      master    3d3h   v1.18.3
serverthree   NotReady   worker3   20h    v1.18.3
Generally, Kubernetes cluster deployment tools mark the master nodes as unschedulable (via the Unschedulable flag or a NoSchedule taint), which prevents the scheduler from placing pods onto the master. But it is possible for a pod to get scheduled onto a master node when all of the below are true (see the commands after this list for how to check):
1. The node's Unschedulable flag is false. You can check this by describing the node using kubectl describe node serverone
2. The node has no taints with the NoSchedule effect, or the pod spec carries tolerations for those taints.
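To verify which of these applies to serverone and to stop new pods from landing on it, something like the following should work (a sketch, assuming a kubeadm-style cluster where the default control-plane taint key on v1.18 is node-role.kubernetes.io/master; adjust to your setup):

# Show the Unschedulable flag and any taints on the master
kubectl describe node serverone | grep -iE 'unschedulable|taints'

# Option 1: cordon the node (sets Unschedulable: true, so no new pods are scheduled there)
kubectl cordon serverone

# Option 2: re-apply the standard NoSchedule taint for master nodes
kubectl taint nodes serverone node-role.kubernetes.io/master=:NoSchedule

# Delete the pod already on the master so its ReplicaSet recreates it on a worker
kubectl delete pod hello-kubernetes-594f6f475f-hkn85

Note that cordoning or tainting only affects new scheduling decisions; the pod already running on serverone stays there until it is deleted or evicted.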
More about taints and tolerations here.
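If, on the other hand, a workload is meant to run on a tainted master, a toleration can be added to the pod spec. A minimal sketch, assuming the taint on serverone is node-role.kubernetes.io/master:NoSchedule (the pod name and image below are placeholders for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: on-master-example              # hypothetical name, not from the original post
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"   # assumption: the taint key present on serverone
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: hello
    image: nginx                       # placeholder image

A toleration only allows scheduling onto the tainted node; it does not force it. To pin a pod to the master you would additionally need a nodeSelector or node affinity.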