I have a Kubernetes cluster and everything was working fine. After some time I drained my worker nodes, reset them, and joined them back to the master, but now:
# kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
ubuntu    Ready                      master   159m   v1.14.0
ubuntu1   Ready,SchedulingDisabled   <none>   125m   v1.14.0
ubuntu2   Ready,SchedulingDisabled   <none>   96m    v1.14.0
What should I do?
To prevent a node from scheduling new pods, use:
kubectl cordon <node-name>
which will cause the node to show the status Ready,SchedulingDisabled.
To tell it to resume scheduling, use:
kubectl uncordon <node-name>
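Applied to the output in the question, a minimal sketch (assuming the node names ubuntu1 and ubuntu2 shown there):

# re-enable scheduling on both worker nodes
kubectl uncordon ubuntu1
kubectl uncordon ubuntu2

# verify that SchedulingDisabled is gone
kubectl get nodes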
More information about draining a node can be found here, and about manual node administration here.
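For the next maintenance cycle, a rough sketch of the usual drain/uncordon workflow, assuming the node name ubuntu1 from the question and the drain flags available in v1.14:

# safely evict pods and mark the node unschedulable
kubectl drain ubuntu1 --ignore-daemonsets --delete-local-data

# ... perform maintenance / kubeadm reset / kubeadm join ...

# allow scheduling on the node again once it has rejoined
kubectl uncordon ubuntu1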
I fixed it using:
kubectl uncordon <node-name>