I have deployed a StatefulSet app on my Kubernetes cluster. It has persistent volumes and a single replica. The issue I face is that when I turn off the node where the StatefulSet's pod is running, the pod does not restart on a new node. It keeps waiting for the node to come back up and eventually restarts on the same node. Is there some setting in the StatefulSet spec that I am missing? I have followed the example in the Kubernetes docs to set this up: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components
What am I missing?
This is by design. When a node goes "down", the master cannot tell whether it was a safe down (deliberate shutdown) or a network partition. So the PVC stays attached to that node, and the master marks the pods on that node as `Unknown`.
By default, Kubernetes always tries to schedule the pod on the node where its PVC is provisioned, which is why the pod always comes back up on the same node when deleted.
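You can see this pinning on the PersistentVolume itself. A rough sketch (the claim name `www-web-0` follows the naming in the StatefulSet guide you linked, and the PV name is a placeholder; substitute your own):

```shell
# Find the PersistentVolume bound to the claim (claim name is a placeholder).
kubectl get pvc www-web-0 -o jsonpath='{.spec.volumeName}'

# Describe that PV; for local or topology-constrained volumes, the
# "Node Affinity" section shows which node the volume is tied to.
kubectl describe pv pvc-0a1b2c3d
```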
The PVC moves to another node only when you:

1. cordon the node,
2. drain the node, and
3. delete the node from the cluster.

Only after the delete does the master know the node no longer exists in the cluster, so it moves the PVC to another node and the pod comes up there.
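Those steps look roughly like this with kubectl (the node name `node-1` is a placeholder; substitute your own):

```shell
# 1. Mark the node unschedulable so no new pods land on it.
kubectl cordon node-1

# 2. Evict the pods running on it. DaemonSet pods cannot be evicted, so
#    they are ignored; note that emptyDir data on the node is lost.
#    (On older kubectl versions the flag is --delete-local-data.)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# 3. Remove the node object so the control plane stops waiting for it.
kubectl delete node node-1
```

After step 3, the StatefulSet controller can recreate the pod, and its PVC can be attached on another node.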