I'm running a Kubernetes cluster on bare-metal servers, and cluster nodes are added and removed regularly. But when a node is removed, Kubernetes does not remove it from the node list automatically, and kubectl get nodes keeps showing NotReady nodes. Is there an automated way to achieve this? I want Kubernetes to handle nodes the same way it handles pods.
To remove a node, follow the steps below.

Run on the master:

# kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force
# kubectl delete node <node-name>

Note that kubectl drain cordons the node itself, so a separate kubectl cordon step is not required. Also, the --delete-local-data flag is deprecated; it was renamed to --delete-emptydir-data.
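Since you asked for an automated way: there is no built-in controller that deletes NotReady nodes, but the steps above can be scripted and run periodically (e.g. from a cron job on the master). A minimal sketch, assuming kubectl is already configured for the cluster; the function name not_ready_nodes is my own, not a kubectl command:

```shell
#!/bin/sh
# Hypothetical cleanup sketch: drain and delete every node currently NotReady.
# Run from cron or a systemd timer on a machine with cluster-admin kubectl access.

# Print the names of nodes whose STATUS column contains NotReady.
not_ready_nodes() {
    kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}'
}

for node in $(not_ready_nodes); do
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data --force
    kubectl delete node "$node"
done
```

Be careful with this approach: a node can be NotReady because of a transient network blip, so in practice you may want to delete only nodes that have been NotReady for some grace period, or filter on a label identifying machines you know were decommissioned.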