autoremove pods from tainted nodes

8/10/2018

In my quest to automate k8s deployments, I'm starting nodes that are supposed to automatically configure themselves as either masters or plain worker nodes.

This is accomplished by joining the node to the cluster and running a DaemonSet that is scheduled only on nodes with specific hostnames (the masters); it taints the node, labels it as a master, and writes the contents of /etc/kubernetes.
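For context, the taint/label step inside that DaemonSet boils down to commands like these (a minimal sketch; the exact taint key and label name are assumptions, your DaemonSet may use different ones):

    # Hypothetical taint key/label; adjust to whatever the DaemonSet actually uses.
    NODE=$(kubectl get nodes -l kubernetes.io/hostname=$(hostname) --no-headers --output=custom-columns=NAME:.metadata.name)
    # NoSchedule keeps new non-tolerating pods off the node, but it does NOT evict pods already running there.
    kubectl taint nodes "$NODE" node-role.kubernetes.io/master=:NoSchedule
    kubectl label nodes "$NODE" node-role.kubernetes.io/master=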

My issue is that before the taint and label are applied, the node is, from the cluster's point of view, a plain worker node, so the scheduler will launch on it whatever it sees fit.

This is not desirable, mainly because it also schedules kube2iam, which is supposed to run only on worker nodes. This in turn blocks the IAM role attached to the masters, which breaks other things.

Is there any way to force the cluster to evict the pods that are not supposed to be on a master?

EDIT

Currently I'm doing this at the end of the script:

    # Resolve the cluster-side name of the current node.
    K8S_NAME=$(kubectl get nodes -l kubernetes.io/hostname=$(hostname) --no-headers --output=custom-columns=NAME:.metadata.name)
    # Delete every pod currently scheduled on this node, namespace by namespace.
    kubectl get pods --all-namespaces --field-selector spec.nodeName=$K8S_NAME --no-headers \
      | gawk '{print "-n "$1" "$2}' \
      | xargs -n 3 kubectl delete pod

But I'm looking for something better.

-- cristi
kubernetes

1 Answer

8/10/2018

You can run the kubectl drain command against the node name, along with whatever other options you need, which will be a lot simpler.
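For example, something along these lines (a sketch reusing the $K8S_NAME variable from the question; the flags shown are standard kubectl drain options, adjust to your setup):

    # Cordon the node and evict everything that is not managed by a DaemonSet.
    kubectl drain "$K8S_NAME" --ignore-daemonsets --delete-local-data
    # After the taint and label have been applied, allow scheduling again.
    kubectl uncordon "$K8S_NAME"

Note that drain leaves the node cordoned, so the uncordon step is needed once the master setup has finished.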

-- Harshal Shah
Source: StackOverflow