How to delete all the contents from a Kubernetes node?
Contents include deployments, replica sets, etc. I tried to delete deployments separately, but Kubernetes recreates all the pods again.
Is there any way to delete all the replica sets present on a node?
I tried so many variations to delete old pods from tutorials, including everything here.
What finally worked for me was:
kubectl delete replicaset --all
Deleting them one at a time didn't seem to work; it was only with the --all flag that all pods were deleted without being recreated.
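If pods keep reappearing even after that, the usual reason is that a Deployment still exists and keeps recreating its replica sets; a quick way to check, and if needed remove the owning Deployment, is sketched below (the deployment name is only a placeholder):
# See which controllers are still present and could be recreating pods
kubectl get deployments,replicasets,pods --all-namespaces
# Deleting the owning Deployment also removes the replica sets and pods it manages
kubectl delete deployment <deployment-name>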
If you are testing things, the easiest way would be
kubectl delete deployment --all
Although if you are using minikube, the easiest would probably be to delete the machine and start again with a fresh node:
minikube delete
minikube start
If we are talking about a production cluster, Kubernetes has a built-in feature to drain a node of the cluster, removing all the objects from that node safely.
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node. Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
Note: By default, kubectl drain will ignore certain system pods on the node that cannot be killed; see the kubectl drain documentation for more details.
When kubectl drain returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.
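If your applications need availability guarantees during the eviction, you would declare a PodDisruptionBudget ahead of time; a minimal sketch, where the budget name, label selector and threshold are all illustrative assumptions:
# Keep at least 2 pods matching app=my-app available during voluntary disruptions such as drain
kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=2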
First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
kubectl get nodes
Next, tell Kubernetes to drain the node:
kubectl drain <node name>
Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). drain waits for graceful termination, so you should not operate on the machine until the command completes.
If you leave the node in the cluster during the maintenance operation, you need to run
kubectl uncordon <node name>
afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
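Putting the steps together, a minimal maintenance sketch could look like this (the node name is a placeholder, and the --ignore-daemonsets flag is an assumption for clusters running DaemonSet pods, since those cannot be evicted):
# Find the node you want to empty
kubectl get nodes
# Evict all pods from it; DaemonSet-managed pods are skipped rather than evicted
kubectl drain <node name> --ignore-daemonsets
# ... perform maintenance, or power off / delete the machine ...
# If the node stays in the cluster, allow scheduling on it again
kubectl uncordon <node name>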
Please note that if there are any pods that are not managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force, as mentioned in the docs.
kubectl drain <node name> --force
Kubernetes provides the Namespace object for isolation and separation of concerns. Therefore, it is recommended to create all of the k8s resource objects (Deployments, ReplicaSets, Pods, Services and others) in a custom namespace.
Now, if you want to remove all of the relevant and related k8s resources, you just need to delete the namespace, which will remove all of these resources.
kubectl create namespace custom-namespace
kubectl create -f deployment.yaml --namespace=custom-namespace
kubectl delete namespaces custom-namespace
I have attached a link for further research.
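As a quick sanity check that the cascade worked, you can list what lives in the namespace before and after the delete (reusing the illustrative namespace name from above):
# List everything the namespace delete will remove (Deployments, ReplicaSets, Pods, Services, ...)
kubectl get all --namespace=custom-namespace
# Running it again after the delete returns no resources, since everything was removed with the namespace
kubectl get all --namespace=custom-namespace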
# Delete the minikube cluster entirely
minikube delete
# Remove leftover local minikube state and configuration
rm -rf ~/.minikube
# Start again with a fresh node
minikube start