I've created a Kubernetes deployment, but there are additional pods running that I'd like to delete.
I see no need to run the dashboard container. I'd like to remove it to free up CPU resources.
How can I disable this container from starting up? Preferably from the deployment config.
Essentially the following pod:
kubectl get pods --all-namespaces | grep "dashboard"
kube-system kubernetes-dashboard-490794276-sb6qs 1/1 Running 1 3d
Additional information:
Output of kubectl --namespace kube-system get deployment:
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heapster-v1.3.0        1         1         1            1           3d
kube-dns               2         2         2            2           3d
kube-dns-autoscaler    1         1         1            1           3d
kubernetes-dashboard   1         1         1            1           11m
l7-default-backend     1         1         1            1           3d
Output of kubectl --namespace kube-system get rs:
NAME                             DESIRED   CURRENT   READY   AGE
heapster-v1.3.0-191291410        1         1         1       3d
heapster-v1.3.0-3272732411       0         0         0       3d
heapster-v1.3.0-3742215525       0         0         0       3d
kube-dns-1829567597              2         2         2       3d
kube-dns-autoscaler-2501648610   1         1         1       3d
kubernetes-dashboard-490794276   1         1         1       12m
l7-default-backend-3574702981    1         1         1       3d
For me the easiest way is to find the YAML you deployed it with and simply run:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/alternative.yaml
Replace the URL with your own YAML. This is the cleanest way to remove it, because exactly what you deployed gets deleted.
Just delete the Deployment; all the related pods will be terminated automatically.
For a clean removal you need to delete quite a few objects. I'll assume the dashboard is in the kube-system namespace.
Run this to see how many there are:
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
If the output is empty, double-check your dashboard's namespace with the command kubectl get namespaces
At the time of writing, I removed everything with:
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
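The seven deletes above can also be driven from a small loop over "type name" pairs. This is just a sketch of the same cleanup, assuming the kube-system namespace and the resource names listed above; it prints the commands with echo so they can be reviewed before anything is actually deleted:

```shell
# Generate the dashboard cleanup commands from a list of "type name" pairs.
# Remove the leading 'echo' to actually delete the objects.
ns=kube-system
for target in \
  "deployment kubernetes-dashboard" \
  "service kubernetes-dashboard" \
  "role kubernetes-dashboard-minimal" \
  "rolebinding kubernetes-dashboard-minimal" \
  "sa kubernetes-dashboard" \
  "secret kubernetes-dashboard-certs" \
  "secret kubernetes-dashboard-key-holder"
do
  # $target is left unquoted on purpose so it splits into "type" and "name"
  echo kubectl delete $target --namespace=$ns
done
```
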
None of these answers worked for me, because they all assume the namespace is kube-system, which is not always true. So first find the namespace:
$ kubectl get deployments -A
NAMESPACE              NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
default                nginx-deployment            3/3     3            3           39m
kube-system            coredns                     2/2     2            2           93m
kubernetes-dashboard   dashboard-metrics-scraper   1/1     1            1           12m
kubernetes-dashboard   kubernetes-dashboard        1/1     1            1           12m
The namespace is in the first column (NAMESPACE). Then:
$ kubectl delete deployment kubernetes-dashboard --namespace=kubernetes-dashboard
$ kubectl delete deployment dashboard-metrics-scraper --namespace=kubernetes-dashboard
Do the same for services (if any):
$ kubectl get service -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1      <none>        443/TCP                  102m
default                nginx-service               NodePort    10.96.31.151   <none>        80:31634/TCP             49m
kube-system            kube-dns                    ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   102m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.102.22.35   <none>        8000/TCP                 22m
Then delete any dashboard services:
$ kubectl delete service kubernetes-dashboard --namespace=kubernetes-dashboard
$ kubectl delete service dashboard-metrics-scraper --namespace=kubernetes-dashboard
Then finally the service account and secrets:
$ kubectl delete sa kubernetes-dashboard --namespace=kubernetes-dashboard
$ kubectl delete secret kubernetes-dashboard-certs --namespace=kubernetes-dashboard
$ kubectl delete secret kubernetes-dashboard-key-holder --namespace=kubernetes-dashboard
UPDATE MAY 2020:
Thanks to Lee Richardson for his comment ;)
They have changed the organisation of the files in the repo, as well as the command in the Kubernetes manual, so the kubectl delete command now needs to be:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
ORIGINAL POST:
As said before, you can delete the deployment to remove the pods too, running this:
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
But if you want to clean up all the dashboard-related objects, you can simply run kubectl delete against the manifest file from the official Kubernetes manual:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Using a label selector:
kubectl --namespace=kube-system delete deployment,service,role,rolebinding,sa,secret -l k8s-app=kubernetes-dashboard
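Before running that selector-based delete for real, it can be worth previewing what it matches. The sketch below only prints the command, with kubectl's client-side dry-run flag appended (available on recent kubectl versions); drop the echo to run it against a live cluster:

```shell
# Print the label-selector delete with a client-side dry run appended,
# so the command can be reviewed (and previewed) before executing it.
echo kubectl --namespace=kube-system delete \
  deployment,service,role,rolebinding,sa,secret \
  -l k8s-app=kubernetes-dashboard --dry-run=client
```
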
Simply go with kubectl --namespace kube-system delete deployment kubernetes-dashboard and you'll have no more dashboard in your cluster.
kubectl --namespace=kube-system edit deployment kubernetes-dashboard
And set replicas: 0
This seems to work for the dashboard.
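The same effect as editing replicas: 0 by hand can be had with kubectl scale, which sets the replica count directly. A sketch, assuming the Deployment is named kubernetes-dashboard in kube-system; the commands are printed with echo here since no cluster is assumed (0 stops the pods, 1 brings the dashboard back):

```shell
# Scale the dashboard Deployment to 0 (stop its pods) or back to 1,
# without opening an editor. Drop 'echo' to run for real.
ns=kube-system
for replicas in 0 1; do
  echo kubectl --namespace=$ns scale deployment kubernetes-dashboard --replicas=$replicas
done
```
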
kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)])
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_delete/
I used "minikube start" to create the cluster, then "minikube dashboard" to create the dashboard. Finally, "minikube config set dashboard false" stopped and deleted the dashboard resources (including the service, deployment, ...).
'minikube addons disable dashboard' worked for me, using v1.6.2.