minikube stop doesn't stop the pods after sudo minikube start --vm-driver none. kube-apiserver still running

2/18/2020

I use minikube v1.6.2, kubectl 1.17.

I start minikube without Virtualbox, with:

sudo minikube start --vm-driver none 

Now, to stop it, I do:

sudo minikube stop
minikube stop # I don't know which one is correct, so I run both

but, after that, when I do:

kubectl get po

I still get the pods listing. The only way to stop it is to actually reboot my machine.

Why is this happening, and how do I fix it?

-- Juliatzin
kubernetes

1 Answer

2/18/2020

When used with --vm-driver=none, minikube stop does not clean up the pods. As mentioned here:

When minikube starts without a hypervisor, it installs a local kubelet service on your host machine, which is important to know for later.

Right now it seems that minikube start is the only command aware of --vm-driver=none. Running minikube stop keeps producing errors related to docker-machine, and as a result none of the Kubernetes containers are terminated, nor is the kubelet service stopped.

So if you wish to actually terminate minikube, you will need to run service kubelet stop and then make sure no k8s containers remain in the output of docker ps.
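Spelled out, the manual teardown is two steps: stop the host kubelet, then clear the leftover containers. A sketch under the assumptions above (a kubelet service unit plus the Docker CLI); the DRY_RUN guard is hypothetical, added here so the script only prints each command by default instead of executing it:

```shell
#!/bin/sh
# Manual teardown sketch for a --vm-driver=none cluster.
# DRY_RUN defaults to 1 (safe): each command is printed, not executed.
# Set DRY_RUN=0 on a real host to actually run the teardown.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"            # show what would be run
    else
        "$@"                   # really run it
    fi
}

run sudo service kubelet stop                               # stop the host kubelet
run sh -c 'docker stop $(docker ps -q --filter name=k8s)'   # stop the k8s containers
run sh -c 'docker rm $(docker ps -aq --filter name=k8s)'    # remove them
```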

If you want an overview of the none (bare-metal) driver, you can find it here.

Also, as a workaround, you can stop and remove all Docker containers that have 'k8s' in their name by running docker stop $(docker ps -q --filter name=k8s) and then docker rm $(docker ps -aq --filter name=k8s).
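Note that in a POSIX shell the inner command needs the $(…) command-substitution form: its output becomes the outer command's argument list, so every matching container id is handed to docker stop at once. A tiny stand-alone illustration, using placeholder ids instead of live docker ps output:

```shell
# Placeholder for what `docker ps -q --filter name=k8s` would print
# (one container id per line); abc123/def456 are made-up ids.
ids=$(printf '%s\n' abc123 def456)

# Unquoted expansion word-splits the ids into separate arguments,
# exactly as they would be passed to `docker stop`.
echo stop $ids
```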

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow