I'm currently dealing with the following situation.
Whenever I create a multi-node cluster using Minikube and then stop and restart it, it loses track of the "middle" nodes: e.g. I create 4 nodes (m1, m2, m3, m4) and for some reason Minikube loses track of m2 and m3.
Scenario:
Let's say I want to create a Kubernetes cluster with Vault, so I create a profile named "vault-cluster" with 4 nodes (1 control plane and 3 worker nodes):
$ minikube start --nodes 4 -p vault-cluster
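Right after creation all four nodes are there; as a quick sanity check (exact output omitted here) they can be confirmed with, for example:
$ minikube node list -p vault-cluster
$ kubectl get nodes -o wide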
Then I stop the cluster using:
$ minikube stop -p vault-cluster
Expected behaviour:
Output:
✋ Stopping node "vault-cluster" ...
✋ Stopping node "vault-cluster-m02" ...
✋ Stopping node "vault-cluster-m03" ...
✋ Stopping node "vault-cluster-m04" ...
🛑 4 nodes stopped.
And when I start it again:
Output:
$ minikube start -p vault-cluster
😄 [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
✨ Using the virtualbox driver based on existing profile
🎉 minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
👍 Starting control plane node vault-cluster in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image kubernetesui/dashboard:v2.1.0
▪ Using image kubernetesui/metrics-scraper:v1.0.4
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
👍 Starting node vault-cluster-m02 in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster-m02" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120
▪ no_proxy=192.168.99.120
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
🔎 Verifying Kubernetes components...
👍 Starting node vault-cluster-m03 in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster-m03" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120,192.168.99.121
▪ no_proxy=192.168.99.120,192.168.99.121
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
▪ env NO_PROXY=192.168.99.120,192.168.99.121
🔎 Verifying Kubernetes components...
👍 Starting node vault-cluster-m04 in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster-m04" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
▪ no_proxy=192.168.99.120,192.168.99.121,192.168.99.122
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
▪ env NO_PROXY=192.168.99.120,192.168.99.121
▪ env NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "vault-cluster" cluster and "default" namespace by default
Actual behaviour:
$ minikube stop -p vault-cluster
✋ Stopping node "vault-cluster" ...
✋ Stopping node "vault-cluster-m04" ...
✋ Stopping node "vault-cluster-m04" ...
✋ Stopping node "vault-cluster-m04" ...
And this is what happens when I try to start the cluster again:
$ minikube start -p vault-cluster
😄 [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node vault-cluster in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image kubernetesui/metrics-scraper:v1.0.4
▪ Using image kubernetesui/dashboard:v2.1.0
🌟 Enabled addons: default-storageclass, dashboard
👍 Starting node vault-cluster-m04 in cluster vault-cluster
🔄 Restarting existing virtualbox VM for "vault-cluster-m04" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120
▪ no_proxy=192.168.99.120
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
🔎 Verifying Kubernetes components...
👍 Starting node vault-cluster-m04 in cluster vault-cluster
🏃 Updating the running virtualbox "vault-cluster-m04" VM ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120,192.168.99.123
▪ no_proxy=192.168.99.120,192.168.99.123
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
▪ env NO_PROXY=192.168.99.120,192.168.99.123
🔎 Verifying Kubernetes components...
👍 Starting node vault-cluster-m04 in cluster vault-cluster
🏃 Updating the running virtualbox "vault-cluster-m04" VM ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.120,192.168.99.123
▪ no_proxy=192.168.99.120,192.168.99.123
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ env NO_PROXY=192.168.99.120
▪ env NO_PROXY=192.168.99.120,192.168.99.123
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "vault-cluster" cluster and "default" namespace by default
This is the output when I list the nodes:
$ minikube node list -p vault-cluster
vault-cluster 192.168.99.120
vault-cluster-m04 192.168.99.123
vault-cluster-m04 192.168.99.123
vault-cluster-m04 192.168.99.123
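For what it's worth, minikube keeps its record of a profile's nodes in the profile config file, which should live under the .minikube directory (on my Windows setup I assume something like %USERPROFILE%\.minikube\profiles\vault-cluster\config.json; from Git Bash it can be read as below). I would guess the duplicated m04 entries also show up in the Nodes list there, but I haven't verified that:
$ cat ~/.minikube/profiles/vault-cluster/config.json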
Any ideas what could be wrong?
Environment:
Windows 10 Pro
VirtualBox 6.1
$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
There seems to be some issue with minikube v1.20.0; it also happens on Linux with the kvm2 driver (my setup), so it is not OS- or driver-specific.
It also happens on minikube v1.21.0, although there it doesn't show up until the cluster is stopped a second time. After the first stop and start everything seems to work fine, but after the second stop I see exactly what you see.
If you want, you can create an issue on the minikube GitHub repo and hope the developers fix it.
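Until that is fixed, the only reliable way I know of to recover a profile in this state is to delete it and recreate it (which of course wipes anything deployed in that cluster):
$ minikube delete -p vault-cluster
$ minikube start --nodes 4 -p vault-cluster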