Minikube never starts - Error restarting cluster

5/27/2018

I'm using Arch Linux.
I have VirtualBox 5.2.12 installed.
I have minikube 0.27.0-1 installed.
I have Kubernetes v1.10.0 installed.

When I try to start minikube with sudo minikube start, I get this error:

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0527 12:58:18.929483   22672 start.go:281] Error restarting cluster:  running cmd: 
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: running command: 
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: exit status 1

I already tried starting minikube with other options, such as:

sudo minikube start --kubernetes-version v1.10.0 --bootstrapper kubeadm

sudo minikube start --bootstrapper kubeadm

sudo minikube start --vm-driver none

sudo minikube start --vm-driver virtualbox

sudo minikube start --vm-driver kvm

sudo minikube start --vm-driver kvm2

I always get the same error. Can someone help me?

-- Italo José
kubernetes
minikube

2 Answers

10/7/2019

Update:

The answer below did not work, due to what I suspect are differences in versions between my environment and the information I found, and I'm not willing to sink more time into this problem. The VM itself does start up, so if you have important information in it (e.g. other Docker containers), you can log in to the VM and extract that data before running minikube delete.
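As a rough sketch of how you might pull data out of the VM before deleting it (the docker user and the ~/.minikube key path are assumptions based on a typical VirtualBox-backed minikube setup, and the container ID and data path are placeholders for your own values):

# open a shell inside the minikube VM
minikube ssh
# inside the VM: find the container and copy the data you need to a temporary path
docker ps -a
docker cp <container-id>:/path/to/data /tmp/data
exit
# back on the host: copy the file out of the VM over SSH (assumed key path and user)
scp -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip):/tmp/data ./data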


I had the same problem. I ssh'ed into the VM and ran:

sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml

The result was: failure loading apiserver-kubelet-client certificate: the certificate has expired
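You can also check the expiry date of the certificate directly with openssl. A small sketch (the certificate location inside the minikube VM varies between minikube versions, so locate the .crt file first and substitute the path it reports):

minikube ssh
# inside the VM: locate the kubeadm-generated certificates (path varies by minikube version)
sudo find / -name apiserver-kubelet-client.crt 2>/dev/null
# print the expiry date of the certificate found above
sudo openssl x509 -noout -enddate -in <path-found-above>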

So like any good engineer, I googled that and found this:

Source: https://github.com/kubernetes/kubeadm/issues/581

If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation #206 was put in place (as a beta feature), or your certs have already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some users, not just @kachkaev, end up resorting to).

You will need to SSH into your master node. If you are using kubeadm >= 1.8, skip to step 2.

1. Update kubeadm, if needed. I was on 1.7 previously.

$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm

2. Backup old apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.

$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old

3. Generate new apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.

$ sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address <IP-address>
$ sudo kubeadm alpha phase certs apiserver-kubelet-client
$ sudo kubeadm alpha phase certs front-proxy-client

4. Backup old configuration files.

$ sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
$ sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
$ sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
$ sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old

5. Generate new configuration files. There is an important note here: if you are on AWS, you will need to explicitly pass the --node-name parameter in this request. Otherwise you will get an error like Unable to register node "ip-10-0-8-141.ec2.internal" with API server: nodes "ip-10-0-8-141.ec2.internal" is forbidden: node ip-10-0-8-141 cannot modify node ip-10-0-8-141.ec2.internal in your logs (sudo journalctl -u kubelet --all | tail), and the Master Node will report that it is Not Ready when you run kubectl get nodes.

Please be certain to replace the values passed in --apiserver-advertise-address and --node-name with the correct values for your environment.

$ sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.8.141 --node-name ip-10-0-8-141.ec2.internal
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

6. Ensure that your kubectl is looking in the right place for your config files.

$ mv .kube/config .kube/config.old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo chmod 777 $HOME/.kube/config
$ export KUBECONFIG=.kube/config

7. Reboot your master node.

$ sudo /sbin/shutdown -r now

8. Reconnect to your master node, grab your token, and verify that your Master Node is "Ready". Copy the token to your clipboard; you will need it in the next step.

$ kubectl get nodes
$ kubeadm token list

If you do not have a valid token, you can create one with:

$ kubeadm token create

The token should look something like 6dihyb.d09sbgae8ph2atjw.

9. SSH into each of the slave nodes and reconnect them to the master.

$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
$ sudo kubeadm join --token=<token> <master-ip>:<master-port> --node-name=<node-name>

10. Repeat step 9 for each connecting node. From the master node, you can verify that all slave nodes have connected and are ready with:

$ kubectl get nodes

Hopefully this gets you where you need to be, @davidcomeyne.
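As an optional sanity check after regenerating the certificates (this is not part of the quoted procedure), you can confirm the new expiry dates with openssl before rebooting, using the same paths the steps above wrote to:

# print the expiry dates of the regenerated certificates
$ sudo openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
$ sudo openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver-kubelet-client.crt
$ sudo openssl x509 -noout -enddate -in /etc/kubernetes/pki/front-proxy-client.crt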

-- Glen Pierce
Source: StackOverflow

5/28/2018

A minikube VM is usually started for simple experiments without any important payload. That's why it's much easier to recreate the minikube cluster than to try to fix it.

To delete existing minikube VM execute the following command:

minikube delete

This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
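If you also want to discard minikube's cached state on the host for a completely fresh start, you can remove the local minikube directory as well. This step is optional, and ~/.minikube is the default location, so adjust the path if you have changed it:

# optional: also remove minikube's cached images, certificates, and machine state on the host
rm -rf ~/.minikube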

Check that you have all dependencies in place and run:

minikube start

This command creates a “kubectl context” called “minikube”. This context contains the configuration to communicate with your minikube cluster. minikube sets this context as the default automatically, but if you need to switch back to it in the future, run:

kubectl config use-context minikube

Or pass the context on each command like this:

kubectl get pods --context=minikube
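To check which context kubectl is currently using, and which contexts are available, you can run:

# list all contexts; the current one is marked with an asterisk
kubectl config get-contexts
# print only the name of the current context
kubectl config current-context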

More information about minikube's command-line options can be found in the minikube documentation.

-- VAS
Source: StackOverflow