Minikube won't work after Ubuntu upgrade to 19.10

10/24/2019

I just upgraded Ubuntu from 19.04 to 19.10.

Now Minikube won't start.

So, after a while, I just removed Minikube completely with:

minikube stop; minikube delete
docker stop $(docker ps -aq)
rm -r ~/.kube ~/.minikube
sudo rm /usr/local/bin/localkube /usr/local/bin/minikube
systemctl stop '*kubelet*.mount'
sudo rm -rf /etc/kubernetes/
docker system prune -af --volumes

Now I want to reinstall everything, but I can't get it working.

I download minikube and move it to /usr/local/bin:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube && sudo mv ./minikube /usr/local/bin

I start minikube:

sudo minikube start --vm-driver=none

Everything is OK; minikube starts successfully.

~ sudo minikube start --vm-driver=none
  minikube v1.4.0 on Ubuntu 19.10
  Running on localhost (CPUs=4, Memory=7847MB, Disk=280664MB) ...
ℹ️   OS release is Ubuntu 19.10
  Preparing Kubernetes v1.16.0 on Docker 18.09.6 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  Pulling images ...
  Launching Kubernetes ... 
  Configuring local host environment ...

⚠️  The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️  For more information, see:
  https://minikube.sigs.k8s.io/docs/reference/drivers/none/

⚠️  kubectl and minikube configuration will be stored in /root
⚠️  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
  Done! kubectl is now configured to use "minikube"

I finally do:

   ~ sudo mv /root/.kube /root/.minikube $HOME
➜  ~ sudo chown -R $USER $HOME/.kube $HOME/.minikube

But when I want to check pods:

kubectl get po

I get:

➜  ~ kubectl get po
Error in configuration: 
* unable to read client-cert /root/.minikube/client.crt for minikube due to open /root/.minikube/client.crt: permission denied
* unable to read client-key /root/.minikube/client.key for minikube due to open /root/.minikube/client.key: permission denied
* unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: permission denied

And if using sudo:

 ~ sudo kubectl get po
[sudo] password for julien: 
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Here is the result of minikube logs:

https://gist.github.com/xoco70/8a9c7042238400e370796cb23cb11c88

What should I do?

EDIT:

After a reboot, when starting minikube with:

sudo minikube start --vm-driver=none

I get:

Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-ethtool]: ethtool not found in system path
    [WARNING FileExisting-socat]: socat not found in system path
    [WARNING Hostname]: hostname "minikube" could not be reached
    [WARNING Hostname]: hostname "minikube": lookup minikube on 8.8.8.8:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING Port-10250]: Port 10250 is in use
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-8443]: Port 8443 is in use
    [ERROR Port-10251]: Port 10251 is in use
    [ERROR Port-10252]: Port 10252 is in use
    [ERROR Port-2379]: Port 2379 is in use
    [ERROR Port-2380]: Port 2380 is in use
    [ERROR DirAvailable--var-lib-minikube-etcd]: /var/lib/minikube/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  https://github.com/kubernetes/minikube/issues/new/choose
❌  Problems detected in kube-apiserver [3e0d8c59345d]:
    I1025 07:09:56.349120       1 log.go:172] http: TLS handshake error from 127.0.0.1:46254: remote error: tls: bad certificate
    I1025 07:09:56.353714       1 log.go:172] http: TLS handshake error from 127.0.0.1:46082: remote error: tls: bad certificate
    I1025 07:09:56.353790       1 log.go:172] http: TLS handshake error from 127.0.0.1:46080: remote error: tls: bad certificate
-- Juliatzin
kubernetes
minikube

2 Answers

10/25/2019

Okay, so I reproduced this and got the same errors with minikube after upgrading Ubuntu to 19.10.

How I initiated the cluster on 19.04:

#Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl

#Install minikube. Make sure to check for latest version
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

#Install Docker
curl -fsSL get.docker.com -o get-docker.sh && chmod +x get-docker.sh
sh get-docker.sh
sudo usermod -aG docker $USER


export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_WANTREPORTERRORPROMPT=false
export MINIKUBE_HOME=$HOME
export CHANGE_MINIKUBE_NONE_USER=true
export KUBECONFIG=$HOME/.kube/config
sudo minikube start --vm-driver none
sudo chown -R $USER $HOME/.kube $HOME/.minikube
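
One thing to keep in mind: these exports only apply to the current shell session, so after a reboot or in a new terminal they are gone. Assuming bash and ~/.bashrc, a minimal sketch to persist them:

# Append the minikube-related variables to ~/.bashrc so new shells pick them up
cat <<'EOF' >> ~/.bashrc
export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_WANTREPORTERRORPROMPT=false
export MINIKUBE_HOME=$HOME
export CHANGE_MINIKUBE_NONE_USER=true
export KUBECONFIG=$HOME/.kube/config
EOF
source ~/.bashrc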


vkr@ubuntu-minikube:~$ docker version
Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 01:00:44 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:59:17 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683


vkr@ubuntu-minikube:~$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-cv8c5           1/1     Running   0          2m25s
kube-system   coredns-5644d7b6d9-gk725           1/1     Running   0          2m25s
kube-system   etcd-minikube                      1/1     Running   0          75s
kube-system   kube-addon-manager-minikube        1/1     Running   0          75s
kube-system   kube-apiserver-minikube            1/1     Running   0          98s
kube-system   kube-controller-manager-minikube   1/1     Running   0          88s
kube-system   kube-proxy-59jp9                   1/1     Running   0          2m25s
kube-system   kube-scheduler-minikube            1/1     Running   0          82s
kube-system   storage-provisioner                1/1     Running   0          2m24s

After upgrading to 19.10 and a clean minikube install:

vkr@ubuntu-minikube:~$ kubectl get all -A
Error in configuration: 
* unable to read client-cert /root/.minikube/client.crt for minikube due to open /root/.minikube/client.crt: permission denied
* unable to read client-key /root/.minikube/client.key for minikube due to open /root/.minikube/client.key: permission denied
* unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: permission denied

There are a lot of discussions stating that you should use root for the none driver, since minikube runs the Kubernetes system components directly on your machine:

Running minikube as normal user

Can't start minikube-- permissions

https://minikube.sigs.k8s.io/docs/reference/drivers/none/:

Usage The none driver requires minikube to be run as root, until #3760 can be addressed

However, here is a small trick for you.

1) Wipe everything

vkr@ubuntu-minikube:~$ minikube stop
✋  Stopping "minikube" in none ...
  "minikube" stopped.
vkr@ubuntu-minikube:~$ minikube delete
  Uninstalling Kubernetes v1.16.0 using kubeadm ...
  Deleting "minikube" in none ...
  The "minikube" cluster has been deleted.
vkr@ubuntu-minikube:~$ rm -rf ~/.kube
vkr@ubuntu-minikube:~$ rm -rf ~/.minikube
vkr@ubuntu-minikube:~$ sudo rm -rf /var/lib/minikube
vkr@ubuntu-minikube:~$ sudo rm -rf /etc/kubernetes
vkr@ubuntu-minikube:~$ sudo rm -rf /root/.minikube
vkr@ubuntu-minikube:~$ sudo rm -rf /usr/local/bin/minikube

2) Install minikube, export variables, check

vkr@ubuntu-minikube:~$  curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
vkr@ubuntu-minikube:~$   export MINIKUBE_WANTUPDATENOTIFICATION=false
vkr@ubuntu-minikube:~$  export MINIKUBE_WANTREPORTERRORPROMPT=false
vkr@ubuntu-minikube:~$  export MINIKUBE_HOME=$HOME
vkr@ubuntu-minikube:~$  export CHANGE_MINIKUBE_NONE_USER=true
vkr@ubuntu-minikube:~$  export KUBECONFIG=$HOME/.kube/config
vkr@ubuntu-minikube:~$  sudo minikube start --vm-driver none

  minikube v1.4.0 on Ubuntu 19.10
  Running on localhost (CPUs=2, Memory=7458MB, Disk=9749MB) ...
ℹ️   OS release is Ubuntu 19.10
  Preparing Kubernetes v1.16.0 on Docker 19.03.3 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  Downloading kubelet v1.16.0
  Downloading kubeadm v1.16.0
  Pulling images ...
  Launching Kubernetes ... 
  Configuring local host environment ...

⚠️  The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️  For more information, see:
  https://minikube.sigs.k8s.io/docs/reference/drivers/none/

⚠️  kubectl and minikube configuration will be stored in /root
⚠️  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
  Done! kubectl is now configured to use "minikube"

What I do next is copy everything from /root/.kube and /root/.minikube to $HOME, grant my user permissions on them, and finally edit $HOME/.kube/config to point the cert paths at the new location ($HOME/.minikube/ instead of /root/.minikube/). Right now it looks like this:

vkr@ubuntu-minikube:~$ cat $KUBECONFIG
apiVersion: v1
...
    certificate-authority: /root/.minikube/ca.crt
...
    client-certificate: /root/.minikube/client.crt
    client-key: /root/.minikube/client.key

Let's do it :)

vkr@ubuntu-minikube:~$ sudo cp -r /root/.kube  /root/.minikube $HOME
vkr@ubuntu-minikube:~$ sudo chown -R $USER $HOME/.kube
vkr@ubuntu-minikube:~$ sudo chown -R $USER $HOME/.minikube
vkr@ubuntu-minikube:~$ sed 's/root/home\/vkr/g' $KUBECONFIG > tmp; mv tmp $KUBECONFIG
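
The sed above is tied to my username (vkr) and rewrites every occurrence of root in the file. Assuming GNU sed and that the only /root paths in the config are the certificate paths, a slightly more targeted variant could be:

# Rewrite only the /root/.minikube prefix to the current user's home directory
sed -i "s|/root/\.minikube|$HOME/.minikube|g" "$KUBECONFIG"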

And finally, the result:

vkr@ubuntu-minikube:~$ kubectl get all -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-5644d7b6d9-bt897           1/1     Running   0          81m
kube-system   pod/coredns-5644d7b6d9-hkm5t           1/1     Running   0          81m
kube-system   pod/etcd-minikube                      1/1     Running   0          80m
kube-system   pod/kube-addon-manager-minikube        1/1     Running   0          80m
kube-system   pod/kube-apiserver-minikube            1/1     Running   0          80m
kube-system   pod/kube-controller-manager-minikube   1/1     Running   0          80m
kube-system   pod/kube-proxy-wm52p                   1/1     Running   0          81m
kube-system   pod/kube-scheduler-minikube            1/1     Running   0          80m
kube-system   pod/storage-provisioner                1/1     Running   0          81m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  81m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   81m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           beta.kubernetes.io/os=linux   81m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           81m

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-5644d7b6d9   2         2         2       81m
-- VKR
Source: StackOverflow

10/24/2019

A bit of a confusing question, with some mismatches in your story.

You are asking how to "fix Minikube and keep my cluster", yet at the same time you mention you deleted your local cluster but are still having the issue.

I can only assume here that you finally broke the cluster and there is no need to recover it.

Look, you say that despite the fact that your cluster hasn't started (the Error restarting cluster: exit status 1 and Sorry that minikube crashed errors), the command sudo kubectl get po -A still works, and that you have some issues doing the same under a regular user.

Here is the question: what does "sudo kubectl get po -A still works" mean? Do you see pods? Can you see other objects?

If so, you should first of all look in the etcd direction. In addition, you have etcd errors during startup:

: running command: sudo ln -s /data/minikube /var/lib/minikube/etcd
 output: ln: failed to create symbolic link '/var/lib/minikube/etcd/minikube': File exists
: running command: sudo ln -s /data/minikube /var/lib/minikube/etcd
.: exit status 1

Etcd is the primary key-value datastore for k8s; it stores and replicates all of the Kubernetes cluster state, so I assume the problem is most probably there.
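
As a quick, concrete check (the path comes from the error above), you could look at whether stale etcd data from the previous cluster is still on disk and, if you no longer need that state, clear it so kubeadm can re-initialise etcd:

# See whether old etcd data is left over from the previous cluster
sudo ls -la /var/lib/minikube/etcd

# If that state is disposable, remove it so the next start begins with a clean etcd
sudo rm -rf /var/lib/minikube/etcd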

It would also be good to know how exactly you deleted your cluster. I bet you did minikube delete and forgot to manually delete the leftover configs. The proper way:

minikube stop
minikube delete
rm -rf ~/.kube
rm -rf ~/.minikube
sudo rm -rf /var/lib/minikube
sudo rm /var/lib/kubeadm.yaml
sudo rm -rf /etc/kubernetes

Remove everything and install from scratch.
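
And since the none driver is started with sudo, root's copies of the config may be left behind as well; assuming you don't need them, removing those too avoids stale certificates:

sudo rm -rf /root/.kube /root/.minikube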

-- VKR
Source: StackOverflow