Kubernetes kubeadm reset error - unable to reset

5/28/2020

I initialized Kubernetes using kubeadm, and now when I try to reset using kubeadm reset, I am getting the following error. I searched several forums but couldn't find any answers.

> "level":"warn","ts":"2020-05-28T11:57:52.940+0200","caller":"clientv3/retry_interceptor.go:61","msg":"retrying
> o                                                                     
> f unary invoker
> failed","target":"endpoint://client-e6d5f25b-0ed2-400f-b4d7-2ccabb09a838/192.168.178.200:2379","a
> ttempt":0,"error":"rpc error: code = Unknown desc = etcdserver:
> re-configuration failed due to not enough started                     
> members"}

The master node status is showing as NotReady, and I have not been able to reset the network plugin (weave):

    ubuntu@ubuntu-nuc-masternode:~$ kubectl get nodes
    NAME         STATUS                        ROLES    AGE   VERSION
    ubuntu-nuc   NotReady,SchedulingDisabled   master   20h   v1.18.3

I tried forcing the reset, but it hasn't worked either. The forced attempt was roughly the following (assuming the standard -f/--force flag, which only skips the interactive confirmation):
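    sudo kubeadm reset --force

Any help is much appreciated.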

-- IT_novice
kubeadm
kubernetes

1 Answer

5/28/2020

This seems to be the reported issue kubeadm reset takes more than 50 seconds to retry deleting the last etcd member, tracked as kubernetes/kubeadm#2144.

A fix was committed on May 28th: kubeadm: skip removing last etcd member in reset phase. From the PR description:

What type of PR is this?
/kind bug

What this PR does / why we need it:
If this is the last etcd member of the cluster, it cannot be removed due to "not enough started members". Skip it as the cluster will be destroyed in the next phase, otherwise the retries with exponential backoff will take more than 50 seconds to proceed.

Which issue(s) this PR fixes:

Fixes kubernetes/kubeadm#2144

Does this PR introduce a user-facing change?:

kubeadm: during "reset" do not remove the only remaining stacked etcd member from the cluster and just proceed with the cleanup of the local etcd storage.
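
Until a kubeadm release containing that fix is available, a possible workaround (a sketch only, assuming the reset phases as they exist in kubeadm v1.18; phase names and flags can differ between versions) is to run the reset phases individually, skip the remove-etcd-member phase that triggers the long retries, and wipe the local etcd data yourself:

    # Run the reset phases one at a time, skipping remove-etcd-member,
    # which is the phase that hits "not enough started members".
    sudo kubeadm reset phase preflight
    sudo kubeadm reset phase update-cluster-status
    sudo kubeadm reset phase cleanup-node

    # The skipped phase only removes the etcd member; since the whole
    # cluster is being torn down anyway, delete the local etcd data.
    sudo rm -rf /var/lib/etcd

    # kubeadm reset does not clean up CNI configuration; remove it
    # manually to fully reset the network plugin (weave) as well.
    sudo rm -rf /etc/cni/net.d

After that, kubeadm init should start from a clean state. Note that kubeadm reset also leaves iptables/IPVS rules in place, as its own output warns, so flush those too if you need a completely clean node.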
-- Crou
Source: StackOverflow