kubectl get componentstatus showing extra etcd instances

7/30/2018

I have a single-node Kubernetes cluster running. Everything is working fine, but when I run "kubectl get cs" (kubectl get componentstatus) it shows two instances of etcd. I am running only a single etcd instance.

[root@master01 vagrant]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

[root@master01 vagrant]# etcdctl member list
19ef3eced66f4ae3: name=master01 peerURLs=http://10.0.0.10:2380 clientURLs=http://0.0.0.0:2379 isLeader=true

[root@master01 vagrant]# etcdctl cluster-health
member 19ef3eced66f4ae3 is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
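For reference, the {"health": "true"} message seen in the componentstatus output is what etcd's /health endpoint returns. The same check can be run by hand against the client URL from the setup above; the curl call below is an illustration, not taken from the original post:

curl http://10.0.0.10:2379/health
# {"health": "true"}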

Etcd is running as a Docker container. In the /etc/systemd/system/etcd.service file, only a single etcd cluster member is configured (http://10.0.0.10:2380):

/usr/local/bin/etcd \
  --name master01 \
  --data-dir /etcd-data \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://10.0.0.10:2380 \
  --initial-cluster master01=http://10.0.0.10:2380 \
  --initial-cluster-token my-token \
  --initial-cluster-state new \
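For completeness, below is a minimal sketch of how such a command is commonly wrapped in a systemd unit when etcd runs as a Docker container. The image tag, host data path, and unit layout are assumptions for illustration and are not taken from the original setup:

# /etc/systemd/system/etcd.service (sketch; image tag and host data path are assumptions)
[Unit]
Description=etcd (single member, running in Docker)

[Service]
ExecStart=/usr/bin/docker run --rm --net=host \
  -v /var/lib/etcd:/etcd-data \
  quay.io/coreos/etcd:v3.3 \
  /usr/local/bin/etcd \
    --name master01 \
    --data-dir /etcd-data \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://0.0.0.0:2379 \
    --listen-peer-urls http://0.0.0.0:2380 \
    --initial-advertise-peer-urls http://10.0.0.10:2380 \
    --initial-cluster master01=http://10.0.0.10:2380 \
    --initial-cluster-token my-token \
    --initial-cluster-state new
Restart=always

[Install]
WantedBy=multi-user.target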

Also, in the API server manifest /etc/kubernetes/manifests/api-srv.yaml, the --etcd-servers flag is set:

- --etcd-servers=http://10.0.0.10:2379,

[root@master01 manifests]# netstat -ntulp | grep etcd
tcp6       0      0 :::2379      :::*      LISTEN      31109/etcd
tcp6       0      0 :::2380      :::*      LISTEN      31109/etcd

Does anyone know why "kubectl get cs" shows etcd-0 and etcd-1? Any help is appreciated.

-- Jyothish Kumar S
coreos
etcd
kubectl
kubernetes

1 Answer

8/1/2018

Although @Jyothish Kumar S found the root cause on his own and fixed the issue, it's good practice to post an answer that will be available to anyone who faces the same problem in the future.

The issue came from a misconfiguration in the API server config file /etc/kubernetes/manifests/api-srv.yaml, where --etcd-servers was set incorrectly. All flags for kube-apiserver, along with their descriptions, may be found here. The problem was the trailing comma in the --etcd-servers=http://10.0.0.10:2379, line: --etcd-servers takes a comma-separated list, so the empty entry after the comma was interpreted as an additional etcd server record (http://:::2379), which is why the "kubectl get cs" output showed two etcd entries instead of one. Pay attention to this when configuring etcd.
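For reference, the fix is simply to drop the trailing comma. Below is a minimal sketch of how the corrected flag would look in the manifest; the file path follows the question, and the surrounding fields are omitted:

# /etc/kubernetes/manifests/api-srv.yaml (excerpt; other fields omitted)
spec:
  containers:
  - command:
    - kube-apiserver
    # comma-separated list of etcd client URLs; no trailing comma,
    # otherwise the empty entry is treated as an extra etcd server
    - --etcd-servers=http://10.0.0.10:2379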

-- VKR
Source: StackOverflow