etcd 3rd pod not getting scheduled on master node due to peers expecting old cert

3/23/2019

I need a hint to resolve an etcd cert issue on two etcd server pods.

I have 2 of 3 etcd server pods running, and these are reporting that the 3rd pod's x.509 cert is valid for etc.test1.com and not for etc.test2.com.

So my assumption is that the two running etcd server pods are somehow still expecting the old cert DNS name rather than the new cert DNS name, which is etc.test2.com.

This is causing the 3rd pod to never be accepted as a valid peer, so the pod never gets scheduled on the node.

Any hint on how I can reset the two pods that are expecting the old cert so that they start expecting the new one?
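
In case it helps to show what I mean by "reset", this is roughly what I was picturing: check which peer URL the healthy members have stored for the 3rd member, and point it at the new DNS name if it is stale. This is only a sketch; the endpoints, cert paths, and member ID are guesses based on my setup and the log below.

    # List the members as the healthy pods see them, including the peer URL
    # each one is dialed on (cert paths are guesses for my setup).
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
      member list

    # If the stored peer URL for member 44ffe8e24fa23c10 still points at the
    # old name, update it to the new DNS name (again, just a sketch).
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
      member update 44ffe8e24fa23c10 --peer-urls=https://etcd-b.internal.test2.com:2380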

Below is the error from the etcd server pods that are running.

rafthttp: health check for peer 44ffe8e24fa23c10 could not connect:         x509: certificate is valid for etcd-a.internal.test1.com, etcd-b.internal.test1.com, etcd-c.internal.test1.com, etcd-events-a.internal.test1.com, etcd-events-b.internal.test1.com, etcd-events-c.internal.test1.com, localhost, not etcd-b.internal.test2.com
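
To double-check which names are actually in the cert that peer presents, I would try something like the openssl check below (the hostname and peer port 2380 are assumptions taken from the log; the mutual-TLS handshake may fail, but the server cert is usually still printed).

    # Grab the cert served on the peer port and list its SANs.
    echo | openssl s_client -connect etcd-b.internal.test2.com:2380 -showcerts 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

    # Or inspect the cert file directly on the node, if you know where it lives
    # (this path is a guess; it varies by how the cluster was provisioned).
    openssl x509 -in /etc/etcd/peer.crt -noout -text | grep -A1 'Subject Alternative Name'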

Also, will the cluster work with a single etcd server pod, or does it need to have 3?

-- fma abd
etcd3
kubernetes

0 Answers