Error: validation failed: unable to recognize "": no matches for kind "EtcdCluster" in version "etcd.database.coreos.com/v1beta2"

2/1/2019

I am trying to deploy SEBA/CORD on a Kubernetes cluster, and I am confused as to why I am getting this error:

Error: validation failed: unable to recognize "": no matches for kind "EtcdCluster" in version "etcd.database.coreos.com/v1beta2"

Please let me know what other debugging I can do to help pinpoint the issue.

kubectl get crd | grep etcd | wc -l

The outcome is 3, as specified in the documentation, so everything should be in place. I'm unsure what's missing.
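One thing worth checking beyond the CRD count: the error names a specific group/version (`etcd.database.coreos.com/v1beta2`), so it may help to confirm that is the version the installed CRD actually serves, and that the API server's discovery exposes it. A sketch of that check (the CRD name below is what the etcd-operator normally registers; it may differ in your deployment):

```shell
# Print the group/version the EtcdCluster CRD serves
# (CRD name assumed from the standard etcd-operator install).
kubectl get crd etcdclusters.etcd.database.coreos.com \
  -o jsonpath='{.spec.group}/{.spec.version}{"\n"}'

# Confirm the API server's discovery actually lists that group/version;
# this is what Helm/Tiller consults when validating the manifest.
kubectl api-versions | grep etcd.database.coreos.com
```

If the first command prints something other than `etcd.database.coreos.com/v1beta2`, the chart's manifests and the installed operator version disagree.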

Output of kubectl get pods --all-namespaces -o wide:

NAMESPACE     NAME                                                              READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
default       cord-platform-etcd-operator-etcd-backup-operator-8cfdff8b7vzqq6   1/1     Running   0          8m38s   192.168.1.10    knode     <none>           <none>
default       cord-platform-etcd-operator-etcd-operator-d57f45bb6-g5wfm         1/1     Running   0          8m38s   192.168.1.11    knode     <none>           <none>
default       cord-platform-etcd-operator-etcd-restore-operator-6c5f8cf99pwhb   1/1     Running   0          8m38s   192.168.1.12    knode     <none>           <none>
default       cord-platform-kafka-0                                             1/1     Running   2          8m38s   192.168.1.8     knode     <none>           <none>
default       cord-platform-kafka-1                                             1/1     Running   0          2m15s   192.168.1.18    knode     <none>           <none>
default       cord-platform-kafka-2                                             1/1     Running   0          95s     192.168.1.19    knode     <none>           <none>
default       cord-platform-onos-6d8c8c9795-k2675                               2/2     Running   0          8m38s   192.168.1.4     knode     <none>           <none>
default       cord-platform-zookeeper-0                                         1/1     Running   0          8m38s   192.168.1.7     knode     <none>           <none>
default       cord-platform-zookeeper-1                                         1/1     Running   0          3m34s   192.168.1.16    knode     <none>           <none>
default       cord-platform-zookeeper-2                                         1/1     Running   0          3m10s   192.168.1.17    knode     <none>           <none>
default       xos-chameleon-577ddb8db-zlmww                                     1/1     Running   0          8m38s   192.168.1.6     knode     <none>           <none>
default       xos-core-76995dc468-zlz9w                                         1/1     Running   0          8m38s   192.168.1.14    knode     <none>           <none>
default       xos-db-75ccccf4cf-llfdz                                           1/1     Running   0          8m38s   192.168.1.5     knode     <none>           <none>
default       xos-gui-68977b7b4c-zwffh                                          1/1     Running   0          8m38s   192.168.1.9     knode     <none>           <none>
default       xos-tosca-868bf8f746-kx549                                        1/1     Running   0          8m38s   192.168.1.15    knode     <none>           <none>
default       xos-ws-6c9f8fb949-jdfm8                                           1/1     Running   0          8m38s   192.168.1.13    knode     <none>           <none>
kube-system   calico-node-mnfg4                                                 2/2     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   calico-node-v6ldm                                                 2/2     Running   2          27h     135.25.24.45    knode     <none>           <none>
kube-system   coredns-86c58d9df4-cvnpl                                          1/1     Running   0          6d22h   192.168.0.4     kmaster   <none>           <none>
kube-system   coredns-86c58d9df4-fx4lc                                          1/1     Running   0          6d22h   192.168.0.2     kmaster   <none>           <none>
kube-system   etcd-kmaster                                                      1/1     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster                                            1/1     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster                                   1/1     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   kube-proxy-fgr6m                                                  1/1     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   kube-proxy-x7phw                                                  1/1     Running   1          27h     135.25.24.45    knode     <none>           <none>
kube-system   kube-scheduler-kmaster                                            1/1     Running   0          6d22h   135.25.49.130   kmaster   <none>           <none>
kube-system   kubernetes-dashboard-57df4db6b-fcvtr                              1/1     Running   0          3d3h    192.168.0.5     kmaster   <none>           <none>
kube-system   kubernetes-dashboard-head-57b9585588-gvjkv                        1/1     Running   0          6d22h   192.168.0.3     kmaster   <none>           <none>
kube-system   tiller-deploy-dbb85cb99-ncmw9                                     1/1     Running   0          9m15s   192.168.1.3     knode     <none>           <none>
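Noting from the pod list that the etcd-operator pods and tiller-deploy came up only minutes apart: one possible cause (an assumption, not confirmed for this setup) is a race where the chart's `EtcdCluster` resource was validated before the operator finished registering its CRDs, so discovery had no match yet. Waiting for the CRD to reach the Established condition and then retrying the install is a common workaround for "no matches for kind":

```shell
# Block until the EtcdCluster CRD is accepted by the API server,
# then retry the helm install. CRD name assumed from the standard
# etcd-operator install.
kubectl wait --for=condition=Established \
  crd/etcdclusters.etcd.database.coreos.com --timeout=60s
```

If the wait succeeds but the install still fails, a stale discovery cache on the client side is another thing to rule out.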
-- Kevin Riordan
etcd
kubernetes
kubernetes-helm
ubuntu

0 Answers