While this error message is just a symptom, my problem is real.
My bare-metal cluster ran into an expired-certificates situation. I managed to renew all certificates, but after a restart most pods wouldn't work. The pod that seems responsible is the flannel one, which is stuck in CrashLoopBackOff.
The logs for the flannel pod show:
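If it helps, I can check the renewed certificates on disk and post their validity dates, e.g. with something like this (assuming the default kubeadm locations under /etc/kubernetes/pki):

# validity window of the renewed control-plane certificates (default kubeadm paths)
$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates
$ sudo openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -dates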
I1120 22:24:00.541277 1 main.go:475] Determining IP address of default interface
I1120 22:24:00.541546 1 main.go:488] Using interface with name eth0 and address xxx.xxx.xxx.xxx
I1120 22:24:00.541565 1 main.go:505] Defaulting external address to interface address (xxx.xxx.xxx.xxx)
E1120 22:24:03.572745 1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-dmrzh': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-amd64-dmrzh: dial tcp 10.96.0.1:443: getsockopt: network is unreachable
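10.96.0.1 is the ClusterIP of the kubernetes service, so if it is useful I can run checks like these from the host and post the output (assuming kube-proxy is in its default iptables mode):

# is there a route that covers the service VIP at all?
$ ip route get 10.96.0.1
# has kube-proxy programmed the NAT rules for the kubernetes service?
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
# the endpoint behind the VIP should be the apiserver on this host
$ kubectl get endpoints kubernetes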
On the host there is no flannel interface anymore, and no systemd unit file for it either (checked roughly as shown below).
Running flanneld manually yields this output:
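Roughly, these are the checks behind that statement (the interface and unit names are the usual flannel defaults, so treat them as assumptions):

# no flannel.1 / cni0 interfaces on the host
$ ip -o link show | grep -E 'flannel|cni'
# no flanneld systemd unit either
$ systemctl list-unit-files | grep -i flannel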
I1120 20:12:15.923966 26361 main.go:446] Determining IP address of default interface
I1120 20:12:15.924171 26361 main.go:459] Using interface with name eth0 and address xxx.xxx.xxx.xxx
I1120 20:12:15.924187 26361 main.go:476] Defaulting external address to interface address (xxx.xxx.xxx.xxx)
E1120 20:12:15.924285 26361 main.go:223] Failed to create SubnetManager: asn1: structure error: tags don't match (16 vs {class:0 tag:2 length:1 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} tbsCertificate @2
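That asn1 "tags don't match ... tbsCertificate" part reads as if whatever flanneld is handed as a certificate does not parse as one at all. If useful, I can run checks along these lines and post the output (the service account name "flannel" is what the standard manifest creates, so that part is an assumption):

# which token secret does the flannel service account reference?
$ kubectl -n kube-system get serviceaccount flannel -o yaml
# does the ca.crt inside that token secret still parse as x509?
# (<flannel-token-secret> = the secret name from the serviceaccount output above)
$ kubectl -n kube-system get secret <flannel-token-secret> -o jsonpath='{.data.ca\.crt}' \
    | base64 -d | openssl x509 -noout -dates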
The available pieces of evidence point in several directions, but each lead I follow ends up pointing somewhere else, so I need a hint as to which part is actually causing the problem.
If any information is missing here, please ask; I can surely provide it.
Key specs:
- host: Ubuntu 18.04
- kubeadm 1.13.2
Thank you and best regards, scones
UPDATE1
$ k get cs,po,svc
NAME                                 STATUS    MESSAGE              ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health": "true"}

NAME                                          READY   STATUS             RESTARTS   AGE
pod/cert-manager-6dc5c68468-hkb6j             0/1     Error              51         89d
pod/coredns-86c58d9df4-dtdxq                  0/1     Completed          23         304d
pod/coredns-86c58d9df4-k7h7m                  0/1     Completed          23         304d
pod/etcd-redacted                             1/1     Running            2506       304d
pod/hostpath-provisioner-5c6754fbd4-ckvnp     0/1     Error              12         222d
pod/kube-apiserver-redacted                   1/1     Running            1907       304d
pod/kube-controller-manager-redacted          1/1     Running            2682       304d
pod/kube-flannel-ds-amd64-dmrzh               0/1     CrashLoopBackOff   338        372d
pod/kube-proxy-q8jgs                          1/1     Running            15         304d
pod/kube-scheduler-redacted                   1/1     Running            2694       304d
pod/metrics-metrics-server-65cd865c9f-dbh85   0/1     Error              2658       120d
pod/tiller-deploy-865b88d89-8ftzs             0/1     Error              170        305d

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/kube-dns                 ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   372d
service/metrics-metrics-server   ClusterIP   10.97.186.19     <none>        443/TCP         120d
service/tiller-deploy            ClusterIP   10.103.184.226   <none>        44134/TCP       354d
Unfortunately I don't recall how I installed flannel a year ago. The kubectl version is also 1.13.2, as is the cluster.
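As for the flannel installation: most likely it was the standard manifest from the flannel repository, i.e. something along these lines (an assumption; I can't reconstruct the exact version or URL I used back then):

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml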
The post linked by @hanx is about renewing certificates, not about a broken network overlay, so it is not applicable here.