Unable to init Kubernetes master

4/2/2018

I am trying to set up Kubernetes on a Raspberry Pi 3 with the latest HypriotOS, running Docker 17.03 and kubeadm/kubectl/kubelet 1.9. The Pi is connected over WiFi.

Everything seems normal until I try to run kubeadm init --apiserver-advertise-address=...

Maybe someone more knowledgeable can confirm, but it seems to me that it fails to pull the API server container and then fails to perform API calls.
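
For what it's worth, the way I'd check whether the image was pulled and why the container keeps restarting is roughly this (the container ID is a placeholder, not from the logs below):

    # Is the kube-apiserver image present locally?
    docker images | grep kube-apiserver
    # Find the (exited) API server container and look at its output
    docker ps -a | grep kube-apiserver
    docker logs <apiserver-container-id>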

I'm not sure what the problem could be; I would appreciate any help.

    Apr  2 07:24:19 docker5 kubelet[28562]: E0402 07:24:19.316996   28562 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count
    Apr  2 07:24:19 docker5 kubelet[28562]: W0402 07:24:19.341134   28562 status_manager.go:459] Failed to get status for pod "kube-apiserver-docker5_kube-system(0be6a3a13f3b3c604447ca6f55a6c407)": Get https://192.168.0.104:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-docker5: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:19 docker5 kubelet[28562]: W0402 07:24:19.342988   28562 status_manager.go:459] Failed to get status for pod "kube-controller-manager-docker5_kube-system(d92e00dc78c1cb276248a9695158c4c1)": Get https://192.168.0.104:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-docker5: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:19 docker5 kubelet[28562]: I0402 07:24:19.671521   28562 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-arm:v1.9.6 Command:[kube-apiserver --requestheader-username-headers=X-Remote-User --advertise-address=192.168.0.104 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --secure-port=6443 --insecure-port=0 --allow-privileged=true --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --requestheader-allowed-names=front-proxy-client --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --requestheader-group-headers=X-Remote-Group --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/sa.pub --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --enable-bootstrap-token-auth=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --client-ca-file=/etc/kubernetes/pki/ca.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --requestheader-extra-headers-prefix=X-Remote-Extra- --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.0.104,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMes
    Apr  2 07:24:19 docker5 kubelet[28562]: sagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
    Apr  2 07:24:19 docker5 kubelet[28562]: I0402 07:24:19.672072   28562 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-docker5_kube-system(0be6a3a13f3b3c604447ca6f55a6c407)"
    Apr  2 07:24:19 docker5 kubelet[28562]: I0402 07:24:19.673128   28562 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-docker5_kube-system(0be6a3a13f3b3c604447ca6f55a6c407)
    Apr  2 07:24:19 docker5 kubelet[28562]: E0402 07:24:19.673390   28562 pod_workers.go:186] Error syncing pod 0be6a3a13f3b3c604447ca6f55a6c407 ("kube-apiserver-docker5_kube-system(0be6a3a13f3b3c604447ca6f55a6c407)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-docker5_kube-system(0be6a3a13f3b3c604447ca6f55a6c407)"
    Apr  2 07:24:19 docker5 kubelet[28562]: E0402 07:24:19.945110   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:471: Failed to list *v1.Service: Get https://192.168.0.104:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:20 docker5 kubelet[28562]: E0402 07:24:20.007637   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:480: Failed to list *v1.Node: Get https://192.168.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:20 docker5 kubelet[28562]: E0402 07:24:20.019886   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.104:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:20 docker5 kubelet[28562]: E0402 07:24:20.947291   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:471: Failed to list *v1.Service: Get https://192.168.0.104:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:21 docker5 kubelet[28562]: E0402 07:24:21.009803   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:480: Failed to list *v1.Node: Get https://192.168.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:21 docker5 kubelet[28562]: E0402 07:24:21.022165   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.104:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:21 docker5 kubelet[28562]: E0402 07:24:21.951488   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:471: Failed to list *v1.Service: Get https://192.168.0.104:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:22 docker5 kubelet[28562]: E0402 07:24:22.012018   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:480: Failed to list *v1.Node: Get https://192.168.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
    Apr  2 07:24:22 docker5 kubelet[28562]: E0402 07:24:22.024773   28562 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.104:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddocker5&limit=500&resourceVersion=0: dial tcp 192.168.0.104:6443: getsockopt: connection refused
-- Andrei Dascalu
hypriot
kubernetes
raspberry-pi3
raspbian

1 Answer

4/3/2018

The Kubernetes API server is exposed through a service called kubernetes. The endpoints of this service correspond to the deployed API server replicas.
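
For example, on a working control plane you can confirm this with kubectl (generic commands, not specific to this cluster):

    # The built-in service that fronts the API server
    kubectl get svc kubernetes
    # The API server addresses that back this service
    kubectl get endpoints kubernetes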

Kubernetes nodes communicate with the master's API server on the IP address specified during the init process. During initialization the API server also binds to all interfaces (0.0.0.0).
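
You can check which address the API server advertises and whether it is actually listening on the secure port; for instance, with the default kubeadm paths:

    # kubeadm records the advertise address in the static pod manifest
    grep advertise-address /etc/kubernetes/manifests/kube-apiserver.yaml
    # Confirm something is listening on port 6443
    sudo ss -tlnp | grep 6443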

If --apiserver-advertise-address is provided during setup, only endpoints registered on the same subnetwork will work, and load-balancing services may stop working.
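
In practice the advertise address should match the node's own address on the network the other nodes use. On a WiFi-connected Raspberry Pi that is typically wlan0 (the interface name is an assumption; adjust if yours differs):

    # Show the node's address on the WiFi interface and compare it
    # with the value passed to --apiserver-advertise-address
    ip -4 addr show wlan0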

You may want to look at /var/log/kube-apiserver.log and check for configuration issues related to the --apiserver-advertise-address value provided during cluster initialization.
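
For example (with kubeadm the API server runs as a static pod, so a dedicated log file may not exist and the kubelet journal or container logs may be the better source):

    # Kubelet messages, including why the API server pod keeps restarting
    journalctl -u kubelet --no-pager | tail -n 50
    # API server log file, if your deployment writes one
    sudo tail -n 50 /var/log/kube-apiserver.log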

Finally, the kubeadm reset command may get you back to a working environment, but remember that your pods (including the API server) will be deleted.
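
A minimal sketch of that last resort, reusing the WiFi address seen in the logs above (substitute your master's own IP):

    # Tear down the failed control plane and start over,
    # advertising the master's LAN/WiFi address explicitly
    sudo kubeadm reset
    sudo kubeadm init --apiserver-advertise-address=192.168.0.104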

-- d0bry
Source: StackOverflow