Kubernetes unable to retrieve logs

4/7/2018

I have a kubeadm cluster deployed in a CentOS VM. While trying to deploy the ingress controller following the GitHub guide, I noticed that I'm unable to see logs:

kubectl logs -n ingress-nginx nginx-ingress-controller-697f7c6ddb-x9xkh --previous

Error from server: Get https://192.168.56.34:10250/containerLogs/ingress-nginx/nginx-ingress-controller-697f7c6ddb-x9xkh/nginx-ingress-controller?previous=true: dial tcp 192.168.56.34:10250: getsockopt: connection timed out

On 192.168.56.34 (node1), netstat returns:

tcp6       0      0 :::10250                :::*                    LISTEN      1068/kubelet

In fact, I'm unable to see any logs regardless of the pod's status.

I disabled both firewalld and SELinux.

I used a proxy to let Kubernetes download images; I have since removed the proxy.

When navigating to the URL in the error above, I get: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
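(For what it's worth, that Forbidden response is expected when you hit the kubelet anonymously from a browser: the kubelet only serves logs to callers presenting the apiserver's client certificate. A rough way to reproduce the request with credentials, assuming the default kubeadm PKI paths on the master — adjust the paths if your cluster keeps its certificates elsewhere:)

```shell
# Run on the master node. The cert/key paths below are the kubeadm
# defaults (assumption); the URL is the one from the error message.
curl -k \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key  /etc/kubernetes/pki/apiserver-kubelet-client.key \
  "https://192.168.56.34:10250/containerLogs/ingress-nginx/nginx-ingress-controller-697f7c6ddb-x9xkh/nginx-ingress-controller?previous=true"
```

If this curl also times out rather than returning Forbidden or logs, the problem is at the network level, not authentication.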

I'm also able to fetch my nodes:

kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   Ready      master    32d       v1.9.3
k8s-node1    Ready      <none>    30d       v1.9.3
k8s-node2    NotReady   <none>    32d       v1.9.3
-- BOUKANDOURA Mhamed
kubernetes

2 Answers

4/8/2018

This message is from the apiserver running on your master. The command kubectl logs, running on your local machine, fetches logs via the apiserver. So the error message reveals a firewall misconfiguration between the master and the node(s) on port 10250.
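You can confirm this by testing the TCP path from the master to the node directly (192.168.56.34 and port 10250 are taken from the error message; bash's built-in /dev/tcp redirection avoids needing nc installed):

```shell
# Run on the master. If the firewall drops packets to the kubelet
# port, this hangs until the 3-second timeout fires.
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.56.34/10250' \
  && echo "port 10250 reachable" \
  || echo "port 10250 blocked or timed out"
```

A clean "connection refused" would fail instantly instead of hanging, which is exactly the distinction the answer below draws.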

-- Janos Lenart
Source: StackOverflow

4/8/2018

getsockopt: connection timed out

is, 99.99999% of the time, a firewall issue. If it were "connection refused", then showing the output of netstat would be meaningful, but (as you can see) the kubelet is listening on that port just fine -- it's the network path between the apiserver and 192.168.56.34 that is configured in a way that blocks the traffic.

The apiserver expects that everyone who would want to view logs (or use kubectl exec) can reach that port on every Node in the cluster; so be sure you don't just fix the firewall rule(s) for that one Node -- fix it for all of them.
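In case firewalld turns out to be running again (it can come back after a reboot even if it was disabled once), a sketch of opening the kubelet port on CentOS with the standard firewalld CLI -- run it on every node, not just the one in the error:

```shell
# Run as root on each node. Adds the kubelet API port to the
# permanent zone configuration, then reloads firewalld.
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload

# Verify the port now appears in the active rules.
firewall-cmd --list-ports
```

If firewalld really is disabled everywhere, check for other iptables rules or a firewall on the hypervisor/host network between the VMs.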

-- mdaniel
Source: StackOverflow