On Kubernetes 1.5.2, kubectl logs has suddenly started failing while all other commands work fine, so the cluster setup itself seems intact and this may be some sort of bug. Is there a workaround to get the logs working again?
$ kubectl logs -f some-pod-name
The error is given below:
**Error from server: Get https://Minion-1-IP:10250/containerLogs/default/some-pod-name-3851540691-b18vp/some-pod-name?follow=true: net/http: TLS handshake timeout**
Please help.
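In case it is useful, the URL in the error is the apiserver calling the kubelet's log endpoint on the worker node; the same request can be made by hand from the master (assuming shell access there) to see whether the handshake also times out outside of kubectl:
$ curl -kv "https://Minion-1-IP:10250/containerLogs/default/some-pod-name-3851540691-b18vp/some-pod-name?follow=true"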
In short, for me, the problem was caused by a misconfigured proxy.
I came across this very same symptom last week. After poking around for a while, this issue showed up.
For me, it happened because I initialized the cluster with:
HTTP_PROXY=http://10.196.109.214:8118 HTTPS_PROXY=http://10.196.109.214:8118 NO_PROXY=10.196.109.214,localhost,127.0.0.1 kubeadm init
10.196.109.214 is my master node, and it is also where I run an HTTP proxy. These proxy settings are automatically written into the Kubernetes manifests. NO_PROXY here does not include any of the worker nodes, which is why everything else works fine but I can't retrieve any logs from the workers.
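An easy way to confirm what kubeadm actually wrote into the control-plane manifests is to grep them for the proxy variables (paths from my setup; adjust if yours differ):
$ grep -A1 -E 'HTTP_PROXY|HTTPS_PROXY|NO_PROXY' /etc/kubernetes/manifests/kube-*.yaml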
I just hand-edited the env section of /etc/kubernetes/manifests/kube-*.yaml
and added the worker nodes' IPs:
env:
- name: NO_PROXY
  value: 10.196.109.214,10.196.109.215,10.196.109.216,10.196.109.217,localhost,127.0.0.1
- name: HTTP_PROXY
  value: http://10.196.109.214:8118
- name: HTTPS_PROXY
  value: http://10.196.109.214:8118
Then find the relevant pods with kubectl -n kube-system get pods,
delete them with kubectl -n kube-system delete pod <pod-name>,
and wait for the kubelet to recreate them. Everything works fine now.
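For the record, the restart step looks roughly like this; the pod names are just examples and will differ per cluster:
$ kubectl -n kube-system get pods
# delete whichever control-plane pods carry the proxy env; the kubelet recreates them from the manifests
$ kubectl -n kube-system delete pod kube-apiserver-master
$ kubectl -n kube-system delete pod kube-controller-manager-master
$ kubectl -n kube-system delete pod kube-scheduler-master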
I think there is an issue with the cluster setup. This error message doesn't come from the connection between kubectl
and the apiserver, but from the connection between the apiserver and the kubelets. Therefore the certificates between these two might not be correct.
Disclaimer: I can't verify this idea right now, but we had a similar problem a while ago.
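A quick way to poke at that path, assuming shell access on the master node, is to attempt the TLS handshake against the kubelet port by hand and look at the certificate it presents (the hostname below is just the placeholder from the question):
$ openssl s_client -connect Minion-1-IP:10250 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
# if the handshake never completes, nothing is printed, matching the timeout kubectl reports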