kubectl logs not working after creating cluster with kubeadm

10/5/2017

I followed the guide "Using kubeadm to Create a Cluster", but I am not able to view logs with kubectl:

root@o1:~# kubectl logs -n kube-system etcd-o1
Error from server: Get https://149.156.11.4:10250/containerLogs/kube-system/etcd-o1/etcd: tls: first record does not look like a TLS handshake

The IP address above is the cloud frontend address, not the address of the VM, which probably causes the problem. Some other kubectl commands seem to work:

root@o1:~# kubectl cluster-info
Kubernetes master is running at https://10.6.16.88:6443
KubeDNS is running at https://10.6.16.88:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@o1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY     STATUS        RESTARTS   AGE
kube-system   etcd-o1                         1/1       Running       0          3h
kube-system   kube-apiserver-o1               1/1       Running       0          3h
kube-system   kube-controller-manager-o1      1/1       Running       0          3h
kube-system   kube-dns-545bc4bfd4-mhbfb       3/3       Running       0          3h
kube-system   kube-flannel-ds-lw87h           2/2       Running       0          1h
kube-system   kube-flannel-ds-rkqxg           2/2       Running       2          1h
kube-system   kube-proxy-hnhfs                1/1       Running       0          3h
kube-system   kube-proxy-qql4r                1/1       Running       0          1h
kube-system   kube-scheduler-o1               1/1       Running       0          3h
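For context, `kubectl logs` goes through the apiserver, which in turn dials the kubelet on port 10250 at the address the node advertises, so it may help to check what address the node is reporting (e.g. with `kubectl get nodes -o wide` or `kubectl describe node o1` and its "Addresses:" section). A minimal sketch of what to look for, using a made-up saved snippet of the Addresses block rather than a live cluster:

```shell
# Hypothetical sketch: on a real cluster you would inspect the node directly:
#   kubectl get nodes -o wide    # INTERNAL-IP / EXTERNAL-IP columns
#   kubectl describe node o1     # "Addresses:" section
# Here we illustrate with a saved snippet (addresses are assumptions):
cat > /tmp/node-addresses.txt <<'EOF'
Addresses:
  InternalIP:  149.156.11.4
  Hostname:    o1
EOF

# If the InternalIP is the cloud frontend address rather than the VM's own
# address, the apiserver will try to reach the kubelet there and fail.
grep 'InternalIP' /tmp/node-addresses.txt
```

If the advertised InternalIP is wrong, that would explain why the apiserver contacts 149.156.11.4 instead of the VM's address.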

Please help.

-- daro
kubernetes

1 Answer

10/11/2017

Maybe change the address in $HOME/admin.conf.
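For reference, in a kubeadm-generated kubeconfig the apiserver address sits under `clusters[].cluster.server`. A minimal sketch of locating that line, using a sample file rather than the real `/etc/kubernetes/admin.conf` (the file contents here are assumptions):

```shell
# Hypothetical sketch: write a stripped-down sample kubeconfig to illustrate
# where the server address lives (a real admin.conf also carries certificates).
cat > /tmp/admin.conf <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.6.16.88:6443
  name: kubernetes
EOF

# The 'server:' line is the address kubectl uses to reach the apiserver.
grep 'server:' /tmp/admin.conf
```

Note this address only controls how kubectl reaches the apiserver; the `kubectl logs` path also depends on the address the apiserver uses to reach the kubelet, as the error message in the question shows.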

-- Velkan
Source: StackOverflow