Trouble with removing TLS from Kubernetes install on CoreOS

6/16/2017

I'm trying to modify the CoreOS installer to remove TLS. We've been using TLS in dev, but our production does not have TLS, and we're trying to keep dev/test/prod in sync. It's Kubernetes 1.3.6. I have no control or influence over getting prod to use TLS.

I've based my work off of this master node install script: https://github.com/coreos/coreos-kubernetes/blob/v0.8.2/multi-node/generic/controller-install.sh

My modified script is here: https://gist.github.com/pswenson/a687735ce9054c39aa56fe40d8bed70a

This question is concerning one master node install at this point... once I have the master working I'll take another crack at getting workers to work.

When I run the installer, the API server seems to work fine. I can connect with kubectl and get nodes and pods, and I can curl $MASTER_IP:8080 fine. However, the dashboard doesn't work.

When I run journalctl -u kubelet -f on the master node I see this in the logs:

kubelet-wrapper[1994]: I0615 19:52:13.341824    1994 logs.go:41] http: TLS handshake error from $MASTER_IP:46922: tls: first record does not look like a TLS handshake
kubelet-wrapper[1994]: I0615 19:53:07.397110    1994 logs.go:41] http: TLS handshake error from $MASTER_IP:46966: tls: first record does not look like a TLS handshake
kubelet-wrapper[1994]: I0615 19:55:18.190503    1994 logs.go:41] http: TLS handshake error from $MASTER_IP:47054: tls: first record does not look like a TLS handshake
kubelet-wrapper[1994]: I0615 19:55:47.159931    1994 logs.go:41] http: TLS handshake error from 10.168.141.212:57342: tls: first record does not look like a TLS handshake
kubelet-wrapper[1994]: I0615 19:55:50.799724    1994 logs.go:41] http: TLS handshake error from 10.168.141.212:57394: tls: first record does not look like a TLS handshake
kubelet-wrapper[1994]: I0615 19:55:56.944580    1994 logs.go:41] http: TLS handshake error from 10.168.141.212:57471: tls: first record does not look like a TLS handshake

I'm not sure what to make of the above... obviously the kubelet still expects a TLS handshake. I have no idea why, and I'm not sure where these high ports are coming from.

When I try to get the logs via kubectl logs kube-proxy-$MASTER_IP --namespace=kube-system the response is

Error from server: Get http://$MASTER_IP:10250/containerLogs/kube-system/kube-proxy-96.118.242.147/kube-proxy: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

The HTTP response looks like an HTTPS/HTTP mismatch.
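In fact, those bytes can be decoded by hand: they are the start of a TLS alert record, which means the kubelet answered the plain-HTTP request with TLS. A minimal sketch, decoding the bytes from the error message against the standard TLS record layout (RFC 5246 §6.2.1):

```python
# Bytes copied from the "malformed HTTP response" error above.
data = b"\x15\x03\x01\x00\x02\x02"

content_type = data[0]                     # 0x15 = 21 = TLS alert record
major, minor = data[1], data[2]            # 0x03 0x01 = TLS 1.0 record version
length = int.from_bytes(data[3:5], "big")  # alert payload length: 2 bytes
level = data[5]                            # alert level: 2 = fatal

print(f"TLS alert record, version {major}.{minor}, "
      f"level={'fatal' if level == 2 else 'warning'}")
```

So the "malformed HTTP response" is the kubelet's TLS stack sending a fatal alert back to a client that spoke plaintext on a TLS port.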

So it appears that there is a remnant of TLS left. The API server itself seems OK, but other pieces such as the kubelet are still speaking TLS?

Any insights would be greatly appreciated.

Thanks!

-- phil swenson
coreos
kubernetes

1 Answer

7/6/2017

I use a similar CoreOS script to deploy a v1.6.4 cluster on bare metal, but with even stricter TLS. My version is found here.

After reading my config and the latest kubelet documentation, I don't believe it's possible to disable encryption on the kubelet's port 10250.

The high-numbered ports are ephemeral source ports used by the control-plane components connecting to the kubelet.
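If you really want plain HTTP between the API server and the kubelet, the only knobs I'm aware of are sketched below. This is a config sketch, not something I've run against your versions — verify the flag names against the docs for your exact releases:

```shell
# kube-apiserver: controls whether the apiserver dials the kubelet over TLS
# (defaults to true; deprecated/removed in much later releases).
--kubelet-https=false

# kubelet: port 10250 always serves TLS (self-signed if no cert is supplied).
# The separate read-only port serves plain HTTP, but only read endpoints
# such as /pods and /stats — not /containerLogs or exec — so `kubectl logs`
# cannot go through it.
--read-only-port=10255
```

In other words, even if the apiserver side is switched to HTTP, the kubelet's 10250 endpoint that `kubectl logs` depends on still speaks TLS.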

You said that the dashboard doesn't work, but do the other pods get successfully deployed?

-- Eugene Chow
Source: StackOverflow