I'm working on a terraform configuration for a Kubernetes cluster in AWS.
I've got the cluster running: I can talk to it from my local machine via kubectl and start pods.
About 12 hours after launching the cluster, I noticed kubectl taking ~10 s to respond, so I looked around for possible causes.
I noticed a lot of TLS handshake errors in the apiserver's container log that look like this:
I0223 13:50:31.126486 1 logs.go:40] http: TLS handshake error from 10.0.3.5:49360: remote error: bad certificate
I0223 13:50:31.168158 1 logs.go:40] http: TLS handshake error from 10.0.4.5:39214: remote error: bad certificate
10.0.3.5 and 10.0.4.5 are the Kubernetes worker/minion instances, which I provisioned with the appropriate TLS assets. The kubelets on the workers must be communicating with the apiserver: I see the nodes if I run kubectl get nodes on my computer, and I can launch pods via kubectl as well, which the workers then pick up from the API.
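For what it's worth, my understanding is that "remote error: bad certificate" means the remote peer (here, a worker) sent a TLS bad_certificate alert because it rejected the certificate the apiserver presented, e.g. one signed by a CA the worker doesn't trust. A minimal sketch of that failure mode with openssl (all filenames and subjects are illustrative, not my actual setup):

```shell
# Self-signed cert standing in for the apiserver's serving cert:
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt \
  -subj "/CN=apiserver" -days 1

# An unrelated CA standing in for the CA bundle a worker trusts:
openssl req -x509 -newkey rsa:2048 -nodes -keyout other-ca.key -out other-ca.crt \
  -subj "/CN=other-ca" -days 1

# Verification fails, because server.crt was not issued by other-ca.crt --
# the same check a worker performs before completing the handshake:
openssl verify -CAfile other-ca.crt server.crt || echo "verification failed as expected"
```

So I'm wondering whether some component on the workers is validating the apiserver's cert against the wrong CA, even though kubectl and the kubelets otherwise work.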
Any thoughts on what these errors could mean?