Kubernetes unable to access the kube-apiserver from inside pod on node

1/17/2016

I have configured a Vagrant-backed Kubernetes cluster, but I am unable to access the kube-apiserver running on the master from within pods running on the nodes. I am trying to look up a service from within a pod via the API, but it looks like the API keeps dropping the connection.

Using curl from within the pod, I get the following output:

root@itest-pod-2:/# curl -v \
--insecure -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to 10.245.0.1:443 
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 10.245.0.1:443 
root@itest-pod-2:/# 

However, if I configure a single-machine environment by simply installing all the node components onto the master, I am able to contact the API from within a pod:

root@itest-pod-3:/# curl -v --insecure \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* Server certificate:
*    subject: CN=10.0.2.15@1452869292
*    start date: 2016-01-15 14:48:12 GMT
*    expire date: 2017-01-14 14:48:12 GMT
*    issuer: CN=10.0.2.15@1452869292
*    SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /api/v1/namespaces/default/services?labelSelector=name%3Dtest-server HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 10.245.0.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdDY3cXUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImIxNGI4YWE3LWJiOTgtMTFlNS1iNjhjLTA4MDAyN2FkY2NhZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.HhPnit7Sfv-yUkMW6Cy9ZVbuiV2wt5PLaPSP-uZtaByOPagkb8d-8zBQE8Lx53lqxMFwBmjjxSWl-vKtSGa0ka6rEkq_xWtFJb8uDDlxz_R63R6IJBWB8YhwB7SzPVWgtHohj55D3pL8-r8NOQSQVXFAHaiGTlzmtwiE3CmJv3yBzBLALG0yvtW2YgwrO9jlxCGdFIOKae-5eduiOyZHUimxAgfBkbwSNfSzXYZTJNryfPiDBKZybh9c3Wd-pNsSZyw9gbBhbGFM7EiK9pWsdViQ__fZA2JbxX78YbajWE6CeL4FWLKFu4MuIlnmhLOvOXia_9WXz1B8XJ-MlzclZQ
> 
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 15 Jan 2016 16:37:40 GMT
< Content-Length: 171
< 
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/services",
    "resourceVersion": "1518"
  },
  "items": []
}
* Connection #0 to host 10.245.0.1 left intact

What's confusing me is that the configuration is the same in both cases, except that the node components have been installed onto the master, which makes me think it is not a misconfiguration of SSL/HTTPS so much as something to do with the Kubernetes network configuration.

I've looked into the logs of the apiserver but I can't see anything related to these dropped connections.

Any help would be greatly appreciated.

-- PiersyP
kubernetes

2 Answers

1/19/2016

The problem was that we had not set the bind address for the apiserver (we had set --insecure-bind-address but not --bind-address). We thought this would not be a problem, since by default the apiserver binds to all interfaces.

When bound to all interfaces, calls to /api/v1/endpoints return the eth0 IP for the apiserver's secure port. In most cases this would probably be fine, but since we were running Kubernetes in a VirtualBox VM, eth0 is the NAT interface created by VirtualBox, which can only be reached through host ports on which VBoxHeadless is listening.
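
A quick way to see which address has been published is to read the endpoints of the kubernetes service. A minimal sketch, assuming kubectl is configured on the master; the address in the output is whatever the apiserver advertised:

# Show the endpoint recorded for the default kubernetes service;
# the IP listed here is where service traffic to the apiserver gets sent.
kubectl get endpoints kubernetes --namespace=default -o yaml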

When outgoing traffic leaves a pod, it hits a set of iptables rules matching cluster service IPs and redirecting to a port on kube-proxy; the proxy then forwards the request to the actual machine in the cluster.
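
Those rules can be inspected on a node with something like the following sketch; 10.245.0.1 is the service IP from the question, and the exact chain names depend on the kube-proxy version:

# List the NAT rules kube-proxy created and pick out the ones
# matching the cluster IP of the kubernetes service.
iptables -t nat -S | grep 10.245.0.1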

In this case kube-proxy did not have the externally reachable IP for the apiserver available; instead it had the unreachable eth0 address used by VirtualBox.

Oddly, though, it seems as if the proxy then attempts to contact the API on its insecure port (it knows the intended destination for the request from the iptables rules it creates). Since our request in this case is HTTPS, the apiserver drops it after the first client hello.

Sending an HTTPS request directly to the insecure port normally looks like this in curl:

root@app-master-0:/home/vagrant# curl -v --insecure \
https://10.235.1.2:8080/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.235.1.2...
* Connected to 10.235.1.2 (10.235.1.2) port 8080 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol

But when proxied through kube-proxy it looks like this:

root@itest-pod-2:/# curl -v --insecure \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to 10.245.0.1:443
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 10.245.0.1:443

By adding --bind-address=xxxx (with the externally reachable eth1 IP) to the apiserver's args, we were able to fix this.
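
For reference, a minimal sketch of the change; the address 10.235.1.2 is only illustrative here (it is the eth1 address from the curl example above), so substitute your own externally reachable IP:

# Bind the secure port to the externally reachable eth1 address instead of
# all interfaces; every other apiserver flag stays exactly as it was.
kube-apiserver \
  --bind-address=10.235.1.2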

-- PiersyP
Source: StackOverflow

1/18/2016

I assume you pass the kubelet the worker/minion certificate/key pair.

Does the api-server certificate include the distinguished name / alternate name pointing to the master? (The subjectAltName in your openssl.cnf should have the IP of the master when generating the API server cert.)

That is, I think, the most common reason for this problem.

Create your CA key and cert, then:

cat <<EOT >openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = ${K8S_SERVICE_IP}
IP.2 = ${MASTER_HOST}
EOT

openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config ./openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile ./openssl.cnf
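
It can be worth confirming that the alternate names actually made it into the generated certificate before restarting the apiserver with it, for example:

# Print the certificate's extensions and check the Subject Alternative Name entries.
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"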
-- MrE
Source: StackOverflow