I've been setting up a small Kubernetes cluster. I have 3 CentOS VMs, one master and 2 minions. Kubernetes runs in Docker containers. I set it up with the help of two articles.
Now I'm trying to install the nginx ingress controller. I'm working with github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx at revision 6c87fed (I also tried the tags 0.6.0 and 0.6.3 - same behavior).
I run the following commands according to the README.md from the above link:
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
kubectl create -f examples/default/rc-default.yaml
Now the pod for the ingress controller comes up properly at first, but fails after about 30 seconds. The log says:
kubectl logs nginx-ingress-controller-ttylt
I0615 11:21:20.641306 1 main.go:96] Using build: https://github.com/bprashanth/contrib.git - git-afb16a4
F0615 11:21:50.643748 1 main.go:125] unexpected error getting runtime information: timed out waiting for the condition
It sounds like it's trying to connect to a nonexistent host or something similar. Any ideas what I can check or how to fix it?
Regards
Edit: As this seems to be a common problem, I should add that I checked that ports 80 and 443 are available on the nodes.
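For reference, this is roughly the check I ran on each node (a sketch; `ss` from iproute2 is assumed to be installed):

```shell
# List listening TCP sockets and look for anything already bound to 80 or 443.
ss -tln | grep -E ':(80|443)\b' || echo "ports 80/443 are free"
```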
I did not find a solution for the nginx ingress controller - maybe it's just broken at the moment.
Though I did two things to achieve my initial goal (having an ingress controller):

1. Start kube-proxy with --proxy-mode=userspace, as the default proxy mode does not work on the CentOS version I'm using (CentOS Linux release 7.2.1511 (Core)).
2. Use Traefik with
docker run -d -p 1080:80 traefik \
--kubernetes \
--kubernetes.endpoint=http://my.kubernetes.master:8080
Note that my.kubernetes.master is the public IP of the Kubernetes master - i.e. not a cluster IP but a real IP on a real network interface.
I use this endpoint because Traefik has problems with the CA certificate on the default endpoint. That's not a clean solution, but it's OK for my proof of concept.
We bumped into this issue too and fixed it by putting nginx-ingress-controller and default-http-backend into the kube-system namespace. I think the issue is that the ingress controller doesn't have access to the API server from other namespaces. Try it.
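A sketch of what that looks like with the manifests from the question, assuming they don't hardcode a namespace (the file paths and service settings mirror the commands above):

```shell
# Remove the objects from the default namespace.
kubectl delete -f examples/default-backend.yaml
kubectl delete -f examples/default/rc-default.yaml

# Recreate them in kube-system, including the service the backend needs.
kubectl create -f examples/default-backend.yaml --namespace=kube-system
kubectl expose rc default-http-backend --port=80 --target-port=8080 \
  --name=default-http-backend --namespace=kube-system
kubectl create -f examples/default/rc-default.yaml --namespace=kube-system
```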
The reason for this is actually obscured by the error message. As far as I've been able to determine using strace and the like, the underlying error is that the TLS handshake fails. The ingress controller repeatedly tries to connect to the master on port 443, which fails because it doesn't present a correct TLS certificate.
If you look in kube-api-server.log, you'll likely find a bunch of these:
I0705 04:16:17.150073 9521 logs.go:41] http: TLS handshake error from 172.20.1.3:39354: remote error: bad certificate
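One way to see what's going on at the TLS layer is to look at the certificate the API server presents. This is a sketch; my.kubernetes.master is a placeholder for your master's address:

```shell
# Print the subject and issuer of the cert served on the API server's
# secure port. A self-signed or wrong-SAN cert shows up here immediately.
echo | openssl s_client -connect my.kubernetes.master:443 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```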
I've not yet been able to figure out a solution. However, I got a little further: I tried starting the API server with --kubelet-client-certificate, --kubelet-client-private-key and --kubelet-certificate-authority, and then starting the Kubelet with the TLS options pointing at the same files, at which point the Nginx controller failed with a new error, this time about the cert name not matching. I believe that if you generate the right cert on each worker node, with the right IP address, it will work.
Edit: I found the solution. First of all, the Kubelet needs a kubeconfig file. It needs to point to the CA cert as well as its own cert/key pair, which we'll call kubelet.crt and kubelet.key. When you generate these files, you need to explicitly list not just the IP of the master, but also the cluster IP of the master. Why? Because that's the IP that it talks to.
So when I generated the certs for Kubernetes, I used (via Google's patched version of EasyRSA):
easyrsa --batch "--req-cn=${public_ip}@`date +%s`" build-ca nopass
easyrsa --subject-alt-name="IP:${public_ip},IP:${private_ip},IP:172.16.0.1,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:kubernetes-master" build-server-full kubernetes-master nopass
easyrsa build-client-full kubelet nopass
easyrsa build-client-full kubecfg nopass
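Before using the generated cert, it's worth confirming it actually carries the SANs listed above, including the cluster IP (172.16.0.1 here). A sketch, assuming the easyrsa output layout from the next paragraph:

```shell
# Dump the server cert and show its Subject Alternative Name extension;
# every IP and DNS name the apiserver is reached by should appear here.
openssl x509 -in pki/issued/kubernetes-master.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```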
Now you'll end up with pki/ca.crt, pki/issued/kubernetes-master.crt, pki/private/kubernetes-master.key, pki/issued/kubelet.crt, pki/private/kubelet.key, pki/issued/kubecfg.crt and pki/private/kubecfg.key. The kube-apiserver must be started with:
--client-ca-file=/srv/kubernetes/ca.crt
--tls-cert-file=/srv/kubernetes/kubernetes-master.crt
--tls-private-key-file=/srv/kubernetes/kubernetes-master.key
And you need to create /var/lib/kubelet/kubeconfig that points to kubelet.crt, kubelet.key and ca.crt according to the docs.
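For completeness, here is a minimal sketch of what that kubeconfig can look like, assuming the cert files were copied to /srv/kubernetes on the node; the server address and file paths are assumptions, not values from the docs:

```shell
# Write a v1 kubeconfig for the Kubelet pointing at the CA cert and the
# kubelet client cert/key generated above.
cat > /var/lib/kubelet/kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://my.kubernetes.master:443
users:
- name: kubelet
  user:
    client-certificate: /srv/kubernetes/kubelet.crt
    client-key: /srv/kubernetes/kubelet.key
contexts:
- name: kubelet-context
  context:
    cluster: local
    user: kubelet
current-context: kubelet-context
EOF
```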