I am trying to authenticate against a locally hosted Kubernetes cluster (v1.6.4) using a certificate. This is in the context of using the Kubernetes plugin for Jenkins.
I am following the guidelines for Minikube in the Kubernetes plugin README, which I adapted to my scenario:
Convert the client certificate to PKCS#12:
$ sudo openssl pkcs12 -export -out kubernetes.pfx -inkey /etc/kubernetes/pki/apiserver.key -in /etc/kubernetes/pki/apiserver.crt -certfile /etc/kubernetes/pki/ca.crt -passout pass:jenkins
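As a sanity check (not part of the README steps), the contents of the resulting bundle can be listed, which should show the client certificate plus the CA certificate added via -certfile:
$ openssl pkcs12 -info -in kubernetes.pfx -passin pass:jenkins -nokeys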
In Jenkins, create credentials using a certificate:
- Kind: Certificate
- Certificate: Upload PKCS#12 certificate (uploading the file kubernetes.pfx)
- Password: jenkins (as specified during certificate creation)

Manage Jenkins -> Add new cloud -> Kubernetes:
- Kubernetes URL: https://10.179.1.121:6443 (as output by kubectl config view)
- Kubernetes server certificate key: the contents of /etc/kubernetes/pki/ca.crt, pasted in
- Disable https certificate check: checked, because the test setup does not have a signed certificate
- Kubernetes Namespace: tried both default and kubernetes-plugin
- Credentials: CN=kube-apiserver (i.e. the credentials created above)

Now when I click on Test Connection, this is the error message shown in the Jenkins Web UI:
Error connecting to https://10.179.1.121:6443: Failure executing: GET at: https://10.179.1.121:6443/api/v1/namespaces/kubernetes-plugin/pods. Message: Unauthorized.
The Jenkins logs show this message:
Sep 05, 2017 10:22:03 AM io.fabric8.kubernetes.client.Config tryServiceAccount
WARNING: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
The documentation is, unfortunately, mostly limited to Kubernetes running on Minikube and to Google Cloud Engine, but I do not see a conceptual difference between the former and a locally hosted Kubernetes cluster.
The following curl call for testing results in a very different error message:
$ curl --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
User "system:anonymous" cannot get at the cluster scope.
More verbose:
$ curl -v --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
* About to connect() to 10.179.1.121 port 6443 (#0)
* Trying 10.179.1.121...
* Connected to 10.179.1.121 (10.179.1.121) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found: kubernetex.pfx
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jun 13 11:33:55 2017 GMT
* expire date: Jun 13 11:33:55 2018 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.179.1.121:6443
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: text/plain
< X-Content-Type-Options: nosniff
< Date: Tue, 05 Sep 2017 10:34:23 GMT
< Content-Length: 57
<
* Connection #0 to host 10.179.1.121 left intact
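The NSS: client certificate not found line above suggests curl never actually presented the .pfx bundle, which would explain the request being treated as system:anonymous. A sketch of the same request using the PEM key pair directly (assuming this curl build accepts PEM client certificates) would be:
$ curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://10.179.1.121:6443/api/v1/namespaces/kubernetes-plugin/pods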
I have also set up a ServiceAccount:
$ kubectl describe serviceaccount --namespace=kubernetes-plugin
Name: default
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-6qwj1
Tokens: default-token-6qwj1
Name: jenkins
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: jenkins-token-1d623
Tokens: jenkins-token-1d623
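If the ServiceAccount route is the way to go, I assume its token could be extracted from the secret listed above and pasted into a Jenkins secret-text credential, roughly like this (untested sketch):
$ kubectl get secret jenkins-token-1d623 --namespace=kubernetes-plugin -o jsonpath='{.data.token}' | base64 -d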
This question deals with a related problem, recommending the use of either a ServiceAccount or a certificate, but the answer for the latter approach lacks the details of how to tie an RBAC profile to that certificate. The Kubernetes documentation about authentication does not seem to cover this use case.
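From what I understand, RBAC treats the certificate's Common Name as the user name, so a binding for the certificate above might look roughly like this (untested, and cluster-admin is only for illustration):
$ kubectl create clusterrolebinding jenkins-cert-admin --clusterrole=cluster-admin --user=kube-apiserver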
The WARNING: Error reading service account token indicates that the key kube-apiserver uses to verify ServiceAccount tokens (--service-account-key-file) does not match the key kube-controller-manager uses to sign them (--service-account-private-key-file). If your kube-apiserver command line doesn't specify --service-account-key-file, then the value of --tls-private-key-file is used, and I suspect that this is the issue.
I'd suggest always explicitly setting kube-apiserver --service-account-key-file to match the kube-controller-manager --service-account-private-key-file value.
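If this is a kubeadm-style install, one way to compare the two flags is to grep the static pod manifests (the paths below are kubeadm defaults and may differ on your setup):
$ grep -- --service-account-key-file /etc/kubernetes/manifests/kube-apiserver.yaml
$ grep -- --service-account-private-key-file /etc/kubernetes/manifests/kube-controller-manager.yaml
Both flags should end up pointing at the same key pair (kubeadm typically generates /etc/kubernetes/pki/sa.pub and sa.key for this); after editing a manifest, the kubelet restarts the static pod automatically.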