I'm running Kubernetes 1.6.2 with RBAC enabled. I've created a user kube-admin
that has the following ClusterRoleBinding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: k8s-admin
subjects:
- kind: User
  name: kube-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
When I attempt to kubectl exec into a running pod, I get the following error:
kubectl -n kube-system exec -it kubernetes-dashboard-2396447444-1t9jk -- /bin/bash
error: unable to upgrade connection: Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)
My guess is that I'm missing a ClusterRoleBinding. If so, which role am I missing?
The connection between kubectl and the apiserver is fine and is being authorized correctly.
To satisfy an exec request, the apiserver contacts the kubelet running the pod, and that connection is what is being forbidden.
Your kubelet is configured to authenticate/authorize requests, and the apiserver is not providing authentication information recognized by the kubelet.
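For context, a kubelet set up this way is typically started with flags along the lines of the sketch below (the paths are hypothetical placeholders, not values from your cluster). With Webhook authorization, a request the kubelet cannot authenticate falls through to system:anonymous and is then denied, which matches the error above.

# Illustrative kubelet flags (paths are hypothetical placeholders):
#   --client-ca-file      CA bundle used to verify client certificates, e.g. the apiserver's
#   --authorization-mode  Webhook delegates kubelet API authorization to the apiserver;
#                         unauthenticated callers fall through to system:anonymous and are denied
kubelet \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authorization-mode=Webhook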
The apiserver authenticates to the kubelet with a client certificate and key, configured via the --kubelet-client-certificate=... and --kubelet-client-key=... flags provided to the API server.
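As a sketch, the apiserver invocation would include something like the following. The cert and key paths are placeholders; the client certificate must be signed by a CA the kubelet trusts (its --client-ca-file), and all other required apiserver flags are omitted for brevity.

# Hypothetical flag values; substitute the client credentials your kubelets will verify.
kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

Once the apiserver presents a client certificate the kubelet can verify, the exec request is authenticated as that certificate's user rather than system:anonymous.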
See https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview for more information.