I have a script that calls

kubectl --server $server --certificate-authority $ca --token $token get pod --all-namespaces

outside the cluster, where $token is from a service account my-sa (in namespace my-ns) with suitably restricted permissions under RBAC.
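For reference, such a token can be obtained with kubectl itself; the exact mechanism depends on the cluster version. On v1.24 and later a short-lived token can be minted with

kubectl -n my-ns create token my-sa

while older clusters expose a long-lived token in the service account's token Secret:

kubectl -n my-ns get secret "$(kubectl -n my-ns get serviceaccount my-sa -o jsonpath='{.secrets[0].name}')" -o jsonpath='{.data.token}' | base64 --decode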
I now want to refactor this such that the script calls

kubectl --kubeconfig my-service.conf get pod --all-namespaces

instead, i.e. it should refer to a kubeconfig file rather than setting the connection parameters individually. This follows Kubernetes' own conventions for the kubeconfigs it keeps in /etc/kubernetes.
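Equivalently, the script could point the KUBECONFIG environment variable at the file and drop the flag:

export KUBECONFIG=my-service.conf
kubectl get pod --all-namespaces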
I've tried the following kubeconfig my-service.conf, where <CA_DATA> is the base64-encoded content of /etc/kubernetes/pki/ca.crt, <SERVER> is the same as $server, and <TOKEN> is the same as $token:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA_DATA>
    server: <SERVER>
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: default-user
  name: default-context
current-context: default-context
users:
- name: my-service
  user:
    token: <TOKEN>
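For concreteness, <CA_DATA> can be produced on the control-plane node with something like the following (-w 0 is the GNU coreutils base64 option that keeps the output on a single line):

base64 -w 0 /etc/kubernetes/pki/ca.crt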
kubectl --kubeconfig /dev/null --server $server --certificate-authority /etc/kubernetes/pki/ca.crt --token $token get pods --all-namespaces

works on the command line, but

kubectl --kubeconfig my-service.conf get pod --all-namespaces

produces the following error message:
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
So there must still be something wrong with the structure of my kubeconfig file. Why did the client not authenticate as system:serviceaccount:my-ns:my-sa? What could be wrong?
UPDATE: I was wondering whether it was perhaps inappropriate to use service account tokens outside the cluster (Kubernetes' own kubeconfigs use client certificates instead). But the documentation clearly states: "service account bearer tokens are perfectly valid to use outside the cluster".
Your context config is referencing a nonexistent credential: your credential is specified as - name: my-service, so your context should be:
- context:
    cluster: my-cluster
    user: my-service # instead of default-user
  name: default-context
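You can double-check which credential each context points at with kubectl config get-contexts (the AUTHINFO column), and then confirm that the token is actually picked up, for example:

kubectl --kubeconfig my-service.conf config get-contexts
kubectl --kubeconfig my-service.conf auth can-i list pods --all-namespaces
kubectl --kubeconfig my-service.conf get pod --all-namespaces

As an aside, writing the file with the kubectl config subcommands instead of by hand keeps the names consistent and avoids this kind of mismatch; roughly:

kubectl config set-cluster my-cluster --server="$server" --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --kubeconfig=my-service.conf
kubectl config set-credentials my-service --token="$token" --kubeconfig=my-service.conf
kubectl config set-context default-context --cluster=my-cluster --user=my-service --kubeconfig=my-service.conf
kubectl config use-context default-context --kubeconfig=my-service.conf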