Run kubectl inside a cluster

5/28/2018

I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:

kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash

I download the correct version of kubectl inside the running container, make it executable and try to run

./kubectl get pods

but get the following error:

Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"

Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the serviceaccount to list the pods? My final goal is to run helm inside the container. According to the docs I found, this should work fine as soon as kubectl is working fine.

-- Achim
kubectl
kubernetes
kubernetes-helm

2 Answers

3/24/2020

It's true that kubectl will try to get everything it needs to authenticate with the master.

But binding the "cluster-admin" ClusterRole gives that pod unlimited permissions across all namespaces, which sounds a bit risky.

For me, it was a bit annoying to add an extra 43 MB for the kubectl client in my Kubernetes container, but the alternative was to use one of the SDKs to implement a more basic client. kubectl is easier to authenticate because the client will get the token it needs from /var/run/secrets/kubernetes.io/serviceaccount, and we can also use manifest files if we want. For most common Kubernetes setups you shouldn't need to add any additional environment variables or attach any secret volume; it will just work if you have the right ServiceAccount.
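To make the in-cluster authentication concrete, here is a sketch of roughly what kubectl does for you automatically: read the token from the standard service-account mount and call the API server over HTTPS. The function name is illustrative; the paths and KUBERNETES_SERVICE_* variables are the standard ones Kubernetes injects into every pod.

```shell
#!/bin/sh
# Sketch of the in-cluster authentication kubectl performs automatically.
# SA_DIR is the standard service-account mount injected into every pod.
list_pods_via_api() {
  SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
  if [ ! -f "$SA_DIR/token" ]; then
    echo "not running inside a Kubernetes pod"
    return 1
  fi
  TOKEN=$(cat "$SA_DIR/token")
  NS=$(cat "$SA_DIR/namespace")
  # Same call kubectl makes under the hood to list pods in the pod's namespace
  curl -s --cacert "$SA_DIR/ca.crt" \
       -H "Authorization: Bearer $TOKEN" \
       "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NS/pods"
}

# Prints the pod list as JSON in-cluster, or a notice outside a pod
list_pods_via_api || true
```

If this call returns the same Forbidden error as in the question, the problem is RBAC, not connectivity.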

Then you can test whether it is working with something like:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME         READY   STATUS    RESTARTS   AGE
pod1-0       1/1     Running   0          6d17h
pod2-0       1/1     Running   0          6d16h
pod3-0       1/1     Running   0          6d17h
pod3-2       1/1     Running   0          67s

or permission denied:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1

Tested on:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

You can check my answer at How to run kubectl commands inside a container? for RoleBinding and RBAC.

-- Nick G
Source: StackOverflow

5/28/2018

Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one?

Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST environment variables to locate the API server, and the credential in the auto-injected /var/run/secrets/kubernetes.io/serviceaccount/token file to authenticate itself.

How do I allow the serviceaccount to list the pods?

That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.

See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions for more information.
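As a concrete illustration of that approach, a minimal Role plus RoleBinding granting the question's default service account permission to list pods might look like the following (the pod-reader and read-pods names are illustrative, not required by Kubernetes):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods         # illustrative name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

After applying it with kubectl apply -f <file>, you can verify the grant with kubectl auth can-i list pods --as=system:serviceaccount:default:default, and the original ./kubectl get pods should succeed.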

I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm is running with (check the serviceAccountName in the helm pods). Then, to grant superuser permissions to that service account, run:

kubectl create clusterrolebinding helm-superuser \
  --clusterrole=cluster-admin \
  --serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
-- Jordan Liggitt
Source: StackOverflow