Accessing K8S master server from a remote server via a specific user

5/7/2020

I'm trying to emulate role-based access from an external server. My scenario is that I want a user "developer" who only has access to create / delete / list pods and has access to persistent volume claims. I have a Role and RoleBinding. I have tested it by using

[root@K8Smaster ~]# kubectl auth can-i create pods --as=developer
yes

So I know this part is working. As you can see, it's being done on the K8S master server.
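For reference, a Role and RoleBinding along these lines would cover that (the names and exact verbs below are just illustrative):

# illustrative only - grants create/delete/list on pods and PVCs in the default namespace
kubectl create role developer-role --verb=create,delete,list --resource=pods,persistentvolumeclaims -n default
kubectl create rolebinding developer-binding --role=developer-role --user=developer -n default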

To get it working on an external server, I ran:

openssl genrsa -out developer.key 2048

openssl req -new -key developer.key -subj "/CN=developer/O=User" -out developer.csr

(I assume the CN and O don't really mean anything... or do they have to match something? I matched the CN with the user "developer".)

openssl x509 -req -in developer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out developer.crt
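The certificate subject can be double-checked with, for example:

# prints the subject of the signed certificate, which should contain CN=developer
openssl x509 -in developer.crt -noout -subject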

I copied the ca.crt / developer.crt / developer.key files to my external server.

On my external server I installed kubectl.

I followed https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

Though it's not super clear which server I should create these files and run the commands on (master or remote). It seems like creating the context is done on the master and everything else on the remote.

On the remote server, if I run

 [developer@server ~]$ kubectl get pods
The connection to the server 10.237.107.61 was refused - did you specify the right host or port?

Or if I run

    curl https://10.237.107.61:6443/api/v1/pods --key developer.key --cert developer.crt --cacert ca.crt
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },

Not sure what I'm missing.

-- encore02
kubernetes

1 Answer

5/11/2020

Kubectl uses a config file that you must have in order to connect to the cluster. It is possible that your config file is inconsistent due to many major or minor changes. If further analysis of the issue does not show good results, try rm -f ~/.kube/config and start from scratch.

As I see it, you suspect that the problem is with the self-signed certificates. That may require updating the cluster root Certificate Authority (CA) on the clients and then refreshing the local list of valid certificates.

Go to your local CA directory, check whether the ca.crt file exists, then copy it to the clients. On the clients, perform the following operations:

$ sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
$ sudo update-ca-certificates

The cluster: entry accepts either the filename of the CA certificate or an "inline", base64-encoded version of the PEM. You can embed it with:

$ kubectl config set-cluster $foo --certificate-authority=... --embed-certs=true
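For example, on the external server that could look something like the following (the cluster name and the path to ca.crt are assumptions on my side; the name has to match whatever you pass to --cluster in set-context below):

# creates/updates a cluster entry named "10.237.107.61:6443" with the CA embedded inline
$ kubectl config --kubeconfig=your_kubeconfig_path set-cluster 10.237.107.61:6443 --server=https://10.237.107.61:6443 --certificate-authority=ca.crt --embed-certs=true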

During deployment you also have to configure the cluster and user on the external server.

Execute the command below:

$ kubectl config --kubeconfig=your_kubeconfig_path set-context abc --cluster=10.237.107.61:6443 --user=developer --namespace=default
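The user entry itself also has to exist in that kubeconfig; it can be created from the certificate and key you copied over, for example (the file paths are assumptions):

# creates a "developer" user entry with the client certificate and key embedded inline
$ kubectl config --kubeconfig=your_kubeconfig_path set-credentials developer --client-certificate=developer.crt --client-key=developer.key --embed-certs=true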

A context ties together a namespace, a cluster name and a user - in your case the cluster IP and the developer user.

$ kubectl config use-context abc

When you enter a kubectl command, the action will apply to the cluster and namespace listed in the abc context, and the command will use the credentials of the user listed in that context.
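To double-check which context is active and which cluster/user it points to, you can run, for example:

# lists all contexts (the active one is marked with *) and prints the current one
$ kubectl config --kubeconfig=your_kubeconfig_path get-contexts
$ kubectl config --kubeconfig=your_kubeconfig_path current-context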

Finally, if this doesn't work on the external server, try copying only the developer user's related data from the kubeconfig file on the K8s master into the kubeconfig file on the external server, then run the command again.
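One way to do that merge, assuming the master's kubeconfig has been copied to the external server as, say, master-config (the filename is an assumption), is to let kubectl flatten both files into one:

# merges both files, with embedded certificates, into a single kubeconfig
$ KUBECONFIG=~/.kube/config:./master-config kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config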

If you run a local Kubernetes setup using Vagrant, you'll notice that the ~/.kube/config file gets set up automatically after the cluster comes up; you'll also find that the scripts which provision Kubernetes inside Vagrant use these same commands to set up your ~/.kube/config.

Please take a look: connecting-to-cluster, logging-into-cluster.

-- MaggieO
Source: StackOverflow