Exec command into a Pod using Kubernetes "client-go"

12/13/2019

I'm trying to exec a command into a pod, but I keep getting the error: unable to upgrade connection: Forbidden.

I'm testing my code in development through kubectl proxy, which works for all other operations, such as creating or deleting a deployment; however, it's not working for executing a command. I read that I need access to the pods/exec resource, so I created a service account with the following roles:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-view-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-exec-view-role
rules:
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["get","create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-svc-account
  namespace: default
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-view-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-exec-svc-account
  namespace: default
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-exec-view-role
  apiGroup: rbac.authorization.k8s.io

then I retrieve the bearer token for the service account and try to use it in my code

func getK8sConfig() *rest.Config {
    // creates the in-cluster config
    var config *rest.Config
    fmt.Println(os.Getenv("DEVELOPMENT"))
    if os.Getenv("DEVELOPMENT") != "" {
        // when doing local development, reach the k8s API via `kubectl proxy`
        fmt.Println("DEVELOPMENT")
        config = &rest.Config{
            Host:            "http://localhost:8001",
            TLSClientConfig: rest.TLSClientConfig{Insecure: true},
            APIPath:         "/",
            BearerToken:     "eyJhbGciOiJSUzI1NiIsImtpZCI6InFETTJ6R21jMS1NRVpTOER0SnUwdVg1Q05XeDZLV2NKVTdMUnlsZWtUa28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRldi1zYS10b2tlbi14eGxuaiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZXYtc2EiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmZDVhMzRjNy0wZTkwLTQxNTctYmY0Zi02Yjg4MzIwYWIzMDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZXYtc2EifQ.woZ6Bmkkw-BMV-_UX0Y-S_Lkb6H9zqKZX2aNhyy7valbYIZfIzrDqJYWV9q2SwCP20jBfdsDS40nDcMnHJPE5jZHkTajAV6eAnoq4EspRqORtLGFnVV-JR-okxtvhhQpsw5MdZacJk36ED6Hg8If5uTOF7VF5r70dP7WYBMFiZ3HSlJBnbu7QoTKFmbJ1MafsTQ2RBA37IJPkqi3OHvPadTux6UdMI8LlY7bLkZkaryYR36kwIzSqsYgsnefmm4eZkZzpCeyS9scm9lPjeyQTyCAhftlxfw8m_fsV0EDhmybZCjgJi4R49leJYkHdpnCSkubj87kJAbGMwvLhMhFFQ",
        }
    } else {
        var err error
        config, err = rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }

    }

    return config
}

Then I try to run the OpenShift example to exec into a pod

    // Run everything in the default namespace.
    namespace := "default"

    // Get a rest.Config from the kubeconfig file.  This will be passed into all
    // the client objects we create.
    restconfig := getK8sConfig()

    // Create a Kubernetes core/v1 client.
    coreclient, err := corev1client.NewForConfig(restconfig)
    if err != nil {
        panic(err)
    }

    // Create a busybox Pod.  By running `cat`, the Pod will sit and do nothing.
    var zero int64
    pod, err := coreclient.Pods(namespace).Create(&corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name: "busybox",
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {
                    Name:    "busybox",
                    Image:   "busybox",
                    Command: []string{"cat"},
                    Stdin:   true,
                },
            },
            TerminationGracePeriodSeconds: &zero,
        },
    })
    if err != nil {
        panic(err)
    }

    // Delete the Pod before we exit.
    defer coreclient.Pods(namespace).Delete(pod.Name, &metav1.DeleteOptions{})

    // Wait for the Pod to indicate Ready == True.
    watcher, err := coreclient.Pods(namespace).Watch(
        metav1.SingleObject(pod.ObjectMeta),
    )
    if err != nil {
        panic(err)
    }

    for event := range watcher.ResultChan() {
        switch event.Type {
        case watch.Modified:
            pod = event.Object.(*corev1.Pod)

            // If the Pod contains a status condition Ready == True, stop
            // watching.
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady &&
                    cond.Status == corev1.ConditionTrue {
                    watcher.Stop()
                }
            }

        default:
            panic("unexpected event type " + event.Type)
        }
    }

    // Prepare the API URL used to execute another process within the Pod.  In
    // this case, we'll run a remote shell.
    req := coreclient.RESTClient().
        Post().
        Namespace(pod.Namespace).
        Resource("pods").
        Name(pod.Name).
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: pod.Spec.Containers[0].Name,
            Command:   []string{"date"},
            Stdin:     true,
            Stdout:    true,
            Stderr:    true,
            TTY:       true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL())
    if err != nil {
        panic(err)
    }

    // Connect this process' std{in,out,err} to the remote shell process.
    err = exec.Stream(remotecommand.StreamOptions{
        Stdin:  os.Stdin,
        Stdout: os.Stdout,
        Stderr: os.Stderr,
        Tty:    true,
    })
    if err != nil {
        panic(err)
    }

    fmt.Println("done")

So it seems like the bearer token is being ignored and instead I'm getting the privileges of the kubectl admin user.

How can I force the rest client to use the provided bearer token? Is this the right way to exec a command into a pod?

-- perrohunter
kubernetes

1 Answer

12/13/2019

You are getting the privileges of the kubectl admin because you are connecting through the localhost endpoint exposed by kubectl proxy. The proxy authenticates to the API server with the credentials from your kubeconfig, so requests going through it carry your admin identity rather than the bearer token you set in the client.

I have replicated this and I have come up with this solution:

What you want to do is connect directly to the API server. To retrieve the API server address, use this command:

$ kubectl cluster-info

Then replace the localhost address in your config with the API server address:

...
        config = &rest.Config{
            Host:            "<APIserverIP:port>",
            TLSClientConfig: rest.TLSClientConfig{Insecure: true},

...
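Putting it together, the development branch of your getK8sConfig function might look like the sketch below. The host and token are placeholders you'd substitute with the address from kubectl cluster-info and the dev-sa token; this requires the client-go library and a reachable cluster, so treat it as a template rather than a drop-in:

```go
package main

import (
	"os"

	"k8s.io/client-go/rest"
)

// getK8sConfig returns a rest.Config that talks to the API server
// directly, so the bearer token is actually used for authentication
// instead of being discarded by kubectl proxy.
func getK8sConfig() *rest.Config {
	if os.Getenv("DEVELOPMENT") != "" {
		return &rest.Config{
			// Address reported by `kubectl cluster-info` (placeholder).
			Host: "https://<APIserverIP:port>",
			// Insecure skips certificate verification; acceptable for
			// local development only. For anything else, set CAFile or
			// CAData instead.
			TLSClientConfig: rest.TLSClientConfig{Insecure: true},
			// Token of the dev-sa service account (placeholder).
			BearerToken: "<service-account-token>",
		}
	}
	// In-cluster path is unchanged.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	return config
}
```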

Your code also creates and deletes a Pod, so you need to add the create and delete verbs to your Role as well:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-view-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "watch"]

Let me know if that was helpful.

-- acid_fuji
Source: StackOverflow