Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

4/10/2019

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to inspect them. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to target different clusters by passing connection parameters explicitly. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (plus --insecure-skip-tls-verify to disable TLS verification).
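
To illustrate, the wrapped invocation ends up looking roughly like this (same placeholder values as the commands below):

kubectl --server=https://host:port --namespace=abc-def-ghi --token=mytoken --insecure-skip-tls-verify=true get pods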

This all works fine. I have no trouble with this.

However, I'm now trying to use telepresence, which doesn't (yet) offer a way to configure this without contexts. So, I now have to figure out how to use contexts.

I ran the following (approximate) command:

kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true  --token=mytoken

And it said: Context "mycontext" modified.

I then ran "kubectl config view -o json" and got this:

{
    "kind": "Config",
    "apiVersion": "v1",
    "preferences": {},
    "clusters": [],
    "users": [],
    "contexts": [
        {
            "name": "mycontext",
            "context": {
                "cluster": "",
                "user": "",
                "namespace": "abc-def-ghi"
            }
        }
    ],
    "current-context": "mycontext"
}

That doesn't look right to me.

I then ran something like this:

telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n

And it said this:

T: Error: Namespace 'abc-def-ghi' does not exist

Update:

And I can confirm that this isn't a problem with telepresence itself. If I just run "kubectl get pods", it fails with "The connection to the server localhost:8080 was refused", presumably because, with no usable cluster in the context, kubectl falls back to its default of localhost:8080. So it obviously can't connect to the real k8s server. The problem is my "set-context" command: it clearly isn't doing what I expect, and I don't understand what I'm missing.
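
A quick way to see the same thing:

kubectl config get-contexts

The CLUSTER and AUTHINFO columns for mycontext come up empty, matching the empty "cluster" and "user" fields in the JSON above.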

-- David M. Karr
kubernetes

1 Answer

4/10/2019

You don't have any clusters or credentials defined in your configuration (note the empty "clusters": [] and "users": [] in your output). kubectl config set-context only records --cluster, --user, and --namespace into a context; flags like --server and --token are global connection overrides and are never saved to the kubeconfig. First, you need to define a cluster:

$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
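
If you're skipping TLS verification instead of using a CA file (as in your original command), the cluster entry would look something like:

$ kubectl config set-cluster development --server=https://host:port --insecure-skip-tls-verify=true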

Then something like this for the user:

$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
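
Since you authenticate with a bearer token rather than client certificates, the closer equivalent for your case would be something like:

$ kubectl config set-credentials developer --token=mytoken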

Then you define your context based on your cluster, user and namespace:

$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
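
If the new context isn't already the current one, switch to it:

$ kubectl config use-context dev-frontend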

More information in the Kubernetes docs: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

Your config should look something like this:

$ kubectl config view -o json
{
    "kind": "Config",
    "apiVersion": "v1",
    "preferences": {},
    "clusters": [
        {
            "name": "development",
            "cluster": {
                "server": "https://1.2.3.4",
                "certificate-authority-data": "DATA+OMITTED"
            }
        }
    ],
    "users": [
        {
            "name": "developer",
            "user": {
                    "client-certificate": "fake-cert-file",
                    "client-key": "fake-key-seefile"
            }
        }
    ],
    "contexts": [
        {
            "name": "dev-frontend",
            "context": {
                "cluster": "development",
                "user": "developer",
                "namespace": "frontend"
            }
        }
    ],
    "current-context": "dev-frontend"
}
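
Once that's in place, kubectl should talk to the configured server instead of falling back to localhost:8080:

$ kubectl get pods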
-- Rico
Source: StackOverflow