The connection to the server localhost:8080 was refused, even though the configuration does not point to localhost

9/9/2021

I am unable to connect to our Kubernetes cluster. The kubectl command does not seem to take the configuration into account...

When I issue kubectl cluster-info (or kubectl get pods), I get the following error message:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I suspected that ~/.kube/config was pointing to my minikube, but that is not the case:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS...==
    server: https://10.95.xx.yy:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    namespace: xxx-cluster-xxx-xx-username
    user: username
  name: username-context
current-context: ""
kind: Config
preferences: {}
users:
- name: username
  user:
    client-certificate: .certs/username.crt
    client-key: .certs/username.key

Surprisingly, the $KUBECONFIG environment variable is set to the correct path:

KUBECONFIG=/Users/username/.kube/config

and kubectl config view works fine (i.e. it is pointing to https://10.95.xx.yy:6443, not to localhost).
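
For completeness, one way to double-check which file kubectl actually loads and which context it considers current (a quick diagnostic sketch, using the paths from above):

# Sanity-check kubeconfig resolution and the active context.
echo "$KUBECONFIG"            # expected: /Users/username/.kube/config
kubectl config view --raw     # prints the file kubectl actually loaded, unredacted
kubectl config get-contexts   # the CURRENT column ('*') marks the active context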

Finally, I also tried specifying the path to the config file when invoking kubectl (kubectl get pods --kubeconfig=/Users/username/.kube/config), but the error remains the same...

-- E. Jaep
kubectl
kubernetes

1 Answer

9/10/2021

Your current context is unset, as seen from current-context: "". With no context selected, kubectl has no cluster to talk to and falls back to its built-in default of localhost:8080, which is exactly the error you are seeing. If you were to run kubectl --context username-context get pods, I would expect it to do more of what you want. If that turns out to be the case, you can run kubectl config use-context username-context to set the current-context going forward.
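
Put together, a minimal sequence (using the context name from your config) might look like this:

kubectl --context username-context get pods   # one-off override of the unset current context
kubectl config use-context username-context   # persist the context in ~/.kube/config
kubectl config current-context                # verify: should print username-context
kubectl get pods                              # now resolves the cluster without --context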

-- mdaniel
Source: StackOverflow