"No resources found" from kubectl get for remote cluster

3/18/2019

I have kubectl configured for a multi-cluster setup: the local Kubernetes cluster that ships with Docker for Mac, and a remote cluster running Minikube. When I switch context to the remote cluster, kubectl can't find any resources such as pods or services. Where can I look at logs to find out more? I do see the resources if I run kubectl on the remote machine itself.
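
For reference, this is roughly the sequence where it goes wrong (the context name is the one from my config further down; reconstructed rather than pasted from my terminal):

$ kubectl config use-context remote-cluster
Switched to context "remote-cluster".
$ kubectl get pods
No resources found.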

When I execute kubectl version I get this:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

kubectl get componentstatus returns:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}

kubectl cluster-info returns:

Kubernetes master is running at https://remote-cluster-ip:8443
KubeDNS is running at https://remote-cluster-ip:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Executing kubectl cluster-info dump produces a ton of output.
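
In case it helps, the dump can be written to files instead of the terminal, which makes it easier to search (flags taken from kubectl cluster-info dump --help; I haven't dug through the output in detail yet):

kubectl cluster-info dump --all-namespaces --output-directory=./cluster-dump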

I followed these steps to get Minikube working and successfully deployed a sample app: https://kubernetes.io/docs/setup/minikube/

I followed these steps for multi-cluster config: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
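
Roughly, those steps boil down to the following commands with my names and paths substituted in (a sketch, not my exact shell history):

kubectl config set-cluster remote-cluster --server=https://remote-cluster-ip:8443 --insecure-skip-tls-verify=true
kubectl config set-credentials minikube --client-certificate=/path/to/local/client.crt --client-key=/path/to/local/client.key
kubectl config set-context remote-cluster --cluster=remote-cluster --namespace=remote-cluster --user=minikube
kubectl config use-context remote-cluster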

I copied all of the .crt and .key files from the remote machine's .minikube directory to my local machine for use in the config. Here is my redacted config:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    insecure-skip-tls-verify: true
    server: https://remote-cluster-ip:8443
  name: remote-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: remote-cluster
    namespace: remote-cluster
    user: minikube
  name: remote-cluster
current-context: remote-cluster
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: minikube
  user:
    client-certificate: /path/to/local/client.crt
    client-key: /path/to/local/client.key
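
kubectl config get-contexts renders that config like this (reconstructed from the file above rather than copied from a live session):

CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop
*         remote-cluster       remote-cluster               minikube             remote-cluster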
-- GabeV
kubectl
kubernetes
minikube

1 Answer

3/19/2019

Run kubectl get pods --all-namespaces to check whether your pods are visible in any namespace. If they show up there as running, then the context you are using points at a namespace that doesn't contain them, and you need to set a default namespace for your current context, for example:

kubectl config set-context <remote-context-name> --namespace=default
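
You can also confirm what the context is currently pointing at, and whether that namespace even exists on the remote cluster, before changing anything (names taken from the config in the question):

kubectl config view --minify --output 'jsonpath={..namespace}'   # namespace used by the current context
kubectl --context=remote-cluster get namespaces                  # namespaces that actually exist remotely
kubectl --context=remote-cluster get pods --namespace=default    # query one namespace explicitly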
-- A_Suh
Source: StackOverflow