Unable to access Kubernetes dashboard from outside the cluster

7/19/2017

I have set up a Kubernetes cluster comprising a master and three nodes. I used the following for the setup:
1. kubeadm (1.7.1)
2. kubectl (1.7.1)
3. kubelet (1.7.1)
4. weave (weave-kube-1.6)
5. docker (17.06.0~ce-0~debian)

All four instances have been set up in Google Cloud, and the OS is Debian GNU/Linux 9 (stretch).
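For reference, the setup followed the usual kubeadm flow, roughly the sketch below (the join token is a placeholder, and the Weave manifest URL is the one documented for weave-kube-1.6):

# On the master (10.128.0.2 is its internal IP):
sudo kubeadm init --apiserver-advertise-address=10.128.0.2

# Make kubectl usable for the regular user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Weave Net pod network add-on
kubectl apply -f https://git.io/weave-kube-1.6

# On each of the three nodes, using the token printed by kubeadm init:
sudo kubeadm join --token <token> 10.128.0.2:6443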

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          19m
kube-system   kube-apiserver-master            1/1       Running   0          19m
kube-system   kube-controller-manager-master   1/1       Running   0          19m
kube-system   kube-dns-2425271678-cq9wh        3/3       Running   0          24m
kube-system   kube-proxy-q399p                 1/1       Running   0          24m
kube-system   kube-scheduler-master            1/1       Running   0          19m
kube-system   weave-net-m4bgj                  2/2       Running   0          4m


$ kubectl get nodes
NAME      STATUS     AGE       VERSION
master    Ready      1h        v1.7.1
node1     Ready      6m        v1.7.1
node2     Ready      5m        v1.7.1
node3     Ready      7m        v1.7.1

The apiserver process is running with the following parameters:

root      1148  1101  1 04:38 ?  00:03:38 kube-apiserver 
--experimental-bootstrap-token-auth=true --allow-privileged=true 
--secure-port=6443
--insecure-port=0 --service-cluster-ip-range=10.96.0.0/12 
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname 
--requestheader-username-headers=X-Remote-User 
--authorization-mode=Node,RBAC --advertise-address=10.128.0.2 
--etcd-servers=http://127.0.0.1:2379

I ran the following command to access the dashboard:

$ kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created

But since the dashboard was not accessible, I also tried the following command, even though it didn't look quite relevant; I saw it suggested somewhere:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
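The resulting binding can be inspected afterwards with:

$ kubectl get clusterrolebinding add-on-cluster-admin -o yaml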

Finally, I came across a link that looked relevant to my issue. I tried it, but I am getting the following error:

d:\Work>kubectl --kubeconfig=d:\Work\admin.conf proxy -p 80
Starting to serve on 127.0.0.1:80
I0719 13:37:13.971200    5680 logs.go:41] http: proxy error: context canceled
I0719 13:37:15.893200    5680 logs.go:41] http: proxy error: dial tcp 124.179.54.120:6443: connectex: No connection could be made because the target machine actively refused it.

If I telnet to the master IP (124.179.54.120) from my laptop on port 22, it works, but it doesn't work on port 6443. Port 6443 is open on the master, as I am able to nc to it on that port from my node machine, as shown below:

tom@node1:~$ nc -zv 10.128.0.2 6443
master.c.kubernetes-174104.internal [10.128.0.2] 6443 (?) open
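For comparison, the equivalent check from my laptop against the master's external IP mirrors the telnet result above:

$ nc -zv 124.179.54.120 22     # open
$ nc -zv 124.179.54.120 6443   # connection refused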

On my laptop, the firewall is already disabled, and I also disabled the firewall on the master:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination

In the Google Cloud console, I added a firewall rule allowing ingress on TCP and UDP port 6443, but I am still unable to access the dashboard using http://localhost/ui.
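For completeness, the rule I added corresponds roughly to this gcloud command (the rule name and network here are placeholders, not necessarily my exact ones):

$ gcloud compute firewall-rules create allow-apiserver-6443 \
    --network=default \
    --allow=tcp:6443,udp:6443 \
    --source-ranges=0.0.0.0/0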

Master config details: (screenshot)

Firewall config details: (screenshot)

UPDATE: Content of d:\Work\admin.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA_cert>
    server: https://124.179.54.120:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <client-cert>
    client-key-data: <client-key>
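
To rule out the proxy itself, the same kubeconfig can also be pointed directly at the API server; if this fails with the same connection-refused error, the problem is reachability of 124.179.54.120:6443 rather than anything dashboard-specific:

d:\Work>kubectl --kubeconfig=d:\Work\admin.conf get nodes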

UPDATE 1: From one of the three nodes, I ran the following command:

tom@node1:~$ curl -v http://127.0.0.1:8001
* Rebuilt URL to: http://127.0.0.1:8001/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8001
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Date: Thu, 20 Jul 2017 06:57:48 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
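
If I understand correctly, a 502 here means the proxy itself is up but cannot reach its backend, so one thing worth checking is whether the dashboard service has any live endpoints (run from a machine where kubectl is configured):

$ kubectl get endpoints --all-namespaces | grep dashboard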
-- Technext
google-cloud-platform
kubeadm
kubernetes

1 Answer

8/6/2017

By default, kubectl proxy only accepts incoming connections from localhost, i.e. the IPv4 and IPv6 loopback addresses.
Try setting --accept-hosts='.*' when running the proxy, so that it starts accepting connections from any address.
You might also need to set the --address flag to a public IP, because its default value is 127.0.0.1.
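
For example, with the kubeconfig and port from the question (quoting adjusted for the Windows shell; --address=0.0.0.0 binds to all interfaces):

d:\Work>kubectl --kubeconfig=d:\Work\admin.conf proxy --address=0.0.0.0 --accept-hosts=".*" -p 80

Note that this exposes the proxy to other machines without authentication, so it is only appropriate for testing.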

More details are in the kubectl proxy docs.

-- Toresan
Source: StackOverflow