I am using kubeadm-dind-cluster, a Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes, based on kubeadm and DIND (Docker in Docker).
I have a fresh CentOS 7 install on which I have just run ./dind-cluster-v1.13.sh up. I did not set any other values and am using all the default values for networking.
All appears well:
[root@node01 dind-cluster]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
kube-master   Ready    master   23h   v1.13.0
kube-node-1   Ready    <none>   23h   v1.13.0
kube-node-2   Ready    <none>   23h   v1.13.0
[root@node01 dind-cluster]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:32769
  name: dind
contexts:
- context:
    cluster: dind
    user: ""
  name: dind
current-context: dind
kind: Config
preferences: {}
users: []
[root@node01 dind-cluster]# kubectl cluster-info
Kubernetes master is running at http://127.0.0.1:32769
KubeDNS is running at http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node01 dind-cluster]#
and it appears healthy:
[root@node01 dind-cluster]# curl -w '\n' http://127.0.0.1:32769/healthz
ok
I know the dashboard service is there:
[root@node01 dind-cluster]# kubectl get services kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.102.82.8   <none>        80:31990/TCP   23h
However, any attempt to access it is refused:
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/ui
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
I also see the following in the firewall log:
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 127.0.0.1 --dport 32769 -j DNAT --to-destination 10.192.0.2:8080 ! -i br-669b654fc9cd' failed: iptables: No chain/target/match by that name.
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i br-669b654fc9cd -o br-669b654fc9cd -p tcp -d 10.192.0.2 --dport 8080 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 10.192.0.2 -d 10.192.0.2 --dport 8080 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Any suggestions on how I can actually access the dashboard externally from my development machine? I don't want to use the proxy to do this.
In that situation you'd indeed expect everything to work out of the box. However, the setup seems to be missing a suitable service account for accessing and managing the cluster through the dashboard.
Note that I might be entirely misled here, and maybe kubeadm-dind-cluster in fact provides such an account. Please note also that this project was discontinued some time ago.
Anyway, here is how I fixed that problem. Hopefully it's of some help to other people (still) trying that out...
Define the missing account and role binding: create a YAML file, e.g. k8s-dashboard-RBAC.yaml, with the following content:
# ------------------- Dashboard Secret ------------------- #
# ...already available
# ------------------- Dashboard Service Account ------------------- #
# ...already available
# ------------------- Dashboard Cluster Admin Account ------------------- #
#
# added by Ichthyo 2019-2
# - ServiceAccount and ClusterRoleBinding
# - allows administrative access into the namespace kube-system
# - necessary to log in via the Kubernetes Dashboard
#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dash-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dash-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dash-admin
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Apply it to the already running cluster:
kubectl apply -f k8s-dashboard-RBAC.yaml
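If you want to double-check that the objects were created (a quick sanity check, using the names defined in the YAML above):

kubectl -n kube-system get serviceaccount dash-admin
kubectl get clusterrolebinding dash-admin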
Then find the security token corresponding to dash-admin:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dash-admin | awk '{print $1}') | egrep '^token:\s+' | awk '{print $2}'
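Equivalently, the token can be extracted via jsonpath and decoded explicitly (a sketch; the secret name carries a generated suffix, hence the grep):

# Find the dash-admin token secret and decode its token field
SECRET=$(kubectl -n kube-system get secret -o name | grep dash-admin-token)
kubectl -n kube-system get "$SECRET" -o jsonpath='{.data.token}' | base64 --decode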
Finally, paste the extracted token into the login screen.
You should be able to access kubernetes-dashboard using the following addresses:
ClusterIP (works for other pods in the cluster):
http://10.102.82.8:80/
NodePort (works for every host that can reach the cluster nodes using their IPs):
http://clusterNodeIP:31990/
The Kubernetes dashboard usually uses the https protocol, so you may need to use different ports in requests to the kubernetes-dashboard Service for that.
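Keep in mind that with kubeadm-dind-cluster the "nodes" are Docker containers on your CentOS host, so their IPs are normally only reachable from that host itself. A sketch, assuming the project's default kube-node-1 container name:

# Look up the node container's IP on its Docker bridge network
NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kube-node-1)
# Hit the dashboard NodePort from the host
curl "http://${NODE_IP}:31990/"

To reach it from a separate development machine you would additionally need to forward or publish that port on the host.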
You can also access the dashboard using the kube-apiserver as a proxy:
Directly to the dashboard Pod:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/pods/https:kubernetes-dashboard-pod-name:/proxy/#!/login
To the dashboard ClusterIP Service:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
I would guess that <master-ip>:<apiserver-port> means 127.0.0.1:32769 in your case.
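Since your kubernetes-dashboard Service exposes plain HTTP on port 80 (see your kubectl get services output), the http: form of the proxy path may be the one that works; a sketch using the address from your kubectl config:

curl http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/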