This is similar to this question, but I could not find a resolution in it. I have set up a Kubernetes cluster on AWS with CoreOS (2 masters and 3 nodes) by following this step-by-step guide. The k8s version is 1.4.0 and all servers are in a private subnet, so I built a bastion VPN server in a different VPC and connect to the k8s cluster through the bastion server via VPC peering.
It basically works pretty well, but I noticed that I cannot access the Kubernetes dashboard from a web browser. These are my kubernetes-dashboard svc and rc YAML files.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubernetes-dashboard-v1.4.0
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.4.0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.4.0
        kubernetes.io/cluster-service: "true"
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
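For reference, this is how I deploy and verify the two resources above (the filename dashboard.yaml is just an assumption; the label value matches the selector in the manifests):

```shell
# Create the service and replication controller defined above,
# then locate the dashboard pod and its pod IP (-o wide shows the IP).
kubectl create -f dashboard.yaml
kubectl get pods --namespace=kube-system -l k8s-app=kubernetes-dashboard -o wide
```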
If I just access https://master-host/ui, it returns an authentication error. I understand that and see no problem with it, because the API server requires authentication. But when I run kubectl proxy --port=8001 and then access http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/, the browser returns

Error: 'dial tcp 10.10.93.3:9090: i/o timeout'
Trying to reach: 'http://10.10.93.3:9090/'

while a request to the API server itself works fine; for example, http://localhost:8001/static returns:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1alpha1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1alpha1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/ping",
"/logs",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
}
It looks like pods on the master cannot connect to pods on the nodes. From a busybox pod on a node,
kubectl exec busybox -- wget 10.10.93.3:9090
can fetch index.html, so node-to-node communication should be OK.
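To narrow down where the traffic is dropped, the same check can be repeated from a master host directly (a sketch, not verified on this cluster; the pod IP is taken from the error message above, and flannel.1 is the interface name flannel's vxlan backend creates by default):

```shell
# Run on a master host. If this times out while the same request from a
# worker node succeeds, master -> node overlay traffic is being dropped.
curl -m 5 http://10.10.93.3:9090/

# Inspect the flannel interface; for the vxlan backend, the detailed view
# shows the VXLAN id and the UDP port (8472 by default) used for encapsulation.
ip -d link show flannel.1
```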
The result of describing the service:
❯❯❯ kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.11.0.82
Port: <unset> 80/TCP
Endpoints: 10.10.93.9:9090
Session Affinity: None
No events.
What else am I missing? If I use a NodePort I can see the dashboard, but I don't want to expose it. I suspect that either some port is missing from my AWS security group settings, or some flanneld/docker/CNI network setting went wrong and is causing the issue.
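If the flannel configuration itself is the suspect, its network config can be read back from etcd (the key below is flannel's default; the exact etcd endpoint and key prefix depend on how the cluster was provisioned):

```shell
# Flannel stores its network config in etcd under this key by default.
# A "vxlan" backend type implies UDP-encapsulated traffic on port 8472.
etcdctl get /coreos.com/network/config
```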
This is the log of the dashboard pod.
Starting HTTP server on port 9090
Creating API server client for https://10.11.0.1:443
Successful initial request to the apiserver, version: v1.4.0+coreos.1
Creating in-cluster Heapster client
so it looks like nothing actually reached the dashboard.
[Updated] I found these logs in the api-server pod.
proxy.go:186] Error proxying data from backend to client: write tcp [master-ip-address]:443->[vpn-ip-address]:61980: write: connection timed out
So obviously something goes wrong when proxying between the API server and the VPN server.
Your service config seems to have a typo:
spec:
  selector:
    k8s-app: kubernetes-dashboar
You should be able to run kubectl describe svc kubernetes-dashboard --namespace=kube-system
and see a valid endpoint when things are working correctly.
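A mismatched selector also shows up directly in the endpoints object (same namespace and service name as above):

```shell
# If the selector matches no pods, the ENDPOINTS column shows <none>.
kubectl get endpoints kubernetes-dashboard --namespace=kube-system
```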
Ah... finally I noticed that there was a mistake in my AWS security group settings. I had opened TCP port 8472 for flanneld master => node communication, but it should be UDP. I knew it should be UDP, so it took a very long time until I re-checked the setting and noticed the mistake.
After I updated the setting, kubectl proxy
worked instantly and I can now see the Kubernetes dashboard.
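For anyone hitting the same thing, the fix amounts to allowing UDP 8472 between the cluster's instances. A sketch with the AWS CLI (the security group ID is a placeholder; --source-group restricts the rule to traffic from members of that same group):

```shell
# Replace sg-xxxxxxxx with the cluster's security group ID.
# flannel's vxlan backend uses UDP port 8472, not TCP.
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol udp \
    --port 8472 \
    --source-group sg-xxxxxxxx
```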