I have created a Kubernetes v1.3.3 cluster on CoreOS based on the contrib repo. My cluster appears healthy, and I would like to use the Dashboard, but I am unable to access the UI even when all authentication is disabled. Below are details of the kubernetes-dashboard components, as well as some API server config and output. What am I missing here?
Dashboard Components
core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP
core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 172.20.164.194
Port: <unset> 80/TCP
Endpoints: 172.16.100.9:9090
Session Affinity: None
No events.
core@ip-10-178-153-240 ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z
API Server config
/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws
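Note that none of these flags enable any authentication. For reference only, a hedged sketch of how Basic Auth could be switched on (this is not part of the config above; the file path and credentials are placeholders):

# Hypothetical addition, not in the original config: kube-apiserver reads
# Basic Auth users from a CSV file whose lines are password,username,uid.
echo 'changeme,admin,admin' > /etc/kubernetes/basic_auth.csv
# then append this flag to the kube-apiserver command line shown above:
#   --basic-auth-file=/etc/kubernetes/basic_auth.csv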
API Server is accessible from remote host (laptop)
$ curl http://10.178.153.240:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}
UI is not accessible remotely
$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'
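That timeout means the API server's built-in proxy cannot reach the pod IP over the overlay network. A quick way to confirm this (my own check, not part of the output above) is to curl the pod IP directly from the master:

# Run on the API server host. If this also times out, the master cannot reach
# the flannel overlay at all, which points at routing or firewall problems
# rather than at the Dashboard itself.
curl --connect-timeout 5 http://172.16.100.9:9090/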
UI is accessible from Minion Node
core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
<!doctype html> <html ng-app="kubernetesDashboard">...
API Server route tables
core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1
Minion (where pod lives) route table
core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1
Flannel Logs
It seems that this one route is misbehaving with Flannel. I'm getting these errors in the logs, but restarting the daemon does not seem to resolve them.
...Watch subnets: client: etcd cluster is unavailable or misconfigured
... L3 miss: 172.16.100.9
... calling NeighSet: 172.16.100.9
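The "etcd cluster is unavailable or misconfigured" message suggests flannel may not be able to read its state from etcd. A sanity check along these lines (my addition, assuming flannel's default /coreos.com/network prefix and an etcdctl v2 client on the node) lists the network config and the per-node subnet leases:

# Inspect flannel's network configuration and per-node subnet leases in etcd.
# Every node should hold a lease under /coreos.com/network/subnets.
etcdctl --endpoints=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 get /coreos.com/network/config
etcdctl --endpoints=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 ls /coreos.com/network/subnets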
If you add another Service like the one defined below, you should be able to reach the Dashboard using any node IP and the NodePort, which in this example is 30100 (note that the selector has to match the Dashboard pod's k8s-app label shown above):
kind: Service
apiVersion: v1
metadata:
  name: kube-expose-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30100
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
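To try it out, something like the following should work (the manifest filename is arbitrary; 10.178.153.57 is the minion IP from the question):

# Create the Service, then hit the NodePort on any node's IP.
kubectl create -f kube-expose-dashboard.yaml
curl http://10.178.153.57:30100/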
For anyone who finds their way to this question, I wanted to post the final resolution: it was not a Flannel, Kubernetes, or SkyDNS issue, it was an inadvertent firewall restriction. As soon as I opened up the firewall on the API server, my Flannel routes were fully functional and I could access the Dashboard (assuming basic auth was enabled on the API Server).
So in the end, user error :)
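If you hit the same symptom on AWS, a likely culprit is a security group blocking flannel's overlay traffic between nodes; with the vxlan backend (the flannel.1 device in the route tables above) that is typically UDP 8472. A sketch of the rule, with placeholder security-group IDs:

# Hypothetical AWS security group IDs -- substitute your cluster's own.
# Flannel's vxlan backend usually uses UDP 8472, so that port must be open
# between the master and the minions in both directions.
aws ec2 authorize-security-group-ingress \
  --group-id sg-master0000 \
  --protocol udp --port 8472 \
  --source-group sg-minion0000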
Either you have to expose your service outside of the cluster using a Service of type NodePort, as described in the previous answer, or, if you enabled Basic Auth on your API Server, you can reach your service using the following URL:
http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name
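For example, for the cluster in the question (credentials are placeholders; the master's secure port is 443 per the config above, and -k skips verification of a self-signed cert):

# Through the API server proxy with Basic Auth enabled:
curl -k -u admin:changeme https://10.178.153.240/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

# Or tunnel through kubectl from a workstation and browse http://localhost:8001/ui
kubectl proxy --port=8001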