I deployed a Kubernetes cluster with Ansible and Kubespray.
No task finished with a "failed" status on my cluster, but the Dashboard pod keeps erroring.
When I then try to open the UI, I can't get access to the Kubernetes Dashboard:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
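For reference, the way I normally reach the UI is through kubectl proxy with a service-account token, adapted from the Dashboard FAQ (the dashboard-admin name below is just a placeholder I chose):

# create a service account and give it cluster-admin (placeholder name)
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# print the token from the auto-generated secret for that service account
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')

# proxy the apiserver locally, then open the Dashboard through it:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
kubectl proxy

But the Dashboard pod itself is crashing (see below), so the 403 is probably not the root problem.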
Network interfaces on the master (ifconfig output):
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.2.67.201  netmask 255.0.0.0  broadcast 10.255.255.255
        ether 00:50:56:9c:0e:b0  txqueuelen 1000  (Ethernet)
        RX packets 7564605  bytes 6551785783 (6.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2925952  bytes 4385152422 (4.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 51783307  bytes 11498086915 (10.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51783307  bytes 11498086915 (10.7 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.233.70.0  netmask 255.255.255.255
        tunnel  txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Inventory file:
master ansible_host=10.2.67.201 ansible_user=root
worker1 ansible_host=10.2.67.203 ansible_user=root
worker2 ansible_host=10.2.67.205 ansible_user=root
worker3 ansible_host=10.2.67.206 ansible_user=root

#[all:vars]
#ansible_python_interpreter=/usr/bin/python3

[kube-master]
master

[kube-node]
worker1
worker2
worker3

[etcd]
master

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr
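The deploy itself was the standard Kubespray playbook run, roughly like this (the inventory path is just where my file lives and may differ):

# from the kubespray repo root
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml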
kubectl get pods -n kube-system:
NAME                                       READY   STATUS              RESTARTS   AGE
calico-kube-controllers-6d57b44787-xlj89   1/1     Running             13         5d23h
calico-node-dwm47                          0/1     CrashLoopBackOff    74         5h33m
calico-node-hhgzk                          1/1     Running             12         5d23h
calico-node-tk4mp                          0/1     CrashLoopBackOff    75         5h33m
calico-node-w7zvs                          0/1     CrashLoopBackOff    75         5h33m
coredns-74c9d4d795-xpbsd                   0/1     ContainerCreating   0          5h32m
dns-autoscaler-7d95989447-7kqsn            1/1     Running             7          5d23h
kube-apiserver-master                      1/1     Running             0          5d23h
kube-controller-manager-master             1/1     Running             0          5d23h
kube-proxy-9bt8m                           1/1     Running             0          5h33m
kube-proxy-cbrcl                           1/1     Running             0          5h33m
kube-proxy-stj5g                           1/1     Running             0          5h33m
kube-proxy-zql86                           1/1     Running             0          5h33m
kube-scheduler-master                      1/1     Running             0          5d23h
kubernetes-dashboard-7c547b4c64-6skc7      0/1     CrashLoopBackOff    367        5d23h
nginx-proxy-worker1                        1/1     Running             0          5h33m
nginx-proxy-worker2                        1/1     Running             0          5h33m
nginx-proxy-worker3                        1/1     Running             0          5h33m
nodelocaldns-6t92x                         1/1     Running             0          5h33m
nodelocaldns-kgm4t                         1/1     Running             0          5h33m
nodelocaldns-xl8zg                         1/1     Running             0          5h33m
nodelocaldns-xwlwk                         1/1     Running             9          5d23h
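To get more detail on the broken pods I start with node state and pod events (plain kubectl, nothing cluster-specific):

# node readiness and the addresses each node reports
kubectl get nodes -o wide

# events usually name the reason behind CrashLoopBackOff
kubectl -n kube-system describe pod calico-node-dwm47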
Logs from the failing pods. The Dashboard first:
kubectl logs kubernetes-dashboard-7c547b4c64-6skc7 --namespace=kube-system
2019/09/18 14:45:09 Starting overwatch
2019/09/18 14:45:09 Using in-cluster config to connect to apiserver
2019/09/18 14:45:09 Using service account token for csrf signing
2019/09/18 14:45:10 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.233.0.1:443/version: dial tcp 10.233.0.1:443: connect: no route to host
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
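As far as I understand, 10.233.0.1 is the ClusterIP of the kubernetes service (Kubespray's default service network is 10.233.0.0/18), so "no route to host" means the node cannot reach the service network at all. A quick check from the node that runs the Dashboard pod, assuming curl is available there (6443 should be the default apiserver port in Kubespray):

# via the service IP, which is what the pod tries
curl -k https://10.233.0.1:443/version

# directly via the master, bypassing the service network
curl -k https://10.2.67.201:6443/version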
So I can't open the Dashboard.
Logs from the other failing pods (calico-node-dwm47, calico-node-tk4mp, calico-node-w7zvs):
2019-09-19 06:48:26.023 [INFO][8] startup.go 256: Early log level set to info
2019-09-19 06:48:26.024 [INFO][8] startup.go 272: Using NODENAME environment for node name
2019-09-19 06:48:26.024 [INFO][8] startup.go 284: Determined node name: worker3
Calico node failed to start
ERROR: Error accessing the Calico datastore: dial tcp 10.2.67.201:2379: connect: no route to host
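10.2.67.201:2379 is etcd on the master, so the workers cannot reach it either; "no route to host" usually points at a host firewall rather than at Calico itself. To check from a worker (nc is from the nmap-ncat package; the firewalld commands only apply if the hosts actually run firewalld):

# from a worker: is the etcd client port on the master reachable?
nc -vz 10.2.67.201 2379

# on the master: is a host firewall dropping the traffic?
systemctl status firewalld
firewall-cmd --list-all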
CoreDNS:
kubectl logs coredns-74c9d4d795-xpbsd --namespace=kube-system
Error from server (BadRequest): container "coredns" in pod "coredns-74c9d4d795-xpbsd" is waiting to start: ContainerCreating
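Since that container never starts, there are no logs to read, so the pod events are the only source of information here:

# ContainerCreating usually has its reason in the Events section
kubectl -n kube-system describe pod coredns-74c9d4d795-xpbsd
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp

What could be blocking the traffic between my nodes, and how do I get Calico, CoreDNS, and the Dashboard healthy again?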