I have a Kubernetes cluster (1.14.0) in Vagrant and have installed Calico.
I have also installed the Kubernetes dashboard. When I use kubectl proxy to visit the dashboard (the exact commands are shown below the error), I get:
Error: 'dial tcp 192.168.1.4:8443: connect: connection refused'
Trying to reach: 'https://192.168.1.4:8443/'
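For reference, this is how I start the proxy and the URL I open, assuming the default dashboard service name in kube-system:

$ kubectl proxy
then open:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/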
Here are my pods (dashboard is restarting frequently):
$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-etcd-cj928                          1/1     Running   0          11m
calico-node-4fnb6                          1/1     Running   0          18m
calico-node-qjv7t                          1/1     Running   0          20m
calico-policy-controller-b9b6749c6-29c44   1/1     Running   1          11m
coredns-fb8b8dccf-jjbhk                    1/1     Running   0          20m
coredns-fb8b8dccf-jrc2l                    1/1     Running   0          20m
etcd-k8s-master                            1/1     Running   0          19m
kube-apiserver-k8s-master                  1/1     Running   0          19m
kube-controller-manager-k8s-master         1/1     Running   0          19m
kube-proxy-8mrrr                           1/1     Running   0          18m
kube-proxy-cdsr9                           1/1     Running   0          20m
kube-scheduler-k8s-master                  1/1     Running   0          19m
kubernetes-dashboard-5f7b999d65-nnztw      1/1     Running   3          2m11s
Logs of the dashboard pod:
2019/03/30 14:36:21 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
I can telnet to 10.96.0.1:443 from both the master and the nodes.
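For completeness, these are the checks I ran; 10.96.0.1 should be the ClusterIP of the kubernetes service, assuming the default service CIDR:

$ telnet 10.96.0.1 443
$ kubectl get svc kubernetes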
What is misconfigured? The rest of the cluster seems to work fine, although I see these logs in kubelet:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml"
kubelet seems to run fine on the master. The cluster was created with this command:
kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

For me the issue was that I needed to create a NetworkPolicy that allowed egress traffic to the Kubernetes API.
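A minimal sketch of such a policy, assuming the dashboard pod still carries the usual k8s-app: kubernetes-dashboard label and that the API is reached via the 10.96.0.1 service IP backed by the apiserver endpoint 192.168.50.10:6443 (adjust the selector and addresses to your cluster; the policy name is made up):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashboard-allow-apiserver-egress  # hypothetical name
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard       # assumed dashboard pod label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32                # the kubernetes service ClusterIP
    - ipBlock:
        cidr: 192.168.50.10/32            # apiserver endpoint; Calico evaluates policy after DNAT
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443
EOF

If you only want to confirm that a policy is what is blocking the pod, a single empty egress rule (- {}) allows all egress and is a quicker first test.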
Exclude the --node-name parameter from the kubeadm init command.
Try this command:

kubeadm init --apiserver-advertise-address=$(hostname -i) --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16

You should define your hostname in /etc/hosts:
# hostname
YOUR_HOSTNAME
# nano /etc/hosts
YOUR_IP YOUR_HOSTNAME
If you have set the hostname on your master but it still does not work, try:
# systemctl stop kubelet
# systemctl stop docker
# iptables --flush
# iptables -t nat --flush
# systemctl start kubelet
# systemctl start docker

Also, you should install the dashboard before joining the worker nodes.
You should also disable your firewall and check that you have enough free RAM; a sketch of both checks, assuming a systemd-based distro, is below.
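Use whichever firewall front end your box actually runs (ufw on Ubuntu, firewalld on CentOS), and free -h to see memory; kubeadm's preflight checks complain when memory is low:

# ufw disable
# systemctl stop firewalld
# systemctl disable firewalld
# free -h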