I am following the document https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ to create a Kubernetes cluster with 3 Vagrant Ubuntu VMs on my local Mac. But even after "kubeadm join" completes successfully on the workers, running "kubectl get nodes" on the master shows only the master node. I have tried several possible fixes found online, but the issue remains the same.
Here is some information about my cluster:
3 Vagrant virtual machines (Ubuntu 16.04):
- (master) eth0: 10.0.2.15, eth1: 192.168.101.101 --> kubeadm init --ignore-preflight-errors Swap --apiserver-advertise-address=192.168.101.101
- (worker1) eth0: 10.0.2.15, eth1: 192.168.101.102 --> kubeadm join 192.168.101.101:6443 --token * --discovery-token-ca-cert-hash sha256:* --ignore-preflight-errors Swap
- (worker2) eth0: 10.0.2.15, eth1: 192.168.101.103 --> kubeadm join 192.168.101.101:6443 --token * --discovery-token-ca-cert-hash sha256:* --ignore-preflight-errors Swap
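For what it's worth, the wide output of kubectl also shows which internal IP each node registered with; with Vagrant's NAT setup, a node may report the shared address 10.0.2.15 instead of its eth1 address:

kubectl get nodes -o wide   # the INTERNAL-IP column shows the address each node registered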
Any ideas on this?
Regards, Jacky
Be sure that every node (masters and workers) has a unique hostname. After a few hours I realized that my master and the VMs cloned from it all had the same hostname, "master". After changing my worker nodes' hostnames to worker-node-01 and worker-node-02, everything worked perfectly.
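A minimal sketch of the rename on each worker, assuming Ubuntu 16.04 with systemd (the names are just the example ones above):

sudo hostnamectl set-hostname worker-node-01
# Also update /etc/hosts so the old name no longer points at this machine
# (cloned VMs typically carry over a "127.0.1.1 master" line).
sudo sed -i 's/\bmaster\b/worker-node-01/g' /etc/hosts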
Your problem is with the default route on the slave nodes; fix the routing table. Since every VM has the same eth0 address (10.0.2.15) and the default route points at Vagrant's NAT gateway, the nodes end up talking over the wrong interface instead of eth1.
I use a script like this to fix the routes after OS startup.
#!/bin/bash
# Delete the default route that points at VirtualBox's NAT gateway (10.0.2.2)
# so that traffic leaves through the host-only interface instead.
if ip route | grep -q '^default via 10.0.2.2 dev'; then
    ip route delete default via 10.0.2.2
fi
# Add a default route via eth1 if none exists yet. 192.168.15.1 is the
# gateway on my host-only network; adjust it to match yours.
if ! ip route | grep -Eq '^default .* eth1'; then
    ip route add default via 192.168.15.1
fi
exit 0
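The script itself is not persistent, so it has to be hooked into boot somehow. One option (the path and mechanism here are just an example; a systemd unit or /etc/rc.local would work equally well) is a cron entry:

# /etc/cron.d/fix-routes -- run the route fix once at every boot
@reboot root /usr/local/bin/fix-routes.sh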