Unable to setup multiple node kubernetes cluster using kubeadm (Vagrant)

5/27/2019

I have been setting up a multi-node Kubernetes cluster using kubeadm. The setup consists of one master node and one worker node, both created as VMs using Vagrant.

I followed the docs: https://kubernetes.io/docs/setup/independent/install-kubeadm/ and https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm

Created 2 VMs using Vagrant.
IPs: master 192.168.33.10, worker 192.168.1.21 (both on a host-only network).
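
For context, one common Vagrant/VirtualBox pitfall is that the first (NAT) interface gets the same 10.0.2.15 address on every VM, so kubelet may advertise that instead of the host-only IP. A quick way to check on each VM (a hedged sketch; interface names and the file path vary by box and distribution, and <host-only-ip> is a placeholder):

    # Confirm which interface carries the host-only IP (192.168.33.10 / 192.168.1.21 here);
    # Vagrant's NAT interface usually holds 10.0.2.15 on every VM
    ip addr show

    # If kubelet picked the NAT address, it can be pinned to the host-only one
    # (/etc/default/kubelet is the Debian/Ubuntu default location for kubeadm installs)
    echo 'KUBELET_EXTRA_ARGS=--node-ip=<host-only-ip>' | sudo tee /etc/default/kubelet
    sudo systemctl restart kubelet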

I have run into two scenarios:

Case 1:

  1. Ran kubeadm init --pod-network-cidr=10.244.0.0/16 successfully with all pods running.

  2. Installed the "Canal" pod network add-on.

  3. Followed all the instructions printed at the end of the successful kubeadm init run.

  4. SSHed into the second VM, ran the kubeadm join .. command, and got stuck at "[preflight] Running pre-flight checks" (a sketch of these steps follows below this list).
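
For reference, steps 2–4 above look roughly like the following (a hedged sketch: the Canal manifest URL, token, and hash are placeholders rather than values from the original post, and the join endpoint is whatever the init output printed):

    # Step 3: commands printed at the end of a successful kubeadm init
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Step 2: install the Canal add-on (use the manifest URL from the Canal docs
    # that matches your Kubernetes version)
    kubectl apply -f <canal-manifest-url>

    # Step 4: on the worker VM, join with the token and hash from the init output;
    # --v=5 prints verbose output, which helps pinpoint where the preflight phase hangs
    sudo kubeadm join 192.168.33.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> --v=5

If the join hangs at pre-flight, it is also worth checking from the worker that the API server port is reachable at all, e.g. nc -vz 192.168.33.10 6443.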

Case 2:

  1. Repeated the same process, this time with the flag --apiserver-advertise-address=192.168.33.10.

  2. Successfully ran the command kubeadm init --apiserver-advertise-address=192.168.33.10

  3. But when I ran kubectl get nodes, only the master node was listed (I expected the worker node to show up as well; see the sketch after this list).
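
For what it is worth, the flags from the two cases can be combined in a single init, and the master can print a fresh join command at any time (a hedged sketch, not commands taken from the original post):

    # Advertise the host-only IP and set the pod CIDR the network add-on expects
    sudo kubeadm init --apiserver-advertise-address=192.168.33.10 \
        --pod-network-cidr=10.244.0.0/16

    # On the master: print a ready-to-use join command with a fresh token
    kubeadm token create --print-join-command

    # On the master: check whether the worker has registered
    kubectl get nodes -o wide

A worker only shows up in kubectl get nodes after kubeadm join has completed successfully on it, so the join step from Case 1 still has to go through.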

Kindly help me understand how I can complete this setup. Thank you.

-- Prankul
kubeadm
kubernetes

1 Answer

5/29/2019

I have a GitHub repository that does exactly what you want. I am pretty sure you will get the idea from it. If anything is not clear, please follow up with a comment or update the original post.

-- coolinuxoid
Source: StackOverflow