I am trying to set up Kubernetes (K8s) on VMs and have installed the master node successfully.
kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master   Ready    control-plane,master   14m   v1.20.1   192.168.5.13   <none>        Ubuntu 18.04.5 LTS   4.15.0-124-generic   docker://18.6.3
The worker node is also prepared: Docker, kubeadm, kubectl and kubelet are installed. The firewall is disabled and swap is turned off on both the master and the worker.
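(For reference, on Ubuntu 18.04 that amounts to roughly the following on each node:)

sudo swapoff -a                          # turn swap off immediately
sudo sed -i '/swap/ s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after a reboot
sudo ufw disable                         # Ubuntu's default firewall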
When I run the following command as root on the worker:
kubeadm join 192.168.5.13:6443 --token pk2b8n.i89ywir9vs7cqm7n --discovery-token-ca-cert-hash sha256:e9214f892d58196fa6608968f82965113e5dc1928c00d7cf066b52ae4d7037f0 --control-plane
I get the error below:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
To see the stack trace of this error execute with --v=5 or higher
I couldn't figure it out. Any help would be appreciated.
It took me a long time to figure this out, so I hope it helps someone! The culprit is the --control-plane flag: it tells kubeadm to join the node as an additional control-plane instance, which requires the CA certificates that control-plane nodes share, hence the complaint about the missing /etc/kubernetes/pki/ca.crt.
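If an extra control-plane node is really what you want, those certificates have to be uploaded and shared first. A rough sketch (with <token>, <hash> and <certificate-key> as placeholders, not the real values from above):

vagrant@master:~$ sudo kubeadm init phase upload-certs --upload-certs
root@new-node:/ kubeadm join 192.168.33.13:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>

The first command prints the certificate key that the second one needs. Here, however, the goal is an ordinary worker node, so just print a fresh join command on the master and run it without --control-plane: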
vagrant@master:~$ kubeadm token create --print-join-command
kubeadm join 192.168.33.13:6443 --token 7vz9ab.uf2o74um8sqfncv1 --discovery-token-ca-cert-hash sha256:e9214f892d58196fa6608968f82965113e5dc1928c00d7cf066b52ae4d7037f0
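Side note: bootstrap tokens created this way expire after 24 hours by default. If you want to check which tokens are still valid before reusing one, you can list them on the master:

vagrant@master:~$ kubeadm token list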
Now go to the worker node and run this command as root (note that there is no --control-plane flag this time):
root@worker:/ kubeadm join 192.168.33.13:6443 --token 7vz9ab.uf2o74um8sqfncv1 --discovery-token-ca-cert-hash sha256:e9214f892d58196fa6608968f82965113e5dc1928c00d7cf066b52ae4d7037f0
On the master, you can verify:
vagrant@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 32m v1.20.1
worker-1 NotReady <none> 10s v1.20.1
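One more side note: the pre-flight warning about the "cgroupfs" Docker cgroup driver can be cleared by switching Docker to the systemd driver on each node. A minimal sketch, assuming /etc/docker/daemon.json does not already contain other settings (the kubelet's cgroup driver has to match, so check that as well):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker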