I'm playing around with Kubernetes on VirtualBox. I have created 2 VMs: one is the Master, the other one is the Worker. The Worker is a clone of the base installation of the Master. I guess that's the root cause of the problem; maybe there's some config left over that causes conflicts.
When I try to join the Worker with ...
sudo kubeadm join 192.168.56.101:6443 --token ... --discovery-token-ca-cert-hash ...
I get the following error ...
error execution phase kubelet-start: a Node with name "test-virtualbox" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
I have tried to reset the config with ...
sudo kubeadm reset
But after running the join command I get the same error again.
I also tried to delete the node "test-virtualbox" by running ...
sudo kubectl delete node test-virtualbox
But this results in the error ...
The connection to the server localhost:8080 was refused
Just run "sudo kubectl delete node test-virtualbox" on the Master instead.
You mention several separate problems, so let me go through them one by one.
a Node with name "test-virtualbox" and status "Ready" already exists in the cluster.
kubeadm uses the hostname as the node name by default. Since you cloned the Worker from the Master, both machines have the same hostname, which is exactly what the error is telling you. The solution could be:
1. Give the Worker a new hostname with hostnamectl or some other tool.
2. Use the --node-name flag when joining the Worker (see the sketch after this list).
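For example, a rough sketch of both options; the hostname k8s-worker-1 is just a placeholder I made up, and the token / hash values come from your own join command:
# Option 1: give the Worker its own hostname (k8s-worker-1 is an example name)
sudo hostnamectl set-hostname k8s-worker-1
sudo kubeadm reset   # clean up whatever the failed join attempt left behind
sudo kubeadm join 192.168.56.101:6443 --token ... --discovery-token-ca-cert-hash ...
# Option 2: keep the hostname, but register the node under a different name
sudo kubeadm join 192.168.56.101:6443 --token ... --discovery-token-ca-cert-hash ... --node-name k8s-worker-1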
sudo kubeadm reset
kubeadm reset is used to reset a node that is already part of a cluster. Your Worker has not yet joined, so running it at this point does nothing.
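For completeness, here is roughly where kubeadm reset does belong: removing a Worker that has already joined successfully. A sketch, assuming the node name test-virtualbox:
# on the Master: evict the workload and remove the node object
kubectl drain test-virtualbox --ignore-daemonsets
kubectl delete node test-virtualbox
# on the Worker being removed: wipe the local kubeadm/kubelet state
sudo kubeadm reset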
sudo kubectl delete node test-virtualbox
This was executed on the Worker, right? The Worker has not yet joined the cluster, so there is no kubeconfig in ~/.kube/config. Without one, kubectl falls back to localhost:8080 as the server address, cannot reach the target API server, and you get that connection-refused error.
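So run the delete on the Master, where the admin kubeconfig exists. A minimal sketch:
# on the Master: use the regular user's kubeconfig in ~/.kube/config
kubectl delete node test-virtualbox
# or point kubectl at the admin kubeconfig that kubeadm init wrote
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf delete node test-virtualbox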