I have some VMs running on a private cloud (OpenStack). When I create a cluster on the master node, kubeadm initializes it on the node's private IP by default. When I tried to initialize the cluster on the master node's public IP instead, using the --apiserver-advertise-address=publicIP flag, it fails with an error.
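For reference, a failing invocation would look roughly like this (the floating IP 203.0.113.10 is a placeholder used for illustration, not an address from the original setup):

```shell
# Attempt to advertise the API server on the VM's floating (public) IP.
# As observed below with "ip addr", this address is not bound to any
# interface inside the VM, so the control-plane components cannot bind to it.
sudo kubeadm init --apiserver-advertise-address=203.0.113.10
```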
The initialization phase stalls at:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
I've noticed that I cannot see the VM's public IP from inside it (running "ip addr"), but the VMs are reachable via their public IPs.
Is there a way to set up a Kubernetes cluster on the nodes' public IPs at all?
Private IP addresses are used for communication between instances, while public addresses are used for communication with networks outside the cloud, including the Internet. It is therefore recommended to set up the cluster only on private addresses.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project.
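Assuming the OpenStackClient CLI is available, allocating a floating IP from that pool and attaching it to an instance looks roughly like this (the network name "public", the server name "k8s-master", and the IP are placeholders for illustration):

```shell
# Allocate a floating IP from the external network's pool
openstack floating ip create public

# Attach it to the instance (hypothetical server name k8s-master)
openstack server add floating ip k8s-master 203.0.113.10

# List allocated floating IPs to check usage against the project quota
openstack floating ip list
```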
This error is likely caused by the kubelet not running, or being unhealthy due to a misconfiguration.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
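A sketch of the standard checks, assuming the kubelet runs as the systemd unit named "kubelet" (the kubeadm default):

```shell
# Check whether the kubelet service is active and healthy
systemctl status kubelet

# Inspect the kubelet's recent log output for the actual failure reason
journalctl -xeu kubelet
```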
Try adding the floating IPs of the machines to the /etc/hosts file on the master node from which you want to deploy the cluster, then run the installation again.
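As an illustration, with hypothetical floating IPs and hostnames (none of these values come from the original setup), the /etc/hosts entries on the master could look like:

```
203.0.113.10  k8s-master
203.0.113.11  k8s-worker-1
203.0.113.12  k8s-worker-2
```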