Kubernetes External Addresses vs. Internal Addresses

11/15/2018

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?

I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
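For example, on the GCP cluster the output looks roughly like this (node names and addresses invented, trailing columns trimmed for width):

    $ kubectl get nodes -o wide
    NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
    node-1   Ready    master   30d   v1.12.2   10.128.0.2    35.224.10.20
    node-2   Ready    node     30d   v1.12.2   10.128.0.3    35.224.10.21

On the cluster with no cloud provider, the EXTERNAL-IP column shows <none> instead.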

kops deployed to GCP = the external address is the real public IP address of each node.

kubeadm deployed to VMware, using the VMware cloud provider = the external address is the same as the internal address (a private range).

kubeadm deployed, NO cloud provider = no external IP.

I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external IP". This only works with my first two clusters.

My tool runs on the local network of the clusters; should it be targeting the "internal IP" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
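For context, the address-selection logic I'm considering looks something like this. It is just a sketch: the API server URL is a placeholder, the token path is the standard in-cluster service-account path, and it prefers InternalIP (present on all three clusters) with ExternalIP as a fallback:

    import requests

    API_SERVER = "https://10.0.0.10:6443"  # placeholder; your API server address
    # standard in-cluster service-account token path
    TOKEN = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()

    def node_address(node, preferred=("InternalIP", "ExternalIP")):
        """Return the first address of a preferred type from a Node object."""
        addrs = {a["type"]: a["address"] for a in node["status"]["addresses"]}
        for t in preferred:
            if t in addrs:
                return addrs[t]
        return None

    resp = requests.get(
        API_SERVER + "/api/v1/nodes",
        headers={"Authorization": "Bearer " + TOKEN},
        verify=False,  # for illustration only; verify the cluster CA in real use
    )
    resp.raise_for_status()

    for node in resp.json()["items"]:
        print(node["metadata"]["name"], node_address(node))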

Thank you

-- jsirianni
kubernetes
networking

1 Answer

11/16/2018

Bare metal will not have an "external-IP" for the nodes; the "internal-IP" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.

When using k8s on bare metal, the external IP and load-balancer functions don't natively exist. If you want to expose an "external IP" (in quotes because in most cases it would still be a 10.x.x.x address) from your bare-metal cluster, you would need to install something like MetalLB.

https://github.com/google/metallb
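For reference, a minimal MetalLB layer 2 configuration from its README looks something like this (the address range here is just an example; you would pick unused IPs on your own LAN):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 10.0.0.240-10.0.0.250

With that in place, Services of type LoadBalancer get an address from the pool, which MetalLB announces on the local network.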

-- adam wilhelm
Source: StackOverflow