Why are the internal IP addresses of my nodes set to their external IPs?

9/19/2019

I recently set up a Kubernetes cluster on DigitalOcean. I manually provisioned 3 machines and created the cluster using kubeadm with the Calico network plugin.

I passed the argument --apiserver-advertise-address=10.135.184.137 to kubeadm init to make sure the nodes use the internal IP to communicate with each other.
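
For reference, the init command was along these lines (I'm showing Calico's default --pod-network-cidr here as an assumption; the part that matters is the advertise address):

sudo kubeadm init \
  --apiserver-advertise-address=10.135.184.137 \
  --pod-network-cidr=192.168.0.0/16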

However, once everything was set up, I ran kubectl get nodes -o wide and found that the INTERNAL-IP of each node is set to the external one:

NAME                 STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-a-master-1   Ready    master   22m     v1.15.4   155.90.90.117     <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   containerd://1.2.6
cluster-a-worker-1   Ready    <none>   10m     v1.15.4   155.90.90.193     <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   containerd://1.2.6
cluster-a-worker-2   Ready    <none>   9m24s   v1.15.4   155.90.90.224     <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   containerd://1.2.6
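
To double-check, I know the registered addresses can also be inspected per node (node name taken from the table above):

kubectl describe node cluster-a-master-1 | grep -A 4 Addresses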

Why is this happening? What caused it and how can I correct it? And does this also mean that the nodes communicate with each other over the external interface?

-- Tom Klino
kubernetes

0 Answers