When I run the following command to get info from my on-prem cluster,
kubectl cluster-info dump
I see the following for each node.
On master
"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.47"
},
{
"type": "InternalIP",
"address": "10.10.15.66"
},
{
"type": "InternalIP",
"address": "10.10.15.47"
},
{
"type": "InternalIP",
"address": "169.254.6.180"
},
{
"type": "Hostname",
"address": "k8s-dp-masterecad4834ec"
}
],
On worker node1
"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.57"
},
{
"type": "InternalIP",
"address": "10.10.15.57"
},
{
"type": "Hostname",
"address": "k8s-dp-worker5887dd1314"
}
],
On worker node2
"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.33"
},
{
"type": "InternalIP",
"address": "10.10.15.33"
},
{
"type": "Hostname",
"address": "k8s-dp-worker6d2f4b4c53"
}
],
My questions are:
1.) Why do some nodes have different ExternalIP and InternalIP addresses while others don't?
2.) For the node that does have different ExternalIP and InternalIP addresses, both are in the same CIDR range and both can be reached from outside. What is so internal / external about these two IP addresses? (What is their purpose?)
3.) Why does one node have a random 169.x.x.x IP address?
I'm still trying to learn more about Kubernetes, so it would be greatly helpful if someone could help me understand. I use Contiv as the network plug-in.
What you see is part of the status of these nodes, specifically the status.addresses field of each Node resource.
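For reference, the same data is visible when you look at a Node object directly. A quick sketch, using worker node1 from your output above (other status fields omitted):

kubectl get node k8s-dp-worker5887dd1314 -o yaml

status:
  addresses:
  - address: 10.10.15.57
    type: ExternalIP
  - address: 10.10.15.57
    type: InternalIP
  - address: k8s-dp-worker5887dd1314
    type: Hostname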
These fields are set when a node registers with the cluster. As the Kubernetes documentation states, their exact meaning depends on the cluster configuration and is not completely standardised.
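You can also read the API reference description of these fields from the command line, for example:

kubectl explain node.status.addresses

This prints the description of the addresses field as defined in the Node API schema.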
So, the values you see are what they are because your specific Kubernetes configuration sets them that way; with a different configuration you would get different values.
For example, on Amazon EKS, each node has a distinct InternalIP, ExternalIP, InternalDNS, ExternalDNS, and Hostname (which matches the InternalDNS). Amazon EKS sets these fields to the corresponding values of the node in the cloud infrastructure.
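If you just want to compare the addresses across your own nodes without dumping the whole cluster, something like this is usually enough (a sketch; adjust the output format to taste):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[*].address}{"\n"}{end}'

kubectl get nodes -o wide also shows INTERNAL-IP and EXTERNAL-IP columns, which is the quickest way to see the first two address types at a glance.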