I've been asked by Azure support to open the question here, though I think this is an AKS bug.
When deploying a cluster, each node's `node.status.addresses` should by design include an ExternalIP or a hostname for the node, but in an AKS-created cluster the Hostname address just contains the VM name and no ExternalIP is populated. That makes it really hard to find node public IPs, which we need for various reasons.
Is there any standard or non-standard way to get a node's public IP?
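For reference, this is roughly how we inspect the addresses today (a minimal sketch assuming a working kubeconfig and the official `kubernetes` Python client); on AKS the only entries we ever see are Hostname and InternalIP:

```python
# Sketch: list node.status.addresses for every node in the cluster.
# Assumes a working kubeconfig and the `kubernetes` Python client installed.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() from a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # On our AKS clusters this prints only Hostname (the VM name) and
    # InternalIP entries; an ExternalIP entry is never populated.
    addresses = [(a.type, a.address) for a in node.status.addresses]
    print(node.metadata.name, addresses)
```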
There is a preview feature that enables a public IP per node. Please see https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-in-a-node-pool
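As an illustration only (not something the linked doc prescribes): once a node pool has per-node public IPs, one way to read the IP from on the node itself is the Azure Instance Metadata Service, e.g. from a pod running with `hostNetwork: true`. The metadata path, api-version, and that overall approach are assumptions on my part, so please verify before relying on it:

```python
# Sketch: query the Azure Instance Metadata Service (IMDS) from a process
# running on the node to read the instance's public IP. The metadata path
# and api-version below are assumptions, not an AKS-documented workflow.
import requests

IMDS_URL = (
    "http://169.254.169.254/metadata/instance/network/interface/0"
    "/ipv4/ipAddress/0/publicIpAddress"
)

resp = requests.get(
    IMDS_URL,
    params={"api-version": "2017-08-01", "format": "text"},
    headers={"Metadata": "true"},
    timeout=5,
)
resp.raise_for_status()
print("node public IP:", resp.text or "<none assigned>")
```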
There is a public IP exposed for the Azure Kubernetes Service, but it is not attached directly to the nodes. The Kubernetes nodes themselves are not exposed to the internet with a public IP.
AKS nodes are created in a VNet on Azure, and traffic in and out of the cluster goes through the Azure Load Balancer, which has a public IP. The VNet is a private network resource in Azure, and AKS supports two networking types for it, Basic and Advanced. For more details, see Network concepts for applications in Azure Kubernetes Service (AKS).
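To illustrate what "through the Azure Load Balancer" looks like from the Kubernetes side, here is a minimal sketch (service name and namespace are placeholders, same `kubernetes` Python client assumption as above) that reads the public ingress IP Azure assigns to a Service of type LoadBalancer:

```python
# Sketch: read the public IP the Azure load balancer assigns to a Service
# of type LoadBalancer. "my-service" / "default" are placeholder names.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = v1.read_namespaced_service("my-service", "default")
for entry in (svc.status.load_balancer.ingress or []):
    # On AKS this is the load balancer's public IP, not a node IP.
    print("load balancer ingress:", entry.ip or entry.hostname)
```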
AKS nodes are not exposed to the public internet and therefore will not have an exposed public IP.
With that said, I’ve been investigating an issue where nodes either lose or fail to ever get an internal IP. We (AKS) have implemented an initial fix, which restarts kubelet, and does seem to at least temporarily mitigate the lack of an internal IP. There are ongoing efforts upstream to find and fix the real root cause.
I don’t think I’ve come across the scenario of a node not having a hostname address though. I’m going to log a backlog item to investigate any clusters that appear to be experiencing this symptom. I can’t promise an immediate fix, but I am definitely going to look into this further early next week.
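If it helps with triage, a quick check like the following (a sketch, again assuming the `kubernetes` Python client and cluster access) flags nodes whose `status.addresses` is missing an InternalIP or Hostname entry, which are the symptoms described above:

```python
# Sketch: flag nodes whose status.addresses lacks an InternalIP or Hostname
# entry. Assumes a working kubeconfig and the `kubernetes` Python client.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    types = {a.type for a in (node.status.addresses or [])}
    missing = {"InternalIP", "Hostname"} - types
    if missing:
        print(f"{node.metadata.name}: missing {sorted(missing)}")
```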