I've got an existing Azure VNET with a site-to-site VPN gateway to on-premises resources. This works fine: VMs in the VNET can access internal resources and can also be exposed to the internet.
I've created a Kubernetes cluster in said VNET and deployed some pods exposed via a LoadBalancer service.
The pods can access the internet, and they can access both VNET resources and on-prem resources (good). The pods are reachable from the on-prem network (good). But the LoadBalancer (even though it shows a public IP) is not accessible from the internet. I can reach it (the public IP of the LB) from within the VNET, just not from the internet.
I've created an identical cluster but let it create its own VNET, and there everything works fine. It's only when I place the cluster in my existing VNET with the VPN gateway that I cannot reach the service from the internet.
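For reference, the cluster in the existing VNET was created by pointing it at a subnet of that VNET, roughly like the following (resource group, cluster name and subnet ID are placeholders, not the exact values):

# Sketch of placing an AKS cluster into an existing VNET/subnet; all names and IDs are placeholders.
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet"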
kubectl get service -o wide
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes        ClusterIP      10.0.0.1       <none>        443/TCP        1h    <none>
mail2servicebus   LoadBalancer   10.0.187.136   xx.xx.xx.xx   25:31459/TCP   1h    app=mail2servicebus
The VNET is also peered to another VNET, in addition to having the VPN gateway, in case that has anything to do with it.
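For completeness, mail2servicebus is a plain LoadBalancer service; a minimal manifest matching the output above would look roughly like this (name, port and selector are taken from that output, the rest is illustrative):

# Minimal sketch of the Service; name, port and selector come from the
# kubectl output above, nothing Azure-specific (annotations etc.) is set.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mail2servicebus
spec:
  type: LoadBalancer
  selector:
    app: mail2servicebus
  ports:
  - name: smtp
    port: 25
    protocol: TCP
EOF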
This makes no sense; having a gateway or not doesn't affect external communication (unless you are using an ExpressRoute gateway and advertising 0.0.0.0/0 through it). The only idea that comes to mind is a network security group. Also, you cannot put VMs in the GatewaySubnet.
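To rule the NSG out, you can list the inbound rules on the NSG in the cluster's node resource group, something along these lines (resource group, cluster and NSG names are placeholders):

# Find the managed node resource group, then inspect its NSGs and inbound rules.
# Resource group, cluster and NSG names below are placeholders.
NODE_RG=$(az aks show -g my-rg -n my-aks --query nodeResourceGroup -o tsv)
az network nsg list -g "$NODE_RG" -o table
az network nsg rule list -g "$NODE_RG" --nsg-name <nsg-name-from-the-list-above> -o table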
this is not supported
Apparently my home ISP blocks access to the SMTP port (TCP 25) for all destinations except their own SMTP server (spam prevention or something like that). So the service is indeed exposed to the internet; it's just me who couldn't access it. It worked like a charm when I got to work.
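A quick way to spot that kind of ISP filtering is to run the same TCP check from two networks; from home it times out, while from a network that doesn't filter outbound port 25 it connects:

# xx.xx.xx.xx is the load balancer's public IP from the kubectl output above.
# From the home network this times out; from the office (no outbound port 25
# filtering) the same command connects immediately.
nc -vz -w 5 xx.xx.xx.xx 25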