Why is the public IP of my pod different from the load balancer IP?

8/8/2019

I have a kOps cluster with an nginx-ingress controller that uses a Classic Load Balancer. The external app requires IP whitelisting for authorization. I created a Network Load Balancer for this purpose and put my application behind it. I whitelisted the Network Load Balancer's IP, but authorization failed again. I logged into the pod and checked my public IP using this command:

dig +short myip.opendns.com @resolver1.opendns.com

This gave me a completely different public IP that is not tied to any of the load balancers I have. I whitelisted that IP and it worked. Doing some more digging, I found that all pods running on one node shared the same public IP, and likewise all pods running on a different node shared another public IP. How is this happening in the first place? And how am I supposed to whitelist a static IP when Kubernetes is using a public IP address that I cannot even locate?

-- Anshul Tripathi
kubernetes
kubernetes-ingress
kubernetes-pod
nginx-ingress

1 Answer

8/8/2019

When your pods run in AWS, they reside on EC2 instances. Traffic leaving those EC2 instances is routed either through an Internet Gateway or through a NAT Gateway.

If you have a private cluster, all instances have private IPs, and traffic leaving the cluster (pods) goes through a NAT Gateway, so the "other side" sees the NAT Gateway's public IP address.
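In the private-cluster case, the addresses to whitelist are the NAT Gateways' Elastic IPs, which are static. A minimal sketch of how you might list them with the AWS CLI (assumes the CLI is configured for the cluster's account and region):

```shell
# List the public (Elastic) IPs of all available NAT Gateways in the
# current region. Pods in a private cluster egress through one of these,
# so these are the IPs to whitelist with the external app.
aws ec2 describe-nat-gateways \
  --filter "Name=state,Values=available" \
  --query "NatGateways[].NatGatewayAddresses[].PublicIp" \
  --output text
```

Because an Elastic IP stays attached to the NAT Gateway, whitelisting it gives you the stable address the question asks for.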

If you have a public cluster, all instances have public IP addresses, and traffic leaving the cluster (pods) goes through an Internet Gateway, so the "other side" sees the particular instance's public IP address.
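In the public-cluster case, you can confirm this behavior by comparing each node's public IP with what a pod on that node reports. A rough sketch (the pod name is a placeholder, and it assumes the pod image includes `dig`):

```shell
# The EXTERNAL-IP column shows each node's public IP; in a public
# cluster, pods egress with the IP of the node they are scheduled on.
kubectl get nodes -o wide

# Cross-check from inside a pod: this should print the public IP of
# the node hosting that pod, matching the observation in the question.
kubectl exec -it <pod-name> -- dig +short myip.opendns.com @resolver1.opendns.com
```

Note this also explains why instance public IPs are a poor whitelisting target: they change when nodes are replaced. Moving the workload to private subnets behind a NAT Gateway is the usual way to get a stable egress IP.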

-- Jakub Bujny
Source: StackOverflow