Is it possible to find incoming IP addresses in Google Container Engine cluster?

8/6/2016

My nginx access log, deployed in a GKE Kubernetes cluster behind a Service of type LoadBalancer, shows internal IPs instead of the real visitor IPs.

Is there a way to find the real IPs anywhere? Maybe in some log file provided by GKE/Kubernetes?

-- yair
google-kubernetes-engine
kubernetes

3 Answers

3/23/2018

Adding these lines to the http block of my nginx.conf fixed the issue for me, and real visitor IPs started appearing in the Stackdriver Log Viewer:

http {
...
# Trust the X-Forwarded-For header when the request arrives from one of the
# internal ranges below, recursing past trusted hops to the original client IP
real_ip_recursive on;
real_ip_header X-Forwarded-For;
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.0.0/24;
set_real_ip_from 10.0.0.0/8;
...
}

I'm a happy camper :)

-- dzhi
Source: StackOverflow

8/6/2016

Right now, a Service of type: LoadBalancer does a double hop: the external request is balanced across all of the cluster's nodes, and then kube-proxy balances again among the actual Service backends.

kube-proxy NATs the request. E.g. a client request from 1.2.3.4 to your external load balancer at 100.99.98.97 is NATed on the node to 10.128.0.1 -> 10.100.0.123 (the node's private IP to the Pod's cluster IP). So the "src ip" you see in the backend is actually the node's private IP.

There is a feature planned with a corresponding design proposal for preservation of client IPs of LoadBalancer services.
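For readers on a newer Kubernetes version: that proposal surfaced as the Service's externalTrafficPolicy field, which, when set to Local, skips the second hop and preserves the client source IP (at the cost of less even spreading, since traffic only reaches nodes that run a matching Pod). A minimal sketch, where the name, selector, and ports are hypothetical placeholders:

```yaml
# Sketch of a LoadBalancer Service that preserves client source IPs.
# "my-nginx" and the port numbers are placeholders, not from the question.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # route only to Pods on the receiving node
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
```

With Local, nodes without a matching Pod fail the load balancer's health check, so requests are only delivered to nodes that can serve them without an extra NAT hop.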

-- CJ Cullen
Source: StackOverflow

8/8/2016

You could use the real IP module for nginx.

Pass your internal GKE net as a set_real_ip_from directive and you'll see the real client IP in your logs:

set_real_ip_from 192.168.1.0/24;

Typically you would add to the nginx configuration:

  1. The load balancer's IP
    i.e. the IP that you currently see in your logs instead of the real client IP

  2. The Kubernetes Pod network
    i.e. the subnet your Pods are in, the "Docker subnet"
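Putting both together, a sketch of the relevant http block (the CIDRs below are placeholders — substitute the load balancer range and Pod subnet of your own cluster):

```nginx
http {
    # Placeholders: replace with the ranges from your cluster.
    set_real_ip_from 130.211.0.0/22;  # load balancer range seen in your logs
    set_real_ip_from 10.0.0.0/8;      # Pod ("Docker") subnet
    real_ip_header X-Forwarded-For;   # take the client address from this header
    real_ip_recursive on;             # skip past all trusted proxy hops
}
```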

-- Antoine Cotten
Source: StackOverflow