I've installed Kubernetes 1.2.0 with the following configuration:
export nodes="user@10.0.0.30 user@10.0.0.32"
export role="ai i"
export NUM_NODES=2
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
export KUBE_PROXY_EXTRA_OPTS="--proxy-mode=iptables"
I've created an nginx pod and exposed it with a load balancer and an external IP address:
kubectl expose pod my-nginx-3800858182-6qhap --external-ip=10.0.0.50 --port=80 --target-port=80
I'm using Kubernetes on bare metal, so I've assigned the 10.0.0.50 IP to the master node.
If I curl 10.0.0.50 (from outside Kubernetes) and run tcpdump on the nginx pod, I see the traffic, but the source IP is always the Kubernetes master node:
17:30:55.470230 IP 172.16.60.1.43030 > 172.16.60.2.80: ...
17:30:55.470343 IP 172.16.60.2.80 > 172.16.60.1.43030: ...
I'm using --proxy-mode=iptables and need to get the actual source IP. What am I doing wrong?
You're not doing anything wrong, unfortunately. It's an artifact of how packets are proxied from the machine that receives them to the destination container.
There's been a bunch of discussion around the problem in a very long GitHub issue, but no solution has been found yet other than running your front-end load balancer outside of the Kubernetes cluster (for example, a cloud load balancer, which attaches the X-Forwarded-For header).
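The source IP is lost because kube-proxy SNATs (masquerades) the connection on the node that first receives it. A diagnostic sketch for seeing this on a node; the exact chain names (KUBE-MARK-MASQ, KUBE-POSTROUTING) are the ones kube-proxy's iptables mode typically creates, but they can vary by version:

```shell
# On the node receiving external traffic, dump kube-proxy's NAT rules.
# Service traffic is marked (KUBE-MARK-MASQ) and then masqueraded in
# POSTROUTING, which rewrites the client source IP to the node's address
# before the packet is forwarded to the pod.
sudo iptables -t nat -S POSTROUTING | grep -i KUBE
sudo iptables -t nat -S KUBE-MARK-MASQ
```

This is why tcpdump inside the pod shows the node's flannel address (172.16.60.1 above) instead of the real client.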
Preserving the client source IP was first added as a beta annotation in Kubernetes 1.5 (docs here).
In 1.7, the feature graduated to GA, so you can specify the load-balancing policy on a Service with the spec.externalTrafficPolicy
field (docs here):
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-service"
  },
  "spec": {
    "ports": [{
      "port": 8765,
      "targetPort": 9376
    }],
    "selector": {
      "app": "example"
    },
    "type": "LoadBalancer",
    "externalTrafficPolicy": "Local"
  }
}
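If the Service already exists, the field can also be set in place. A sketch using kubectl patch, assuming the service name from the example above:

```shell
# Set the policy so traffic is only delivered to endpoints on the node
# that received it, which preserves the client source IP (note: this can
# cause imbalanced traffic spreading across nodes).
kubectl patch svc example-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Verify the field took effect.
kubectl get svc example-service -o jsonpath='{.spec.externalTrafficPolicy}'
```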