Kubernetes + MetalLB + Traefik: how to get the real client IP?

5/29/2018

traefik.toml:

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
    [entryPoints.https.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]
[api]
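
A side note on those entry points: trustedIPs = ["0.0.0.0/0"] makes Traefik accept X-Forwarded-* headers from any source, so any client can spoof them. A tighter sketch (the 192.168.0.0/24 range is an assumption based on the addresses in this post; substitute your load balancer's network):

    [entryPoints.https.forwardedHeaders]
      trustedIPs = ["127.0.0.1/32", "192.168.0.0/24"]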

traefik Service:

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    - protocol: TCP
      port: 443
      name: https
  type: LoadBalancer

Then:

kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
deployment "source-ip-app" created

kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
service "clusterip" exposed

kubectl get svc clusterip
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
clusterip   ClusterIP   10.5.55.102   <none>        80/TCP    2h

Create an ingress for clusterip:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: clusterip-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: clusterip.staging
    http:
      paths:
      - backend:
          serviceName: clusterip
          servicePort: 80

clusterip.staging resolves to 192.168.0.69 (the Traefik LoadBalancer IP).
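
If DNS for clusterip.staging is not set up everywhere, the rule can still be tested against the LoadBalancer IP directly by passing the Host header; the https entry point is used here because http redirects, and -k accepts Traefik's default certificate (a quick sketch, not from the original post):

curl -sk -H 'Host: clusterip.staging' https://192.168.0.69/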

From another PC with IP 192.168.0.100:

wget -qO - clusterip.staging

I get these results:

CLIENT VALUES:
client_address=10.5.65.74
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://clusterip.staging:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
cache-control=max-age=0
host=clusterip.staging
upgrade-insecure-requests=1
x-forwarded-for=10.5.64.0
x-forwarded-host=clusterip.staging
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-server=traefik-ingress-controller-755cc56458-t8q9k
x-real-ip=10.5.64.0
BODY:
-no body in request-

kubectl get svc --all-namespaces

NAMESPACE     NAME                      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                                 AGE
default       clusterip                 NodePort       10.5.55.102   <none>         80:31169/TCP                                            19h
default       kubernetes                ClusterIP      10.5.0.1      <none>         443/TCP                                                 22d
kube-system   kube-dns                  ClusterIP      10.5.0.3      <none>         53/UDP,53/TCP                                           22d
kube-system   kubernetes-dashboard      ClusterIP      10.5.5.51     <none>         443/TCP                                                 22d
kube-system   traefik-ingress-service   LoadBalancer   10.5.2.37     192.168.0.69   80:32745/TCP,443:30219/TCP                              1d
kube-system   traefik-web-ui            NodePort       10.5.60.5     <none>         80:30487/TCP                                            7d

How can I get the real client IP (192.168.0.100) in my installation? Why is x-real-ip 10.5.64.0? I could not find the answers in the documentation.

-- Andrey Perminov
kubernetes
traefik

1 Answer

5/30/2018

When kube-proxy runs in iptables mode, it uses NAT to forward traffic to the node where the backend pod is running, and the original source IP address is lost along the way.
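
As an aside, you can see this on a node (assuming iptables mode and shell access; KUBE-POSTROUTING is kube-proxy's default chain name):

# The MASQUERADE rule here is what rewrites the source address on
# traffic kube-proxy forwards to another node:
sudo iptables -t nat -S KUBE-POSTROUTING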

As I understand it, you are using MetalLB in front of the Traefik ingress service (its type is LoadBalancer). That means traffic from the client to the backend takes this path:

Client -> MetalLB -> Traefik LB -> Traefik Service -> Backend pod.

Traefik itself works correctly and adds the x-* headers, including x-forwarded-for and x-real-ip, but they contain a cluster-internal address rather than the client's real one. The MetalLB documentation explains why:

MetalLB understands the service’s externalTrafficPolicy option and implements different announcement modes depending on the policy and announcement protocol you select.

  • Layer2

    This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the cluster’s leader node.

  • BGP

    • “Cluster” traffic policy

      With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load-balancing (provided by kube-proxy), which directs the traffic to individual pods.

      […]

      The other downside of the “Cluster” policy is that kube-proxy will obscure the source IP address of the connection when it does its load-balancing, so your pod logs will show that external traffic appears to be coming from your cluster’s nodes.

    • “Local” traffic policy

      With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy; there is no “horizontal” traffic flow between nodes.

      This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.

In short, the only way to get the real source IP address is to set the service's externalTrafficPolicy to "Local".

Set that on your traefik-ingress-service, and the original client address (192.168.0.100 in your example) will reach Traefik and show up in x-real-ip.
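
A minimal sketch of that change, applied to the traefik-ingress-service manifest from the question (externalTrafficPolicy is a standard Service field; everything else is unchanged from the post):

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    - protocol: TCP
      port: 443
      name: https
  type: LoadBalancer
  # With "Local", kube-proxy skips the node-to-node SNAT hop, so Traefik
  # sees the client address and fills x-real-ip / x-forwarded-for correctly.
  externalTrafficPolicy: Local

Or patch the live service and verify:

kubectl -n kube-system patch svc traefik-ingress-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl -n kube-system get svc traefik-ingress-service -o jsonpath='{.spec.externalTrafficPolicy}'

The trade-off, per the documentation quoted above, is that only nodes actually running a Traefik pod will attract traffic for the service IP.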

-- Anton Kostenko
Source: StackOverflow