ELB (Classic Load Balancer) Proxy Protocol not working on Kubernetes cluster

2/7/2019
  • Created a K8s cluster on AWS (EKS).
  • Created a Deployment workload.
  • Created a Service of type LoadBalancer for TCP port 4334 with the annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" (which should enable Proxy Protocol on the ELB), roughly like the sketch below.
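
For reference, a minimal sketch of such a Service (the name, selector and target port are placeholders, not my real manifest):

# Sketch of the LoadBalancer Service with the proxy-protocol annotation
# (metadata.name, selector and targetPort are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    # Ask the AWS cloud provider to enable Proxy Protocol on all ELB backend ports.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: tcp-4334
      protocol: TCP
      port: 4334
      targetPort: 4334
EOF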

In my pod, however, I am not able to see Proxy Protocol preserving the client IP. I have tried packet sniffers and tcpdump, but nowhere can I see the client IP being preserved by the protocol.
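
The kind of check I ran on a worker node looked roughly like this; Proxy Protocol v1 sends a human-readable ASCII header line, so it should show up in an ASCII dump of the instance port:

# Proxy Protocol v1 is a plain ASCII header
# ("PROXY TCP4 <client-ip> <dest-ip> <client-port> <dest-port>"),
# so it should be visible in an ASCII dump of traffic on the instance port.
sudo tcpdump -A -n -i any port 31431 | grep --line-buffered 'PROXY TCP'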

Can anybody tell me how to verify that Proxy Protocol is preserving the client IP?

Refer to the load balancer description below. It has a policy named "k8s-proxyprotocol-enabled", which is applied under 'BackendServerDescriptions' on instance port 31431.

One thing I observed is that in 'ListenerDescriptions' the policy list for instance port 31431 is empty. For Proxy Protocol to work as expected, does 'k8s-proxyprotocol-enabled' also need to be applied as a listener policy in the listener description?

Can anyone confirm whether the config below is sufficient for Proxy Protocol to preserve the source IP, or whether extra configuration is required?

"LoadBalancerDescriptions": [
    {
        "Subnets": [
            "subnet-1",
            "subnet-2",
            "subnet-2"
        ],
        "CanonicalHostedZoneNameID": "******",
        "CanonicalHostedZoneName": "*************",
        "ListenerDescriptions": [
            {
                "Listener": {
                    "InstancePort": 31431,
                    "LoadBalancerPort": 4334,
                    "Protocol": "TCP",
                    "InstanceProtocol": "TCP"
                },
                "PolicyNames": []
            }
        ],
        "HealthCheck": {
            "HealthyThreshold": 2,
            "Interval": 10,
            "Target": "TCP:31499",
            "Timeout": 5,
            "UnhealthyThreshold": 6
        },
        "VPCId": "vpc-***********",
        "BackendServerDescriptions": [
            {
                "InstancePort": 31431,
                "PolicyNames": [
                    "k8s-proxyprotocol-enabled"
                ]
            }
        ],
        "Instances": [
            {
                "InstanceId": "i-085ece5ecf"
            },
            {
                "InstanceId": "i-0b4741cf"
            },
            {
                "InstanceId": "i-03aea99"
            }
        ],
        "DNSName": "***************************",
        "SecurityGroups": [
            "sg-********"
        ],
        "Policies": {
            "LBCookieStickinessPolicies": [],
            "AppCookieStickinessPolicies": [],
            "OtherPolicies": [
                "k8s-proxyprotocol-enabled"
            ]
        },
        "LoadBalancerName": "a1df476de2aa011e9aabe0af927e6700",
        "CreatedTime": "2019-02-07T06:18:01.020Z",
        "AvailabilityZones": [
            "us-east-1a",
            "us-east-1b",
            "us-east-1c"
        ],
        "Scheme": "internet-facing",
        "SourceSecurityGroup": {
            "OwnerAlias": "906391276258",
            "GroupName": "k8s-elb-a1df476de2aa011e9aabe0af927e6700"
        }
    }
]
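
For completeness, this is roughly how the policy attachment can be inspected and re-applied with the AWS CLI, using the load balancer name from the dump above:

# Show the policies defined on this ELB; k8s-proxyprotocol-enabled should have
# the ProxyProtocol attribute set to true.
aws elb describe-load-balancer-policies \
    --load-balancer-name a1df476de2aa011e9aabe0af927e6700

# (Re-)attach the proxy-protocol policy to the backend instance port. This is
# what the Kubernetes cloud provider is expected to do for the "*" annotation.
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name a1df476de2aa011e9aabe0af927e6700 \
    --instance-port 31431 \
    --policy-names k8s-proxyprotocol-enabled
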
-- Karthik
amazon-eks
amazon-elb
aws-elb
kubernetes

1 Answer

2/8/2019

Yes, setting this annotation is enough to enable Proxy Protocol v1 at the load balancer level (Classic ELB).

service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

I'm running an ingress-nginx controller exposed via a Service of type LoadBalancer with the aforementioned annotation, and when I run it with the logging level set to debug, I can see that each client request retains the real source IP:

172.20.32.78 - [172.20.32.78] - - [08/Feb/2019:18:02:43 +0000] "PROXY TCP4 xxx.xxx.xxx.xx 172.20.xx.xxx 42795 80" 400 157 "-" "-" 0 0.172 []


xxx.xxx.xxx.xx is my private IP address, not the LB's.

The other thing is to enable Proxy Protocol on the LB's backend, so that it can interpret the forwarded client requests correctly (for NGINX there are documented steps for this).
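
With ingress-nginx that boils down to switching on use-proxy-protocol in the controller's ConfigMap; the ConfigMap name and namespace below are the ones from the stock deployment manifests, so adjust them to your install:

# Tell ingress-nginx to expect the PROXY protocol header on incoming connections.
# ConfigMap name/namespace assume the stock ingress-nginx deployment manifests.
kubectl -n ingress-nginx patch configmap nginx-configuration \
    --type merge -p '{"data":{"use-proxy-protocol":"true"}}'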

-- Nepomucen
Source: StackOverflow