I have to set up an FTP server in a GCP Kubernetes cluster, and do not know how to route clients so that multiple requests from the same IP to different ports get routed to the same Kubernetes pod.
In passive-mode FTP, the server opens a new port and sends the port number to the client. The client then opens a second connection to that port. Hence, I need to ensure that this second connection is routed to the same pod, because only that pod has a server waiting for it.
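To make the affinity requirement concrete, here is a minimal sketch of how an FTP client derives the data port from the server's PASV reply (format per RFC 959); the address and port values are made-up examples:

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Extract (host, port) from a '227 Entering Passive Mode' reply."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a valid PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    # The port is encoded as two bytes: high * 256 + low.
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = parse_pasv("227 Entering Passive Mode (10,0,0,5,117,49)")
# The client now opens a second TCP connection to (host, port).
# That connection must reach the same pod that sent the reply.
```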
I tried a minimal sample with a workload of two pods that do nothing, with ports 21 and 30000-30098 exposed. Then, I set up a service of type LoadBalancer as follows (reduced to the relevant portions):
kind: Service
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  loadBalancerIP: IP_OF_LB
  ports:
  - name: ftp-control
    port: 21
    protocol: TCP
    targetPort: 21
  - name: pasv-30000
    port: 30000
    protocol: TCP
    targetPort: 30000
  # and so on for the remaining ports up to 30098
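Since writing the remaining ~99 passive-port entries by hand is tedious, a small sketch like the following can generate them for the manifest (the port range is taken from the question above; the `pasv-` naming is just my own convention):

```python
def pasv_port_entries(start: int = 30000, end: int = 30098) -> str:
    """Generate the repetitive 'ports' entries for the Service manifest."""
    lines = []
    for p in range(start, end + 1):
        lines.append(f"  - name: pasv-{p}\n"
                     f"    port: {p}\n"
                     f"    protocol: TCP\n"
                     f"    targetPort: {p}")
    return "\n".join(lines)

# Print a short sample of the generated YAML:
print(pasv_port_entries(30000, 30001))
```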
Now I open a shell on each of the pods and start listening manually on one port:
netcat -l -p 30001
Then, I use telnet from my workstation to connect to the IP address of the load balancer.
telnet IP_OF_LB 30001
This way, I can see which pod gets the incoming connection request.
For a single port, the load balancer always forwards my request to the same pod.
However, when I try several ports, I can see that subsequent requests get routed to different pods, even though the session affinity is set to ClientIP.
Is there any setting that I have missed? I assumed that session affinity by client IP would use only the IP of the client to determine the target pod. However, it looks as if it is using both the IP and the port.
Does anyone know if there are more settings that I can try to get the desired behavior?
I have filed a bug report with Google, and they say it is intended behavior: bug report
I assumed that if I define a Kubernetes service of type LoadBalancer, then the load is balanced at the pod level. Instead, the load balancer balances traffic at the VM level.
Apparently, the traffic is balanced twice: first, the load balancer distributes the traffic across the VMs (nodes), and then Kubernetes balances that traffic across the pods.
That means session affinity will not work with LoadBalancer services in Google Kubernetes Engine.
I can see you configured your service as the LoadBalancer type; however, configuring session affinity on it is troublesome. Therefore, I actually recommend the following setup instead:
BackendConfig -> NodePort service -> Ingress
Please check the documentation on using client IP affinity and BackendConfigs for more information.
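As a rough sketch of that setup, a GKE BackendConfig can request client-IP affinity on the backend service, and a NodePort service references it via an annotation (the resource names here are placeholders, and the Ingress wiring is omitted):

```yaml
# BackendConfig requesting client-IP affinity on the GCLB backend service.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ftp-affinity          # hypothetical name
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
---
# NodePort service referencing the BackendConfig; exposed via an Ingress.
apiVersion: v1
kind: Service
metadata:
  name: ftp-service           # hypothetical name
  annotations:
    cloud.google.com/backend-config: '{"default": "ftp-affinity"}'
spec:
  type: NodePort
  ports:
  - name: ftp-control
    port: 21
    protocol: TCP
    targetPort: 21
```

Note that this is only a sketch of the recommended direction; whether an Ingress-based setup suits a non-HTTP protocol like FTP is something you would still need to verify for your case.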