Kube-proxy in IPVS mode doesn't keep a connection

11/26/2020

I have a k8s cluster with kube-proxy in IPVS mode and a database cluster outside of k8s.

To get access to the DB cluster, I created Service and Endpoints resources:

---
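# Note: the Service has no selector, so Kubernetes will not manage its
# Endpoints; kube-proxy uses the manually created Endpoints object with
# the same name as the backend list.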
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306

---
apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
- addresses:
  - ip: 192.168.255.9
  - ip: 192.168.189.76
  ports:
  - port: 3306
    protocol: TCP
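
For reference, kube-proxy in IPVS mode turns the service's cluster IP into an IPVS virtual server on every node, with the two database addresses as real servers. This can be verified with ipvsadm (the cluster IP 10.96.0.10 below is illustrative):

sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.10:3306 rr
  -> 192.168.189.76:3306          Masq    1      0          0
  -> 192.168.255.9:3306           Masq    1      0          0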

Then I run a pod with a MySQL client and try to connect to this service:

mysql -u root -p -h database

In the network dump I see a successful TCP handshake and a successful MySQL connection. On the node where the pod is running (hereinafter the worker node) I see the following established connection:

sudo netstat-nat -n | grep 3306
tcp   10.0.198.178:52642             192.168.189.76:3306            ESTABLISHED
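
The same connection also shows up in the IPVS connection table, together with an expiry countdown (output is approximate; the virtual address is the illustrative cluster IP from above):

sudo ipvsadm -Lnc
IPVS connection entries
pro expire state       source             virtual            destination
TCP 14:58  ESTABLISHED 10.0.198.178:52642 10.96.0.10:3306    192.168.189.76:3306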

Then I send some test queries from the pod over the open MySQL session. They are all sent to the same database node, which is the expected behavior.

Then I monitor established connections on the worker node. After about 5 minutes the established connection to the database node disappears.
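
For reference, the IPVS idle timeouts on the worker node can be checked like this (the values shown are the kernel defaults for TCP established, TCP FIN_WAIT, and UDP entries; the actual values on the node may differ):

sudo ipvsadm -l --timeout
Timeout (tcp tcpfin udp): 900 120 300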

But in the network dump I see that no TCP teardown packets (FIN or RST) are sent from the worker node to the database node. As a result, I get a leaked connection on the database node.

How does IPVS decide to drop an established connection? If IPVS drops a connection, why doesn't it terminate the TCP connection properly? Is this a bug, or am I misunderstanding something about IPVS mode in kube-proxy?

-- Al Ryz
ipvs
kube-proxy
kubernetes

1 Answer

12/4/2020

Kube-proxy and Kubernetes do not load balance persistent connections.

The whole concept of long-lived connections in Kubernetes is well described in this article:

Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others. If you're using HTTP/2, gRPC, RSockets, AMQP or any other long-lived connection such as a database connection, you might want to consider client-side load balancing.

I recommend going through the whole thing, but overall it can be summed up as:

  • Kubernetes Services are designed to cover most common uses for web applications.

  • However, as soon as you start working with application protocols that use persistent TCP connections, such as databases, gRPC, or WebSockets, they fall apart.

  • Kubernetes doesn't offer any built-in mechanism to load balance long-lived TCP connections.

  • Instead, you should code your application so that it can retrieve and load balance upstreams client-side (see the sketch after this list).
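
To illustrate the client-side approach, one common pattern is a headless Service: with clusterIP: None there is no virtual IP and no IPVS virtual server; DNS instead returns the address of every endpoint, and the client connects to a backend directly. A minimal sketch (the name database-headless is just an example; it needs a matching Endpoints object like the one in your question):

---
apiVersion: v1
kind: Service
metadata:
  name: database-headless
spec:
  clusterIP: None   # headless: no cluster IP, traffic bypasses kube-proxy/IPVS
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306

A DNS lookup of database-headless then returns both database IPs, and the client (or its connection pool / driver) can pick one itself and re-resolve when a connection breaks.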

-- WytrzymaƂy Wiktor
Source: StackOverflow