Kubernetes service pod prioritization

8/10/2018

I have the following scenario at hand, and I would like to get the lowest possible latency between pods (I do not wish to deploy all containers to the same pod). This is what I would like to achieve:

Let's say you have a Kubernetes cluster with 4 nodes:

- on each node you have a sqlproxy to a central database that is not part of Kubernetes
- on each node you have 4 pods that call the database through that sqlproxy

As far as I have learned, when I create a Service, it distributes traffic "randomly" between the pods behind it. That means a pod on node 1 could call the sqlproxy on node 4, a pod on node 2 could call the sqlproxy on node 3, and so on.

I would like pods on node 1 to call the sqlproxy on node 1 whenever possible, so that latency is minimal.

Is this even possible, or are the delays between nodes so small that they can be disregarded?

-- Nejc
kubernetes
networking

2 Answers

8/11/2018

The solution presented by @Oliver - running sqlproxy as a sidecar container in the same pod as the application - will probably give you the lowest latency.

If for some reason you still want to go with running one sqlproxy instance per node (e.g. to take advantage of database connection pooling and reuse), the application would need to dynamically discover at run-time the IP address of the node on which it is running, and use it to connect to the sqlproxy instance on that same node.
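
A common way to run exactly one sqlproxy instance per node is a DaemonSet that exposes the proxy on a host port. Here is a minimal sketch (the image name is a placeholder, and it assumes the proxy listens on port 3306 and can be bound to a host port):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sql-proxy
  labels:
    app: sql-proxy
spec:
  selector:
    matchLabels:
      app: sql-proxy
  template:
    metadata:
      labels:
        app: sql-proxy
    spec:
      containers:
      - name: sql-proxy
        image: <sql-proxy-image>
        ports:
        - containerPort: 3306
          hostPort: 3306    # makes the proxy reachable on each node's IP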

Below is a way to dynamically discover the IP address of the host node and set it as an environment variable (see also The Downward API in the Kubernetes docs):

...
spec:
  containers:
  - name: app-container-name
    image: <app-image>
    env:
    - name: POD_HOST_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP   # Downward API: IP of the node the pod is scheduled on

The environment variable can then be referenced elsewhere in the same deployment configuration as $(POD_HOST_IP).
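
For example, the app container's arguments can expand the variable (the entrypoint and flag names below are hypothetical; substitute whatever your application actually accepts):

    command: ["/app/server"]               # hypothetical entrypoint
    args:
    - "--db-proxy-host=$(POD_HOST_IP)"     # expands to the node's IP when the container starts
    - "--db-proxy-port=3306"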

-- apisim
Source: StackOverflow

8/10/2018

Deploy the SQL proxy as a sidecar to your app (two containers in one pod, one being the app, one being the proxy).

Your deployment will look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ...
  name: my-app
  labels:
    app: my-app

spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: << app image >>
          ports:
            - containerPort: 8080
        - name: sql-proxy      # sidecar: shares the pod's network namespace
          image: << SQL proxy image >>
          ports:
            - containerPort: 3306

Now make the app connect to localhost:3306 to reach the SQL proxy running in the same pod. That way you avoid the potentially expensive cross-node hop and keep the connection local.
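
If the application reads its database endpoint from environment variables (the names DB_HOST and DB_PORT below are assumptions; use whatever your app expects), the app container in the deployment above could be configured like this:

        - name: app
          image: << app image >>
          env:
            - name: DB_HOST
              value: "127.0.0.1"   # the sidecar proxy shares the pod's loopback interface
            - name: DB_PORT
              value: "3306"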

-- Oliver
Source: StackOverflow