GKE - Bypass the Pod's LoadBalancer (the Pod's external IP) and reach a container's IP directly at runtime, for WebSocket purposes

11/28/2019

I have the following situation:

I have a couple of microservices; only 2 are relevant right now:

- Web Socket Service API
- Dispatcher Service

We have 3 users that we'll call 1, 2, and 3. These users connect to the web socket endpoint of our backend. Our microservices run on Kubernetes, and each service can be replicated multiple times inside Pods. In this situation, we have 1 running container for the dispatcher and 3 running containers for the web socket API. Each pod has its own Load Balancer, which is always the entry point.

In our situation, we will then have the following "schema":

[diagram: schema of the system, with a legend]


Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.

[diagram: the users' web socket connections spread across the containers]

As we can see, the load balancer of our pod forwarded the web socket connections of our users across the different containers. Each container, once it gets a new connection, notifies the Dispatcher Service, which saves it in its own database.

Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.


User 1 wants to message user 2. Container A will then get the message and tell the Dispatcher Service: please send this to user 2.

As the dispatcher knows which container user 2 is connected to, I would like to send a request directly to that container instead of sending it to the Pod. Sending it to the Pod means sending the request to a load balancer, which dispatches it to the most available container instance...

[diagram: the message being routed to user 2's container]

How could I manage to get the container IP? Can it be accessed by another container from another Pod?

To me, the best approach would be that, once the app starts, it gets the current container's IP and then sends it within the register request to the dispatcher, so the dispatcher would know that ContainerID = IP.

Thanks!

edit 1

Here is my web-socket-service-api.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api
spec:
  ports:
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: grpc
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: rest
    # Port that accepts WebSockets.
    - port: 8082
      targetPort: 8082
      protocol: TCP
      name: websocket
  selector:
    app: web-socket-service-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-socket-service-api
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-socket-service-api
    spec:
      containers:
        - name: web-socket-service-api
          image: gcr.io/[PROJECT]/web-socket-service-api:latest
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 8082
-- Emixam23
docker
kubernetes
kubernetes-pod
websocket

2 Answers

11/28/2019

Dispatcher ≈ Message broker

As I understand your design, your Dispatcher is essentially a message broker for the pods of your Websocket Service. Let all Websocket pods connect to the broker and let the broker route messages. This is a stateful service and you should use a StatefulSet for it in Kubernetes. Depending on your requirements, a possible solution could be to use an MQTT broker for this, e.g. mosquitto; most MQTT brokers have support for websockets.
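
A minimal sketch of what that could look like, assuming a single mosquitto broker behind a headless Service; the names, image tag and ports here are assumptions, and the websocket listener would still have to be enabled in mosquitto.conf:

apiVersion: v1
kind: Service
metadata:
  name: dispatcher-broker              # assumed name
spec:
  clusterIP: None                      # headless: gives the broker a stable DNS name
  selector:
    app: dispatcher-broker
  ports:
    - port: 1883
      name: mqtt
    - port: 9001
      name: websocket                  # assumes a websockets listener on 9001 in mosquitto.conf
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dispatcher-broker
spec:
  serviceName: dispatcher-broker
  replicas: 1
  selector:
    matchLabels:
      app: dispatcher-broker
  template:
    metadata:
      labels:
        app: dispatcher-broker
    spec:
      containers:
        - name: mosquitto
          image: eclipse-mosquitto:1.6 # assumed image and tag
          ports:
            - containerPort: 1883
            - containerPort: 9001

The Websocket pods would then connect to the broker at dispatcher-broker:1883 (or 9001 for websockets) and let it route the messages.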

Scale out: Multiple replicas of pods

each service can be replicated multiple times inside Pods. In this situation, we have 1 running container for the dispatcher and 3 running containers for the web socket API.

This is not how Kubernetes is intended to be used. Use multiple replicas of pods instead of multiple containers in a pod. I recommend that you create a Deployment for your Websocket Service with as many replicas as you want.
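
A rough sketch of such a Deployment, reusing the names and ports from the manifest in the question but on the non-deprecated apps/v1 API, which also requires an explicit selector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-socket-service-api
spec:
  replicas: 3                          # scale out by changing this number
  selector:
    matchLabels:
      app: web-socket-service-api
  template:
    metadata:
      labels:
        app: web-socket-service-api
    spec:
      containers:
        - name: web-socket-service-api
          image: gcr.io/[PROJECT]/web-socket-service-api:latest
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 8082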

Service as Load balancer

Each pod has its own Load Balancer, which is always the entry point.

In Kubernetes you should create a Service that load balances traffic to a set of pods.
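
For internal traffic, for example the Websocket pods reaching the Dispatcher, a plain ClusterIP Service is enough; as a sketch, where the name, label and port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: dispatcher-service             # assumed name
spec:
  type: ClusterIP                      # internal only, no external load balancer
  selector:
    app: dispatcher-service            # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: rest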

Your solution

To me, the best approach would be that, once the app starts, it gets the current container's IP and then sends it within the register request to the dispatcher, so the dispatcher would know that ContainerID = IP.

Yes, I mostly agree. That is similar to what I have described here. But I would let the Websocket Service establish a connection to the Broker/Dispatcher.

-- Jonas
Source: StackOverflow

11/29/2019

Any pod has some information about itself, and one piece of that information is its own IP address. As an example:

apiVersion: v1
kind: Pod
metadata:
  name: envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_POD_IP;
          sleep 10;
        done;
      env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

Within the container, MY_POD_IP would contain the IP address of the pod. You can let the dispatcher know about it.

$ kubectl logs envars-fieldref
10.52.0.3


$ kubectl get po -owide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
envars-fieldref                  1/1     Running   0          31s   10.52.0.3    gke-klusta-lemmy-3ce02acd-djhm   <none>           <none>
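
As a rough sketch of "letting the dispatcher know", the container could send something like this at startup; the dispatcher-service hostname, the port, the /register path and the JSON shape are assumptions, not something from the question:

# Hypothetical registration call run when the container starts.
# MY_POD_IP comes from the downward API env entry above; HOSTNAME is the pod name.
curl -s -X POST "http://dispatcher-service:8080/register" \
  -H "Content-Type: application/json" \
  -d "{\"containerId\": \"${HOSTNAME}\", \"ip\": \"${MY_POD_IP}\"}"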

Note that it is not a good idea to rely on the pod IP address, but this should do the trick.

Also, sending a request to the pod is exactly the same thing as sending a request to the container.

-- suren
Source: StackOverflow