Can we create a Service to link two Pods from different Deployments?

12/20/2019

My application has two Deployments, each with one Pod. Can I create a Service to distribute load across these two Pods, even though they belong to different Deployments? If so, how?

-- Chandu
kubernetes
load-balancing

2 Answers

12/20/2019

Yes, this is possible. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both Deployments should provide the same functionality, as the responses should have the same format.

A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.

Here is an example based on the documentation.

1. The nginx Deployment. Keep in mind that a Deployment can have more than one label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

2. The nginx-second Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
      - name: nginx-second
        image: nginx
        ports:
        - containerPort: 80

Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find two Service YAMLs: nginx-service, which points to both Deployments, and nginx-service-1, which points only to the nginx-second Deployment.

# Service for both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx
---
# Service for the nginx-second Deployment only
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    env: prod

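Assuming the manifests above are saved to local files (the file names below are just placeholders), everything can be created with kubectl apply:

$ kubectl apply -f nginx-deployment.yaml
$ kubectl apply -f nginx-second-deployment.yaml
$ kubectl apply -f services.yaml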
You can verify that each Service binds to the expected Deployments by checking its endpoints.

$ kubectl get pods -l run=nginx -o yaml | grep podIP
    podIP: 10.32.0.9
    podIP: 10.32.2.10
    podIP: 10.32.0.10
    podIP: 10.32.2.11
$ kubectl get ep nginx-service
NAME            ENDPOINTS                                              AGE
nginx-service   10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more...   3m33s
$ kubectl get ep nginx-service-1
NAME              ENDPOINTS                     AGE
nginx-service-1   10.32.0.10:80,10.32.2.11:80   3m36s
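To see the load balancing in action, one option is to curl the Services from a temporary Pod inside the cluster. This is only a sketch; the curlimages/curl image and the assumption that everything runs in the same namespace are not part of the original answer.

$ kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never --command -- sh
# inside the temporary Pod, the Services resolve through cluster DNS
/ $ curl -s http://nginx-service
/ $ curl -s http://nginx-service-1

Repeated requests to nginx-service are spread over the Pods of both Deployments, while nginx-service-1 only ever reaches the nginx-second Pods; you can confirm which Pod served a request by checking kubectl logs for the nginx containers.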
-- PjoterS
Source: StackOverflow

12/20/2019

Yes, you can do that. Add a common label key-value pair to both Deployments' Pod templates and use that common label as the selector in the Service definition.

With a Service defined this way, requests will be load-balanced across all matching Pods, as in the sketch below.
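A minimal sketch of what that could look like (the my-app and variant labels and all names below are placeholders, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-a            # a second Deployment (e.g. backend-b) would differ only
spec:                        # in its name and the value of the "variant" label
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      variant: a
  template:
    metadata:
      labels:
        app: my-app          # common label shared by both Deployments
        variant: a           # label unique to this Deployment
    spec:
      containers:
      - name: backend
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app              # only the shared label, so Pods of both Deployments match
  ports:
  - port: 80
    protocol: TCP

Selecting only on the shared app label is what lets a single Service front Pods owned by two different Deployments.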

-- P Ekambaram
Source: StackOverflow