Multi-deployment scaling in k8s

10/23/2019

I've explored k8s, but couldn't find an answer for a non-trivial setup I have.

I have 2 apps (2 containers) which work together but are not otherwise related.
app1 receives and filters data, then sends it to app2.

  1. I've decided to have one deployment per application;
    1.1. (And not both containers in the same Pod) because they shouldn't share anything; communication takes place over the standard network.
    1.2. Each can scale independently.

-> Q1: Is this approach correct?

So I have 2 deployments corresponding to these 2 apps, defined in deployment1.yaml and deployment2.yaml. Each can scale independently with kubectl scale.
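
For reference, this is roughly what such scaling looks like (the deployment names app1 and app2 are assumptions, and these commands need a live cluster):

```shell
# Scale each deployment independently:
kubectl scale deployment/app1 --replicas=3
kubectl scale deployment/app2 --replicas=5

# kubectl scale also accepts several resources in one call, which is the
# closest built-in thing to a "multi-deployment scale":
kubectl scale deployment/app1 deployment/app2 --replicas=3
```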

My two deployments should work together, so when scaling I would like to:

  1. Scale both deployment1 and deployment2.

-> Q2: Is there a way to perform a multi-deployment scale with k8s?

  2. Do something for the newly created instance of deployment1 so it knows of the new instance of deployment2 (IP address, etc.), i.e. I would like to send some request to it with deployment2's IP.

-> Q3: Is there a way to have some kind of after-successful-scale hook in order to run some code, e.g. a bash script? If there is, who runs it? The master node? Some init container?

Regardless of 3., I would also like deployment1 to pull its configuration from somewhere when it starts, or alternatively to somehow attach files to it when it starts.

-> Q4: Is there a way to do this dynamically? The first instance of deployment1 might have a different configuration than the second. I understand I might use StatefulSets instead of Deployments so I could recognize a unique instance.
I thought of using a shared volume / ConfigMap so each instance would read its own unique configuration based on its instance number. But I also thought there might be a more standard way to do it, so I'm asking.


I thought about how these requirements would be met in the trivial setup of web apps, and came up with:

  1. Both deployments scale automatically when needed, according to some trigger.
  2. Both deployments expose a LoadBalancer, so app1 always talks to app2's LoadBalancer; then deployment1 is really independent of deployment2: both scale, and the LoadBalancer handles the load.

-> Q5: Is that reasoning correct for the trivial case?

-- hudac
cloud
kubernetes
microservices

3 Answers

10/30/2019

The more I look at it, the more I understand that K8s by itself isn't the answer to my requirements.

I'm implementing a VNF (virtual network function), so I need another layer of abstraction.
Currently I'm looking at OSM (Open Source MANO), which at first glance looks like it has what I need.

I'll try to update after I know more.

-- hudac
Source: StackOverflow

10/23/2019

For Q5: just create a Service and let the applications talk inside the Kubernetes cluster; the Service will handle load balancing (per connection). You don't need a Service of type LoadBalancer, which would make the applications use an external IP.
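
A minimal sketch of such a Service, assuming app2's pods carry the label app: app2 and listen on port 8080 (both names are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app2        # app1 can reach app2 at http://app2:8080 inside the cluster
spec:
  selector:
    app: app2       # must match the pod labels in deployment2.yaml
  ports:
    - port: 8080
      targetPort: 8080
  # no "type: LoadBalancer" here; the default ClusterIP type stays cluster-internal
```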

For Q1-Q4: you can write your own operator using the Kubernetes API (and probably one of the client libraries). There is a watch API, so you can get notified automatically of changes relevant to your operator; you don't have to poll.
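
A sketch of such a watch loop, assuming the official "kubernetes" Python client is installed, a kubeconfig (or in-cluster config) is available, and the deployment names app1/app2 in the default namespace are hypothetical:

```python
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

w = watch.Watch()
# Get notified of Deployment changes instead of polling:
for event in w.stream(apps.list_namespaced_deployment, namespace="default"):
    dep = event["object"]
    if dep.metadata.name == "app1" and event["type"] == "MODIFIED":
        # Example reaction: keep app2's replica count in sync with app1's
        replicas = dep.spec.replicas
        apps.patch_namespaced_deployment_scale(
            name="app2", namespace="default",
            body={"spec": {"replicas": replicas}})
```

This is the basic shape of an operator's reconcile loop; real operators usually add error handling and resume the watch from the last seen resourceVersion.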

-- Thomas

10/23/2019

For Q1 & Q2: You can use two deployments, A and B, and two services, svcA and svcB, to expose the pods. That way, A can simply refer to B using the DNS name 'svcB'.

Do you really need to know when a new instance of a pod is created? Because if multiple instances of pod B are running, the service svcB will act as a load balancer and distribute the load among the instances of B.

For storing configuration, use a ConfigMap, and mount the ConfigMap in your deployments for A and B. However, this gives all instances of the pods identical configuration.
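
A sketch of that setup; the ConfigMap name, key, and mount path are all assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app1-config          # hypothetical name
data:
  app.conf: |
    filter_threshold=10      # example key; contents are hypothetical
---
# Fragment of the pod template in deployment1.yaml that mounts it,
# so the file appears inside the container as /etc/config/app.conf:
#   containers:
#     - name: app1
#       volumeMounts:
#         - name: config
#           mountPath: /etc/config
#   volumes:
#     - name: config
#       configMap:
#         name: app1-config
```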

If you really need to know which particular pod you're working with, you can use StatefulSets instead of Deployments. The hostname will include the pod's ordinal index, so you can distinguish between different instances.
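
With a StatefulSet named app1, the pods are named app1-0, app1-1, ..., so a startup script can derive its own instance number from the hostname and pick a per-instance config file (the paths here are assumptions):

```shell
# Inside the container, HOSTNAME is set to the pod name, e.g. app1-2;
# we simulate that here for illustration.
HOSTNAME=app1-2
ORDINAL=${HOSTNAME##*-}            # strip everything up to the last '-'
CONFIG_FILE="/etc/config/instance-${ORDINAL}.conf"
echo "$CONFIG_FILE"                # prints /etc/config/instance-2.conf
```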

-- Burak Serdar