I am working at a client that has 2 datacenters. We are setting up an OpenShift cluster with nodes in both data centers.
For high availability we want apps to run with a minimum of 2 pods, and each pod has to run in a different data center.
How is that best done in Openshift?
Is it required to create two deployment configs where each config targets different data center nodes?
Or is it possible to have just one deployment config and have OpenShift guarantee that pods of the same deployment config are always scheduled onto two different nodes in two different data centers?
Thanks.
I can guide you to some extent on this, as I deploy apps in a similar fashion to what you are looking for.
High Availability:
For HA, OpenShift will make sure the minimum number of pods is running at any given time, as per the replication controller (RC) configuration. The RC monitors the pods continuously and redeploys a pod in case of failure. The reason for running the application in different data centers is to have HA across regions.
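As a minimal sketch (the name and image are illustrative, not from your environment), this is what a deployment config with a two-pod minimum looks like; the underlying replication controller re-creates a pod whenever one fails:

```yaml
# Hypothetical example: a DeploymentConfig whose replication
# controller keeps two pods of "my-app" running at all times.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app               # placeholder application name
spec:
  replicas: 2                # minimum number of pods; RC replaces failed pods
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # placeholder image
```

Note that `replicas: 2` alone guarantees only the pod count, not where the pods land; spreading them across data centers is a separate scheduling concern.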
Deployment Config:
Each pod running in a data center has a respective deployment config associated with it. You can view all the deployment configs within the project.
The configuration of the DCs in different data centers may be identical unless there are specific differences in environment variables or PVCs (persistent volume claims, the external storage for the pods).
With respect to traffic routing, the load balancer will route traffic based on its configuration.
Hope this helps give you an overview for your query.
By default, Kubernetes will make a best effort to spread Pods across nodes if they fit and no scheduling rules are in place.
If you were to use separate deployment configurations, you could schedule applications by labelling specific nodes and setting a nodeSelector value in each one.
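A sketch of that approach, assuming nodes have been given a hypothetical `datacenter` label (e.g. with `oc label node <node> datacenter=dc1`); you would create one deployment config per data center, each differing only in its name and nodeSelector:

```yaml
# Hypothetical example: one of two deployment configs, this one pinned
# to nodes labelled datacenter=dc1 (label name and values are assumptions).
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app-dc1           # a second config, my-app-dc2, would target dc2
spec:
  replicas: 1
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        datacenter: dc1      # only schedule onto dc1 nodes
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # placeholder image
```

The drawback is that you now manage two objects for one application, which is why the anti-affinity approach below is usually preferable.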
However, you should not be required to do this and can likely use anti-affinity rules to schedule the Pods as required. You will still need to label the nodes. You can then specify that multiple Pods running behind the same service should never be co-located in the same topology domain.
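A sketch of the anti-affinity variant, again assuming a `datacenter` node label (the label key and the `my-app` selector are illustrative). Using the label as the `topologyKey` tells the scheduler that two pods matching the selector must land in different data centers, not merely on different nodes:

```yaml
# Hypothetical fragment of a single deployment config's pod template.
# "required...": co-location is a hard rule; a pod stays Pending if
# no node in an unused datacenter can accept it.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app          # pods of the same app repel each other
            topologyKey: datacenter  # spread across distinct datacenter label values
```

If a hard guarantee is too strict (for example during maintenance of one data center), `preferredDuringSchedulingIgnoredDuringExecution` expresses the same spread as a soft preference instead.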
Attaching a link to the Kubernetes documentation, which demonstrates what I've described for a ZooKeeper StatefulSet. See here