Do I need a separate StatefulSet per rack/zone when using topologySpreadConstraints? Consider two cases: a single datacenter and multiple datacenters.

8/23/2021

Currently I have Cassandra deployed in k8s with a single rack (no multi-rack setup), in both single and multiple data-center configurations.

Now I am planning to deploy Cassandra across multiple racks in single/multiple DCs. I am planning to use topologySpreadConstraints for this. I will define two constraints, one for the zone and another for the node, and will label the nodes accordingly. Here is the link I am referring to for the above implementation.
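For context, the node labels could be applied with kubectl. A minimal sketch, assuming the custom label keys zone-pu and node-pu used in the manifest below; the node names and zone values are hypothetical:

# Hypothetical node names and zone values; the label keys match the
# topologyKey values used in the StatefulSet manifest further below.
kubectl label node worker-1 zone-pu=zone-a node-pu=worker-1
kubectl label node worker-2 zone-pu=zone-b node-pu=worker-2
kubectl label node worker-3 zone-pu=zone-c node-pu=worker-3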

The idea behind this is to achieve high availability (HA): if one rack goes down, my service stays available, but its pods should not be rescheduled onto the other racks. When the rack is restored, the pods should be scheduled back onto it.

But I am not sure how many StatefulSets (sts) I should use:

1. Should I use one sts if I have one DC, and N sts if I have N DCs?
2. Or should I always use N sts if I have N racks in each DC?

Sample code: Consider that I have 3 nodes and 3 racks, and I am trying to deploy 2 pods on each rack/node. I have added the zone and node labels to all nodes.

apiVersion: apps/v1 # StatefulSet lives in the apps/v1 API group, not v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
      foo: bar
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
        foo: bar
    spec:
      # topologySpreadConstraints is a pod-spec field, so it belongs
      # under .spec.template.spec, not directly under the StatefulSet spec
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: node-pu # custom node label (built-in alternative: kubernetes.io/hostname)
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      - maxSkew: 1
        topologyKey: zone-pu # custom node label (built-in alternative: topology.kubernetes.io/zone)
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      ... # removed other config
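As a side note, .spec.serviceName above assumes a matching headless Service exists to give the StatefulSet pods stable network identities. A minimal sketch, assuming the name "nginx" from the manifest above; the port is a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: nginx # must match .spec.serviceName in the StatefulSet
spec:
  clusterIP: None # headless: gives each pod a stable DNS record
  selector:
    app: nginx
  ports:
  - name: web
    port: 80 # placeholder port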
-- Pushpendra
cassandra
kubernetes

1 Answer

8/23/2021

I'm going to assume that you're not using K8ssandra's cass-operator, since with cass-operator the CassandraDatacenter resource owns the StatefulSets.

You don't need to create a StatefulSet for each logical Cassandra rack. A single StatefulSet with topology spread constraints should be able to schedule pods across the different availability zones.

But I would suggest creating a separate StatefulSet for each logical Cassandra DC, so you can control how the pods get scheduled across racks/zones within each DC. Cheers!
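For example, one possible shape of that layout (a sketch, not a prescribed convention; the name cassandra-dc1, the dc label, and the image tag are assumptions) is one StatefulSet per DC, each spreading its own pods across the zones that represent its racks:

# One StatefulSet per logical Cassandra DC; racks map to zones via the
# spread constraint instead of one StatefulSet per rack.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra-dc1 # repeat the manifest per DC, e.g. cassandra-dc2
spec:
  serviceName: cassandra-dc1
  replicas: 6
  selector:
    matchLabels:
      app: cassandra
      dc: dc1
  template:
    metadata:
      labels:
        app: cassandra
        dc: dc1 # the labelSelector below scopes the skew to this DC only
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone-pu # one Cassandra rack per zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            dc: dc1
      containers:
      - name: cassandra
        image: cassandra:4.0 # hypothetical image/tag
      ... # storage, seeds, and other config omitted

This keeps one sts per DC (option 1 in the question), while the spread constraint, not extra StatefulSets, handles rack placement.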

-- Erick Ramirez
Source: StackOverflow