I am creating StatefulSets, and I want the pods within one StatefulSet to be distributed across different nodes of the k8s cluster. In my case, one StatefulSet is one database replicaset. Here is how I set the pod label and the anti-affinity rule:
// imports assumed by this snippet:
//   corev1 "k8s.io/api/core/v1"
//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Stamp every pod of this StatefulSet with its replicaset UUID...
sts.Spec.Template.Labels["mydb.io/replicaset-uuid"] = replicasetUUID.String()

// ...and require that no two pods carrying the same UUID share a node.
sts.Spec.Template.Spec.Affinity.PodAntiAffinity = &corev1.PodAntiAffinity{
	RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
		{
			LabelSelector: &metav1.LabelSelector{
				MatchExpressions: []metav1.LabelSelectorRequirement{
					{
						Key:      "mydb.io/replicaset-uuid",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{replicasetUUID.String()},
					},
				},
			},
			// One pod per hostname among pods matching the selector above.
			TopologyKey: "kubernetes.io/hostname",
		},
	},
}
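Note that this snippet assumes sts.Spec.Template.Labels and sts.Spec.Template.Spec.Affinity are initialized elsewhere in the operator; if either can still be nil at this point, the assignments above panic. A minimal guard, added here as an assumption about the surrounding code, could look like:

// Hypothetical guard (not in the original snippet): make sure the
// label map and the Affinity struct exist before assigning into them.
if sts.Spec.Template.Labels == nil {
	sts.Spec.Template.Labels = map[string]string{}
}
if sts.Spec.Template.Spec.Affinity == nil {
	sts.Spec.Template.Spec.Affinity = &corev1.Affinity{}
}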
However, with these settings I get the opposite: storage-0-0 and storage-0-1 are in the same replicaset and end up on the same node. Moreover, they have exactly the same mydb.io/replicaset-uuid label:
$ kubectl -n mydb get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
storage-0-0 1/1 Running 0 40m x.x.x.x kubernetes-cluster-x-main-0 <none> <none>
storage-0-1 1/1 Running 0 39m x.x.x.x kubernetes-cluster-x-main-0 <none> <none>
storage-1-0 1/1 Running 0 40m x.x.x.x kubernetes-cluster-x-slave-0 <none> <none>
storage-1-1 1/1 Running 0 40m x.x.x.x kubernetes-cluster-x-slave-0 <none> <none>
mydb-operator-58c9bfbb9b-7djml 1/1 Running 0 46m x.x.x.x kubernetes-cluster-x-slave-0 <none> <none>
I suggest using a podAntiAffinity rule in the StatefulSet definition when deploying your application, so that no two instances are scheduled on the same host.
Reference: An example of a pod that uses pod affinity
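Because RequiredDuringSchedulingIgnoredDuringExecution is a hard constraint, two pods carrying the same mydb.io/replicaset-uuid cannot be scheduled onto the same node; if you still see them co-located, the rule most likely never made it into the pod spec. One way to check, reusing the pod and namespace names from your output:

$ kubectl -n mydb get pods -L mydb.io/replicaset-uuid
$ kubectl -n mydb get pod storage-0-0 -o jsonpath='{.spec.affinity.podAntiAffinity}'

If the second command prints nothing, the pods were created without the anti-affinity, which points at the operator (or the image it runs) rather than at the scheduler.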
It works correctly, as @jesmart wrote in the comment:
The setup described in the question works correctly; I had simply specified the wrong application image.