I'm trying to understand whether it is good practice to use a podAntiAffinity rule to prefer that Pods in my Deployment avoid being scheduled on the same node, thus spreading the Pods out across my Kubernetes cluster.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "app.kubernetes.io/name"
            operator: In
            values:
            - "foo"
        topologyKey: "kubernetes.io/hostname"
The documentation suggests avoiding the use of podAntiAffinity in clusters with several hundred nodes or more, which implies there is a performance cost to using it. Also, if I don't use it, isn't the default scheduler behaviour to spread Pods out anyway?
I suppose it also matters what the Deployment is for. It makes sense to use podAntiAffinity for a Redis cache, for example, but wouldn't it make even more sense to use a DaemonSet in that case? Also, what is the recommendation for a web server Pod?
You use Pod/Node affinity rules when you want to schedule Pods onto particular nodes by matching specified conditions in a more expressive way. I am not sure if you can use it just to avoid Pods being scheduled on the same node. If you don't use an affinity rule, kube-scheduler will look for a feasible node to schedule the Pod on, and this is generally not the same node anyway.
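If the goal is to strictly keep replicas off the same node rather than merely prefer it, a hard anti-affinity rule can be expressed with requiredDuringSchedulingIgnoredDuringExecution. This is only a sketch reusing the app.kubernetes.io/name=foo selector from the question; note that with a hard rule a Pod stays Pending if every node already runs a matching Pod.

affinity:
  podAntiAffinity:
    # Hard rule: never schedule two matching Pods on the same node.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app.kubernetes.io/name"
          operator: In
          values:
          - "foo"
      # One matching Pod per hostname, i.e. per node.
      topologyKey: "kubernetes.io/hostname"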
You make kube-scheduler "think" more by defining affinity rules, and it is normal that this can affect performance in big clusters.
Also, to understand how kube-scheduler iterates over nodes by default, you can check this documentation.
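As a side note on the spreading part of the question: the scheduler's default plugins do try to spread replicas of the same ReplicaSet across nodes, but if you want that behaviour to be explicit without anti-affinity, topologySpreadConstraints is a lighter-weight way to express it. This is only a sketch, again assuming the app.kubernetes.io/name=foo label from the question:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: "kubernetes.io/hostname"
  # ScheduleAnyway makes this a soft preference, similar in spirit to the
  # preferred anti-affinity rule shown in the question.
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/name: "foo"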