Kubernetes Pod AntiAffinity behaviour with Node selector

5/20/2021

I have a K8s Deployment with both a node selector and a hard pod anti-affinity rule in its spec. The node selector restricts the Pods to a node pool, while the anti-affinity rule ensures that no two pods with the label "App: test-app" get scheduled together.

OBSERVATION

The target node pool has 9 nodes, each carrying the label that the node selector uses to target the pod deployments. When I have 9 replicas of the deployment, all are scheduled on different nodes, which I assume is due to the anti-affinity rule. But once I increase the replicas to 10, the 10th pod also gets successfully deployed onto one of the 9 nodes in the node pool, thereby ignoring the hard anti-affinity rule. The following is a snippet of the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  labels:
    App: test-app
spec:
  replicas: 10
  selector:
    matchLabels:
      App: test-app
  strategy:
    rollingUpdate:
      maxSurge: 34%
      maxUnavailable: 34%
    type: RollingUpdate
  template:
    metadata:
      labels:
        App: test-app
    spec:
      nodeSelector:
        app: test-pool
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "test-pool"
          effect: "NoSchedule"
        - key: "dedicated"
          operator: "Equal"
          value: "test-pool"
          effect: "NoExecute"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - test-app
            topologyKey: kubernetes.io/hostname
   
...
...
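For context on how such a rule is evaluated: Kubernetes label keys are exact, case-sensitive strings, so a selector on the key `app` does not match pods labeled with the key `App`. A minimal sketch (a hypothetical helper, not actual scheduler code) of the `In`-operator matching used by `matchExpressions`:

```python
def matches_in_expression(pod_labels, key, values):
    """Return True if the pod carries `key` and its value is in `values`.

    Mirrors the semantics of a matchExpressions entry with operator "In":
    the key must exist on the pod and its value must be in the list.
    Label keys are compared case-sensitively, as in Kubernetes.
    """
    return pod_labels.get(key) in values

# Labels carried by the pods in the Deployment template above:
pod_labels = {"App": "test-app"}

# The anti-affinity rule in the snippet selects on key "app" (lowercase):
print(matches_in_expression(pod_labels, "app", ["test-app"]))  # False
print(matches_in_expression(pod_labels, "App", ["test-app"]))  # True
```

Because the selector matches no pods, a hard anti-affinity rule keyed on a non-matching label places no constraint on scheduling.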

EXPECTATION

I was expecting a scheduling failure referencing the anti-affinity rule. Can someone explain why the pod anti-affinity rule is being ignored during scheduling?

-- jada12276
affinity
google-kubernetes-engine
kubernetes
nodeselector

0 Answers