Pod anti-affinity not working on Kubernetes

2/23/2017

I've tried to run the nginx ingress controller on multiple hosts using this config:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
  namespace: default
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ingress-nginx
        k8s-addon: ingress-nginx.addons.k8s.io
      annotations:
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "podAntiAffinity": {
              "preferredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {
                  "matchExpressions": [
                    { "key": "app", "operator": "In", "values": ["ingress-nginx"] }
                  ]
                },
                "topologyKey": "kubernetes.io/hostname",
                "weight": 100
              }]
            }
          }
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
          name: ingress-nginx
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
            - --nginx-configmap=$(POD_NAMESPACE)/ingress-nginx

However, Kubernetes has scheduled two of the pods on the same node. Is there something wrong with the config? I'm using Kubernetes 1.5.2.
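For reference, I'm checking the placement with the command below (filtering on the app: ingress-nginx label from the pod template); the NODE column shows two of the three replicas on the same node.

kubectl get pods -l app=ingress-nginx -o wide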

Update: if I use required instead of preferred, at first all three pods report failed to fit in any node, fit failure summary on nodes: MatchInterPodAffinity (3), PodToleratesNodeTaints (1), but after a minute or two they are correctly scheduled on separate hosts.
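The required variant I tried looks roughly like this (same labelSelector and topologyKey; note that the required form takes plain pod affinity terms, with no weight field):

scheduler.alpha.kubernetes.io/affinity: >
  {
    "podAntiAffinity": {
      "requiredDuringSchedulingIgnoredDuringExecution": [{
        "labelSelector": {
          "matchExpressions": [
            { "key": "app", "operator": "In", "values": ["ingress-nginx"] }
          ]
        },
        "topologyKey": "kubernetes.io/hostname"
      }]
    }
  }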

-- alex88
kubernetes

1 Answer

2/23/2017

It's working correctly; you just need to create a Service first. If you create a Service for that Deployment, Kubernetes will spread your pods across nodes.

Corresponding section in the documentation: http://kubernetes.io/docs/user-guide/config-best-practices/#services
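A minimal Service for that Deployment could look something like this (the selector matches the app: ingress-nginx label from your pod template; the name and ports here are just an example):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: default
spec:
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443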

ReplicaSets have taken the place of ReplicationControllers, but I believe the Service-based spreading still works the same way.

-- JamStar
Source: StackOverflow