Ensure minimum number of pods in every node with node selector

11/30/2019

With nodeSelector we can schedule a particular Deployment's replicas to a certain node pool. But how can we make sure that at least one Pod is running on every node (say the node pool has more than one node)?

I need this to ensure my Pods are spread across the node pool, so that if a particular node faces an issue (say it gets disconnected from the cluster) my application would still run.

-- Jawahar
google-kubernetes-engine
kubernetes
microservices

2 Answers

12/2/2019

Kubernetes has a dedicated resource type for this called DaemonSet. It ensures that a copy of your Pod runs on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      containers:
      - name: main
        image: luksa/ssd-monitor

You can see 2 Pods running on the 2 nodes:

[root@master ~]# kubectl get po -o wide
NAME                         READY   STATUS                       RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
ssd-monitor-24qd7            1/1     Running                      0          2m17s   10.36.0.7    node2.k8s   <none>           <none>
ssd-monitor-w7nxr            1/1     Running                      0          2m17s   10.44.0.12   node1.k8s   <none>           <none>
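Since the question mentions node selectors: a DaemonSet can be combined with a nodeSelector in its Pod template to run only on the nodes of a specific pool. A sketch, assuming a GKE cluster where nodes carry the `cloud.google.com/gke-nodepool` label that GKE applies automatically (the pool name `my-pool` is a placeholder):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      # Restrict the DaemonSet to nodes in one node pool;
      # "my-pool" is a placeholder for your pool's name.
      nodeSelector:
        cloud.google.com/gke-nodepool: my-pool
      containers:
      - name: main
        image: luksa/ssd-monitor
```

With this, the DaemonSet controller creates exactly one Pod on each node that matches the selector, and skips all other nodes.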
-- user10912187
Source: StackOverflow

11/30/2019

With nodeSelector you can directly tie a Pod to a node, but it doesn't provide any means for spreading the Pods of a Deployment across the nodes.

To spread Pods across the nodes, you can use Pod anti-affinity.

For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: my-app
        image: my-app:1.0.0

This schedules the Pods so that no two Pods of the Deployment are located on the same node, if possible.

For example, if you have 5 nodes and 3 replicas in the Deployment, then each Pod should be scheduled to a different node. If you have 5 nodes and 6 replicas, then the first 5 Pods should each be scheduled to a different node, and the 6th Pod is scheduled to a node that already has a Pod (because there's no other possibility).
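Note that preferredDuringSchedulingIgnoredDuringExecution is a soft rule: the scheduler tries to spread the Pods but will still co-locate them if it has no other choice. If you instead want the scheduler to refuse to place two Pods on the same node (leaving extra replicas Pending), you can use the hard variant of the same API. A sketch of just the affinity stanza (note that the required variant takes the pod affinity term directly, without a weight):

```yaml
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: "kubernetes.io/hostname"
```

With 5 nodes and 6 replicas under this rule, the 6th Pod would stay Pending until a new node becomes available, so the soft variant is usually the safer default.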

See more examples in the Kubernetes documentation.

-- weibeld
Source: StackOverflow