Kubernetes updating only 1 pod instead of all (2 replicas) on Rolling Update

12/14/2018

I've set up 2 replicas of a deployment.

When I use this strategy:

strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

It only updates 1 pod when I update it via kubectl set image. The second pod does not get updated with the new code, which means I have 2 pods running different images.
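
For context, the update is just a kubectl set image against the existing container (the tag below is a placeholder for whatever image I actually push):

kubectl set image deployment/servicing servicing-container=gcr.io/something/something:<new-tag>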

When I set maxSurge and maxUnavailable to 25%, the pods don't get replaced at all.

Here's the complete YAML:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "89"
  creationTimestamp: 2018-11-26T09:40:48Z
  generation: 94
  labels:
    io.kompose.service: servicing
  name: servicing
  namespace: default
  resourceVersion: "6858872"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/servicing
  uid: 5adb98c8-f15f-11e8-8752-42010a800188
spec:
  replicas: 2
  selector:
    matchLabels:
      io.kompose.service: servicing
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: servicing
    spec:
      containers:
      - env:
        - name: JWT_KEY
          value: ABCD
        - name: PORT
          value: "3001"
        image: gcr.io/something/something
        imagePullPolicy: Always
        name: servicing-container
        ports:
        - containerPort: 3001
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 3001
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 25m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2018-12-13T11:55:00Z
    lastUpdateTime: 2018-12-13T11:55:00Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 94
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
-- Alamgir Munir Qazi
kubernetes

1 Answer

12/14/2018

You have set initialDelaySeconds to 5, periodSeconds to 5 and failureThreshold to 3. That means Kubernetes waits 5 seconds before the first readiness probe, then probes your application every 5 seconds, and gives up after 3 consecutive failures. So your app is checked at roughly 5, 10 and 15 seconds, and if the pod doesn't come up in that time, the rollout bails out without replacing the remaining pod.

You might need to increase this failureThreshold so that your app has enough time to come up.
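
For example, a readiness probe along these lines gives the pod a failure budget of roughly 60 seconds (12 failures at a 5 second period) before it is reported as failing; the numbers are only illustrative and should be sized to your app's real startup time:

readinessProbe:
  httpGet:
    path: /          # same endpoint as in your current spec
    port: 3001
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 12   # 12 x 5s = ~60s before the probe is treated as failing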

Also, I would suggest setting maxUnavailable to 0 so that an old pod is deleted only when the new pod replacing it is up and ready.
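
With the maxSurge: 1 you already have, that strategy looks like this; Kubernetes then brings up one extra pod and only removes an old one after the new pod passes its readiness probe:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # one extra pod may be created above the desired 2 replicas
    maxUnavailable: 0  # never take an old pod down before its replacement is ready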

Check my answer here for better understanding:

Kubernetes 0 Downtime using Readiness Probe and RollBack strategy not working

-- Prafull Ladha
Source: StackOverflow