Kubernetes deployment does not perform a rolling update when using a single replica

8/11/2017

I modified the deployment config (production.yaml), changing the container image value.

I then ran this: kubectl replace -f production.yaml.


While this was in progress, my service did not appear to be responding. In addition:

kubectl get pods:

wordpress-2105335096-dkrvg 3/3 Running 0 47s

a while later... :

wordpress-2992233824-l4287 3/3 Running 0 14s

a while later... :

wordpress-2992233824-l4287 0/3 ContainerCreating 0 7s

It seems the previous pod was terminated before the new pod was Running... Why?


production.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - image: eu.gcr.io/abcxyz/wordpress:deploy-1502463532
          name: wordpress
          imagePullPolicy: "Always"
          env:
            - name: WORDPRESS_HOST
              value: localhost
            - name: WORDPRESS_DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
        - image: eu.gcr.io/abcxyz/nginx:deploy-1502463532
          name: nginx
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              name: nginx
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
        - image: gcr.io/cloudsql-docker/gce-proxy:1.09
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
      volumes:
        - name: wordpress-persistent-storage
          gcePersistentDisk:
            pdName: wordpress-disk
            fsType: ext4

        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
-- Chris Stryczynski
kubectl
kubernetes

1 Answer

8/12/2017

I believe this behaviour is correct according to the Kubernetes documentation. Assuming you specify n replicas for a deployment, the following steps will be taken by Kubernetes when updating a deployment:

  1. Terminate old pods, while ensuring that at least n - 1 total pods are up
  2. Create new pods until a maximum of n + 1 total pods are up
  3. As soon as new pods are up, go back to step 1 until n new pods are up

In your case n = 1, which means that in the first step the single old pod is terminated immediately, leaving zero pods serving traffic until the new pod is Running.

See Updating a Deployment for more information:

Deployment can ensure that only a certain number of Pods may be down while they are being updated. By default, it ensures that at least 1 less than the desired number of Pods are up (1 max unavailable). Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 1 more than the desired number of Pods are up (1 max surge). In a future version of Kubernetes, the defaults will change from 1-1 to 25%-25%.
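These defaults can be overridden per deployment. As a sketch (assuming the rest of your production.yaml stays as posted), setting maxUnavailable to 0 forces Kubernetes to create the replacement pod first, surging to two pods, and only terminate the old one once the new one is Ready:

```yaml
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the old pod down before its replacement is Ready
      maxSurge: 1         # allow one extra pod during the rollout
```

Note that "Ready" only means what your probes say it means: without a readinessProbe on the containers, a pod is considered Ready as soon as its containers start, so traffic may still shift before the application can actually serve it.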

-- user3151902
Source: StackOverflow