Partially Roll Out Kubernetes Pods

7/29/2018

I have 1 node with 3 pods. I want to roll out a new image to 1 of the 3 pods and keep the other 2 pods on the old image. Is that possible?

Second question: I tried rolling out a new image that contains an error, and I had already defined maxUnavailable, but Kubernetes still rolled the image out to all pods. I thought Kubernetes would stop rolling out to the remaining pods once it discovered an error in the first pod. Do we need to stop the rollout manually?

Here is my deployment script.

# Service setup
apiVersion: v1
kind: Service
metadata:
  name: semantic-service
spec:
  ports:
    - port: 50049
  selector:
    app: semantic
---
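# Deployment setup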
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service
spec:
  selector:
    matchLabels:
      app: semantic
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: semantic
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2
-- Hana Alaydrus
docker
google-cloud-platform
kubernetes

1 Answer

7/30/2018

As @David Maze wrote in the comment, you can consider using a canary deployment, where you distinguish deployments of different releases or configurations of the same component by giving them additional labels, and then use those labels to point the Service at the releases you want; more information about canary deployments can be found here. Another way to achieve your goal is a Blue/Green deployment, if you want to run two environments that are as identical as possible, with a straightforward way to switch between the Blue and Green environments and to roll back a deployment at any moment.
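
As a rough sketch of the canary idea (this would replace the single Deployment from the question; the -stable/-canary names, the track label and the v1 tag for the old image are all assumptions, not taken from your setup), you could run two Deployments that share the app: semantic label, with only the canary replica running the new image:

# Canary sketch: two Deployments instead of one.
# Names, the "track" label and the v1 tag are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service-stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: semantic
      track: stable
  template:
    metadata:
      labels:
        app: semantic
        track: stable
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v1   # old image (tag assumed)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semantic
      track: canary
  template:
    metadata:
      labels:
        app: semantic
        track: canary
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2   # new image under test

The Service from the question selects only app: semantic, so it load-balances across all three Pods, meaning roughly a third of the traffic hits the new image; scaling the canary Deployment up and the stable one down then shifts traffic gradually.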

The answer to the second question depends on what kind of error the image contains and how Kubernetes detects that issue in the Pod, because the maxUnavailable: 1 parameter only states the maximum number of Pods that may be unavailable during the update. During a Deployment update, the deployment controller creates a new Pod and then deletes an old one, as long as the number of available Pods still satisfies the rollingUpdate strategy parameters.
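
A related standard field, shown here only as a sketch, is progressDeadlineSeconds: if the new Pods never become available, the Deployment is marked as failed after that many seconds, but Kubernetes does not undo the rollout for you, so you would still stop or revert it yourself (for example with kubectl rollout undo). In the Deployment spec from the question, the relevant part would look like this:

spec:
  replicas: 3
  # Assumed value; after 600s without progress the Deployment's Progressing
  # condition becomes False with reason ProgressDeadlineExceeded.
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 of the 3 Pods may be unavailable
      maxSurge: 1         # at most 4 Pods may exist at once during the update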

Additionally, Kubernetes uses liveness/readiness probes to check whether a Pod is ready (alive) during a deployment update, and it keeps the old Pod running until the probes succeed on the new replica. I would suggest adding and checking probes to identify the status of the Pods when the Deployment tries to roll updates out across your cluster's Pods.
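
For example, assuming the container actually listens on the Service port 50049 (the probe type and timings below are assumptions), a TCP readiness probe would keep a broken v2 Pod NotReady, so the rollout stalls within the maxUnavailable/maxSurge limits instead of replacing all of the healthy Pods:

# Sketch: the Deployment from the question with an assumed readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: semantic
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: semantic
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2
        readinessProbe:
          tcpSocket:
            port: 50049        # Pod is only Ready once this port accepts connections
          initialDelaySeconds: 5
          periodSeconds: 10

If the service exposes an HTTP health endpoint, an httpGet probe on that endpoint is usually a stronger signal than a plain TCP check.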

-- mk_sta
Source: StackOverflow