During a Deployment update (rolling update), do coexisting new and old replicas receive traffic at the same time?

8/2/2017

I just want to find out if I understood the documentation right:

Suppose I have an nginx server configured with a Deployment, running image version 1.7.9 with 4 replicas:

apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Now I update the image to version 1.9.1:

kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

With kubectl get pods I see the following:

> kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
nginx-2100875782-c4fwg   1/1       Running       0          3s
nginx-2100875782-vp23q   1/1       Running       0          3s
nginx-390780338-bl97b    1/1       Terminating   0          17s
nginx-390780338-kq4fl    1/1       Running       0          17s
nginx-390780338-rx7sz    1/1       Running       0          17s
nginx-390780338-wx0sf    1/1       Running       0          17s

Two new instances (c4fwg, vp23q) of 1.9.1 have been started, coexisting for a while with instances of the 1.7.9 version (three still Running, one already Terminating).

What happens to requests made to the Service at this moment? Do all requests go to the old pods until all the new ones are available, or are requests load-balanced between the new and the old pods?

If it is the latter, is there a way to modify this behaviour and ensure that all traffic goes to the old version until all the new pods are started?

-- codependent
kubernetes

1 Answer

8/2/2017

The answer to "what happens to the requests" is that they will be round-robin-ed across all Pods that match the Service's selector, so yes, both old and new Pods will receive traffic at the same time. I believe Kubernetes considers this to be a feature, not a bug.
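
To make that concrete, here is a minimal sketch of such a Service (the question does not show one, so the name and port are assumptions): its selector matches only the app: nginx label, and since Pods from both the old and the new ReplicaSet carry that label, both become endpoints of the Service during the rollout.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service        # hypothetical name, not shown in the question
spec:
  selector:
    app: nginx               # matches Pods from BOTH the 1.7.9 and 1.9.1 ReplicaSets
  ports:
  - port: 80
    targetPort: 80

You can confirm which Pods are behind the Service while the rollout is in progress with kubectl get endpoints nginx-service.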

The question about keeping traffic on the old Pods can be answered in two ways. One answer is that perhaps Deployments are not suitable for your style of rolling out new Pods, since coexistence is simply the way they operate. The other is that you can update the Pod selector inside the Service to describe the Pods more precisely, e.g. "this Service is for the 1.7.9 Pods", which pins the Service to the "old" Pods; then, after even just one of the 1.9.1 Pods has been started and is Ready, you can update the selector to say "this Service is for the 1.9.1 Pods" (see the sketch below).
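
A minimal sketch of that second approach, assuming you add a version label to the Pod template (the version label and the Service name are assumptions, not something from the question). The Deployment's own selector should stay on app: nginx only, so that it spans both versions, while the Service selects on both labels and therefore only ever matches Pods of one version:

# Fragment of the Deployment from the question, with an explicit selector
# and an added (hypothetical) version label on the Pod template:
spec:
  selector:
    matchLabels:
      app: nginx             # spans both versions
  template:
    metadata:
      labels:
        app: nginx
        version: "1.7.9"     # bump to "1.9.1" together with the image

apiVersion: v1
kind: Service
metadata:
  name: nginx-service        # hypothetical name
spec:
  selector:
    app: nginx
    version: "1.7.9"         # pins the Service to the old Pods
  ports:
  - port: 80
    targetPort: 80

Once the 1.9.1 Pods are Ready, repoint the Service at them with a patch:

kubectl patch service nginx-service -p '{"spec":{"selector":{"app":"nginx","version":"1.9.1"}}}'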

If you find all this to be too much manual labor, there are a whole bunch of intermediary traffic managers that offer more fine-grained control than plain Pod selectors, or you can consider a formal rollout product such as Spinnaker that will automate what I just described (presuming, of course, you can get Spinnaker to work; I wish you luck with it).

-- mdaniel
Source: StackOverflow