What is the impact of the replicas field while updating existing Deployments in Kubernetes?

11/23/2019

I'm trying to understand the Kubernetes Deployment strategy by applying minReadySeconds and a readinessProbe to an existing Deployment, but I'm getting different results when the replicas field is specified. The existing Deployment manifest is below:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubia-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      name: dep-spec
      labels:
        app: dev
    spec:
      containers:
      - name: kubia-dep-cn
        image: luksa/kubia:v2

The Deployment and pod status are as below:

[root@master ~]# kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
kubia-dep   3/3     3            3           30s
[root@master ~]#

[root@master ~]# kubectl get po
NAME                         READY   STATUS    RESTARTS   AGE
kubia-dep-54cd566dc8-4vbjq   1/1     Running   0          113s
kubia-dep-54cd566dc8-kssrb   1/1     Running   0          113s
kubia-dep-54cd566dc8-vgjpv   1/1     Running   0          113s
[root@master ~]#

Now I'm applying minReadySeconds and a readinessProbe, and updating the image to luksa/kubia:v3, which contains code where all HTTP requests from the fifth request onward return an internal server error (HTTP status code 500), to test the benefits of the combination of minReadySeconds and readinessProbe:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubia-dep
spec:
  replicas: 3
  minReadySeconds: 10
  selector:
    matchLabels:
      app: dev
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: dep-spec
      labels:
        app: dev
    spec:
      containers:
      - name: kubia-dep-cn
        image: luksa/kubia:v3
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
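The v3 bug described above can be sketched as a per-pod request counter (a hypothetical stand-in for the luksa/kubia:v3 handler, not its actual source). Note that the readinessProbe's own httpGet to / hits the same endpoint, so probe requests count toward the same limit:

```python
# Minimal sketch (an assumption, not the real v3 code): serve the first
# four requests normally, then return 500 for every request after that.
class KubiaV3:
    def __init__(self):
        self.request_count = 0

    def handle(self, path="/"):
        """Return an HTTP status code for one incoming request."""
        self.request_count += 1
        if self.request_count >= 5:
            return 500  # internal server error from the fifth request on
        return 200

pod = KubiaV3()
# With periodSeconds: 1, the probe alone sends one request per second,
# so the pod's endpoint starts failing within a few seconds.
statuses = [pod.handle() for _ in range(6)]
print(statuses)  # [200, 200, 200, 200, 500, 500]
```

Because the probe itself consumes requests, a pod running this code can only report Ready briefly before turning NotReady.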

The Deployment status and pod status are as below:

[root@master ~]# kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
kubia-dep   0/3     3            0           9m4s
[root@master ~]#

[root@master ~]# kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
kubia-dep-7c884d659-h4svd   0/1     Running   0          6m3s
kubia-dep-7c884d659-nrwdf   0/1     Running   0          6m5s
kubia-dep-7c884d659-wmzfk   0/1     Running   0          6m8s
[root@master ~]#
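The AVAILABLE column staying at 0 matches how minReadySeconds gates availability: a pod counts as available only after its readiness probe has been passing continuously for at least minReadySeconds. A rough sketch of that rule (my own simplification, not the controller's code):

```python
def is_available(ready_seconds, min_ready_seconds=10):
    """Simplified availability rule: a pod is 'available' only after
    it has been continuously Ready for at least minReadySeconds."""
    return ready_seconds >= min_ready_seconds

# A v3 pod whose endpoint starts failing after roughly five seconds
# never stays Ready for the full 10-second window, so it is never
# counted as available.
print(is_available(ready_seconds=4))   # False
print(is_available(ready_seconds=12))  # True
```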

Now no pods are in Ready status. I mean, all 3 pods are running the buggy image version, and if I hit the service with curl I get:

while true;do curl http://10.104.60.165;done
curl: (7) Failed connect to 10.104.60.165:80; Connection refused
curl: (7) Failed connect to 10.104.60.165:80; Connection refused
curl: (7) Failed connect to 10.104.60.165:80; Connection refused

Expectation: until the first new v3 pod is available, the rollout process will not continue, and the existing v2 pods will keep serving incoming requests.

Actual: all 3 pods are running the new buggy v3 version, and all connections to the app are refused.
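For reference, with replicas: 3, maxSurge: 1, and maxUnavailable: 0, the rollout bounds work out as follows (a back-of-the-envelope calculation, not controller code), which is why the expectation above is what the strategy promises on paper:

```python
replicas = 3
max_surge = 1
max_unavailable = 0

# Upper bound on total pods (old + new) that may exist during the rollout.
max_total_pods = replicas + max_surge            # 4
# Lower bound on pods that must remain available at all times.
min_available_pods = replicas - max_unavailable  # 3

print(max_total_pods, min_available_pods)  # 4 3
```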

So is this the impact of the replicas field mentioned while updating the existing Deployment? If so, how does it work?

-- user10912187
kubernetes

0 Answers