Old ReplicaSet and old pods are still alive alongside the new ReplicaSet and new pods when using 'kubectl replace -f [yaml file]'

8/13/2018

What happened:

The old ReplicaSet and old pods are still alive alongside the new ReplicaSet and new pods when using the command 'kubectl replace -f [yaml file]'.

What you expected to happen:

The old ReplicaSet should be scaled down to 0 and the old pods should be deleted.

How to reproduce it (as minimally and precisely as possible):

The differences between the two YAML files are that the new YAML file:

  1. Adds the label component: propel-sx in spec.template.metadata.labels
  2. Adds an affinity section under spec.template.spec.affinity (see the sketch after this snippet for where these changes sit in a full Deployment manifest):

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: component
                operator: In
                values:
                - propel-sx
            topologyKey: kubernetes.io/hostname
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: workLoad
                operator: In
                values:
                - ExtraHigh
            topologyKey: kubernetes.io/hostname
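
A minimal sketch of where these two changes sit in a full Deployment manifest (the Deployment name, selector label, replica count, and container are placeholders, not taken from the original files):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: propel-sx                  # hypothetical Deployment name
    spec:
      replicas: 2                      # placeholder
      selector:
        matchLabels:
          app: propel-sx               # hypothetical pre-existing selector label
      template:
        metadata:
          labels:
            app: propel-sx
            component: propel-sx       # change 1: the added label
        spec:
          affinity:                    # change 2: the added affinity section
            podAntiAffinity: {}        # abbreviated; use the full block from the snippet above
          containers:
          - name: propel-sx            # placeholder container
            image: example/propel-sx:1.0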

Anything else we need to know?:

I found a strange thing: the old ReplicaSet loses its 'ownerReferences' field. As I understand it, this field links a ReplicaSet to the Deployment that owns it, which is what allows the old ReplicaSet to be scaled down and the new one scaled up. But I don't know why the 'ownerReferences' field was lost.
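
For comparison, a ReplicaSet that is still owned by a Deployment carries an entry like the following in its metadata (the names and UID below are placeholders); it can be inspected with kubectl get rs <replicaset-name> -o yaml:

    # Excerpt of a Deployment-owned ReplicaSet's metadata
    metadata:
      name: propel-sx-5d4f7c9b6d       # placeholder ReplicaSet name
      ownerReferences:
      - apiVersion: apps/v1
        kind: Deployment
        name: propel-sx                # the owning Deployment (placeholder name)
        uid: 1a2b3c4d-0000-0000-0000-000000000000   # UID of the Deployment (placeholder)
        controller: true
        blockOwnerDeletion: true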

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)

  • Kernel (e.g. uname -a):

Linux shc-sma-cd75.hpeswlab.net 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

-- Cain
kubernetes

1 Answer

8/13/2018

NOTE that kubectl replace destroys the object and creates a new one in its place. This can leave the ReplicaSets (which are 'children' of the Deployment object) behind, no longer considered to belong to any Deployment at all. In theory this could be mitigated by explicitly specifying the label selector that the Deployment uses to find which pods are its own (see example here), but it is probably best to use kubectl apply instead, so that your Deployment is updated rather than deleted and re-created (this will also preserve the label selectors that Kubernetes sets up automatically if you haven't set any).
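
As a rough sketch of that suggestion (the file name and label are placeholders): update the existing Deployment in place and keep an explicit, stable selector so the Deployment can always find its own ReplicaSets and pods:

    # Update the existing Deployment in place instead of replacing it:
    #   kubectl apply -f new-deployment.yaml
    #
    # Excerpt of the Deployment spec with an explicit selector:
    spec:
      selector:
        matchLabels:
          app: propel-sx               # placeholder; must also appear in spec.template.metadata.labels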

I assume that you want the default rolling-update behavior of Deployments: they create a new ReplicaSet and start it before the old one is drained. If not, you can change this by adding

    strategy:
      type: Recreate

in the Deployment spec. This will make the old ReplicaSet drain completely before the new one is started.
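
For context, a minimal excerpt showing where this field sits, directly under the Deployment's spec (other fields omitted):

    spec:
      strategy:
        type: Recreate                 # all old pods are terminated before new pods are created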

-- Leo K
Source: StackOverflow