Kubernetes keeping old replicasets with running pods after deployments

10/29/2019

Not sure why this is happening, but we're seeing old ReplicaSets with active pods running in our Kubernetes cluster even though the Deployments they were attached to were deleted long ago (some up to 82 days old). Our Deployments have spec.replicas set to a maximum of 2, yet we're seeing as many as 6 to 8 active pods under these Deployments.

We are currently running k8s version 1.14.6. A sample deployment is shown below:

{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "xxxxxxxxxxxxxxxx",
    "namespace": "default",
    "annotations": {
      "deployment.kubernetes.io/revision": "15",
    }
  },
  "spec": {
    "replicas": 2,
    "selector": {
      "matchLabels": {
        "app": "xxxxxxxx"
      }
    },
    "template": {
      "spec": {
        "containers": [
          {
            "name": "xxxxxxxx",
            "image": "xxxxxxxx",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ], 
            "resources": {},
            "imagePullPolicy": "Always"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": 1,
        "maxSurge": 1
      }
    },
    "minReadySeconds": 10,
    "revisionHistoryLimit": 2147483647,
    "progressDeadlineSeconds": 2147483647
  },
  "status": {
    "observedGeneration": 15,
    "replicas": 2,
    "updatedReplicas": 2,
    "readyReplicas": 2,
    "availableReplicas": 2,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      }
    ]
  }
}
-- NealR
kubernetes

2 Answers

10/29/2019

It may be an issue with labels. There are no labels defined in your pod template's metadata, so the Deployment's spec.selector (app: xxxxxxxx) has nothing in the template to match.
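
A minimal sketch of what that fix could look like, assuming you keep the selector app: xxxxxxxx (the label value is a placeholder): the pod template needs a metadata.labels block that matches the Deployment's spec.selector, along these lines (the rest of the template spec stays as it is):

"template": {
  "metadata": {
    "labels": {
      "app": "xxxxxxxx"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "xxxxxxxx",
        "image": "xxxxxxxx"
      }
    ]
  }
}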

-- Praveen
Source: StackOverflow

10/30/2019

Changes to labels or to the label selector make existing pods fall out of a ReplicaSet's scope: once a pod's labels no longer match the ReplicaSet's selector, that pod is no longer "controlled" by the ReplicaSet.
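
For instance, relabeling one of its pods (the pod name and new label value below are made up) is enough to orphan it; the ReplicaSet then sees one matching pod too few and creates a replacement, while the relabeled pod keeps running on its own:

kubectl label pod example-app-7d4b9c9f6b-abcde app=debug --overwrite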

If you run kubectl get pods <pod_name> -o yaml, where <pod_name> is a pod created by a ReplicaSet, you will see an ownerReferences entry in the pod's metadata pointing at that ReplicaSet. However, if you change the pod's labels and run the same command, the owner reference is no longer there, because the ReplicaSet released the pod once it fell out of its scope.
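
For example (the pod and ReplicaSet names below are made up), the relevant part of the output of kubectl get pods example-app-7d4b9c9f6b-abcde -o yaml would look roughly like this before the labels are changed:

metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: example-app-7d4b9c9f6b
    controller: true
    blockOwnerDeletion: true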

Also, if you create bare pods that happen to have the same labels as a ReplicaSet's selector, they will be acquired by that ReplicaSet. This happens because an RS is not limited to the pods created from its template: it can acquire any pods matching its selector, and it will terminate pods if the desired number of replicas specified in the RS manifest is exceeded.

If a bare pod with matching labels is created before the RS, the RS will count this pod and create only as many additional pods as are needed to reach the desired number of replicas.
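
As an illustration (the name, labels and image are hypothetical), a bare pod like the one below would be counted towards the replicas of any ReplicaSet in the same namespace whose selector is app: myapp:

apiVersion: v1
kind: Pod
metadata:
  name: bare-pod
  labels:
    app: myapp
spec:
  containers:
  - name: web
    image: nginx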

You can also delete a ReplicaSet without affecting any of its pods by using kubectl delete with the --cascade=false option.
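
For example (the ReplicaSet name is a placeholder):

kubectl delete rs example-app-7d4b9c9f6b --cascade=false

The ReplicaSet object is removed, but its pods keep running as orphans, which is another way to end up with pods that no Deployment or ReplicaSet owns.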

-- KFC_
Source: StackOverflow