ReplicaSet doesn't delete Replication Controller pods?

5/28/2020

When I created a ReplicaSet and a ReplicationController, the ReplicaSet didn't delete the ReplicationController's pods, and I'm trying to understand why.

Context: I gave the ReplicaSet's matchLabels section the same labels as in the ReplicationController's labels section.

From my understanding, a ReplicaSet ensures there is only a set number of pods with the labels specified in its matchLabels section. When I create a pod on its own with the same labels, the ReplicaSet gets rid of that pod, but it doesn't seem to delete the ReplicationController's pods. So I guess my question is: does the ReplicationController keep its pods running, or does the ReplicaSet simply not interfere with ReplicationController pods?

-- Szymon Goldbaum
kubernetes

2 Answers

5/28/2020

You can think of a ReplicaSet as an updated version of a ReplicationController; the main difference is that a ReplicaSet also supports set-based selectors.

Still, you can consider them two separate resources.
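For illustration, here is a minimal sketch of a set-based selector in a ReplicaSet spec (the label values are made up); a ReplicationController's selector only supports plain key/value equality:

selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx
    - nginx-canary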

To list ReplicaSets:

kubectl get rs

To list ReplicationControllers:

kubectl get rc

ReplicationController concept: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/

ReplicaSet concept: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

-- Harsh Manvar
Source: StackOverflow

5/29/2020

I think it is related to metadata.ownerReferences. As per the ReplicaSet documentation:

A ReplicaSet is linked to its Pods via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.

A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a Controller and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet.

The Owners and dependents page also supplies an example of how to use and control metadata.ownerReferences.
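You can also read the field directly from any pod (the pod name is a placeholder; for an unowned pod this prints nothing):

kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'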

I played with nginx:

1) rs.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-set
  labels:
    app: nginx
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
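Apply it and check that the set is up (a sketch):

kubectl apply -f rs.yaml
kubectl get rs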

2) rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
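And likewise for the controller:

kubectl apply -f rc.yaml
kubectl get rc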

3) Manually create a pod: kubectl run nginx --image=nginx --labels="app=nginx". A pod started this way has NO ownerReferences at all.

Both the RS and the RC delete the nginx pod when I start it manually; they both rely on the "app: nginx" label.
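You can watch this happen from another terminal; since the controller is already at its desired count of 3, one matching pod gets terminated (sketch):

kubectl get pods -l app=nginx --watch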

However, you can deploy both the RS and the RC simultaneously; they will not touch each other's pods.

The key difference is in their ownerReferences.

RS:

ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-set

RC:

ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicationController
    name: nginx
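Snippets like these can be pulled from any live pod with something like the following (the pod name is a placeholder; RS pod names are generated, so yours will differ). You can also confirm the link works in reverse: deleting an owner while orphaning its dependents leaves the pods running with their ownerReferences cleared (--cascade=orphan on recent kubectl; older clients use --cascade=false):

kubectl get pod nginx-set-abc12 -o yaml
kubectl delete rs nginx-set --cascade=orphan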

So I suppose that's why the RC and the RS live in peace and don't touch each other's pods :)

-- VKR
Source: StackOverflow