3 pods were running under ReplicationController 'rc1'. Then I deleted only rc1 (not the pods) and created a new ReplicaSet 'rs1' with the same label selector as rc1. As expected, rs1 matched the existing pods created by rc1.
After some time, I created a ReplicationController 'rc2' from the same manifest file as rc1. Now, rc2 spun up new pods instead of adopting the existing pods with the same labels.
So I was wondering if it is possible that a pod can be scoped under two different ReplicaSets/ReplicationControllers?
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
So I was wondering if it is possible that a pod can be scoped under two different ReplicaSets/ReplicationControllers?
The link a ReplicaSet has to its Pods is via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.
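You can see this link for yourself by inspecting one of your Pods with `kubectl get pod <pod-name> -o yaml` and looking at its metadata. A rough sketch of what that metadata might look like (the Pod name, label, and ReplicaSet name below are hypothetical, not taken from your cluster):

```yaml
# Hypothetical metadata of a Pod that has been adopted by ReplicaSet rs1
apiVersion: v1
kind: Pod
metadata:
  name: rs1-x7k2p            # hypothetical Pod name
  labels:
    app: myapp               # hypothetical label matched by the selector
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: rs1
    controller: true         # rs1 is the managing controller of this Pod
    blockOwnerDeletion: true
```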
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference, or whose OwnerReference is not a Controller, and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet. In other words, a Pod can have at most one controller owner at a time, which is why your new ReplicationController spun up fresh Pods instead of adopting the ones already owned by rs1. That is explained very well (with examples) in the official documentation.
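For reference, a minimal ReplicaSet sketch along these lines (the name, label, and image are placeholders, not taken from your manifests); any bare Pod carrying the label `app: myapp` with no controller owner would be adopted by it:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp             # Pods with this label (and no controller owner) get adopted
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:1.25    # placeholder image
```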
After some time, I created a ReplicationController 'rc2' from the same manifest file as rc1. Now, rc2 spun up new pods instead of adopting the existing pods with the same labels.
Please note that a Deployment that configures a ReplicaSet is now the recommended way to set up replication.
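A minimal Deployment sketch, using the same placeholder label and image as above; the Deployment creates and owns a ReplicaSet, which in turn owns the Pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:1.25    # placeholder image
```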
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
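For comparison, here is a minimal ReplicationController sketch (again with placeholder names), roughly the shape your rc1/rc2 might have; note the simpler equality-based selector compared with a ReplicaSet's matchLabels/matchExpressions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc1
spec:
  replicas: 3
  selector:
    app: myapp               # equality-based selector
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:1.25    # placeholder image
```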
Hope that helps.