Kubernetes : Replication Controller still there after deletion

12/5/2018

I manage a K8s cluster, provisioned with Terraform:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I wanted to delete a stack, so I removed the code and applied. It threw an error due to a timeout. I retried and it went through successfully.

But now, I still have two replication controllers (that are empty):

portal-api                                          0         0         0         2h
portal-app                                          0         0         0         2h

No more service, no more horizontal_pod_scheduler; but my replication controllers are still there.

I tried to remove them:

$ kubectl delete rc portal-api                                                                                                      
error: timed out waiting for "portal-api" to be synced

The same happens if I force the deletion:

$ kubectl delete rc portal-api --cascade=false --force=true
$ 
$ kubectl get rc
[...]
portal-api                                          0         0         0         2h
portal-app                                          0         0         0         2h
[...]

I can also still see its configuration (with a deletionTimestamp set):

$ kubectl edit rc portal-api

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: ReplicationController
metadata:
  creationTimestamp: 2018-12-05T14:00:15Z
  deletionGracePeriodSeconds: 0
  deletionTimestamp: 2018-12-05T15:22:00Z
  finalizers:
  - orphan
  generation: 3
  labels:
    App: portal-api
  name: portal-api
  namespace: default
  resourceVersion: "32590661"
  selfLink: /api/v1/namespaces/default/replicationcontrollers/portal-api
  uid: 171f605e-f896-11e8-b761-02d4b8553a0e
spec:
  replicas: 0
  selector:
    App: portal-api
  template:
    metadata:
      creationTimestamp: null
      labels:
        App: portal-api
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: AUTHORITY_MGR
          value: http://system-authority-manager-service
        image: gitlab.********************:4567/apps/portal/api:prd
        imagePullPolicy: Always
        name: portal-api
        ports:
        - containerPort: 3300
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
      terminationGracePeriodSeconds: 30
status:
  replicas: 0

Could someone help me with this? Any ideas?

thanks,

-- Fred
amazon-web-services
kubectl
kubernetes

2 Answers

12/5/2018

Using kubectl edit rc portal-api, remove the finalizers section from the resource:

finalizers:
  - orphan
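If the interactive edit also times out, the same fix can be applied non-interactively with a merge patch. This is a sketch against the two stuck controllers named in the question; clearing the finalizers list lets the API server complete the pending deletion:

```shell
# Clear the finalizers so the pending deletion (deletionTimestamp is already set) can complete
kubectl patch rc portal-api -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl patch rc portal-app -p '{"metadata":{"finalizers":[]}}' --type=merge

# Verify they are gone
kubectl get rc
```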
-- nightfury1204
Source: StackOverflow

12/6/2018

This is about Garbage Collection and how to delete certain objects that once had an owner, but no longer have one.

When you delete an object, you can specify whether the object’s dependents are also deleted automatically. Deleting dependents automatically is called cascading deletion. There are two modes of cascading deletion: background and foreground.

If you delete an object without deleting its dependents automatically, the dependents are said to be orphaned.

You can read the documentation regarding Controlling how the garbage collector deletes dependents, and how Foreground cascading deletion and Background cascading deletion work.

Setting the cascading deletion policy

To control the cascading deletion policy, set the propagationPolicy field on the deleteOptions argument when deleting an Object. Possible values include “Orphan”, “Foreground”, or “Background”.

Prior to Kubernetes 1.9, the default garbage collection policy for many controller resources was orphan. This included ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and Deployment. For kinds in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 group versions, unless you specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the apps/v1 group version, dependent objects are deleted by default.

kubectl also supports cascading deletion. To delete dependents automatically using kubectl, set --cascade to true. To orphan dependents, set --cascade to false. The default value for --cascade is true.

Here’s an example that orphans the dependents of a ReplicaSet: kubectl delete replicaset my-repset --cascade=false
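As a sketch, the propagationPolicy can also be set explicitly by calling the API directly through kubectl proxy (using the default namespace and the portal-api controller from the question):

```shell
# Start a local proxy to the API server
kubectl proxy --port=8001 &

# Delete the replication controller with an explicit Background policy:
# the owner is deleted immediately, then the garbage collector removes dependents
curl -X DELETE localhost:8001/api/v1/namespaces/default/replicationcontrollers/portal-api \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  -H "Content-Type: application/json"
```

Replacing "Background" with "Foreground" or "Orphan" selects the other two policies described above.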

-- Crou
Source: StackOverflow