GKE delete deployment does not delete replicaset

5/19/2020

Since yesterday, I have been facing a weird issue on Kubernetes (using GKE).

I have a deployment with 1 pod running. Deleting the deployment used to terminate the pod and the ReplicaSet along with it.

But now, when I delete the deployment, the ReplicaSet does not get deleted, and so the pod keeps running.

Does anyone else have this issue? Or know of a way to resolve this?

Should I bring the replica count of my deployment down to 0 before deleting it? Or is there some other solution?
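For reference, the scale-down-then-delete approach would look like this (standard kubectl commands, shown only as the workaround I am considering):

kubectl -n default scale deployment dummy-deployment --replicas=0
kubectl -n default delete deployment dummy-deployment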

I am using v1.15.9-gke.24

Dummy example to reproduce the issue

dummy_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dummy
  template:
    metadata:
      labels:
        name: dummy
    spec:
      serviceAccountName: dummy-serviceaccount
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
      - name: pause
        image: gcr.io/google_containers/pause
        resources:
          limits:
            memory: 100M
          requests:
            cpu: 100m
            memory: 100M

dummy_serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dummy-serviceaccount
  namespace: default

Commands I run

kubectl apply -f dummy_serviceaccount.yaml
kubectl apply -f dummy_deployment.yaml
kubectl -n default get pods | grep dummy
kubectl delete deployments dummy-deployment
kubectl -n default get pods | grep dummy
kubectl -n default get replicasets | grep dummy

INTERESTING OBSERVATION

deployment.extensions "dummy-deployment" deleted

deployment.apps/dummy-deployment created

When creating a new deployment with kubectl apply, the output says deployment.apps was created. But when deleting it with kubectl delete, the output says deployment.extensions was deleted.
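In case the API group is what matters here, the deployment can also be deleted while naming the group explicitly (standard kubectl type.group/name syntax, listed only as something to try):

kubectl -n default delete deployment.apps/dummy-deployment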

NO EVENTS appear in kubectl get events immediately after deleting the deployment with kubectl -n default delete deployment dummy-deployment

EVENTS FROM kubectl get events immediately after creating the deployment

2m24s       Normal   Scheduled           pod/dummy-deployment-69946b945f-txvvr          Successfully assigned default/dummy-deployment-69946b945f-txvvr to gke-XXX-default-pool-c7779722-7j9x
2m23s       Normal   Pulling             pod/dummy-deployment-69946b945f-txvvr          Pulling image "gcr.io/google_containers/pause"
2m22s       Normal   Pulled              pod/dummy-deployment-69946b945f-txvvr          Successfully pulled image "gcr.io/google_containers/pause"
2m22s       Normal   Created             pod/dummy-deployment-69946b945f-txvvr          Created container pause
2m22s       Normal   Started             pod/dummy-deployment-69946b945f-txvvr          Started container pause
2m24s       Normal   SuccessfulCreate    replicaset/dummy-deployment-69946b945f         Created pod: dummy-deployment-69946b945f-txvvr
2m24s       Normal   ScalingReplicaSet   deployment/dummy-deployment                    Scaled up replica set dummy-deployment-69946b945f to 1

kubectl -n default get pods | grep dummy

BEFORE: empty

AFTER:

kubectl -n default get pods | grep dummy 
dummy-deployment-69946b945f-txvvr 1/1 Running 0 6s 

kubectl -n default get replicasets | grep dummy

BEFORE: empty

AFTER:

kubectl -n default get replicasets | grep dummy
dummy-deployment-69946b945f 1 1 1 12s
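
For anyone debugging this, the ownerReferences of the leftover ReplicaSet can be inspected as follows (the ReplicaSet name is copied from the output above; the hash suffix will differ on other clusters):

kubectl -n default get replicaset dummy-deployment-69946b945f -o jsonpath='{.metadata.ownerReferences}'

If the ReplicaSet still lists the deleted Deployment as its owner, the garbage collector is expected to clean it up; if the ownerReferences field is empty, the ReplicaSet has been orphaned.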
-- crossvalidator
deployment
google-kubernetes-engine
kubernetes
replicaset

1 Answer

5/20/2020

If you just want a simple deletion, please try using

kubectl delete -f dummy_deployment.yaml

kubectl delete -f dummy_serviceaccount.yaml

It is a best practice to delete Kubernetes objects using the same file from which they were created, so that everything defined there is removed. Also, try to create your objects within dedicated namespaces to keep things isolated. That way you can run kubectl delete namespace test-ns and save yourself the hassle of deleting entities one by one; a sketch of that workflow follows below.
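A minimal sketch of that namespace-based workflow, assuming a scratch namespace called test-ns (the name is only an example, and the namespace: default lines would need to be removed from the manifests or changed to test-ns):

kubectl create namespace test-ns
kubectl -n test-ns apply -f dummy_serviceaccount.yaml
kubectl -n test-ns apply -f dummy_deployment.yaml
# one command removes the deployment, ReplicaSet, pods and service account
kubectl delete namespace test-ns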

You can always get a higher level of verbosity about what is being deleted, and how, by adding the --v=6 or --v=9 flag:

kubectl delete deployment dummy-deployment --v=6

However, the behaviour you describe is not expected; something may be wrong with resource deletion in the cluster. A similar thing happens with Azure PVCs, which can take a very long time to delete.
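As an interim cleanup, assuming the ReplicaSet really has been orphaned, it can be removed by hand via its label selector (name=dummy is taken from your manifest):

kubectl -n default delete replicaset -l name=dummy

Deleting the ReplicaSet should cascade to its pod; if the pod also stays behind, that points even more strongly at a garbage-collection problem in the cluster.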

-- redzack
Source: StackOverflow