On a new GKE cluster created at v1.1.1, using the latest kubectl (from gcloud components update), deleting a resource (say, a pod) sometimes leaves it showing in a 'Terminating' state in kubectl get pods, while other times it is removed from the kubectl get pods output right away.
NAME READY STATUS RESTARTS AGE
cassandra 1/1 Terminating 0 44s
Is this new kubectl behavior? I don't recall it doing this on prior versions.
This is the graceful-deletion behavior: deleted pods are given a grace period to shut down cleanly, and they show as 'Terminating' until it elapses. You can explicitly set terminationGracePeriodSeconds to zero in the PodSpec to get the old immediate-deletion behavior.
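For example, a minimal pod manifest with the grace period zeroed out (the pod name and image here are just placeholders; only the terminationGracePeriodSeconds field is the relevant part):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cassandra        # placeholder name
spec:
  terminationGracePeriodSeconds: 0   # delete immediately, skip graceful shutdown
  containers:
  - name: cassandra
    image: cassandra     # placeholder image
```

Note that with a zero grace period the container gets no chance to handle SIGTERM and shut down cleanly, so this is best reserved for workloads that don't need orderly termination.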