I have a Kubernetes cluster on Amazon EKS, and from time to time pods appear in the Unknown state. I read that this happens because my pods had no memory limits set, and since changing that, no new pods in that state have appeared. But when I tried to remove the existing ones using kubectl delete pod <pod_name>, it didn't work. How should I delete them?
Did you deploy the pod directly, or is it managed by a Deployment or StatefulSet? If it is managed, try deleting the Deployment or StatefulSet instead.
If nothing else works, you can delete the namespace the pod is in, but be aware that every object in that namespace will be deleted along with it.
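Before deleting anything that broad, you can check whether the pod actually has an owning controller (the pod, namespace, and deployment names here are placeholders):

# print the kind/name of the pod's owner, if any
kubectl get pod <pod_name> --namespace <namespace> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# if the owner chain leads to a Deployment, deleting it removes its pods too
kubectl delete deployment <deployment_name> --namespace <namespace>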
You can force delete the pod like this:
kubectl delete pod <pod_name> --grace-period=0 --force
In a Kubernetes cluster one can create Pods using Kubernetes workloads. There are workloads of the following kinds:
- Pod
- ReplicaSet
- Deployment
- StatefulSet
- DaemonSet
- Job
- CronJob
If you use any of the kinds above other than Pod, then the Pod's owner reference (.metadata.ownerReferences) is set for that Pod. Say you create a Deployment named d1: it first creates a ReplicaSet named d1-***, whose owner reference is the Deployment d1. The ReplicaSet then creates a number of Pods (with the prefix d1-***-***), so each Pod's owner reference is the ReplicaSet d1-***.
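A quick way to see this chain yourself (the deployment name d1 and the nginx image are arbitrary choices for the demo):

# create a Deployment, then look at the generated ReplicaSet and Pod names
kubectl create deployment d1 --image=nginx
kubectl get replicasets,pods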
UPDATE:
If you don't want to delete the original Deployment or other workload (for example, to keep production up), you can achieve what you want by force deleting just the Pod:
$ kubectl delete pod <pod_name> --namespace <namespace> --grace-period 0 --force
According to the kubectl command reference:

--grace-period: Default is -1. Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).

--force: Default is false. Only used when --grace-period=0. If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.

That's the case for you.
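If several pods are stuck in Unknown, a small shell loop can clean them all up in one go. This is a sketch that assumes the default column layout of kubectl get pods --all-namespaces (namespace in column 1, pod name in column 2, status in column 4):

# force delete every pod whose STATUS column reads Unknown
kubectl get pods --all-namespaces --no-headers \
  | awk '$4 == "Unknown" {print $1, $2}' \
  | while read ns pod; do
      kubectl delete pod "$pod" --namespace "$ns" --grace-period 0 --force
    done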