I recently had cause to restart the fluentd-elasticsearch pod on each of my nodes. Of the 7 nodes where the pods were deleted, only 1 pod came back as "Running". Is there a way to completely purge a pod in k8s?
If you want to debug this pod, read the Kubernetes user guide on debugging pods. You can try kubectl describe pod or kubectl logs to see what went wrong.
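For example (the pod name and namespace here are assumptions; substitute the actual values from kubectl get pods):

```shell
# List the fluentd pods and their status (namespace may differ on your cluster)
kubectl get pods --namespace=kube-system | grep fluentd

# Inspect the events and state for a stuck pod (pod name is an example)
kubectl describe pod fluentd-elasticsearch-node-1 --namespace=kube-system

# Dump the container logs for the same pod
kubectl logs fluentd-elasticsearch-node-1 --namespace=kube-system
```

The describe output's Events section usually shows why a pod failed to start (image pull errors, failed mounts, etc.).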
Note that it's recommended to use a replication controller to manage your pods, if you aren't already. It ensures that a specified number of pod replicas are running at any one time; if a pod is deleted, the replication controller creates a replacement for you.
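A minimal replication controller sketch for this kind of agent (the name, labels, and image are illustrative, not taken from your cluster):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-elasticsearch   # illustrative name
  namespace: kube-system
spec:
  replicas: 7                   # one per node in this example cluster
  selector:
    k8s-app: fluentd-es
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.3  # example image tag
```

For a strictly one-pod-per-node agent like this, a DaemonSet is usually the better fit than a replica count.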
I'm not sure of the cause. But I fixed it by moving /etc/kubernetes/manifests/fluentd-es.yaml to a temp dir, killing the running containers, and moving it back.
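As a sketch, the steps above look like this (the manifest path and container name filter are assumptions; adjust for your node):

```shell
# On the affected node: move the manifest out of the Kubelet's watched directory
sudo mv /etc/kubernetes/manifests/fluentd-es.yaml /tmp/

# Give the Kubelet a moment to notice the manifest is gone, then kill any leftovers
sudo docker ps | grep fluentd-es     # find the lingering container IDs
sudo docker kill <container-id>      # repeat for each leftover container

# Move the manifest back; the Kubelet recreates the static pod from scratch
sudo mv /tmp/fluentd-es.yaml /etc/kubernetes/manifests/
```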
fluentd-elasticsearch pods are static pods, which are created by placing pod manifest files (fluentd-es.yaml) in a directory watched by the Kubelet. A corresponding pod (a.k.a. the mirror pod) with the same name and namespace is created automatically in the API server for the purpose of introspection -- it reflects the status of the static pod.
Kubernetes treats the static pod (the pod manifest file) in the directory as the source of truth; operations (deletion, update, etc.) on the mirror pod have no effect on the static pod.
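You can see this in practice (pod name is an example): deleting the mirror pod through the API server only removes the reflection, and the Kubelet recreates it shortly afterwards as long as the manifest file is still on the node:

```shell
kubectl delete pod fluentd-elasticsearch-node-1 --namespace=kube-system

# The static pod keeps running on the node; the mirror pod reappears
kubectl get pods --namespace=kube-system | grep fluentd
```

This is why deleting the pods in the question didn't purge them: the only way to remove a static pod is to remove its manifest file from the watched directory on the node.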
You are encouraged to move away from static pods and use a DaemonSet, except for a few particular use cases (e.g., standalone Kubelets). System add-on pods such as fluentd-elasticsearch will eventually be converted to DaemonSets.
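A minimal DaemonSet sketch for the same agent (the API group reflects DaemonSet's beta status at the time of writing; names, labels, and image are illustrative):

```yaml
apiVersion: extensions/v1beta1   # DaemonSet API group while the resource is in beta
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.3  # example image tag
```

Unlike a replication controller, a DaemonSet schedules exactly one pod onto each node, and pods deleted via the API are recreated by the controller rather than silently vanishing.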