I am using a Kubernetes cluster to run dev environments for myself and other developers. I have written a few shell functions to help everyone work with their pods without typing long kubectl commands by hand. For example, to get a prompt on one of the pods, my functions use the following:
kubectl exec -it $(kubectl get pods --selector=run=${service} --field-selector=status.phase=Running -o jsonpath="{.items[*].metadata.name}") -- bash;
where $service is set to the service label I want to access, such as postgres, redis, or uwsgi.
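For context, a minimal sketch of the kind of helper function described above might look like this (the name kshell is made up here, not the actual function from my setup):

# Sketch of a helper that opens a shell in the pod for a given service label.
kshell() {
    local service="$1"
    local pod
    # Look up the pod name by label; this is where a terminating pod can
    # also get picked up, since its status.phase is still Running.
    pod=$(kubectl get pods \
        --selector="run=${service}" \
        --field-selector=status.phase=Running \
        -o jsonpath="{.items[*].metadata.name}")
    kubectl exec -it ${pod} -- bash
}

# Usage: kshell postgres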
Since these are development environments, there is always exactly one pod of each type. The problem I am having is that when I delete a pod to make it pull a fresh image (all pods are managed by deployments, so deleting a pod causes a new one to be created), there are two pods for a while: one shows as Terminating and the other as Running in the kubectl get pods output. I want to make sure that the command above selects the pod that is running and not the one that is terminating. I thought the --field-selector=status.phase=Running flag would do that, but it does not. Apparently, even while a pod is in the process of terminating, its status.phase field still reports Running. What can I use to filter out terminating pods?
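For what it is worth, you can confirm this while a pod is terminating; something along these lines (reusing the run=${service} selector from above) prints the phase next to the deletion timestamp, and the terminating pod still shows Running:

# A terminating pod still reports Running in status.phase; only
# metadata.deletionTimestamp tells it apart from the live pod.
kubectl get pods --selector=run=${service} \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DELETED:.metadata.deletionTimestamp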
Use one of these:
$ kubectl exec -it $(kubectl get pods --selector=run=${service} | grep "Running" | awk '{print $1}') -- bash;
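This works because the STATUS column of the plain kubectl get pods output shows Terminating for a pod that is being deleted, even though its status.phase field still says Running, so grepping the human-readable output filters the terminating pod out where the field selector cannot.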
or
$ kubectl exec -it $(kubectl get pods --selector=run=${service} -o=jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}') -- bash;
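Note that this second form filters on status.phase, which, as described in the question, still reports Running while a pod is terminating, so it may still pick up the terminating pod; the grep variant above keys off the STATUS column and avoids that.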