kubectl: pod status showing up as <none> when using custom columns

3/27/2019
$ kubectl get pods -o=custom-columns=NameSpace:.metadata.namespace,POD_NAME:.metadata.name,POD_STATUS:.metadata.Status

NameSpace     POD_NAME                                     POD_STATUS
kube-system   etcd-docker-for-desktop                      <none>
kube-system   kamus-decryptor-65fb5845b-qmzcj              <none>
kube-system   kamus-decryptor-65fb5845b-xrcrq              <none>
kube-system   kamus-encryptor-5fd59d766d-2qzqd             <none>
kube-system   kamus-encryptor-5fd59d766d-brzht             <none>
kube-system   kube-apiserver-docker-for-desktop            <none>
kube-system   kube-controller-manager-docker-for-desktop   <none>
kube-system   kube-dns-86f4d74b45-fwgc4                    <none>
kube-system   kube-proxy-zqhl8                             <none>
kube-system   kube-scheduler-docker-for-desktop            <none>
kube-system   kubernetes-dashboard-669f9bbd46-65lhk        <none>
kube-system   tiller-deploy-78c6868dd6-bkscs               <none>

Do you know why POD_STATUS is showing up as <none>?

$ k get pods

shows the correct status in its STATUS column:

NAME                                         READY     STATUS    RESTARTS   AGE
etcd-docker-for-desktop                      1/1       Running   61         75d
kamus-decryptor-65fb5845b-qmzcj              1/1       Running   2          18h
kamus-decryptor-65fb5845b-xrcrq              1/1       Running   2          18h
kamus-encryptor-5fd59d766d-2qzqd             1/1       Running   0          18h
kamus-encryptor-5fd59d766d-brzht             1/1       Running   0          18h
kube-apiserver-docker-for-desktop            1/1       Running   93         69d
kube-controller-manager-docker-for-desktop   1/1       Running   6          69d
kube-dns-86f4d74b45-fwgc4                    3/3       Running   0          75d
kube-proxy-zqhl8                             1/1       Running   2          69d
kube-scheduler-docker-for-desktop            1/1       Running   6          69d
kubernetes-dashboard-669f9bbd46-65lhk        1/1       Running   2          49d
tiller-deploy-78c6868dd6-bkscs               1/1       Running   5          20h
-- user674669
kubectl
kubernetes

1 Answer

3/27/2019

The key Status doesn't exist in .metadata (field names are case-sensitive); the pod's phase lives under .status, so try .status.phase:

kubectl get pods -o=custom-columns=NameSpace:.metadata.namespace,POD_NAME:.metadata.name,POD_STATUS:.status.phase
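The mix-up is easier to see against the pod object itself. Below is a minimal Python sketch (not kubectl's actual implementation) that resolves a dotted field path against a trimmed, hypothetical pod manifest roughly the way -o custom-columns does:

```python
def resolve(obj, path):
    """Walk a dotted path like '.status.phase'; return None if any key is missing."""
    for key in path.strip(".").split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # kubectl renders this case as <none>
        obj = obj[key]
    return obj

# Hypothetical, heavily abbreviated pod object for illustration only.
pod = {
    "metadata": {"namespace": "kube-system", "name": "kube-proxy-zqhl8"},
    "status": {"phase": "Running"},
}

# There is no "Status" key under metadata (and keys are case-sensitive),
# so the original expression resolves to nothing:
print(resolve(pod, ".metadata.Status"))  # None, shown as <none>
print(resolve(pod, ".status.phase"))     # Running
```

Running kubectl get pod <name> -o json on any pod shows the same layout: name and namespace under metadata, phase under status.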
-- cookiedough
Source: StackOverflow