Postgres pod suddenly disappeared after update in gcloud

7/26/2018

We changed the Kubernetes node version because of this message, and because, for some reason, pods could not be scheduled (screenshot). Before this message, however, there was a postgres pod running (screenshot).

As you can see, the pod is gone (screenshot). Why is that? I cannot seem to get it back; when I try kubectl get events I get that no resources were found. Is there any way to revive the postgres container, or at least get information about why it is down? What could I do? kubectl logs postgres doesn't seem to work either.

What I want to know is where this postgres pod was running (like its location path), whether the configuration of this pod is still available, or whether it is lost forever. If the pod is dead, can I still access its "graveyard" (meaning the database data), or was that cleaned up?

Update

Okay, so it turns out this pod wasn't managed by a controller, which is why there were no traces of it when it died. But why is there no log information that this pod was killed?

-- Rainb
gcloud
kubernetes

1 Answer

7/26/2018

Judging by the name your pod has, it wasn't provisioned using a deployment or a replicaset (if it had been, like your other pods, it would have a random id appended to its name).

More than likely, it's a standalone pod, which means once the node is gone, the pod is gone.

It might be possible to find it with kubectl get pods --show-all, but it's unlikely.
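
For reference, a few read-only commands that can help when looking for traces of a pod (assuming kubectl is pointed at the right cluster and namespace; --show-all exists only in older kubectl releases and has since been removed):

    # list pods everywhere, including ones not in the default namespace
    kubectl get pods --all-namespaces

    # older kubectl only: also list pods in terminal states
    kubectl get pods --show-all

    # recent cluster events (events expire after a short retention window)
    kubectl get events --all-namespaces

    # check whether a persistent volume / claim with the data still exists
    kubectl get pv,pvc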

If your database has a persistent volume, you may still be able to retrieve the data by reattaching that volume to a new postgres pod, as sketched below.
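
A rough sketch of what that reattachment might look like (the names postgres-data and postgres-recovery, the image tag, and the PV name are placeholders; the exact storage settings depend on how the original volume was provisioned):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgres-data              # hypothetical claim name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ""             # empty string so it binds to an existing PV
      volumeName: existing-pv-name     # replace with the PV that still holds the data
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: postgres-recovery          # throwaway pod just to reach the data
    spec:
      containers:
      - name: postgres
        image: postgres:10
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-data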

In the future, you might consider setting the termination message and termination message path, and also ensuring all pods are managed by a replicaset or deployment with persistent volumes attached.
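
A minimal sketch of that setup, assuming a claim named postgres-data already exists (names and image tag are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
          - name: postgres
            image: postgres:10
            # record why the container died so it survives in pod status
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: postgres-data

With a deployment, a replacement pod is rescheduled automatically if a node is drained or replaced, and the data lives on in the persistent volume rather than on the node.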

-- jaxxstorm
Source: StackOverflow