Google (Stackdriver) Logging fails after Kubernetes rolling-update

5/19/2016

When performing a kubectl rolling-update of a replication controller in Kubernetes (Google Container Engine), the Google (Stackdriver) Logging agent doesn't pick up the newly deployed pod. The log is stuck at the last message produced by the old pod.

Consequently, the logs for the replication controller are out of date until we manually restart the pod (i.e. via kubectl scale and kubectl delete), after which the logs are updated again.
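For reference, the manual restart described above looks roughly like this; these commands need a live cluster, and the controller and pod names are placeholders:

```shell
# Scale the replication controller down to zero and back up,
# forcing the pod to be recreated (my-rc is a placeholder name).
kubectl scale rc my-rc --replicas=0
kubectl scale rc my-rc --replicas=1

# Alternatively, delete the pod directly; the replication
# controller recreates it automatically (placeholder pod name).
kubectl delete pod my-rc-abc12
```

After either variant, the Stackdriver log stream starts showing entries from the freshly created pod again.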

Can anybody else confirm that behaviour? Is there a workaround?

-- codemoped
google-cloud-platform
google-kubernetes-engine
kubernetes
stackdriver

1 Answer

5/19/2016

I can try to repro the behavior, but first, can you run kubectl logs <pod-name> on the newly created pod after the rolling-update to verify that the new version of your app is producing logs at all?

This sounds more like an application problem than an infrastructure problem, but if you can confirm that it is an infra problem, I'd love to get to the bottom of it.

-- Alex Robinson
Source: StackOverflow