When performing a `kubectl rolling-update` of a replication controller in Kubernetes (Google Container Engine), the Google (Stackdriver) Logging agent doesn't pick up the newly deployed pod. The log is stuck at the last message produced by the old pod.
Consequently, the logs for the replication controller stay out of date until we restart the pod manually (i.e. `kubectl scale` and `kubectl delete`), after which the logs are updated again.
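For reference, a minimal sketch of the commands involved, assuming a replication controller named `my-rc` and an image `gcr.io/my-project/my-app:v2` (names, image, and selector are placeholders, not the actual setup):

```sh
# Rolling update that triggers the stuck-logs behaviour
kubectl rolling-update my-rc --image=gcr.io/my-project/my-app:v2

# Manual workaround: scale the controller down and back up so the pod is recreated
kubectl scale rc my-rc --replicas=0
kubectl scale rc my-rc --replicas=1

# or delete the pod directly and let the controller replace it
kubectl delete pod <pod-name>
```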
Can anybody else confirm that behaviour? Is there a workaround?
I can try to repro the behavior, but first can you try running `kubectl logs <pod-name>` on the newly created pod after doing the rolling-update to verify that the new version of your app was producing logs at all?
This sounds more likely to be an application problem than an infrastructure problem, but if you can confirm that it is an infra problem I'd love to get to the bottom of it.
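Something along these lines (the pod name is a placeholder) would confirm whether the new pod is emitting anything to stdout/stderr:

```sh
# List pods to find the one created by the rolling-update
kubectl get pods

# Check whether the newly created pod has produced any log output
kubectl logs <new-pod-name>

# Optionally follow the stream to watch for new messages
kubectl logs -f <new-pod-name>
```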