Logs Missing in Google Stackdriver Logging for duplicate applications

9/28/2016

Generally, Stackdriver Logging has been great at automatically picking up logs from Kubernetes on GKE; they appear under Container Engine -> cluster -> namespace -> application name ("postgres", for example).

We have two groups of pods running a staging and production application in the Kubernetes default namespace on GKE. Only one of these logs to the pods' default application name on Stackdriver.

Truncated snippets of the two postgres deployment configurations:

Production

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: postgres
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: database-production
            track: production
        spec:
          containers:
            - name: postgres
              image: postgres

Staging

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: postgres-staging
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: database-staging
            track: staging
        spec:
          containers:
            - name: postgres-staging
              image: postgres

Only one of these logs to postgres in Stackdriver; the other doesn't appear at all. I've tried changing the deployments' container names, with no effect. The only thing that seems to work is hosting the whole application under another namespace, but the Kubernetes docs recommend against using namespaces unless the application is huge.
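For reference, the namespace workaround described above can be sketched roughly as follows. This is a minimal sketch, not a confirmed fix: the namespace name "staging" is an assumption for illustration, and the rest mirrors the staging deployment shown earlier.

```yaml
# Hypothetical sketch: run the staging stack in its own namespace so its
# logs are grouped separately on Stackdriver. The "staging" namespace name
# is an assumption, not part of the original configuration.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres        # the same name can be reused per namespace
  namespace: staging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: database-staging
        track: staging
    spec:
      containers:
        - name: postgres
          image: postgres
```

With this layout, both environments can keep the container name postgres, and the namespace rather than the deployment name distinguishes them in the Stackdriver log hierarchy.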

Is there any way to concretely define how Stackdriver Logging / GKE names logs via a Kubernetes deployment configuration?

-- sazerac
google-kubernetes-engine
kubernetes
stackdriver

0 Answers