NGINX Logs have no jsonPayload field in Stackdriver

1/16/2019

I have a basic nginx deployment serving static content running on a GKE cluster. I have configured Stackdriver Logging for the cluster as per instructions here (I enabled logging for an existing cluster), and I also enabled the Stackdriver Kubernetes Monitoring feature explained here. The logging itself seems to be working fine, as I can see the logs from nginx in Stackdriver.

I am trying to create some log-based metrics, such as the number of fulfilled 2xx requests, but all I get in the log entries in Stackdriver is the textPayload field. From what I understand, enabling Stackdriver Monitoring on the cluster spins up some Fluentd agents (which I can see if I run kubectl get pods -n kube-system), and they should have an nginx log parser enabled by default (as per the documentation here). However, none of the log entries that show up in Stackdriver have the jsonPayload field that should be there for structured logs.
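For reference, this is roughly how I check that the agents are running and look at their output (the k8s-app=fluentd-gcp label is what I see on my cluster, so treat it as an assumption):

# list the Stackdriver logging agents (label assumed to be k8s-app=fluentd-gcp on GKE)
kubectl get pods -n kube-system -l k8s-app=fluentd-gcp

# tail the agents' own logs to look for parser warnings
kubectl logs -n kube-system -l k8s-app=fluentd-gcp --tail=50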

I'm using the default log_format config for nginx, and I've verified that the default nginx parser is able to parse the logs my application is writing (I copied the default Fluentd nginx parser plugin's regular expression and a log entry from my application into this tool, and it parsed the entry successfully).

I'm sure I must be missing something, but I can't figure out what.

Edit:

For reference, here is my NGINX log format:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent"';

And I have tried the following so far:

  • Upgrade my k8s cluster from version 1.11.5 to 1.11.6 (due to an issue with structured logging in version 1.11.4, which was fixed in 1.11.6)
  • Downgrade from version 1.11.6 to 1.11.3
  • Create a brand new cluster (version 1.10.9) from the GCP console with the Stackdriver Monitoring and Stackdriver Logging options enabled, and deploy my application on it. Still no jsonPayload field, only textPayload.

So far, none of these have solved it.
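In case it helps with the diagnosis, this is the kind of check I've been running to confirm that an nginx parser is actually present in the agent's configuration (the ConfigMap name varies by cluster and agent version, so the grep below avoids hard-coding it, and <fluentd-configmap-name> is a placeholder):

# find the ConfigMap that holds the logging agent configuration
kubectl get configmaps -n kube-system | grep -i fluentd

# dump it and search for an nginx parser section
kubectl get configmap <fluentd-configmap-name> -n kube-system -o yaml | grep -i -A 5 nginx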

-- Ragnar Mikael Halldórsson
google-cloud-stackdriver
google-kubernetes-engine
nginx

2 Answers

1/17/2019

Are you running Kubernetes 1.11.4, by any chance? This is a known issue with the 1.11.4 Beta release; the fix is available in the Beta update (Kubernetes 1.11.6). Please confirm your version.
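A quick way to confirm which versions the cluster is actually running (CLUSTER_NAME and ZONE are placeholders):

# show the control-plane and node versions reported by GKE
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format="value(currentMasterVersion,currentNodeVersion)"

# or, from the cluster's kubectl context
kubectl version --short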

-- Asif Tanwir
Source: StackOverflow

3/28/2019

After being in contact with Google Cloud Support, we were able to devise a workaround for this issue, although the root cause remains unknown.

The workaround is to define the NGINX log format itself as a JSON string, which lets the google-fluentd parser pick the payload up as a JSON object. This is the only solution that has worked for me so far.

For reference, the log format I used is:

log_format json_combined escape=json
    '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"request_method":"$request_method",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"http_referrer":"$http_referer",'
    '"http_user_agent":"$http_user_agent"'
    '}';

access_log /var/log/nginx/access.log json_combined;
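With the entries now arriving under jsonPayload, the log-based metric I originally wanted can be defined against the structured fields. A minimal sketch (the metric name is just an example; resource.type is k8s_container with the new Stackdriver Kubernetes Monitoring and container with the legacy setup, and since $status is logged as a string here the comparison below is lexicographic, which works for three-digit status codes):

# counter metric for 2xx responses, built on the structured status field
gcloud logging metrics create nginx_2xx_count \
  --description="NGINX responses with a 2xx status" \
  --log-filter='resource.type="k8s_container" AND jsonPayload.status>="200" AND jsonPayload.status<"300"'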
-- Ragnar Mikael Halldórsson
Source: StackOverflow