Nginx fails to send logs to remote syslog if logging instance is restarted

2/14/2019

I'm working with a Kubernetes cluster where I have an nginx instance and an ELK stack to collect the cluster logs.

I have the following nginx configuration in order to send its logs to my Logstash container:

access_log syslog:server=qa-logstash.monitoring.svc:5046,tag=nginx_access main;
error_log syslog:server=qa-logstash.monitoring.svc:5046,tag=nginx_error info;

This configuration seems to be correct, because when I start nginx, the logs are sent to my Logstash as expected.

The issue arises if, for some reason, the Logstash container goes down or is restarted. When that happens, nginx stops sending its logs to Logstash, even after Logstash is up and running again.

The only way I can get logging to work again is to restart nginx.

Does nginx have a mechanism to handle cases like this? Am I missing something in my configuration? I feel like this should be working out of the box and I've made some mistake on my end.

Thank you

-- Sérgio
kubernetes
logstash
nginx

1 Answer

2/14/2019

I recommend using a log collection tool such as Fluentd or Filebeat to ship your nginx logs. That way, even if the Logstash instance fails, nginx will keep working without needing a restart.
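For example, a minimal Filebeat configuration might look like the sketch below. The log file paths and the Beats port 5044 (Logstash's conventional `beats` input port) are assumptions; adjust them to your setup:

```yaml
# filebeat.yml -- minimal sketch (log paths and port are assumptions)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log

# Ship events to Logstash over the Beats protocol.
# Filebeat keeps its read position on disk and retries failed
# connections, so a Logstash restart does not break log delivery.
output.logstash:
  hosts: ["qa-logstash.monitoring.svc:5044"]
```

Because Filebeat tails files on disk and reconnects automatically, it picks up exactly where it left off once Logstash is back.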

You can choose to deploy the log collection tool as a sidecar alongside your nginx container, or as a DaemonSet to collect logs from all pods in your cluster.
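The sidecar variant could be sketched roughly like this (pod name, image tags, and volume layout are hypothetical; note that the official nginx image symlinks its logs to stdout/stderr by default, so you would configure it to write real files on the shared volume, and you would normally mount the filebeat.yml via a ConfigMap, omitted here for brevity):

```yaml
# Hypothetical sidecar sketch: nginx writes logs to a shared emptyDir
# volume, and a Filebeat sidecar in the same pod ships them to Logstash.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-filebeat
spec:
  containers:
    - name: nginx
      image: nginx:1.15
      volumeMounts:
        - name: nginx-logs
          mountPath: /var/log/nginx   # nginx writes access/error logs here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:6.6.0
      volumeMounts:
        - name: nginx-logs
          mountPath: /var/log/nginx   # Filebeat tails the same files
          readOnly: true
  volumes:
    - name: nginx-logs
      emptyDir: {}                    # shared between the two containers
```

A DaemonSet works the same way, except Filebeat runs once per node and mounts the node's container log directory instead of a per-pod volume.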

-- alexandrevilain
Source: StackOverflow