I have a Kubernetes cluster and I've been trying to forward logs to Splunk with this splunk-connect-for-kubernetes repo, which is essentially Splunk's own Kubernetes-oriented configuration of fluentd.
I could initially see logs in Splunk, but they appeared to be only from the system components, not from the pods I actually needed.
I think I tracked the problem down to the global values.yaml file. I experimented a bit with the fluentd path and the containers path, and found that I likely needed to update the containers pathDest to the same file path as the pod logs.
It now looks something like this:
fluentd:
  # path of logfiles, default /var/log/containers/*.log
  path: /var/log/containers/*.log
  # paths of logfiles to exclude. object type is array as per fluentd specification:
  #   https://docs.fluentd.org/input/tail#exclude_path
  exclude_path:
  #  - /var/log/containers/kube-svc-redirect*.log
  #  - /var/log/containers/tiller*.log
  #  - /var/log/containers/*_kube-system_*.log (to exclude `kube-system` namespace)

# Configurations for container logs
containers:
  # Path to root directory of container logs
  path: /var/log
  # Final volume destination of container log symlinks
  pathDest: /app/logs
But now I can see repeated warnings like this in my splunk-connect logs:
[warn]: #0 [containers.log] /var/log/containers/application-0-tcc-broker-0_application-0bb08a71919d6b.log unreadable. It is excluded and would be examined next time.
I had a very similar problem once, and changing the path in the values.yaml file helped solve it. It is perfectly described in this thread:
Found the solution for my question -
./splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/values.yaml:
path: /var/log/containers/*.log
Changed to:
path: /var/log/pods/*.log
works for me.
The cited answer may not be very readable. In short, just try changing /var/log/containers/*.log to /var/log/pods/*.log in your values.yaml file. The files under /var/log/containers are only symlinks created by the kubelet, while the actual pod log directories live under /var/log/pods, which is likely why fluentd reports the symlinked files as unreadable once their targets are no longer reachable from inside the pod.
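Applied to the snippet from your question, the relevant part would look roughly like this (just a sketch, only the path line changes; depending on your Kubernetes version the pod logs may sit in nested per-pod directories, so a deeper glob such as /var/log/pods/*/*/*.log might be needed):

fluentd:
  # tail the pod log files directly instead of the /var/log/containers symlinks
  path: /var/log/pods/*.log

If you are editing the umbrella chart's global values.yaml rather than the sub-chart's own file, the usual Helm convention is that this override sits under the splunk-kubernetes-logging key.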
See also this similar question on Stack Overflow.