I am experimenting with Jenkins in a k8s cluster. My environment is minikube.
I set up a standalone Jenkins server on Ubuntu, then used the Kubernetes plugin to start slave pods for jobs. Sometimes, when I misconfigure something, the pods are very short-lived: they exist for only a few seconds. They do produce logs, but the logs disappear as soon as the pods are gone.
I tried Loki and Grafana to collect the logs for analysis. I installed Loki in the k8s cluster using the loki-stack chart. With some tweaking, Loki and Grafana work: I can see the logs of most pods in Grafana, except for the ones started by Jenkins.
My question is: is it possible to collect logs from those short-lived pods? Is there anything I need to configure, or is it simply impossible?
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#target_config
# Period to resync directories being watched and files being tailed to discover
# new ones or stop watching removed ones.
sync_period: "10s"
Promtail will (re-)discover new log files every 10s.
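For reference, this setting sits under the target_config block of the Promtail config, as in the docs linked above. A fragment sketch (exactly where you override it depends on how your loki-stack chart renders the Promtail config, so treat the surrounding structure as an assumption):

# promtail config fragment: target_config controls how often Promtail
# rescans directories for new or removed log files to tail
target_config:
  sync_period: 10s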
If you can make your Jenkins pods live a little longer than 10s, their logs will be discovered and tailed by Promtail.
For example, you can attach a preStop handler that sleeps for a bit more than 10s (see the sketch after the link below):
https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
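A minimal sketch of such a pod spec, assuming a throwaway busybox container and a 15s sleep (the names, image, and timings are placeholders; with the Jenkins Kubernetes plugin you would put the lifecycle section into the pod template's raw YAML):

# sketch: keep the container alive past Promtail's 10s sync_period
apiVersion: v1
kind: Pod
metadata:
  name: jnlp-agent-example   # hypothetical name
spec:
  containers:
    - name: worker
      image: busybox         # placeholder image
      command: ["sh", "-c", "echo doing the job"]
      lifecycle:
        preStop:
          exec:
            # runs when the pod is deleted (e.g. by the Jenkins plugin
            # after the build), before the container receives SIGTERM
            command: ["sh", "-c", "sleep 15"]

The idea is that the preStop sleep runs while the pod is being torn down, so the container and its log file stick around longer than Promtail's 10s sync_period and the logs get picked up.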