I have an application pod where I am logging messages to a file at a specific location. I have already shared this location with another pod using an emptyDir volumeMount.
I am getting the standard stdout & stderr in my ELK stack dashboard. How do I capture my custom logs?
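For context, the emptyDir sharing between my app container and the consumer looks roughly like this (a trimmed sketch; image names and paths are placeholders for my actual setup):

apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - name: app
    image: my-service:latest        # placeholder: the actual application image
    volumeMounts:
    - name: logs                    # the app writes its custom log file here
      mountPath: /home/services/my-service/logs
  - name: log-consumer
    image: busybox                  # placeholder: whatever consumes the logs
    command: ["sh", "-c", "tail -n+1 -F /home/services/my-service/logs/app.log"]
    volumeMounts:
    - name: logs                    # same emptyDir, so both containers see the file
      mountPath: /home/services/my-service/logs
  volumes:
  - name: logs
    emptyDir: {}                    # scratch space scoped to this pod

And here is my DaemonSet: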
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: elk
  namespace: default
  labels:
    k8s-app: elk-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: elk-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: elk
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "cp-os-logging-dashboard"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: logs
          mountPath: /home/services/*/logs/
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: logs
        hostPath:
          path: /home/services/*/logs/
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
I have experimented with hostPath volumes, emptyDir, and other varieties before asking the question here. All I want is to access my application logs from the DaemonSet. I was able to do that without a DaemonSet.
Kubernetes writes all container stdout/stderr logs to the node, under /var/log and /var/lib/docker/containers. You need a hostPath volume for the fluentd DaemonSet to pick those files up and ship them to your logger. emptyDir, as the name suggests, is created empty when the pod is scheduled to a node, and it is scoped to that single pod, so the fluentd DaemonSet can never see an emptyDir that belongs to your application pod.
...
...
volumes:
- name: varlog
  hostPath:
    path: /var/log
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
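For the custom log file itself, one approach (a sketch I am adding, not something your YAML already does; names and paths are illustrative) is to have the application pod write into a hostPath directory, so the file lands on the node's filesystem where the DaemonSet can mount it. Note that hostPath paths are taken literally, with no wildcard expansion, so /home/services/*/logs/ will not work; mount a parent directory such as /home/services instead:

# Application pod side: put the custom log directory on the node's filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - name: app
    image: my-service:latest                  # placeholder image
    volumeMounts:
    - name: app-logs
      mountPath: /home/services/my-service/logs
  volumes:
  - name: app-logs
    hostPath:
      path: /home/services/my-service/logs    # literal path on the node, no globs
      type: DirectoryOrCreate

Then in the DaemonSet, replace the wildcard logs volume with the parent directory:

volumes:
- name: logs
  hostPath:
    path: /home/services        # fluentd can glob inside this mount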
Check https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd and https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml for more info.
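One more detail: the fluentd-kubernetes-daemonset image only tails container logs (/var/log/containers/*.log) out of the box, so custom files need their own tail source. In the versions of that image I have checked, /fluentd/etc/fluent.conf includes conf.d/*.conf, so a ConfigMap mounted at /fluentd/etc/conf.d is one way to add a source; this is a sketch under that assumption, so verify against your image's fluent.conf:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-app-logs
  namespace: default
data:
  app-logs.conf: |
    # Tail the custom application logs mounted from the node.
    # in_tail does support * in path, unlike hostPath.
    <source>
      @type tail
      path /home/services/*/logs/*.log
      pos_file /var/log/app-logs.log.pos
      tag app.custom
      format none
    </source>

Mount it into the DaemonSet container with a configMap volume at /fluentd/etc/conf.d, and the tailed records will flow to Elasticsearch through the same output as the container logs.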