I am new to Kubernetes and when I used to work with Docker swarm I was able to redirect logging the following way:
myapp:
  image: myregistry:443/mydomain/myapp
  deploy:
    mode: global
    restart_policy:
      condition: on-failure
  logging:
    driver: gelf
    options:
      gelf-address: "udp://localhost:12201"
  environment:
    - LOGGING_LEVEL=WARN
This way, instead of consulting logs with docker service logs -f myapp (or, in this case, kubectl logs -f myapp), I would have them redirected so I could monitor them in a centralised manner (e.g. using ELK).
Is this possible with Kubernetes? What is the equivalent solution?
Thank you for your help
The ELK stack is a very common approach to log aggregation and indexing. There are many great tutorials on deploying this stack onto Kubernetes, or you can go with this stable Helm chart that installs everything with one command:
https://github.com/helm/charts/tree/master/stable/elastic-stack
If you haven't worked with Helm before: you can tweak the deployment via the chart's values.yaml file.
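As a rough sketch of that workflow (Helm v2 syntax assumed; the release name elastic-stack and the file name my-values.yaml are arbitrary choices here):

```shell
# Dump the chart's default values to see what can be tweaked
helm inspect values stable/elastic-stack > my-values.yaml

# Edit my-values.yaml as needed, then install the chart with your overrides
# (on Helm v3 the equivalent is: helm install elastic-stack stable/elastic-stack -f my-values.yaml)
helm install --name elastic-stack stable/elastic-stack -f my-values.yaml
```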
Yes, there are many solutions, both open source and commercial, for sending all Kubernetes logs (apps, cluster, everything) to systems like ELK.
Assuming you already have Elasticsearch set up: we are using Fluent Bit to send Kubernetes logs to EFK:
Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster
https://github.com/fluent/fluent-bit-kubernetes-logging
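The key piece of that DaemonSet is the Fluent Bit ConfigMap pointing the es output plugin at your Elasticsearch service. A minimal sketch of the output section, assuming Elasticsearch is reachable in-cluster at elasticsearch.logging:9200 (adjust Host/Port to your setup):

```ini
[OUTPUT]
    # Ship everything Fluent Bit tails to Elasticsearch
    Name            es
    Match           *
    Host            elasticsearch.logging
    Port            9200
    # Write Logstash-style daily indices (logstash-YYYY.MM.DD) so Kibana picks them up
    Logstash_Format On
    # Keep retrying failed flushes instead of dropping log chunks
    Retry_Limit     False
```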
We are also using Search Guard with ELK to restrict users so they can only see logs belonging to apps running in their own namespaces.