What are the Kubernetes logging alternatives to the ELK/EFK stack for Node.js apps?

1/30/2022

I started following a very complex guide on how to install EFK (Elasticsearch + Fluentd + Kibana) on my Kubernetes cluster on DigitalOcean. This process spawned a namespace with 3 Elasticsearch Pods, 3 Fluentd Pods and 1 Kibana Pod.

CPU usage jumped from 5% to 95% and stayed there. RAM jumped from 34% to about 80%, also permanently.

I didn't stop, and kept trying to get water from the rock - I forwarded a port so I could check out the Kibana dashboard, which asked me to provide an index name. I tried to enter logstash-* as described in many articles, but Kibana didn't seem to accept this input, so I picked something from the list and no logs showed up.

Eventually, after 5 hours, I gave up and tried to delete the namespace so I could clean everything up. But the namespace has remained in status "Terminating" - for 3 hours now.

I just have a very simple Node.js app and I want to see its logs:

  1. Date and time
  2. If it's an error, I want to see the stack trace.
  3. Which node produced the log.
  4. It would also be amazing to have the current state (CPU and RAM) of the whole system.
-- Raz Buchnik
kubernetes
logging

2 Answers

1/31/2022

A namespace gets stuck in "Terminating" status when Kubernetes cannot delete some of the resources in the namespace. One useful article on resolving a namespace stuck in termination is here.

I did not fully understand what your intention is.

If you are looking for a centralized solution for logs from all Pods (not just one namespace) and are not looking for a paid solution like Datadog or Sumo Logic, the ELK stack is one of the best picks. Adding a DaemonSet is costly, though, since a Fluentd Pod will run on every node, and it does not make much sense if your solution is just for one application.

If you are just troubleshooting your app and you do not want to use kubectl logs -l <your app label>, you can use Lens or Octant to look at logs.

-- ffran09
Source: StackOverflow

1/31/2022

Eventually, after 5 hours, I gave up and tried to delete the namespace so I could clean everything up. But the namespace has remained in status "Terminating" - for 3 hours now.

For this namespace issue, you can follow the easy solution here: you just need to remove a single line from the YAML config and save it: https://stackoverflow.com/a/57726448/5525824

If you just want to debug the application and need logs, you can use kubectl logs <pod name>

However, if you are looking for a good solution, one that worked well for us is Graylog with the GELF UDP method.

Graylog also uses Elasticsearch and MongoDB in the background; however, no collector is required at the node level. Instead, your application pushes logs to Graylog using the GELF UDP method, so there is not much extra memory consumption.

Read more at: https://www.graylog.org/

Helm chart: https://github.com/helm/charts/tree/master/stable/graylog

What is GELF UDP?

The Graylog Extended Log Format (GELF) is a uniquely convenient log format created to deal with all the shortcomings of classic plain Syslog. It allows you to collect structured events from anywhere, and then compress and chunk them in the blink of an eye.

Here is an NPM library for pushing logs to Graylog: https://www.npmjs.com/package/node-gelf
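
To make the GELF UDP approach more concrete, below is a minimal sketch that builds the GELF 1.1 JSON payload by hand and sends it with Node's built-in dgram module instead of the node-gelf library. The Graylog address (graylog.example.com:12201, Graylog's default GELF UDP port) and the NODE_NAME environment variable (assumed to be injected via the Kubernetes downward API) are illustrative assumptions, not part of the answer above:

  // Minimal GELF-over-UDP sketch using only Node's built-in modules (TypeScript).
  import * as dgram from "node:dgram";
  import * as os from "node:os";

  // Assumed Graylog GELF UDP input; 12201 is Graylog's default GELF UDP port.
  const GRAYLOG_HOST = "graylog.example.com";
  const GRAYLOG_PORT = 12201;

  const socket = dgram.createSocket("udp4");

  function gelfLog(shortMessage: string, level = 6, extra: Record<string, unknown> = {}): void {
    // GELF 1.1 payload: required fields plus underscore-prefixed custom fields.
    const payload: Record<string, unknown> = {
      version: "1.1",
      // Which node/pod produced the log; NODE_NAME is assumed to be injected
      // via the Kubernetes downward API, falling back to the hostname.
      host: process.env.NODE_NAME ?? os.hostname(),
      short_message: shortMessage,
      timestamp: Date.now() / 1000, // seconds since the epoch
      level,                        // syslog severity: 3 = error, 6 = informational
    };
    for (const [key, value] of Object.entries(extra)) {
      payload[`_${key}`] = value;   // custom fields must start with an underscore
    }
    socket.send(Buffer.from(JSON.stringify(payload)), GRAYLOG_PORT, GRAYLOG_HOST);
  }

  // Plain informational message, and an error with its stack trace attached.
  gelfLog("user service started");
  try {
    throw new Error("something broke");
  } catch (err) {
    gelfLog((err as Error).message, 3, { stack: (err as Error).stack });
  }

A GELF client library such as node-gelf would typically also take care of chunking and compressing larger messages, which this sketch skips.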

-- Harsh Manvar
Source: StackOverflow