Kubectl logs command does not print the full output

8/16/2019

I have a Docker image of my kafka_consumer code, which consumes messages from a topic. I created a pod from this Docker image, and it is running successfully. When I issue kubectl logs, it prints only the first three lines of logs and then exits. When I run the Docker image directly, it gives me the complete output, in which the consumer record is printed several times. What is wrong with the Kubernetes logs, then?

I tried using kubectl logs <pod-name>, which returns only three lines of logs.
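For reference, this is roughly how the two runs compare (the image and pod names below are placeholders for my actual ones):

    docker run <my-consumer-image>     # prints the full output, including ConsumerRecord lines
    kubectl logs <pod-name>            # prints only the first three log4j lines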

I expect the output to show the full, detailed messages, which look like this:

    log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    ConsumerRecord(topic = tt, partition = 0, offset = 399219, CreateTime = 1565941033699, serialized key size = 3, serialized value size = 6, headers = RecordHeaders(headers = [], isReadOnly = false), key = key, value = hello )

When I use kubectl, I get only the first three log4j warning lines and not the ConsumerRecord.

-- Vaibhav Srivastava
apache-kafka
docker
kubernetes

3 Answers

9/13/2019

I figured out that the problem was that the pod could not connect to the local Kafka broker installed on my machine, which is why it never printed any records from Kafka. I switched to an external Kafka cluster, and then it started working as expected.
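For anyone hitting the same thing: a quick way to confirm this is to check, from inside the pod, whether the broker address your consumer points at is actually reachable. Note that localhost inside a pod refers to the pod itself, not to your workstation. The host and port below are placeholders, and nc/nslookup may not exist in a minimal image:

    # check broker reachability from inside the pod (host/port are placeholders)
    kubectl exec -it <pod-name> -- sh -c 'nc -vz <kafka-host> 9092'

    # if nc is missing, at least verify the hostname resolves from inside the pod
    kubectl exec -it <pod-name> -- sh -c 'nslookup <kafka-host>'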

-- Vaibhav Srivastava
Source: StackOverflow

8/16/2019

When you run kubectl logs against a pod, you're querying the stdout/stderr streams written to the node's log files (/var/log/*), which in turn depend on the underlying host OS log rotation (logrotate).

You can try to determine whether this is the problem by either comparing the output of kubectl logs with the logs sent to a logging backend (if any, for instance Fluentd or the ELK stack), or by SSHing into the node and locating the log files directly in the logging path.
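If you go the SSH route, the kubelet normally keeps per-container log files (or symlinks to them) under /var/log/containers and /var/log/pods. The exact file names depend on the container runtime, so the paths below are only indicative:

    # on the node (file names depend on the runtime)
    ls /var/log/containers/ | grep <pod-name>
    tail -n 50 /var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log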

There aren't many details on how you are running your Kubernetes cluster, but the information provided suggests that this is a node-level issue, specific to how pod logs are managed internally.

Finally, using kubectl exec <podname> -- /bin/bash is a good way to determine whether the container is storing these logs locally and the issue only occurs when they're sent to the host node.
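A minimal sketch of that check (the paths are assumptions; adjust them to wherever your application might write):

    # open a shell in the container
    kubectl exec -it <pod-name> -- /bin/bash

    # inside the container: see where the main process sends stdout/stderr,
    # and whether it writes to a local file instead (paths are assumptions)
    ls -l /proc/1/fd/1 /proc/1/fd/2
    ls -l /tmp /var/log 2>/dev/null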

-- yyyyahir
Source: StackOverflow

8/16/2019

You can use kubectl logs --follow [pod name] -c [container name]; adding --follow lets you view the logs in real time.
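For completeness, a few related variants (these are standard kubectl logs flags; the pod and container names are placeholders):

    # stream logs in real time
    kubectl logs --follow <pod-name> -c <container-name>

    # other useful options
    kubectl logs --tail=100 <pod-name>      # only the last 100 lines
    kubectl logs --previous <pod-name>      # the previous container instance, if it restarted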

-- opensource-developer
Source: StackOverflow