Consider a Kubernetes deployment being upgraded, where the old pods are being stopped and new ones started. We have pods running Tomcat, connected to a MySQL database.
Once Tomcat receives the stop command, it starts shutting down the application. However, there are still tasks running, and losing the connection to the database causes a large number of exceptions to be thrown. That is okay, as the application isn't doing anything anymore. My problem is that this "spams" our logs with fatal errors, and it is hard to distinguish which exceptions are relevant and which are just caused by the application being stopped.
One such exception is:
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [sun.reflect.NativeMethodAccessorImpl]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
Is there a way to prevent logging of such exceptions? Is it possible to stop logging the moment the stop signal is sent?
You should manage your logging with something like fluentd, and then use an exclude filter for these exception logs. There is no way for Kubernetes or the container engine to filter what is displayed on stdout.
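As a sketch of that approach, a fluentd grep filter with an exclude rule can drop records matching the shutdown exception. This assumes your Tomcat logs are tagged tomcat.** and the log text lives in a field called message; adjust both to your setup:

```
<filter tomcat.**>
  @type grep
  <exclude>
    # Drop the "web application instance has been stopped" spam
    key message
    pattern /this web application instance has been stopped already/
  </exclude>
</filter>
```

Note that this hides the exceptions unconditionally, not only during shutdown, so make the pattern as specific as possible.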
Kubernetes provides lifecycle callbacks to pods for handling such scenarios. You can execute a custom script to clean up log entries while the pod is shutting down, using the preStop callback, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["your custom command to delete log entries"]
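If the goal is to silence logging the moment the stop signal arrives, another option is to do it inside the JVM itself: Tomcat's shutdown delivers the same JVM termination that triggers shutdown hooks, and a hook can raise the log level so nothing further is recorded. A minimal sketch using java.util.logging (if you use Log4j or Logback, the equivalent call differs):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietShutdown {
    public static void main(String[] args) {
        Logger root = Logger.getLogger("");

        // Registered hooks run when the JVM begins shutting down
        // (e.g. on SIGTERM during pod termination). Setting the root
        // logger to OFF suppresses all subsequent log records.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> root.setLevel(Level.OFF)));

        root.info("application running");
    }
}
```

This won't stop exceptions from being thrown, but it keeps them out of the logs once shutdown has begun, which is closer to what the question asks for than deleting log entries after the fact.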