I am trying to use Logstash to send data from Kafka to S3, and the Logstash process is receiving a SIGTERM with no apparent error messages.
I am using the following override.yaml file for the Helm chart.
# overrides stable/logstash helm templates
inputs:
  main: |-
    input {
      kafka {
        bootstrap_servers => "kafka.system.svc.cluster.local:9092"
        group_id => "kafka-s3"
        topics => ["device", "message"]
        consumer_threads => 3
        codec => json { charset => "UTF-8" }
        decorate_events => true
      }
    }
# time_file default = 15 minutes
# size_file default = 5242880 bytes
outputs:
  main: |-
    output {
      s3 {
        codec => "json"
        prefix => "kafka/%{+YYYY}/%{+MM}/%{+dd}/%{+HH}-%{+mm}"
        time_file => 5
        size_file => 5242880
        region => "ap-northeast-1"
        bucket => "logging"
        canned_acl => "private"
      }
    }
podAnnotations:
  iam.amazonaws.com/role: kafka-s3-rules
image:
  tag: 7.1.1
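The chart is deployed with this override file along the lines of the following; the release name here is just an example.
helm upgrade --install logstash stable/logstash -f override.yaml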
My AWS IAM role should be attached to the container via kube2iam. The role itself allows all actions on S3.
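For completeness, the role's permission policy is roughly equivalent to the sketch below (the kube2iam trust policy is omitted):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}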
My S3 bucket has a policy as follows:
{
  "Version": "2012-10-17",
  "Id": "LoggingBucketPolicy",
  "Statement": [
    {
      "Sid": "Stmt1554291237763",
      "Effect": "Allow",
      "Principal": {
        "AWS": "636082426924"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::logging/*"
    }
  ]
}
The logs for the container are as follows.
2019/06/13 10:31:15 Setting 'path.config' from environment.
2019/06/13 10:31:15 Setting 'queue.max_bytes' from environment.
2019/06/13 10:31:15 Setting 'queue.drain' from environment.
2019/06/13 10:31:15 Setting 'http.port' from environment.
2019/06/13 10:31:15 Setting 'http.host' from environment.
2019/06/13 10:31:15 Setting 'path.data' from environment.
2019/06/13 10:31:15 Setting 'queue.checkpoint.writes' from environment.
2019/06/13 10:31:15 Setting 'queue.type' from environment.
2019/06/13 10:31:15 Setting 'config.reload.automatic' from environment.
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-06-13T10:31:38,061][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-13T10:31:38,078][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-13T10:32:02,882][WARN ][logstash.runner ] SIGTERM received. Shutting down.
Is there any way to get more detailed logs, or does anyone know what I am dealing with? I greatly appreciate any help or advice! :no_mouth:
Looking at the pod details for Logstash, I was able to identify the issue. I found an entry similar to the following.
I0414 19:41:24.402257 3338 prober.go:104] Liveness probe for "mypod:mycontainer" failed (failure): Get http://10.168.0.3:80/: dial tcp 10.168.0.3:80: connection refused
It reported "connection refused" for the liveness probe, and after 50-60 seconds of uptime the pod was restarted.
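For reference, these entries show up in the pod's event list, which can be pulled with something like the following (the pod name and namespace are placeholders):
kubectl describe pod logstash-0 -n default
# the Events section at the bottom lists the failed liveness probes and the subsequent kill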
Looking at the livenessProbe in the Helm chart's Values.yaml, it shows the following settings.
...
livenessProbe:
  httpGet:
    path: /
    port: monitor
  initialDelaySeconds: 20
  # periodSeconds: 30
  # timeoutSeconds: 30
  # failureThreshold: 6
  # successThreshold: 1
...
Only initialDelaySeconds is set, so the others fall back to the Kubernetes defaults, which are the following:
# periodSeconds: 10
# timeoutSeconds: 1
# failureThreshold: 3
# successThreshold: 1
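Combining the chart value with those defaults, the effective probe is roughly the following sketch:
livenessProbe:
  httpGet:
    path: /
    port: monitor
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
  successThreshold: 1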
This works out to roughly the following timeline, give or take a few seconds:
+------+-----------------------------+
| Time | Event |
+------+-----------------------------+
| 0s | Container created |
| 20s | First liveness probe |
| 21s | First liveness probe fails |
| 31s | Second liveness probe |
| 32s | Second liveness probe fails |
| 42s | Third liveness probe |
| 43s | Third liveness probe fails |
| 44s | Send SIGTERM to application |
+------+-----------------------------+
After some troubleshooting to find the correct initialDelaySeconds value, I put the following into my override.yaml file to fix the issue.
livenessProbe:
  initialDelaySeconds: 90
It seems that depending on the plugins being used, Logstash may not respond to HTTP requests for upwards of 100s.
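If you want to check how long Logstash actually takes to come up, one option is to exec into the pod after a restart and poll the HTTP API until it answers. This assumes curl is available in the image, the pod name is a placeholder, and 9600 is the default http.port (the chart may map it elsewhere):
kubectl exec -it logstash-0 -- sh -c 'until curl -sf http://localhost:9600/ > /dev/null; do sleep 5; done; echo "logstash API is up"'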