I am trying to aggregate Raspberry Pi (IoT device) logs into Logstash/Elasticsearch running in EKS. Filebeat is already running in EKS to aggregate container logs. This is my manifest file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  logstash.conf: |-
    input {
      tcp {
        port => 5000
        type => syslog
      }
    }
    filter {
      grok {
        match => { "message" => "%{SYSLOGLINE}" }
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.2.1
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        ports:
        - name: logstash
          containerPort: 5000
          protocol: TCP
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 800Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          readOnly: true
          subPath: logstash.conf
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: logstash-config
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  selector:
    app: logstash
  clusterIP: None
  ports:
  - name: tcp-port
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-external
  namespace: kube-logging
  labels:
    app: logstash
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: tcp
spec:
  rules:
  - host: logstash.dev.domain.com
    http:
      paths:
      - backend:
          serviceName: logstash
          servicePort: 5000
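For reference, this is roughly how I sanity-check the in-cluster path before involving the ingress (pod and service names come from the manifest above; the busybox test pod is just an assumption):

# Confirm the ConfigMap actually lands where Logstash reads its pipeline
POD=$(kubectl -n kube-logging get pod -l app=logstash -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-logging exec "$POD" -- cat /usr/share/logstash/pipeline/logstash.conf

# Send a syslog-shaped line that %{SYSLOGLINE} can parse, via the headless Service
kubectl -n kube-logging run nettest --rm -it --image=busybox --restart=Never -- \
  sh -c 'echo "Oct 11 22:14:15 raspberrypi myapp: test from pi" | nc logstash 5000'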
I am able to send a test message from outside the cluster:

echo -n "test message" | nc logstash.dev.domain.com 5000

but nothing shows up in tcpdump port 5000 inside the Logstash container. If I run the same command from the Logstash container itself, the message does show up in tcpdump port 5000 on the Logstash container. Likewise, within EKS I can send a test message from any container:

echo -n "test message 4" | nc -q 0 logstash 5000

and it is received by Logstash and pushed to Elasticsearch. It only fails from outside the cluster, so the Traefik ingress controller looks like the issue here.
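For the tcpdump checks above, this is roughly what I run inside the Logstash container (tcpdump is not in the stock image, so it has to be installed first; the 7.x images are CentOS-based, hence yum, and the container runs as root per the securityContext):

yum install -y tcpdump
tcpdump -i any -nn tcp port 5000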
I am running the Traefik ingress controller for EKS, configured as follows:
traefik.toml: |
  defaultEntryPoints = ["http","https"]
  logLevel = "INFO"
  [entryPoints]
    [entryPoints.http]
      address = ":80"
      compress = true
      [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.http.whiteList]
        sourceRange = ["0.0.0.0/0"]
    [entryPoints.https]
      address = ":443"
      compress = true
      [entryPoints.https.tls]
      [entryPoints.https.whiteList]
        sourceRange = ["0.0.0.0/0"]
    [entryPoints.tcp]
      address = ":5000"
      compress = true
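To isolate Traefik from the load balancer, one thing I can try is port-forwarding straight to a Traefik pod and replaying the test (the pod name below is illustrative):

kubectl -n kube-system port-forward pod/traefik-ingress-lb-abc123 5000:5000 &
echo -n "test message" | nc -q 0 localhost 5000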
And this is the Service in front of Traefik:
kind: Service
apiVersion: v1
metadata:
  name: ingress-external
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: traefik-ingress-lb
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: tcp-5000
    protocol: TCP
    port: 5000
    targetPort: 5000
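As a quick check that the NLB side is wired up (service name from above; note that nc -vz only proves the port accepts a connection, not that traffic reaches Logstash):

kubectl -n kube-system get svc ingress-external
nc -vz logstash.dev.domain.com 5000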
What is wrong here?
The hosts setting in the elasticsearch output looks suspicious to me. As stated in the documentation, one has to specify a URI there. You should specify the instance with the protocol, like

http(s)://{IP or name}:9200

as you do in your curl request. I would give that a try.
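For example, from a pod inside the cluster (host, port, and credentials as they appear in the question's manifest):

curl -s -u elastic:changeme http://elasticsearch:9200

If that returns the cluster banner JSON, the same URI form should work in the elasticsearch output block.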
If you haven't used Logstash before, you may need to manually create a Logstash index pattern. The data won't appear under filebeat, because Elasticsearch isn't receiving the data from Filebeat but from Logstash itself. I may be wrong in this answer altogether. However, if you go to:

Settings > Index patterns > Create Index pattern

then type in logstash where it asks for a name and select the logstash index suggested underneath. After creating this you should get a drop-down on the Discover page that says logstash, and under that drop-down you should see all of the data you are pushing through.

You may already have a logstash index set up, and this may not be the issue at all.
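One way to check which indices actually exist before creating the pattern (host and credentials as in the question; note that the question's output block writes to syslog-%{+YYYY.MM.dd}, so the pattern may need to be syslog-* rather than logstash-*):

curl -s -u elastic:changeme 'http://elasticsearch:9200/_cat/indices?v'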