I've implemented an example ELK stack to centralize the logging of my Kubernetes cluster on minikube. As an example, I've also deployed a CronJob that runs every hour and deletes logs older than one day.
Because it's scheduled at the beginning of each hour (schedule: "0 * * * *"), I noticed that on its first run, when the new hour began, the Elasticsearch /data folder dropped from 14M to 2.1M instantly, a sign that the CronJob had just deleted the old log files.
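(For clarity on the schedule: "0 * * * *" fires at minute 0 of every hour. A tiny stdlib-only Python sketch of when the next trigger lands, with a function name of my own, not anything from Kubernetes:)

```python
from datetime import datetime, timedelta

def next_top_of_hour(now: datetime) -> datetime:
    """Next time a '0 * * * *' schedule fires after `now`: minute 0 of the following hour."""
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)

print(next_top_of_hour(datetime(2020, 5, 17, 14, 35)))  # 2020-05-17 15:00:00
```

So between two consecutive runs there should be a full hour in which the job does nothing at all.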
The thing is, after a few seconds the /data folder went from 2.1M to 2.7M and then back down to 2.4M, as if logs were being deleted every 15-20 seconds rather than at the beginning of each hour. If I keep monitoring the /data folder, it keeps going up and down continuously.
Meanwhile, the /logs folder is at 13M and has grown quickly over the past few days, but the CronJob hasn't deleted any of its files at all. Is that normal too?
This is the basic logging pod I've deployed in the default namespace:
https://k8s.io/examples/debug/counter-pod.yaml
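(For reference, that manifest is essentially a busybox pod that writes one counter line per second to stdout; reproduced here from memory, so check the link for the exact file:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```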
This is the Elasticsearch deployment, deployed in the logging namespace:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-pvc # name of the PVC, essential for identifying the storage data
  labels:
    k8s-app: elasticsearch-logging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
    spec:
      containers:
      - name: elasticsearch-logging
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.0
        resources:
          limits:
            cpu: 500m
            memory: 2400Mi
          requests:
            cpu: 100m
            memory: 2350Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MINIMUM_MASTER_NODES
          value: "1"
      initContainers:
      - image: registry.hub.docker.com/library/alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
      volumes:
      - name: elasticsearch-logging
        persistentVolumeClaim:
          claimName: elasticsearch-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
```
This is the CronJob deployed in the logging namespace:
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: elasticsearch-curator
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: elasticsearch-curator
          labels:
            k8s-app: elasticsearch-logging
        spec:
          restartPolicy: "Never"
          containers:
          - name: ingestor
            image: python:3.6-alpine
            args: ["sh", "-c", "pip install elasticsearch-curator && curator_cli --host elasticsearch-logging delete_indices --filter_list '[{\"filtertype\":\"age\",\"source\":\"creation_date\",\"direction\":\"older\",\"unit\":\"days\",\"unit_count\":1},{\"filtertype\":\"pattern\",\"kind\":\"prefix\",\"value\":\"logstash\"}]' || true"]
      backoffLimit: 1
```
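(To be explicit about what I expect those two filters to select, here's a small Python sketch of my understanding of the filter_list; this is my own illustration, not curator's actual code, and the index names and dates are made up:)

```python
from datetime import datetime, timedelta

def indices_to_delete(indices, now):
    """Mimic the two filters: name must start with 'logstash' AND the
    creation date must be more than 1 day older than `now`."""
    cutoff = now - timedelta(days=1)
    return [name for name, created in indices.items()
            if name.startswith("logstash") and created < cutoff]

# Hypothetical index list with creation dates:
indices = {
    "logstash-2020.05.15": datetime(2020, 5, 15),
    "logstash-2020.05.17": datetime(2020, 5, 17),
    ".kibana": datetime(2020, 5, 10),
}
print(indices_to_delete(indices, now=datetime(2020, 5, 17, 12, 0)))
# ['logstash-2020.05.15']
```

So only day-old logstash-* indices should ever be deleted, and only once per hour.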
What's wrong with my cron job?