I am running a Kubernetes cluster with 8 worker and 3 master nodes, and my pods are being evicted repeatedly due to ephemeral storage issues. Below is the error I am getting on the evicted pods:
Message: The node was low on resource: ephemeral-storage. Container xpaas-logger was using 30108Ki, which exceeds its request of 0. Container wso2am-gateway-am was using 406468Ki, which exceeds its request of 0.
To overcome the above error, I have added ephemeral storage requests and limits to my namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-limit-range
spec:
  limits:
  - default:
      ephemeral-storage: 2Gi
    defaultRequest:
      ephemeral-storage: 130Mi
    type: Container
Even after adding the above limits and requests to my namespace, my pod reaches its limit and is then evicted.
Message: Pod ephemeral local storage usage exceeds the total limit of containers 2Gi.
How can I monitor my ephemeral storage usage, and where is it stored on my instance? How can I configure Docker logrotate for my ephemeral storage based on size? Any suggestions?
"Ephemeral storage" here refers to space being used in the container filesystem that's not in a volume. Something inside your process is using a lot of local disk space. In the abstract this is relatively easy to debug: use kubectl exec
to get a shell in the pod, and then use normal Unix commands like du
to find where the space is going. Since it's space inside the pod, it's not directly accessible from the nodes, and you probably can't use tools like logrotate
to try to manage it.
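As a minimal sketch (the pod name and the directory to drill into are placeholders; the container name comes from the eviction message above):

# Get a shell in the container that is using the space
kubectl exec -it <pod-name> -c wso2am-gateway-am -- sh

# Inside the container, see which top-level directories are largest,
# then repeat du on the largest one until you find the culprit
du -sh /* 2>/dev/null | sort -h
du -sh /path/to/largest/dir/* 2>/dev/null | sort -h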
One specific cause of this I've run into in the past is processes configured to log to a file. In Kubernetes you should generally set up your logging to write to stdout instead. This avoids this specific ephemeral-storage problem, and it also avoids a number of practical issues around actually getting the log file out of the pod: kubectl logs will show you these logs, and you can set up cluster-level tooling to export them to another system.
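If the application insists on writing to a file, one common workaround is to symlink that file to the container's stdout in the image or in an entrypoint script. This is only a rough sketch; the log path here is hypothetical and depends on your image:

# In the image or entrypoint: send the application's log file to stdout
# (path is an example, adjust for your application)
ln -sf /dev/stdout /opt/app/logs/application.log

# The output then appears in the normal container log stream
kubectl logs <pod-name> -c wso2am-gateway-am --tail=100 -f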