Is Kubernetes monitoring data garbage collected?

9/14/2016

I recently had to disable the fluentd-elasticsearch Kubernetes addon because it ended up eating all the disk space on one of my minions which in turn prevented an important pod from starting.

I am now worried that the monitoring addon might end up eating disk space as well. Is the monitoring data (stored in influxdb) ever garbage collected or does it keep eating away at disk space? Are there other Kubernetes components that eat up disk space indefinitely?

I set up my cluster using ./cluster/kube-up.sh on AWS.

Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}

-- Olivier Lalonde
kubernetes

1 Answer

10/10/2016

To answer your specific question: you should look for pods using emptyDir (kubectl get po --all-namespaces -o yaml | grep emptyDir) or hostPath volumes, since those consume node-local disk.
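As a rough sketch of that check (the jsonpath variant is just an illustrative alternative, not something your cluster requires):

    # Find pods whose volumes consume node-local disk.
    kubectl get pods --all-namespaces -o yaml | grep -E 'emptyDir|hostPath'

    # A more targeted listing of each pod's namespace, name, and volume spec:
    kubectl get pods --all-namespaces \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.volumes}{"\n"}{end}'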

As a general policy: if you use a PV (persistent volume), storage consumption is limited by the space available on that PV. Such a PV is usually backed by a cloud provider volume or NFS and is mounted over the network, so filling it does not touch the node's own disk.
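A minimal sketch of what that looks like as a claim (the name and 10Gi size are placeholders; on AWS the backing volume would typically be an EBS disk):

    # Hypothetical claim for a 10Gi network-backed volume.
    # A pod mounting this claim can only fill the 10Gi volume,
    # not the node's root filesystem.
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: monitoring-data        # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi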

If you use emptyDir, your storage comes out of the kubelet's --root-dir. Depending on the distribution/setup, this might be an isolated partition, making it impossible for a rogue app to take down the node by filling the disk.
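For illustration, a pod using an emptyDir scratch volume might look like the sketch below (pod name, image, and command are made up); the data it writes lands under the kubelet's --root-dir on the node (typically /var/lib/kubelet) and is removed when the pod is deleted:

    apiVersion: v1
    kind: Pod
    metadata:
      name: scratch-writer          # illustrative name
    spec:
      containers:
        - name: app
          image: busybox
          # Writes 100MB of scratch data, then idles.
          command: ["sh", "-c", "dd if=/dev/zero of=/scratch/fill bs=1M count=100; sleep 3600"]
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          emptyDir: {}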

If you use hostPath, you are explicitly choosing a path on the node. If you're running with enough privileges to claim sensitive portions of the filesystem and you fill them with data, the node goes down.
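A sketch of such a volume (path and names are illustrative): writes go straight to the node's filesystem at the given path with no quota, which is exactly how a runaway log writer can exhaust the node's disk.

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-writer         # illustrative name
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: node-logs
              mountPath: /var/log/app
      volumes:
        - name: node-logs
          hostPath:
            path: /var/log/app      # a real directory on the node itself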

There's work in the logging front to make this better: https://github.com/kubernetes/kubernetes/issues/17183

There is also image/container GC, which kicks in when the node's disk usage goes above a threshold. You should check whether the version of Kubernetes you're using has known GC issues (they will be mentioned in the release notes).
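That GC is tuned through kubelet flags; the values below are a sketch of roughly the defaults for this era, not something you necessarily need to set (check the kubelet docs for your release):

    # Kubelet flags controlling image and container GC.
    #   --image-gc-high-threshold=90          start image GC when disk usage exceeds 90%
    #   --image-gc-low-threshold=80           GC images until usage drops back under 80%
    #   --minimum-container-ttl-duration=1m   keep dead containers at least this long
    #   --maximum-dead-containers=100         cap on dead containers kept for inspection
    kubelet --image-gc-high-threshold=90 --image-gc-low-threshold=80 \
            --minimum-container-ttl-duration=1m --maximum-dead-containers=100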

-- Prashanth B
Source: StackOverflow