We have a Kubernetes 1.1 cluster on AWS, provisioned using kube-up.sh.
Part of the base installation includes fluentd-elasticsearch. We want to uninstall it; specifically, we have been unsuccessful in removing the static pods that run one per node. We do not use the Kubernetes-hosted fluentd-elasticsearch, but instead use an externally hosted instance. As far as I can tell, fluentd-elasticsearch is not required to run Kubernetes, so I have been trying to remove it from our cluster.
There seem to be two parts to the elasticsearch setup. The first is the add-on defined on the master in /etc/kubernetes/addons/fluentd-elasticsearch. We moved this out of the addons directory and manually deleted the associated Replication Controllers. This leaves the static pods:
kubectl --namespace=kube-system get pods
NAME                                                             READY     STATUS    RESTARTS   AGE
fluentd-elasticsearch-ip-10-0-5-105.us-west-2.compute.internal   1/1       Running   1          6d
fluentd-elasticsearch-ip-10-0-5-124.us-west-2.compute.internal   1/1       Running   0          6d
fluentd-elasticsearch-ip-10-0-5-180.us-west-2.compute.internal   1/1       Running   0          6d
fluentd-elasticsearch-ip-10-0-5-231.us-west-2.compute.internal   1/1       Running   0          6d
We believe the static pods are launched on each node due to the presence on each node of /etc/kubernetes/manifests/fluentd-es.yaml.
This file appears to be placed by the salt configuration /srv/pillar/cluster-params.sls, which contains enable_node_logging: 'true'.
We flipped the flag to 'false' and killed the existing nodes, allowing new ones to be provisioned via the Auto Scaling Group. Unfortunately, the newly spawned hosts still have the static fluentd-elasticsearch pods.
There are a couple of other possible files we think may be involved, on the master host:
/var/cache/kubernetes-install/kubernetes/saltbase/salt/fluentd-es/fluentd-es.yaml
/var/cache/salt/minion/files/base/fluentd-es/fluentd-es.yaml
We are hitting a wall due to our lack of salt experience. Pointers most welcome.
You can stop the static pods by deleting the static pod manifest file. On all nodes run:
sudo rm /etc/kubernetes/manifests/fluentd-es.yaml
See the Kubernetes documentation on static pods for details on how the kubelet manages them.
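If you have SSH access to the nodes, the removal can be scripted. A dry-run sketch — the node names and the admin SSH user are hypothetical, and the leading echo means this only prints the commands for review; drop it to execute for real:

```shell
# Hypothetical node list and SSH user; with "echo" in place this only prints
# the per-node commands so you can inspect them before running anything.
NODES="ip-10-0-5-105 ip-10-0-5-124 ip-10-0-5-180 ip-10-0-5-231"
for node in $NODES; do
  echo ssh "admin@$node" "sudo rm -f /etc/kubernetes/manifests/fluentd-es.yaml"
done
```

Once the manifest is gone, the kubelet on that node stops the static pod on its own; no kubectl delete is needed.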
You can tweak configuration settings prior to spinning up your cluster to skip installing some of the optional add-ons. The settings are in cluster/aws/config-default.sh; to disable fluentd-es, set KUBE_LOGGING_DESTINATION=none before running kube-up.sh.
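For a fresh cluster, the overrides can be exported in the shell that runs kube-up.sh. A sketch — the KUBE_-prefixed names below are the override variables I believe config-default.sh reads; verify them in your checkout before relying on them:

```shell
# Assumed override variable names read by cluster/aws/config-default.sh;
# check your Kubernetes release, as the names have changed between versions.
export KUBE_ENABLE_NODE_LOGGING=false
export KUBE_LOGGING_DESTINATION=none
# then: ./cluster/kube-up.sh
```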
I believe we have working steps to remove fluentd from a cluster that already has it installed:
rm (or mv) /etc/kubernetes/addons/fluentd-elasticsearch/
Delete remnant ReplicationControllers:
kubectl --namespace=kube-system delete rc elasticsearch-logging-v1 kibana-logging-v1
In /srv/pillar/cluster-params.sls, change the existing settings to:
enable_node_logging: 'false'
logging_destination: 'none'
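The edit can also be scripted with sed. A hedged sketch — it is demonstrated here on a temp copy so it can be tried safely; point "pillar" at /srv/pillar/cluster-params.sls on your salt master instead, and note it assumes the keys sit at the start of a line with quoted values, as shown above:

```shell
# Demonstrated on a temp stand-in for /srv/pillar/cluster-params.sls;
# replace the mktemp/printf lines with pillar=/srv/pillar/cluster-params.sls
# to edit the real file in place.
pillar=$(mktemp)
printf "enable_node_logging: 'true'\nlogging_destination: 'elasticsearch'\n" > "$pillar"
sed -i \
  -e "s/^enable_node_logging: .*/enable_node_logging: 'false'/" \
  -e "s/^logging_destination: .*/logging_destination: 'none'/" \
  "$pillar"
cat "$pillar"
```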
Then, from the salt master, clear the cached files and resync so minions pick up the new pillar values:
salt '*' saltutil.clear_cache
salt '*' saltutil.sync_all
On existing nodes, manually remove the fluentd static pod manifest:
sudo rm /etc/kubernetes/manifests/fluentd-es.yaml
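This last step is safe to repeat or to script across nodes, since rm -f exits 0 whether or not the file is present. A local illustration, using a temp directory as a stand-in for /etc/kubernetes/manifests:

```shell
# Temp stand-in for /etc/kubernetes/manifests to show the removal is
# idempotent; on a real node the kubelet stops the pod once the file is gone.
manifests=$(mktemp -d)
touch "$manifests/fluentd-es.yaml"
rm -f "$manifests/fluentd-es.yaml"   # removes the manifest
rm -f "$manifests/fluentd-es.yaml"   # re-running is a harmless no-op
ls -A "$manifests"                   # prints nothing: manifest is gone
```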