Our project recently migrated away from Stackdriver Logging. However, I cannot figure out how to get rid of the fluentd-cloud-logging-* pods in the kube-system namespace. If I delete the individual pods, they come right back.
How do I kill them off for good?
It's not clear to me how they're getting recreated; there is certainly no DaemonSet bringing them back.
I already set monitoringService to none in the configuration reported by gcloud container clusters describe.
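For reference, the current settings can be inspected with something like the following; the cluster name and zone are placeholders:

    # Placeholders for cluster name and zone; prints the logging and
    # monitoring backends currently configured on the cluster.
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format="value(loggingService,monitoringService)"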
It depends on your Kubernetes Master version. You have two options:
You can disable logging entirely by choosing Legacy Logging: Disabled.
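From the command line, that corresponds to something like the sketch below; the cluster name and zone are placeholders, and I'm assuming the --logging-service flag matches your gcloud version:

    # Sets the cluster's logging service to 'none', which disables
    # log export from the cluster.
    gcloud container clusters update my-cluster --zone us-central1-a \
        --logging-service=none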
Alternatively, you can select System logging and monitoring only (beta). This stops log collection from applications, but system monitoring and log collection remain enabled. Here is a description of which system logs are collected:
When the system-only option is selected, the following logs are collected:
All pods running in namespaces kube-system, istio-system, knative-serving, gke-system, and config-management-system.
Key services that are not containerized, including the docker/containerd runtime, kubelet, kubelet-monitor, node-problem-detector, and kube-container-runtime-monitor.
The node's serial port output, if the VM instance metadata serial-port-logging-enable is set to true.
Source: https://cloud.google.com/monitoring/kubernetes-engine/installing
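For completeness, newer gcloud releases expose the system-only choice directly as a flag; whether your gcloud version supports it is an assumption worth checking with gcloud container clusters update --help:

    # Keeps system logs (kube-system, etc.) flowing while dropping
    # application log collection; requires a recent gcloud release.
    gcloud container clusters update my-cluster --zone us-central1-a \
        --logging=SYSTEM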
The fluentd-cloud-logging pods in the kube-system namespace are defined in the /etc/kubernetes/manifests/ folder of each host machine; that is, they're defined using the Static Pods mechanism.
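You can confirm this by listing that folder on a node; the node name and zone below are placeholders:

    # List the static pod manifests on one node; expect to find a
    # fluentd manifest here (the exact filename varies by GKE version).
    gcloud compute ssh gke-my-cluster-default-pool-node-1 \
        --zone us-central1-a --command='ls /etc/kubernetes/manifests/'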
As of this writing, there's no way to change the setting globally. As a workaround, though, I can just delete the file in the manifests folder on each node, using something like the startup script pod. Once the file is deleted, the kubelet deletes the pod as well.
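A rough sketch of that workaround, looping over the nodes over SSH; the manifest filename is a guess, so check the listing above and substitute the real name:

    # Remove the fluentd static pod manifest from every node; the
    # kubelet deletes the corresponding pod when the file disappears.
    # 'fluentd.yaml' is an assumed filename -- use the one 'ls' shows.
    for node in $(kubectl get nodes -o name | cut -d/ -f2); do
      gcloud compute ssh "$node" --zone us-central1-a \
        --command='sudo rm /etc/kubernetes/manifests/fluentd.yaml'
    done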
(Thanks to GCP support for this answer.)