I'm following the k8s logging instructions on how to configure cluster-level logging. I'm using the kube-aws CLI tool to configure the cluster, and I can't seem to find a way to make it work. I've tried setting the env vars mentioned in the k8s logging guide (KUBE_ENABLE_NODE_LOGGING and KUBE_LOGGING_DESTINATION) before running kube-aws up,
but that didn't seem to change anything.
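For reference, this is roughly what I ran, using the values the logging guide documents (true and elasticsearch):

export KUBE_ENABLE_NODE_LOGGING=true
export KUBE_LOGGING_DESTINATION=elasticsearch
kube-aws up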
After that, I tried running the Elasticsearch and Kibana replication controllers and services manually, taking them from the cluster/addons/fluentd-elasticsearch directory of the k8s GitHub repo, but that brought up only those specific services and not the fluentd-elasticsearch service that is also supposed to run according to the tutorial example.
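The commands looked something like this (file names as I recall them from that addons directory, so they may differ in your checkout):

kubectl create -f es-controller.yaml --namespace=kube-system
kubectl create -f es-service.yaml --namespace=kube-system
kubectl create -f kibana-controller.yaml --namespace=kube-system
kubectl create -f kibana-service.yaml --namespace=kube-system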
Running kubectl get pods --namespace=kube-system
shows the pods I started manually, but the fluentd-elasticsearch-kubernetes-node
pods are missing.
I also tried connecting to the cluster, but that failed with:
unauthorized
Following the k8s logging instructions, running kubectl config view
didn't return any username and password, and when I tried accessing the ES URL, I didn't get any dialog asking for a username and password. Not sure if it's related to the first issue.
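For completeness, this is how I tried to reach Elasticsearch; the service-proxy URL pattern is the one from the logging guide for this Kubernetes version, so adjust it if yours differs (<master-ip> is a placeholder):

kubectl config view
# via the API server proxy, where I expected the basic-auth prompt:
curl https://<master-ip>/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
# or locally through kubectl proxy:
kubectl proxy
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/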
Not sure what I'm missing here.
Thanks.
The KUBE_ENABLE_NODE_LOGGING
and KUBE_LOGGING_DESTINATION
env vars are used by the kube-up.sh
script. I don't know much about the kube-aws CLI tool you mentioned, but looking at the code, it doesn't look like those env vars affect that CLI.
http://kubernetes.io/docs/getting-started-guides/aws/ details the steps required to bring up a Kubernetes cluster on AWS using the kube-up script.
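For reference, the kube-up flow that those env vars plug into looks roughly like this (values are the ones the guide documents):

export KUBERNETES_PROVIDER=aws
export KUBE_ENABLE_NODE_LOGGING=true
export KUBE_LOGGING_DESTINATION=elasticsearch
cluster/kube-up.sh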
So it seems that there is currently no support for this in kube-aws; quoting one of the authors:
We are currently working on a kube-aws distribution for this approach that includes Kibana for visualizing the Elasticsearch data.
A suggested workaround also appears in this issue, including extra details regarding its status: https://github.com/coreos/coreos-kubernetes/issues/320
I've managed to get cluster-level logging running on a small testing cluster started through the CoreOS kube-aws
tool using the following steps. Please be aware that although I've had this running, I haven't exercised it enough to guarantee that it all works correctly!
Enable log collection on nodes
You'll need to edit the cloud-config-worker and cloud-config-controller to export kubelet-collected logs and create the log directory:
[Service]
Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
Environment=KUBELET_VERSION=v1.2.4_coreos.1
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --config=/etc/kubernetes/manifests \
  ...other flags...
(taken from the 'Use the cluster logging add-on' section here)
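If you're using the render-then-launch kube-aws workflow, where the cloud-config files live under userdata/, the edit-then-launch sequence is roughly the following (subcommand names as I remember them from the coreos-kubernetes docs, so check against your kube-aws version):

kube-aws render
# edit userdata/cloud-config-worker and userdata/cloud-config-controller as shown above
kube-aws validate
kube-aws up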
Install the logging components
I used the components from here (as you've already attempted). As you noticed, this does not run fluentd, and assumes that it is run as part of the cluster bootstrapping. To get fluentd running, I've extracted the fluentd DaemonSet definition discussed here into a separate file:
{ "apiVersion": "extensions\/v1beta1", "kind": "DaemonSet", "metadata": { "name": "fluent-elasticsearch", "namespace": "kube-system", "labels": { "k8s-app": "fluentd-logging" } }, "spec": { "template": { "metadata": { "name": "fluentd-elasticsearch", "namespace": "kube-system", "labels": { "k8s-app": "fluentd-logging" } }, "spec": { "containers": [ { "name": "fluentd-elasticsearch", "image": "gcr.io\/google_containers\/fluentd-elasticsearch:1.15", "resources": { "limits": { "memory": "200Mi" }, "requests": { "cpu": "100m", "memory": "200Mi" } }, "volumeMounts": [ { "name": "varlog", "mountPath": "\/var\/log" }, { "name": "varlibdockercontainers", "mountPath": "\/var\/lib\/docker\/containers", "readOnly": true } ] } ], "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "varlog", "hostPath": { "path": "\/var\/log" } }, { "name": "varlibdockercontainers", "hostPath": { "path": "\/var\/lib\/docker\/containers" } } ] } } } }
This DaemonSet runs fluentd on each of the cluster nodes.
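Assuming the definition above is saved as fluentd-es-daemonset.json (a file name I've picked for this example), it can be created and checked with:

kubectl create -f fluentd-es-daemonset.json
kubectl get pods --namespace=kube-system -l k8s-app=fluentd-logging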
(NOTE: While I've only tried adding these components after the cluster is already running, there's no reason you shouldn't be able to add them to the cloud-config-controller
in order to bring them up at the same time the cluster is started, which is more in line with what is discussed in the referenced issue.)
These instructions all assume that you're working with a cluster that you're happy to restart, or haven't yet started, in order to get the logging running, which I assume from your question is the situation you're in. I've also been able to get this working on a pre-existing cluster by manually editing the AWS settings, and can add additional information on doing that if it is in fact what you're trying to do.