While trying to install "incubator/fluentd-cloudwatch" using Helm on Amazon EKS, with the user set to root, I am getting the response below.
Command used:
helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1,rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"
Error:
Error: YAML parse error on fluentd-cloudwatch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 38: did not find expected ',' or ']'
If we do not set the user to root, then fluentd runs as the "fluent" user by default, and its log shows:
[error]: unexpected error error_class=Errno::EACCES error=#<Errno::EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos>
Download values.yaml and update it as below. The changes are in the awsRegion, rbac.create, and extraVars fields.
annotations: {}
awsRegion: us-east-1
awsRole:
awsAccessKeyId:
awsSecretAccessKey:
logGroupName: kubernetes

rbac:
  ## If true, create and use RBAC resources
  create: true
  ## Ignored if rbac.create is true
  serviceAccountName: default

# Add extra environment variables if specified (must be specified as a single line object and be quoted)
extraVars:
  - "{ name: FLUENT_UID, value: '0' }"
Then run the command below to set up fluentd on the Kubernetes cluster and send logs to CloudWatch Logs.
$ helm install --name fluentd -f .\fluentd-cloudwatch-values.yaml incubator/fluentd-cloudwatch
I did this and it worked for me; logs were sent to CloudWatch Logs. Also make sure your EC2 nodes have an IAM role with the appropriate permissions for CloudWatch Logs.
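A minimal sketch of such a policy, using the standard CloudWatch Logs actions the fluentd CloudWatch plugin needs (the wildcard Resource is only for illustration; scope it to your log group ARN if you prefer):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Attach this to the instance role used by your EKS worker nodes.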
Based on this, it looks like Helm is trying to convert eu-west-1,rbac.create=true into a single JSON field, and the extra comma (,) there is causing it to fail. If you look at the values.yaml you'll see that awsRegion and rbac.create are separate options, so

--set awsRegion=eu-west-1 --set rbac.create=true

should fix the first error.
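Putting that together with the original command, the full invocation would look something like:

helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=eu-west-1 \
  --set rbac.create=true \
  --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"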
With respect to the /var/log/... Permission denied error, you can see here that it's mounted as a hostPath, so if you run:
# a+rw means read/write for user/group/world
$ sudo chmod a+rw /var/log
on all your nodes, the error should go away. Note that you need to apply it on all the nodes because your pod can land anywhere in your cluster.
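One quick way to apply it everywhere is a loop over the nodes; this is only a sketch, assuming you have SSH access to the workers and that the ec2-user login and the ExternalIP address type match your node setup:

for node in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'); do
  ssh ec2-user@"$node" 'sudo chmod a+rw /var/log'   # apply the same permission change on each node
done

Alternatively, setting FLUENT_UID to 0 as in the other answer makes fluentd run as root, which avoids the permission issue without touching the nodes.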