Cannot get Kubernetes resource "events" using Metricbeat, getting error "Failure 403 events is forbidden"

9/5/2019

I am configuring my EFK stack to keep all Kubernetes-related logs, including events. I searched around, found the Metricbeat config file, and deployed it in my cluster.

Problem: All other Kubernetes metricsets are working fine except for the event metricset. I can see logs from state_pod, state_node, etc., but nothing is available for the event metricset.

ERROR : 2019/09/04 11:53:23.961693 watcher.go:52: ERR kubernetes: List API error kubernetes api: Failure 403 events is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "events" in API group "" at the cluster scope
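
The same denial can be reproduced with kubectl auth can-i by impersonating the default service account (a quick check, assuming kubectl access to the cluster):

# Ask the API server whether the default service account may list events
# at cluster scope; given the error above, this should print "no"
kubectl auth can-i list events --all-namespaces \
  --as=system:serviceaccount:kube-system:default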

My Metricbeat manifest (metricbeat.yml and the related Kubernetes resources):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    kubernetes.io/cluster-service: "true"
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false


    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    kubernetes.io/cluster-service: "true"
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["kube-state-metrics:5602"]

    - module: kubernetes
      enabled: true
      metricsets:
        - event

---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: metricbeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.0.1
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - events
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
---
-- vivek
elasticsearch
fluentd
kubernetes
metricbeat

1 Answer

9/5/2019

You are running your Deployment with the default service account, so the permissions granted by your metricbeat ClusterRole (which is bound to the metricbeat ServiceAccount) are never used.
Set the ServiceAccount name in the pod template of the Deployment definition, i.e. in the spec.template.spec.serviceAccountName field.

kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: metricbeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: metricbeat   # <<--- here
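
Because serviceAccountName lives in the pod template, saving this change rolls the Metricbeat pod. Once it is back up, you can confirm the fix with kubectl (a quick check, assuming kubectl access and the resource names above):

# Confirm the pod template now references the metricbeat ServiceAccount
kubectl -n kube-system get deployment metricbeat \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# Confirm the metricbeat ServiceAccount may list events at cluster scope
kubectl auth can-i list events --all-namespaces \
  --as=system:serviceaccount:kube-system:metricbeat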

You may also need to add the pods/log resource to your ClusterRole definition.
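
For reference, that ClusterRole might look like the sketch below (based on the role in the question, with pods/log added; keep or drop it depending on what your metricsets actually need):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - events
  - pods
  - pods/log      # suggested addition
  verbs:
  - get
  - watch
  - list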

-- EAT
Source: StackOverflow