Failed to update lock: configmaps forbidden: User "system:serviceaccount:ingress

7/24/2020

Getting the below error:

Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx-internal"

I am using multiple ingress controllers in my setup, in two different namespaces: ingress-nginx-internal and ingress-nginx-external.

After installation everything works fine for about 15 hours, then I start getting the above error.

Ingress-nginx-internal.yaml

https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml

In the above deploy.yaml I replaced the names with: sed 's/ingress-nginx/ingress-nginx-internal/g' deploy.yaml

The output of the below command:

# kubectl describe cm ingress-controller-leader-internal-nginx-internal -n ingress-nginx-internal
Name:         ingress-controller-leader-internal-nginx-internal
Namespace:    ingress-nginx-internal
Labels:       <none>
Annotations:  control-plane.alpha.kubernetes.io/leader:
                {"holderIdentity":"ingress-nginx-internal-controller-657","leaseDurationSeconds":30,"acquireTime":"2020-07-24T06:06:27Z","ren...

Data
====
Events:  <none>

ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-internal-2.0.3
    app.kubernetes.io/name: ingress-nginx-internal
    app.kubernetes.io/instance: ingress-nginx-internal
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal
-- me25
kubernetes
nginx

2 Answers

5/25/2021

If you use multiple ingress controllers, remember to also change the resourceNames entry in the Role so it matches the new leader-election ConfigMap name; the default Role only allows updates to the default ConfigMap.
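A minimal sketch of the relevant rule, assuming the ConfigMap name from the question's error message (verify the name against your own --election-id and --ingress-class settings):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-internal       # name assumed for illustration
  namespace: ingress-nginx-internal
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  # Must match <election-id>-<ingress-class>, here taken from the question:
  resourceNames:
  - ingress-controller-leader-internal-nginx-internal
  verbs:
  - get
  - update
```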

-- Ela Dute
Source: StackOverflow

7/24/2020

From the docs here

To run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic) the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:

spec:
  template:
     spec:
       containers:
         - name: nginx-ingress-internal-controller
           args:
             - /nginx-ingress-controller
             - '--election-id=ingress-controller-leader-internal'
             - '--ingress-class=nginx-internal'
             - '--configmap=ingress/nginx-ingress-internal-controller'

When you create the Ingress resource for internal traffic, you need to specify the ingress class as above, i.e. nginx-internal.
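A hedged sketch of such an Ingress, using the kubernetes.io/ingress.class annotation (controller 0.32.0 predates the IngressClass resource); the name, host, and backend service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-internal             # hypothetical name
  namespace: default
  annotations:
    # Matches the --ingress-class flag of the internal controller
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
  - host: app.internal.example.com   # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc   # hypothetical backend Service
          servicePort: 80
```

Ingresses without this class annotation will be ignored by the internal controller.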

Check the permissions of the service account using the below command. If it returns no, then create the Role and RoleBinding.

kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal -n ingress-nginx-internal

RBAC

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx-internal
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx-internal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
- kind: ServiceAccount
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal
-- Arghya Sadhu
Source: StackOverflow