Mistakenly updated ConfigMap aws-auth with RBAC & lost access to the cluster

11/28/2019

I was trying to restrict IAM users with RBAC on an AWS EKS cluster. I mistakenly updated the ConfigMap "aws-auth" in the kube-system namespace, which removed all access to the EKS cluster.

I missed adding the groups: key for the user in the ConfigMap.

I tried granting full admin access to the user/role last mentioned in the ConfigMap, but no luck.

Any idea for recovering access to the cluster would be highly appreciated.

The config-map.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
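
For comparison, a hedged sketch of what the mapUsers entry with the missing groups: key would look like (the system:masters group is an assumption; it is the group bound to cluster-admin via RBAC):

```yaml
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
      groups:
        # Assumed group; system:masters maps to cluster-admin.
        - system:masters
```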
-- Sandy
amazon-eks
amazon-web-services
configmap
kubernetes
rbac

2 Answers

11/30/2019

I found a workaround for this issue:

The IAM user that creates an EKS cluster has full access to the cluster by default, regardless of the aws-auth ConfigMap. In our case that IAM user had been deleted, so we re-created it: an IAM user created with the same name as before gets the same ARN.

Once we created credentials (access & secret keys) for that user, we got access back to the EKS cluster. We then modified the ConfigMap as required.
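
The steps above can be sketched with the AWS CLI; the user name, cluster name, and region below are placeholder assumptions, not values from the original setup:

```shell
# Re-create the deleted IAM user under the same name, so it gets the
# same ARN the cluster's implicit creator-admin mapping refers to.
aws iam create-user --user-name cluster-creator

# Issue fresh credentials (access key & secret key) for that user.
aws iam create-access-key --user-name cluster-creator

# Using the new credentials, rebuild the kubeconfig for the cluster.
aws eks update-kubeconfig --name my-cluster --region us-east-1

# With access restored, fix the aws-auth ConfigMap.
kubectl -n kube-system edit configmap aws-auth
```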

-- Sandy
Source: StackOverflow

11/28/2019

The first thing I would try is restoring the original aws-auth ConfigMap (you can find it here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace the placeholder for rolearn with the ARN of the IAM role associated with your worker nodes as explained in the EKS documentation.
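
One way to look up that ARN and apply the restored ConfigMap, sketched with the AWS CLI (the role name and file name are assumptions for illustration):

```shell
# Print the ARN of the worker-node IAM role (role name is a placeholder).
aws iam get-role --role-name eks-worker-node-role \
    --query 'Role.Arn' --output text

# Apply the restored aws-auth ConfigMap once the rolearn is filled in.
kubectl apply -f aws-auth.yaml
```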

When the cluster works again, you can re-add the IAM users to the ConfigMap, which is also described in the EKS docs.
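
Putting it together, the restored ConfigMap with an IAM user re-added might look like this; the user ARN, username, and the system:masters group are assumptions carried over from the question, not required values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
      groups:
        # Assumed group; system:masters maps to cluster-admin.
        - system:masters
```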

-- weibeld
Source: StackOverflow