I am running a Kubernetes cluster on EKS with two worker nodes. Both nodes are showing a NotReady status, and when I checked the kubelet logs on both nodes I see the errors below:
k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Unauthorized
Is there any way I can check which credentials are being used, and how can I fix this error?
Check the aws-auth ConfigMap to see whether the IAM role used by the nodes is mapped correctly. Also, enable the EKS control plane logs in CloudWatch and check the authenticator logs to see which role is being denied access.
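A minimal sketch of those checks, assuming a cluster named my-cluster in us-east-1 (substitute your own cluster name, region, and ARNs):

```bash
# Inspect the aws-auth ConfigMap; the node instance role ARN must appear under mapRoles
kubectl get configmap aws-auth -n kube-system -o yaml

# On a worker node, check which IAM role its instance profile is providing
# (if IMDSv2 is enforced you must first request a session token)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Enable the authenticator log stream on the EKS control plane
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["authenticator"],"enabled":true}]}'

# Search the control plane logs in CloudWatch for denied requests
aws logs filter-log-events \
  --region us-east-1 \
  --log-group-name /aws/eks/my-cluster/cluster \
  --filter-pattern "denied"
```

The denied entries in the authenticator log show which ARN the nodes are presenting; that ARN is what has to match a mapRoles entry in aws-auth.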
You can reset the ConfigMap at any time with the same IAM user/role that was used to create the cluster, even if that identity is not present in the ConfigMap.
It is important that you do not delete this role/user from IAM.
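As a rough sketch of that reset, assuming the cluster is named my-cluster in us-east-1 and using a placeholder node role ARN:

```bash
# Confirm you are acting as the IAM user/role that created the cluster
aws sts get-caller-identity

# Refresh the kubeconfig for that identity
aws eks update-kubeconfig --region us-east-1 --name my-cluster

# Re-create the node role mapping (replace the ARN with your node instance role)
cat <<'EOF' > aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

kubectl apply -f aws-auth-cm.yaml
```

Once the corrected mapping is applied, the kubelets should be able to re-authenticate and the nodes should move back to Ready after a few minutes.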