AWS EKS kubectl - No resources found in default namespace

10/10/2019

Trying to set up an EKS cluster. An error occurred (AccessDeniedException) when calling the DescribeCluster operation: Account xxx is not authorized to use this service. This error came from the CLI; on the console I was able to create the cluster and everything else successfully. I am logged in as the root user (it's just my personal account).

It says Account, so it sounds like it's not a user/permissions issue? Do I have to enable my account for this service? I don't see any such option.
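For what it's worth, here is roughly how I can check which identity and region the CLI is actually using (the cluster name and region below are just placeholders):

aws sts get-caller-identity
aws configure get region
aws eks describe-cluster --name <cluster-name> --region <region>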

Also, if I log in as a user (rather than root), will I be able to see everything that was earlier created as root? I have now created a user and assigned admin and eks* permissions. I checked this: when I sign in as the user, I can see everything.

The AWS CLI was set up with root credentials (I think), so do I have to go back, undo all this, and just use this user?
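If I do end up switching the CLI over to the new user, my understanding is it would be roughly this (the profile name is just an example):

aws configure --profile eks-admin
$env:AWS_PROFILE="eks-admin"    # so subsequent aws cli calls use this profile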

Update 1
I redid/restarted everything, including the user and aws cli configure, just to make sure. But the issue was still not resolved.

There is an option to create the kubeconfig file manually, and that finally worked.
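For reference, the non-manual way to generate the same kubeconfig (assuming a recent enough AWS CLI; cluster name and region are placeholders) should be roughly:

aws eks update-kubeconfig --name <cluster-name> --region <region> --kubeconfig C:\Users\sbaha\.kube\config-EKS-nginixClstr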

And I was able to run: kubectl get svc

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   48m

KUBECONFIG: I had set up $env:KUBECONFIG

$env:KUBECONFIG="C:\Users\sbaha\.kube\config-EKS-nginixClstr"
$Env:KUBECONFIG
C:\Users\sbaha\.kube\config-EKS-nginixClstr
kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
*         aws    kubernetes   aws
kubectl config current-context
aws

My understanding is that I should see both the aws and my EKS-nginixClstr contexts, but I only see aws. Is this (also) an issue?
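If it is just the context name that is off, my understanding is it can be renamed inside that same kubeconfig file, e.g.:

kubectl config rename-context aws EKS-nginixClstr
kubectl config get-contexts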

Next step is to create and add worker nodes. I updated the node ARN correctly in the .yaml file:

kubectl apply -f ~\.kube\aws-auth-cm.yaml
configmap/aws-auth configured

So this perhaps worked.
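To confirm what actually got applied, something like this should show the current contents of the ConfigMap:

kubectl describe configmap aws-auth -n kube-system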

But next it fails:

kubectl get nodes
No resources found in default namespace.

On the AWS Console the node group shows Create Completed. Also on the CLI, kubectl get nodes --watch does not even return.

So this has to be debugged next (it never ends).
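My rough plan for checking why the nodes never register (the exact filters below are just my guesses, not a known fix):

kubectl get nodes -o wide
kubectl get events -n kube-system
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,PrivateDnsName]" --output table

The idea is to check whether the worker instances are actually running and whether their private DNS names line up with what the aws-auth ConfigMap expects.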

aws-auth-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
-- Sam-T
amazon-eks
amazon-iam
amazon-web-services
kubectl
kubernetes

1 Answer

10/28/2019

This problem was related to not having the correct version of eksctl: it must be at least 0.7.0. The documentation states this and I knew it, but initially, whatever I did, I could not get beyond 0.6.0. The way to get it is to configure your AWS CLI to a region that supports EKS. Once you have 0.7.0, this issue gets resolved.
Overall, to make EKS work you must use the same user on both the console and the CLI, work in a region that supports EKS, and have the correct eksctl version (at least 0.7.0).
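For anyone checking their own setup, the version check is simply eksctl version. On Windows, assuming eksctl was installed with chocolatey, upgrading would be roughly:

eksctl version
choco upgrade eksctl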

-- Sam-T
Source: StackOverflow