Nodes are not joining the cluster in AWS EKS

12/4/2018

I have launched a cluster with AWS EKS successfully and applied the aws-auth ConfigMap, but the nodes are not joining the cluster. I checked the log messages on a node and found this:

Dec  4 08:09:02 ip-10-0-8-187 kubelet: E1204 08:09:02.760634    3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unauthorized
Dec  4 08:09:03 ip-10-0-8-187 kubelet: W1204 08:09:03.296102    3542 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Dec  4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.296217    3542 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec  4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.459361    3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unauthorized

I am not sure what is causing this. I have attached the EKS full-access policy to the instance roles of these nodes.

-- trojanOps
amazon-ecs
amazon-eks
amazon-web-services
devops
kubernetes

2 Answers

12/26/2018

If you have followed the AWS getting-started guide, there is an easy way to connect all the worker nodes and join them to the EKS cluster.

Link : https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

My guess is that you forgot to edit the aws-auth ConfigMap with the instance role ARN.
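
For reference, the step that guide describes comes down to applying an aws-auth ConfigMap that maps the worker nodes' IAM role to the right Kubernetes groups. A minimal sketch of that manifest, with a placeholder account ID and role name (substitute your own node role ARN):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: aws-auth
    namespace: kube-system
  data:
    # rolearn must be the node IAM *role* ARN, not the instance profile ARN
    mapRoles: |
      - rolearn: arn:aws:iam::111122223333:role/NodeInstanceRole
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes

Apply it with kubectl apply -f aws-auth-cm.yaml and then watch kubectl get nodes --watch. The "Unauthorized" errors in the kubelet log above are what you typically see when this mapping is missing or points at the wrong role.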

-- Harsh Manvar
Source: StackOverflow

4/1/2019

If you are using Terraform, or modifying tags and name variables, make sure the cluster name in the tags matches the actual cluster name!

A node must be "owned" by a specific cluster, and nodes will only join the cluster they are tagged for. I overlooked this, and there isn't a lot of documentation to go on when using Terraform. Make sure the variables match. This is the node tag naming the parent cluster to join:

  tag {
    key                 = "kubernetes.io/cluster/${var.eks_cluster_name}-${terraform.workspace}"
    value               = "owned"
    propagate_at_launch = true
  }
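
For context, here is a minimal sketch of the other side of that match, with illustrative variable and resource names rather than anything from my real config: the string after kubernetes.io/cluster/ in the tag has to be exactly the name given to the EKS cluster itself.

  # Hypothetical example: build the cluster name from the same variable and
  # workspace so the worker tag above always matches it.
  variable "eks_cluster_name" {
    default = "my-eks"
  }

  resource "aws_eks_cluster" "this" {
    name     = "${var.eks_cluster_name}-${terraform.workspace}"
    role_arn = "${aws_iam_role.eks_cluster.arn}"   # assumes this IAM role is defined elsewhere

    vpc_config {
      subnet_ids = ["${var.subnet_ids}"]           # assumes this list variable is defined elsewhere
    }
  }

If the name interpolated into the tag drifts from the cluster's name (for example, the workspace suffix is applied in one place but not the other), the instances come up but never join the cluster.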
-- Eltron B
Source: StackOverflow