coreos kube-aws K8s cluster nodes must have a Tag named "KubernetesCluster" to join

11/24/2016

I have been experimenting with the script, and it seems that if the cluster (not sure if just the nodes, or the controller and the nodes) doesn't have the AWS tag "KubernetesCluster" with a unique value, the nodes are not added to the cluster. Instead you get the following errors in a loop when tailing a node's journal with `journalctl -f`:

    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:13.577214    2195 kubelet_node_status.go:293] Unable to update node status: update node status exceeds retry count
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:14.308473    2195 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node 'ip-11-0-0-70.eu-west-1.compute.internal' not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:15.694850    2195 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:15.694877    2195 kubelet_node_status.go:245] Adding node label from cloud provider: beta.kubernetes.io/instance-type=r3.xlarge
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:15.694886    2195 kubelet_node_status.go:256] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-west-1a
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:15.694893    2195 kubelet_node_status.go:260] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-west-1
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: W1124 08:04:15.698242    2195 kubelet.go:1788] Deleting mirror pod "kube-proxy-ip-11-0-0-70.eu-west-1.compute.internal_kube-system(85ec9262-b21c-11e6-9b4e-066c9da0e3d3)" because it is outdated
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:16.587224    2195 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:16.587252    2195 kubelet_node_status.go:245] Adding node label from cloud provider: beta.kubernetes.io/instance-type=r3.xlarge
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:16.587261    2195 kubelet_node_status.go:256] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-west-1a
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: I1124 08:04:16.587268    2195 kubelet_node_status.go:260] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-west-1
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.592830    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.594352    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.595353    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.597125    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.598994    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:23.599012    2195 kubelet_node_status.go:293] Unable to update node status: update node status exceeds retry count
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:24.344163    2195 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node 'ip-11-0-0-70.eu-west-1.compute.internal' not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:33.627869    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:33.628985    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:33.630883    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:33.632268    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found
    ip-11-0-0-70.eu-west-1.compute.internal kubelet-wrapper[2195]: E1124 08:04:33.633703    2195 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-11-0-0-70.eu-west-1.compute.internal": nodes "ip-11-0-0-70.eu-west-1.compute.internal" not found

Can anyone explain this behaviour?

kube-aws version v0.8.3

Thanks

-- Gleeb
coreos
kube-aws
kubernetes

2 Answers

11/25/2016

The KubernetesCluster tag, with a unique value per cluster, is mandatory on the cluster's AWS resources so that the AWS cloud provider integration can identify them and provide cloud functionality like creating ELBs.
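One way to audit this is to pull each instance's tag list (e.g. with `aws ec2 describe-tags` or boto3) and confirm every node carries the same KubernetesCluster value. Below is a minimal sketch of that check; the helper names and the hard-coded tag data are illustrative stand-ins for a real API response:

```python
# Verify that every instance carries the same "KubernetesCluster" tag value.
# The tag lists below are hypothetical stand-ins for what an
# `aws ec2 describe-tags` call (or boto3) would return per instance.

def cluster_tag_value(tags):
    """Return the KubernetesCluster tag value, or None if the tag is missing."""
    for tag in tags:
        if tag["Key"] == "KubernetesCluster":
            return tag["Value"]
    return None

def consistent_cluster_tag(instances):
    """True only if every instance has KubernetesCluster set to one shared value."""
    values = {cluster_tag_value(tags) for tags in instances.values()}
    return len(values) == 1 and None not in values

# Hypothetical tag data: the controller and one worker are tagged, one worker is not.
instances = {
    "i-controller": [{"Key": "KubernetesCluster", "Value": "my-cluster"}],
    "i-worker-1":   [{"Key": "KubernetesCluster", "Value": "my-cluster"}],
    "i-worker-2":   [{"Key": "Name", "Value": "worker"}],  # missing the tag
}

print(consistent_cluster_tag(instances))  # False until i-worker-2 is tagged
```

A missing or mismatched tag can then be fixed with `aws ec2 create-tags --resources <instance-id> --tags Key=KubernetesCluster,Value=<cluster-name>`, after which the kubelet should register on a subsequent retry.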

This document goes into the details: https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md

-- manojlds
Source: StackOverflow

2/16/2017

kube-aws does create the tags on the machines (including KubernetesCluster); perhaps try the latest version, 0.9.4-rc2.

-- deploycat
Source: StackOverflow