I am working with kops to maintain a Kubernetes cluster, but I am a little confused about how it joins a new node to the cluster.
At first I assumed it would configure the kubelet on each node with the ELB DNS name as the API server, but I couldn't find anything to support that.
Then I found that the user data of the newly created instance contains some config items for the kubelet:
`kubeconfigPath: /var/lib/kubelet/kubeconfig`
When I logged into the instance, I found that the server configured in that kubeconfig file is another DNS name instead of my ELB, and trying to resolve it failed. After applying this kubeconfig to another kubelet, it failed with: Unable to connect to the server.
So how does kops work when adding new nodes? I can't find any documentation on this.
Actually, you are right: the ELB DNS name of the masters is added to /var/lib/kubelet/kubeconfig on each node, and it really does resolve (I've just checked):
```
# cat /var/lib/kubelet/kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.internal.prod.mycluster.com
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
kind: Config
users:
- name: kubelet
  user:
    client-certificate-data: ...
```
```
# ping api.internal.prod.mycluster.com
PING api.internal.prod.mycluster.com (172.28.11.5) 56(84) bytes of data.
```
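As a quick sanity check, you can pull the API endpoint the kubelet dials straight out of that file. A minimal sketch (the kubeconfig here is trimmed to the fields shown above, with the certificate data elided):

```python
import re

# Trimmed copy of /var/lib/kubelet/kubeconfig from above.
kubeconfig = """\
apiVersion: v1
clusters:
- cluster:
    server: https://api.internal.prod.mycluster.com
  name: local
current-context: service-account-context
kind: Config
"""

# Grab the first `server:` entry - that's the endpoint the kubelet connects to.
match = re.search(r"^\s*server:\s*(\S+)", kubeconfig, re.MULTILINE)
server = match.group(1) if match else None
print(server)  # https://api.internal.prod.mycluster.com
```

On a real node you would read the file instead of embedding the string, or simply `grep server: /var/lib/kubelet/kubeconfig`.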
That covers the masters: you don't need to reconfigure the nodes when masters change, because the ELB address stays the same. Nodes are even simpler: they only need to register themselves with the API server.
Anyway, if you are still curious, inspect the node's Auto Scaling Group / Launch Configuration (the user data):
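For example, you could dump the user data like this (the launch configuration name below is hypothetical, derived from the cluster name in the question; adjust it to yours):

```shell
# From inside the node itself: fetch the user data the instance booted with.
curl -s http://169.254.169.254/latest/user-data | head -n 40

# Or from your workstation, via the launch configuration
# (UserData comes back base64-encoded):
aws autoscaling describe-launch-configurations \
  --launch-configuration-names nodes.prod.mycluster.com \
  --query 'LaunchConfigurations[0].UserData' \
  --output text | base64 --decode | head -n 40
```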
PS. Don't just believe me - check it yourself.