I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop/terminate a node in the cluster, a new node does not join the cluster. When I had deployed Kubernetes using kops, new nodes were created automatically whenever one was deleted. Is this the expected behaviour in kubespray? Please help.
It is expected behavior because kubespray doesn't create any ASGs, which are AWS-specific resources. Kubespray only deals with existing machines; the project does offer some terraform toys in its repo for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
scale.yml

Add the new machine to an inventory file alongside the existing etcd machines (presumably so kubespray can issue etcd certificates for the new Node), then invoke the scale.yml playbook. You may enjoy AWX in support of that.
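As a rough sketch of that option (the inventory path, hostnames, IPs, and group names below are assumptions, and group names in particular vary between kubespray releases):

```sh
# Sketch only: paths, hostnames, IPs, and group names are placeholders.
#
# 1. Add the new machine to the kubespray inventory, e.g.
#    inventory/mycluster/inventory.ini:
#
#      [all]
#      node4 ansible_host=10.0.0.14 ip=10.0.0.14
#
#      [kube_node]
#      node4
#
#    Keep the existing [etcd] and control-plane hosts in the same inventory
#    so kubespray can issue the certificates the new Node needs.
#
# 2. From the kubespray checkout, run only the scale playbook:
ansible-playbook -i inventory/mycluster/inventory.ini \
  --become --become-user=root \
  scale.yml
```

AWX just gives you a push-button/webhook way of triggering that same playbook run whenever a new machine appears.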
kubeadm join
This is the mechanism I use for my clusters, FWIW.
- Create a kubeadm join token using kubeadm token create --ttl 0 (or whatever TTL you feel comfortable using). You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances.
- Use the cloud-init mechanism to ensure that the docker, kubeadm, and kubelet binaries are present on the machine. You are welcome to use an AMI for doing that, too, if you enjoy building AMIs.
- Then invoke kubeadm join as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers (a sketch of these steps follows below).
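A hedged sketch of those steps, assuming a kubeadm-built control plane and an Ubuntu-based node image; the API server address, token, CA hash, repository setup, and package names are all placeholders to substitute with your own values:

```sh
# On an existing control-plane node: mint a long-lived token and print the
# full join command (the token and hash shown are placeholders).
kubeadm token create --ttl 0 --print-join-command
#   kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

The output of that command is what each new machine needs to run. In an ASG, the simplest place to put it is the launch configuration's user data, either as a cloud-init script like the sketch below or baked into an AMI:

```sh
#!/bin/bash
# Example user data for the ASG launch configuration; cloud-init executes
# scripts that begin with a shebang. The repository setup and package names
# assume an Ubuntu image and the legacy apt.kubernetes.io repository; adjust
# to whatever package source your environment actually uses.
set -euo pipefail

# Container runtime plus the kubeadm/kubelet binaries
apt-get update
apt-get install -y docker.io apt-transport-https curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm

# Join the cluster using the long-lived token minted earlier; endpoint,
# token, and hash are placeholders for your cluster's values.
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash>
```

The trade-off is the usual one: a non-expiring token makes autoscaling hands-off, at the cost of a credential that must be protected like any other long-lived secret.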
Machine controllers

There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work; there are several such projects to choose from.
Our friends over at Kubedex have an entire page devoted to this question.