I am trying to achieve Kubernetes high availability using kubeadm, following the document "k8s HA using kubeadm".
The official documentation recommends having a failover mechanism/load balancer in front of the kube-apiserver. I tried keepalived, but on AWS/GCP instances it ends up in a split-brain situation because multicast is not supported there, so I cannot use it. Is there any way around this?
No, you need a load balancer to have HA with kubeadm.
If you're using AWS/GCP, why not consider the native load balancers for those environments, like an ELB or a Google Cloud Load Balancer?
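As an illustration only (the resource names, region, and zone below are assumptions, not from the original answer), a plain TCP load balancer in front of three masters on GCP could be created roughly like this:

# Sketch: pool/rule names, region, and zone are placeholders
gcloud compute target-pools create k8s-apiserver-pool --region us-central1
gcloud compute target-pools add-instances k8s-apiserver-pool \
    --instances master-1,master-2,master-3 --instances-zone us-central1-a
gcloud compute forwarding-rules create k8s-apiserver-rule --region us-central1 \
    --ports 6443 --target-pool k8s-apiserver-pool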
You definitely need nginx/haproxy + keepalived for failover and high availability.
Kubernetes is a container-orchestration system for automating the deployment, scaling, and management of containerized applications. Kubernetes works best in highly available, load-balanced environments.
As @jaxxstorm mentioned, cloud providers give you the option of using their native load balancers, and I also think that is a good starting position for a high-availability attempt. You may be interested in the GCP documentation.
Kubeadm in a homebrewed Kubernetes environment requires some additional work, and from my point of view it is good to set up Kubernetes The Hard Way before starting to play with kubeadm.
OK, I assume the servers for the installation are ready. For a simple multi-master installation, you need 3 master nodes (10.0.0.50-52) and a load balancer (10.0.0.200).
Generate a token and save the output to a file:
kubeadm token generate
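If you want to keep the token around for the config file and the worker join command later, a simple capture works (the variable name and file path here are just a suggestion):

TOKEN=$(kubeadm token generate)   # format is abcdef.0123456789abcdef
echo "$TOKEN" > /tmp/kubeadm-token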
Create a kubeadm config file:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - "http://10.0.0.50:2379"
  - "http://10.0.0.51:2379"
  - "http://10.0.0.52:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "10.0.0.50"
- "10.0.0.51"
- "10.0.0.52"
- "10.0.0.200"
- "127.0.0.1"
token: "YOUR KUBEADM TOKEN"
tokenTTL: "0"
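Note that this config assumes an external etcd cluster is already listening on 10.0.0.50-52:2379. As a rough sketch (the node names master-1..master-3 are placeholders), one member could be started on 10.0.0.50 like this, with the name and IPs adjusted on the other two masters:

etcd --name master-1 \
  --listen-peer-urls http://10.0.0.50:2380 \
  --initial-advertise-peer-urls http://10.0.0.50:2380 \
  --listen-client-urls http://10.0.0.50:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.0.50:2379 \
  --initial-cluster master-1=http://10.0.0.50:2380,master-2=http://10.0.0.51:2380,master-3=http://10.0.0.52:2380 \
  --initial-cluster-state new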
Copy the config file to all master nodes.
Run the initialization on the first master instance:
kubeadm init --config /path/to/config.yaml
The first master instance will now have all the certificates and keys necessary for our master cluster.
Copy the /etc/kubernetes/pki directory structure to the same location on the other masters.
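For example, assuming root SSH access between the masters:

scp -r /etc/kubernetes/pki root@10.0.0.51:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.0.0.52:/etc/kubernetes/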
On the other master servers:
kubeadm init --config /path/to/config.yaml
Now let's set up the load balancer:
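This tutorial assumes a TCP load balancer is already answering on 10.0.0.200:6443; it is not built here, but as one possible sketch (haproxy picked as an example, the backend and server names are made up), its config could look like:

# /etc/haproxy/haproxy.cfg: minimal TCP passthrough sketch
frontend kube-apiserver
    bind 10.0.0.200:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 10.0.0.50:6443 check
    server master-2 10.0.0.51:6443 check
    server master-3 10.0.0.52:6443 check

With the balancer answering, point kubectl at it: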
Copy /etc/kubernetes/admin.conf
into $HOME/.kube/config
next, edit $HOME/.kube/config
and replace
server: https://10.0.0.50:6443
with
server: https://10.0.0.200:6443
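Or as a one-liner (assuming the kube-apiserver listens on the default port 6443):

sed -i 's|https://10.0.0.50:6443|https://10.0.0.200:6443|' $HOME/.kube/config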
Check that the nodes are working fine:
kubectl get nodes
On all workers execute:
kubeadm join --token YOUR_CLUSTER_TOKEN 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3
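The sha256 hash above is only an example value; you can read your cluster's actual CA certificate hash on any master with the standard openssl pipeline:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'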
And that's it! If everything was set up cleanly, you should now have a highly available cluster.
I found the "HA Kubernetes cluster via Kubeadm" tutorial useful; thank you @Nate Baker for the inspiration.