kubernetes convert single master to multi master

7/9/2018

I have created a single-master Kubernetes v1.9.0 cluster using the kubeadm command on a bare-metal server. Now I want to add two more masters and make it multi-master.

Is it possible to convert to a multi-master configuration? Is there a document available for this type of conversion?

I have found this link for kops, but I'm not sure whether the same steps will work for other environments as well.

https://github.com/kubernetes/kops/blob/master/docs/single-to-multi-master.md

Thanks SR

-- sfgroups
kubernetes

1 Answer

7/9/2018

Yes, it's possible, but you may need to break your master setup temporarily. You'll need to follow the official kubeadm high-availability instructions.

In a nutshell:

Create a kubeadm config file. In it, you'll need to include the SAN for the load balancer you'll use. Example:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379"
      advertise-client-urls: "https://CP0_IP:2379"
      listen-peer-urls: "https://CP0_IP:2380"
      initial-advertise-peer-urls: "https://CP0_IP:2380"
      initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380"
    serverCertSANs:
      - CP0_HOSTNAME
      - CP0_IP
    peerCertSANs:
      - CP0_HOSTNAME
      - CP0_IP
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
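Note that LOAD_BALANCER_DNS refers to a load balancer you provide in front of the apiservers; kubeadm does not create one for you. Purely as an illustrative sketch (the software choice, names, IPs, and port here are all assumptions, not part of the original answer), a TCP-mode HAProxy frontend for the masters could look like:

```
# Illustrative HAProxy fragment; all addresses/ports are placeholders.
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server cp0 10.0.0.7:6443 check
    server cp1 10.0.0.8:6443 check
```

Any L4 load balancer (or even round-robin DNS, with caveats) works, as long as LOAD_BALANCER_DNS:LOAD_BALANCER_PORT reaches a healthy apiserver.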

Copy the generated certificates to your new nodes. All the certs under /etc/kubernetes/pki/ should be copied.

Copy the admin.conf from /etc/kubernetes/admin.conf to the new nodes

Example:

USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
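Before running the copy loop above, it can be worth verifying that kubeadm actually generated every file you plan to ship. A small sketch (the helper name `check_pki` is mine, not from the original answer; on a real master you'd pass /etc/kubernetes/pki):

```shell
#!/bin/sh
# Check that every cert/key needed by a new control-plane node exists
# under the given pki directory. Prints each missing file and returns
# non-zero if anything is absent.
check_pki() {
    pki_dir=$1
    missing=0
    for f in ca.crt ca.key sa.key sa.pub \
             front-proxy-ca.crt front-proxy-ca.key \
             etcd/ca.crt etcd/ca.key; do
        if [ ! -f "$pki_dir/$f" ]; then
            echo "missing: $pki_dir/$f"
            missing=1
        fi
    done
    return $missing
}

# Usage on the first master:
# check_pki /etc/kubernetes/pki && echo "all control-plane certs present"
```

Catching a missing sa.key or etcd CA here is much cheaper than debugging a half-joined master later.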

Create your second kubeadm config file for the second node:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379"
      advertise-client-urls: "https://CP1_IP:2379"
      listen-peer-urls: "https://CP1_IP:2380"
      initial-advertise-peer-urls: "https://CP1_IP:2380"
      initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - CP1_HOSTNAME
      - CP1_IP
    peerCertSANs:
      - CP1_HOSTNAME
      - CP1_IP
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"

Replace the following variables with the correct addresses for this node:

LOAD_BALANCER_DNS

LOAD_BALANCER_PORT

CP0_HOSTNAME

CP0_IP

CP1_HOSTNAME

CP1_IP
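Rather than editing the placeholders by hand, the substitution can be scripted. A sketch (the helper name `fill_template` and the example addresses are mine, not from the original answer):

```shell
#!/bin/sh
# Substitute the kubeadm config placeholders with real values.
# All values below are examples; replace them with your own.
LOAD_BALANCER_DNS=lb.example.com
LOAD_BALANCER_PORT=6443
CP0_HOSTNAME=cp0
CP0_IP=10.0.0.7
CP1_HOSTNAME=cp1
CP1_IP=10.0.0.8

# Reads a template file and writes the filled-in config to stdout.
fill_template() {
    sed -e "s/LOAD_BALANCER_DNS/$LOAD_BALANCER_DNS/g" \
        -e "s/LOAD_BALANCER_PORT/$LOAD_BALANCER_PORT/g" \
        -e "s/CP0_HOSTNAME/$CP0_HOSTNAME/g" \
        -e "s/CP0_IP/$CP0_IP/g" \
        -e "s/CP1_HOSTNAME/$CP1_HOSTNAME/g" \
        -e "s/CP1_IP/$CP1_IP/g" "$1"
}

# Usage:
# fill_template kubeadm-config.yaml.template > kubeadm-config.yaml
```

This keeps the template reusable for the third master: change CP1_* to CP2_* values and extend initial-cluster accordingly.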

Move the copied certs to their correct locations:

  USER=ubuntu # customizable
  mkdir -p /etc/kubernetes/pki/etcd
  mv /home/${USER}/ca.crt /etc/kubernetes/pki/
  mv /home/${USER}/ca.key /etc/kubernetes/pki/
  mv /home/${USER}/sa.pub /etc/kubernetes/pki/
  mv /home/${USER}/sa.key /etc/kubernetes/pki/
  mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
  mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
  mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
  mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
  mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

Now you can start bringing up the new master using kubeadm:

  kubeadm alpha phase certs all --config kubeadm-config.yaml
  kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
  kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
  kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
  systemctl start kubelet

Join the node to the etcd cluster:

  CP0_IP=10.0.0.7
  CP0_HOSTNAME=cp0
  CP1_IP=10.0.0.8
  CP1_HOSTNAME=cp1

  KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- \
    etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt \
            --cert-file /etc/kubernetes/pki/etcd/peer.crt \
            --key-file /etc/kubernetes/pki/etcd/peer.key \
            --endpoints=https://${CP0_IP}:2379 \
            member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
  kubeadm alpha phase etcd local --config kubeadm-config.yaml

And then, finally, add the control plane:

  kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
  kubeadm alpha phase controlplane all --config kubeadm-config.yaml
  kubeadm alpha phase mark-master --config kubeadm-config.yaml

Repeat these steps for the third master, and you should be good.

-- jaxxstorm
Source: StackOverflow