I am following https://v1-12.docs.kubernetes.io/docs/setup/independent/high-availability/ to set up a high availability cluster:
- three masters: 10.240.0.4 (kb8-master1), 10.240.0.33 (kb8-master2), 10.240.0.75 (kb8-master3)
- LB: 10.240.0.16 (haproxy)
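For reference, the LB just does TCP pass-through on 6443 to the three masters; a minimal sketch of such an haproxy setup (simplified, so treat the exact file contents and options as assumptions about my environment):

    # sketch: minimal TCP pass-through config on the LB (10.240.0.16)
    cat <<'EOF' | sudo tee /etc/haproxy/haproxy.cfg
    frontend kube-apiserver
        bind *:6443
        mode tcp
        default_backend kube-masters
    backend kube-masters
        mode tcp
        balance roundrobin
        server kb8-master1 10.240.0.4:6443 check
        server kb8-master2 10.240.0.33:6443 check
        server kb8-master3 10.240.0.75:6443 check
    EOF
    sudo systemctl restart haproxy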
I have set up kb8-master1 and copied the following files to the rest of the masters (kb8-master2 and kb8-master3) as instructed.

On kb8-master2:
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
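For completeness, the files above were produced on kb8-master1 and copied over with a small scp loop, roughly like the one in the guide (USER and the target IPs are assumptions about my environment):

    # run on kb8-master1; copies the shared certs/keys to the other masters
    USER=ubuntu                                  # assumption: remote login user
    CONTROL_PLANE_IPS="10.240.0.33 10.240.0.75"
    for host in ${CONTROL_PLANE_IPS}; do
        scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
        scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
        scp /etc/kubernetes/admin.conf "${USER}"@$host:
    done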
After that I ran the following commands on kb8-master2.
> `sudo kubeadm alpha phase certs all --config kubeadm-config.yaml`
Output:
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kb8-master2 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kb8-master2 localhost] and IPs [10.240.0.33 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kb8-master2 kubernetes kubernetes.default kubernetes.default.svc
kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.33]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
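(As a sanity check that the regenerated apiserver cert really carries the SANs listed above, something like this can be used; the openssl invocation is my own addition, not part of the guide:)

    openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'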
>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`
Output:
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
>`sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`
Output:
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
>`sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`
Output:
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
>`sudo systemctl start kubelet`
>`export KUBECONFIG=/etc/kubernetes/admin.conf`
>`sudo kubectl exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 member add kb8-master2 https://10.240.0.33:2380`
Output: The connection to the server localhost:8080 was refused - did you specify the right host or port?
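One thing I am unsure about: sudo starts a fresh environment, so the KUBECONFIG I exported may not be visible to `sudo kubectl`. A variant that passes it through explicitly (my own guess, not from the guide):

    # pass the kubeconfig through sudo explicitly
    sudo env KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-kb8-master1 -- \
        etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt \
                --cert-file /etc/kubernetes/pki/etcd/peer.crt \
                --key-file /etc/kubernetes/pki/etcd/peer.key \
                --endpoints=https://10.240.0.4:2379 \
        member add kb8-master2 https://10.240.0.33:2380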
Note: I can now run `kubectl get po -n kube-system` on kb8-master2 and see the pods.
>`sudo kubeadm alpha phase etcd local --config kubeadm-config.yaml`

No output.
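(To confirm that this phase actually did something, I checked the etcd static pod manifest it is supposed to write; this check is my own, not from the guide:)

    # the etcd static pod manifest should now reference the joined cluster
    sudo grep -E 'initial-cluster|initial-cluster-state|listen-peer-urls' \
        /etc/kubernetes/manifests/etcd.yaml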
>`sudo kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml`

Output:

a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL
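(To see which URL it is complaining about, I inspected the admin.conf that was copied over; again my own check, not from the guide:)

    # shows the API server URL the existing kubeconfig points at
    grep 'server:' /etc/kubernetes/admin.conf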
I am really stuck here. For reference, below is the kubeadm-config.yaml file I am using on kb8-master2:
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: InitConfiguration
    kubernetesVersion: v1.12.2
    apiServerCertSANs:
    - "10.240.0.16"
    controlPlaneEndpoint: "10.240.0.16:6443"
    etcd:
      local:
        extraArgs:
          listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
          advertise-client-urls: "https://10.240.0.33:2379"
          listen-peer-urls: "https://10.240.0.33:2380"
          initial-advertise-peer-urls: "https://10.240.0.33:2380"
          initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
          initial-cluster-state: existing
        serverCertSANs:
          - kb8-master2
          - 10.240.0.33
        peerCertSANs:
          - kb8-master2
          - 10.240.0.33
    networking:
      podSubnet: "10.244.0.0/16"
Has anyone faced the same issue? I am completely stuck here.
Is there any reason you're individually executing all of the init and join tasks rather than just using init and join outright? Kubeadm is supposed to be extremely trivial to use.
Create your `initConfiguration` and `clusterConfiguration` manifests and put them in the same file on your master. Then create a `nodeConfiguration` manifest and drop it in a file on your nodes. On your master run `kubeadm init --config=/location/master.yml`, and then on your nodes run `kubeadm join 1.2.3.4:6443 --token <token>`. Rather than stepping through the docs for how init and join work internally, subtask by subtask, step through this document and let kubeadm's automation build the cluster much more easily.
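A rough sketch of what that combined master file could look like (the addresses reuse the ones from your question; the path, token, and hash are placeholders):

    # /location/master.yml - both manifests in one file, separated by "---"
    cat <<'EOF' > /location/master.yml
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: ClusterConfiguration
    kubernetesVersion: v1.12.2
    apiServerCertSANs:
    - "10.240.0.16"
    controlPlaneEndpoint: "10.240.0.16:6443"
    networking:
      podSubnet: "10.244.0.0/16"
    EOF
    kubeadm init --config=/location/master.yml

    # on each node, using the token/hash printed by kubeadm init:
    kubeadm join 10.240.0.16:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>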