Kubernetes Multi Master setup

6/6/2018

[SOLVED] Flannel does not work with this setup, so I changed to Weave Net. Use Weave Net if you do not want to provide the pod-network-cidr: "10.244.0.0/16" setting in the config.yaml.

I want to build a multi-master setup with Kubernetes and have tried a lot of different approaches. Even the last approach I took does not work. The problem is that the DNS and the flannel network plugin do not start; they always end up in the CrashLoopBackOff status. The way I did it is listed below.

First, create an external etcd cluster with this command on every node (only the addresses change):

nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
  --listen-peer-urls http://192.168.100.110:2380 \
  --listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.100.110:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
  --initial-cluster-state new &
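
Before continuing, it is worth checking that the three members actually formed a cluster. A quick sanity check with etcdctl, assuming the same endpoints as above:

etcdctl --endpoints http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379 cluster-health

ETCDCTL_API=3 etcdctl --endpoints=192.168.100.110:2379,192.168.100.108:2379,192.168.100.104:2379 member list

All three members should be listed and reported as healthy before kubeadm init is run.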

Then I created a config.yaml file for the kubeadm init command.

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"

Start command: kubeadm init --config /root/config.yaml

Now copy /etc/kubernetes/pki and the config.yaml to the other nodes and start the other master nodes the same way, for example as sketched below. But it does not work.
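
One way to do the copy, sketched with scp and assuming root SSH access between the masters (paths match the ones used above):

scp -r /etc/kubernetes/pki root@192.168.100.108:/etc/kubernetes/
scp /root/config.yaml root@192.168.100.108:/root/
scp -r /etc/kubernetes/pki root@192.168.100.104:/etc/kubernetes/
scp /root/config.yaml root@192.168.100.104:/root/

Then kubeadm init --config /root/config.yaml is run on kube2 and kube3 as well.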

So what is the right way to initialize a multi-master Kubernetes cluster, or why does my flannel network not start?

Status from a flannel pod:

Events:
  Type     Reason                 Age               From            Message
  ----     ------                 ----              ----            -------
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "run"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "cni"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-cfg"
  Normal   Pulling                8m                kubelet, kube2  pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Pulled                 8m                kubelet, kube2  Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Created                8m                kubelet, kube2  Created container
  Normal   Started                8m                kubelet, kube2  Started container
  Normal   Pulled                 8m (x4 over 8m)   kubelet, kube2  Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
  Normal   Created                8m (x4 over 8m)   kubelet, kube2  Created container
  Normal   Started                8m (x4 over 8m)   kubelet, kube2  Started container
  Warning  BackOff                3m (x23 over 8m)  kubelet, kube2  Back-off restarting failed container
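
The events only show the restart loop; the actual error ends up in the container log. Something like the following should reveal it (the pod name is a placeholder, and the container in the flannel DaemonSet is usually called kube-flannel):

kubectl -n kube-system get pods | grep flannel
kubectl -n kube-system logs kube-flannel-ds-xxxxx -c kube-flannel --previous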

etcd version

etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Last lines of the nohup output from etcd:

2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
-- Theroth
docker
kubeadm
kubectl
kubernetes
orchestration

2 Answers

6/7/2018

If you do not have any hosting preferences and you are OK with creating the cluster on AWS, then it can be done very easily using kops.

https://github.com/kubernetes/kops

Via kops you can easily configure the autoscaling group for the masters and specify the number of masters and nodes required for your cluster, as in the sketch below.
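
For example, a cluster with three masters can be requested on the command line; a sketch where the cluster name, state store, and zones are placeholders to replace with your own:

kops create cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count=3 \
  --node-count=3 \
  --yes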

-- Amit Meena
Source: StackOverflow

6/7/2018

Flannel does not work with this setup, so I changed to Weave Net. Use Weave Net if you do not want to provide the pod-network-cidr: "10.244.0.0/16" setting in the config.yaml.
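
For reference, Weave Net is installed with a single kubectl apply of the manifest from the Weave documentation (assuming kubectl already talks to the cluster):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"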

-- Theroth
Source: StackOverflow