With the help of Kubespray I got a Kubernetes cluster running on 3 machines. Two of them (node1, node2) are master nodes, and all three (node1, node2, node3) are worker nodes. It should therefore meet my requirement of being a highly available cluster. I wanted to test the availability when some nodes are down and see how the cluster reacts.
The problem: I bring down node2 and node3, so only node1 is left running. When I try kubectl get nodes
on node1, it returns: The connection to the server 10.1.1.44:6443 was refused - did you specify the right host or port?
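For context, these are the kinds of diagnostic commands one could run on node1 in this situation (a sketch only; the container runtime and unit names assume Kubespray's default docker/systemd setup and may differ on your deployment):

```shell
# Is the kube-apiserver container still running on node1?
docker ps --filter "name=kube-apiserver" --format "{{.Names}}: {{.Status}}"

# What is the kubelet (which manages the apiserver static pod) logging?
journalctl -u kubelet --no-pager | tail -n 20

# Is anything still listening on the secure port 6443?
ss -tlnp | grep 6443
```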
What's strange: when node1 and node2 (i.e. all masters) are running, the API works as it should. But as soon as just one master is down, the API returns the message above.
I expected node1 to keep working without the other master/worker nodes. Am I missing something here?
Used: kubespray v2.11.0
Edited group_vars/all/all.yml:
## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
loadbalancer_apiserver_type: nginx
My hosts.yml:
all:
  hosts:
    node1:
      ansible_host: 10.1.1.44
      ip: 10.1.1.44
      access_ip: 10.1.1.44
    node2:
      ansible_host: 10.1.1.45
      ip: 10.1.1.45
      access_ip: 10.1.1.45
    node3:
      ansible_host: 10.1.1.46
      ip: 10.1.1.46
      access_ip: 10.1.1.46
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
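Since the etcd group above has three members, one thing I could check in the failure scenario is the health of the remaining etcd member on node1. A sketch of such a check (the certificate paths and file names assume Kubespray's default etcd layout under /etc/ssl/etcd/ssl; adjust them to your deployment):

```shell
# Query the local etcd member's health directly (hypothetical paths for this cluster)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.1.1.44:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-node1.pem \
  --key=/etc/ssl/etcd/ssl/admin-node1-key.pem \
  endpoint health
```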