I deployed a cluster with kops in AWS across 3 AZs, and I want to run one master in each AZ. Everything else works, but the master in one AZ never starts.
Here is the output of kops validate cluster:
INSTANCE GROUPS
NAME               ROLE     MACHINETYPE  MIN  MAX  SUBNETS
bastions           Bastion  t2.micro     1    1    utility-us-east-1a,utility-us-east-1c,utility-us-east-1d
master-us-east-1a  Master   m3.medium    1    1    us-east-1a
master-us-east-1c  Master   m3.medium    2    2    us-east-1c
master-us-east-1d  Master   m3.medium    1    1    us-east-1d
nodes              Node     m4.xlarge    3    3    us-east-1a,us-east-1c,us-east-1d
workers            Node     m4.2xlarge   2    2    us-east-1a,us-east-1c,us-east-1d

NODE STATUS
NAME                          ROLE    READY
ip-10-0-100-34.ec2.internal   node    True
ip-10-0-107-127.ec2.internal  master  True
ip-10-0-120-160.ec2.internal  node    True
ip-10-0-35-184.ec2.internal   node    True
ip-10-0-39-224.ec2.internal   master  True
ip-10-0-59-109.ec2.internal   node    True
ip-10-0-87-169.ec2.internal   node    True

VALIDATION ERRORS
KIND           NAME               MESSAGE
InstanceGroup  master-us-east-1c  InstanceGroup "master-us-east-1c" did not have enough nodes 0 vs 2

Validation Failed
And when I run kops rolling-update cluster, it shows that one master has not started:
NAME               STATUS  NEEDUPDATE  READY  MIN  MAX  NODES
bastions           Ready   0           1      1    1    0
master-us-east-1a  Ready   0           1      1    1    1
master-us-east-1c  Ready   0           0      1    1    0
master-us-east-1d  Ready   0           1      1    1    1
nodes              Ready   0           3      3    3    3
workers            Ready   0           2      2    2    2
What should I do to bring that machine up?
I solved this problem. The cause was that the m3.medium instance type (the kops default for masters) is no longer available in that AZ. Changing the instance group's machine type to m4.large made the master come up.
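
To confirm this kind of diagnosis before editing anything, you can ask AWS which instance types an AZ actually offers. A minimal sketch with the AWS CLI (assuming a CLI version that supports describe-instance-type-offerings):

# List offerings of m3.medium in us-east-1c; an empty
# InstanceTypeOfferings list means the type is not offered there
aws ec2 describe-instance-type-offerings \
    --region us-east-1 \
    --location-type availability-zone \
    --filters Name=location,Values=us-east-1c Name=instance-type,Values=m3.medium

And a sketch of the fix using the standard kops workflow (CLUSTER_NAME here is a placeholder for your cluster name):

# Open the spec of the failing instance group in an editor
kops edit ig master-us-east-1c --name $CLUSTER_NAME
# In the editor, change the machine type:
#   spec:
#     machineType: m4.large   # was m3.medium
# Push the change to AWS and replace the old master instance
kops update cluster --name $CLUSTER_NAME --yes
kops rolling-update cluster --name $CLUSTER_NAME --yes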