How to assign existing elastic IP to master nodes of kops cluster in AWS

10/12/2020

I am trying to deploy a kOps cluster in AWS without using a Route53 DNS configuration. I am quite new to kOps and do not have much knowledge of network topology. My cluster will have 3 master nodes.

According to my requirements, I need to access the services running inside this kOps cluster from clients (outside of the kOps cluster). So, I would like to assign pre-created Elastic IPs to all master nodes, so that clients can use those Elastic IPs to access the services running inside the cluster.

My question is: how can I assign pre-created Elastic IPs to all master nodes during kOps cluster creation?

Below is the command I am currently using to create the kOps cluster:

kops create cluster \
    --state=${KOPS_STATE_STORE} \
    --master-zones=${MASTER_ZONES} \
    --zones=${ZONES} \
    --name=test-kops.k8s.local \
    --vpc=${VPC_ID} \
    --image="099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907" \
    --master-volume-size=40 \
    --master-count=${Master_Count} \
    --node-volume-size=40 \
    --node-count=${Node_Count} \
    --networking=amazon-vpc-routed-eni \
    --subnets=${SUBNET_IDS} \
    --utility-subnets=${SUBNET_IDS} \
    --network-cidr=${NETWORK_CIDR} \
    --ssh-public-key=~/.ssh/id_rsa.pub \
    --dry-run -oyaml > cluster.yaml

kops create -f cluster.yaml

kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

kops update cluster test-kops.k8s.local --yes

cluster.yaml

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: test-kops.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://{s3url}
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-ap-southeast-1a-1
      name: "1"
    - instanceGroup: master-ap-southeast-1a-2
      name: "2"
    - instanceGroup: master-ap-southeast-1a-3
      name: "3"
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-ap-southeast-1a-1
      name: "1"
    - instanceGroup: master-ap-southeast-1a-2
      name: "2"
    - instanceGroup: master-ap-southeast-1a-3
      name: "3"
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.17.12
  masterPublicName: api.test-kops.k8s.local
  networkCIDR: {vpcCIDR}
  networkID: {vpcID}
  networking:
    amazonvpc: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: {subnetCIDR}
    id: {subnetID}
    name: ap-southeast-1a
    type: Public
    zone: ap-southeast-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: test-kops.k8s.local
  name: master-ap-southeast-1a-1
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-southeast-1a-1
  role: Master
  rootVolumeSize: 40
  subnets:
  - ap-southeast-1a
  additionalSecurityGroups:
  - {securityGroup}

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: test-kops.k8s.local
  name: master-ap-southeast-1a-2
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-southeast-1a-2
  role: Master
  rootVolumeSize: 40
  subnets:
  - ap-southeast-1a
  additionalSecurityGroups:
  - {securityGroup}

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: test-kops.k8s.local
  name: master-ap-southeast-1a-3
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-southeast-1a-3
  role: Master
  rootVolumeSize: 40
  subnets:
  - ap-southeast-1a
  additionalSecurityGroups:
  - {securityGroup}

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: test-kops.k8s.local
  name: nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907
  machineType: t3.medium
  maxSize: 5
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 40
  subnets:
  - ap-southeast-1a
  additionalSecurityGroups:
  - {securityGroup}
-- Kyaw Min Thu L
amazon-web-services
kops
kubernetes

1 Answer

12/27/2020

Since the control plane nodes are running in Auto Scaling groups (ASGs), you cannot assign Elastic IPs directly to the EC2 instances. You have to go through the ELB to access them, and the ELB cannot carry Elastic IPs either.
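(For completeness: the AWS CLI does let you attach an EIP by hand to whichever instance the ASG is currently running, but the association is lost every time the ASG rotates the instance, which is why this is not a workable approach. A sketch, with placeholder IDs:)

```shell
# Illustration only; both IDs are placeholders. This works once, but the
# ASG will eventually replace this instance, and the replacement comes up
# without the EIP, so nothing keeps the association alive.
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```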

An alternative is to use a DNS record, but since the cluster above uses gossip (the `.k8s.local` name suffix), that doesn't apply here.
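As a sketch, the DNS-based alternative would mean creating the cluster under a real domain instead of the gossip `.k8s.local` suffix, so clients reach the cluster via stable DNS names rather than fixed IPs (the domain below is a hypothetical example and requires a matching Route53 hosted zone):

```shell
# Hypothetical DNS-based variant of the original command: the cluster name
# ends in a real domain, so kops manages Route53 records for the API and
# no Elastic IPs are needed on the masters.
kops create cluster \
    --state=${KOPS_STATE_STORE} \
    --zones=${ZONES} \
    --name=test-kops.example.com \
    --dns public \
    --dry-run -oyaml > cluster.yaml
```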

In kOps 1.19, one can use an NLB for the control plane, but at the moment kOps does not support specifying an EIP for it. And since you want to use the IPs to access services other than the API, this is probably not what you want anyway.
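For context, an NLB (unlike a classic ELB) can carry pre-allocated EIPs, one per public subnet, via subnet mappings; this is the piece kOps does not yet let you configure. A hedged AWS CLI sketch of what that looks like when done by hand (all names and IDs are placeholders):

```shell
# Placeholder name and IDs throughout. An NLB accepts one EIP allocation
# per public subnet through --subnet-mappings; a classic ELB has no such
# option, which is why the kOps-managed API ELB cannot carry your EIPs.
aws elbv2 create-load-balancer \
    --name test-kops-api \
    --type network \
    --scheme internet-facing \
    --subnet-mappings SubnetId=subnet-0123456789abcdef0,AllocationId=eipalloc-0123456789abcdef0
```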

-- Ole Markus With
Source: StackOverflow