How to change/set k8s master node internal-ip or public-ip?

11/13/2019

I have installed k3s on a Cloud VM. (k3s is very similar to k8s.)

The k3s server starts as the master node.

The master node's labels show the internal-ip as 192.168.xxx.xxx, and its annotations show the public-ip as 192.168.xxx.xxx as well.

But the real public IP of the Cloud VM is 49.xx.xx.xx, so an agent on another machine cannot connect to this master node, because the agent always tries to connect to the proxy at "wss://192.168.xxx.xxx:6443/...".

If I run ifconfig on the Cloud VM, the public IP (49.xx.xx.xx) does not show up, so k3s cannot find the right internal or public IP.

I tried to start k3s with --bind-address=49.xx.xx.xx, but it fails to start. I guess no NIC is bound to this IP address.

How can I resolve this problem? Should I try to create a virtual network interface with address 49.xx.xx.xx?

-- alen
k3s
kubernetes

2 Answers

3/18/2020

I had the same problem and finally found a solution. Start your server with --node-external-ip, for example sudo k3s server --node-external-ip 49.xx.xx.xx. The agent then needs the matching environment variables, or can be started with sudo k3s agent --server https://49.xx.xx.xx:6443 --token ${K3S_TOKEN}. After that, a local device (edge node) behind a private IP can connect to the public cloud server.

The flag's usage text is: "(listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip)".
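
Putting the two commands together, a minimal sketch (with 49.xx.xx.xx and K3S_TOKEN standing in for your real address and token) looks like this:

# On the cloud VM (master), advertise the public address to agents:
$ sudo k3s server --node-external-ip 49.xx.xx.xx

# On the local device (agent), point at the public address and pass the node token
# (found at /var/lib/rancher/k3s/server/node-token on the master):
$ sudo k3s agent --server https://49.xx.xx.xx:6443 --token ${K3S_TOKEN}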

The picture below shows my computer connected to the remote server. I tested it, and a Raspberry Pi 4B also connected successfully.

[screenshot: the local machine connected to the remote server]

The load balancer does not rewrite the public IP to the private one. Running git blame, I found that this flag was added on 2019-10-26.

-- Papandadj
Source: StackOverflow

11/15/2019

The best option for connecting the Kubernetes master and its nodes is to use a private network.

How to set up a K3S master and single-node cluster:

Prerequisites:

  • All the machines need to be inside the same private network, for example 192.168.0.0/24
  • All the machines need to be able to communicate with each other. You can check that with: $ ping IP_ADDRESS

In this example there are 2 virtual machines:

  • Master node (k3s) with a private IP of 10.156.0.13
  • Worker node (k3s-2) with a private IP of 10.156.0.8


Establish a connection between the VMs

The most important thing is to check that the machines can reach each other. As mentioned above, the easiest way is simply to ping them.
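
For the example machines above, that check looks something like this (using the illustrative private IPs from the list):

# From the master (10.156.0.13), check that the worker answers:
$ ping -c 3 10.156.0.8

# From the worker (10.156.0.8), check the master:
$ ping -c 3 10.156.0.13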

Provision master node

To install K3S on the master node, invoke this command as the root user:

$ curl -sfL https://get.k3s.io | sh -

The output of this command should look like this:

[INFO]  Finding latest release
[INFO]  Using v0.10.2 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
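
The install script also registers k3s as a systemd service (see the log above), so if the node does not come up you can inspect it directly. This is just a generic sanity check, not part of the original walkthrough:

$ systemctl status k3s       # is the service running?
$ journalctl -u k3s -f       # follow the server logs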

Check that the master node is working:

$ kubectl get nodes

The output of the above command should look like this:

NAME   STATUS   ROLES    AGE     VERSION
k3s    Ready    master   2m14s   v1.16.2-k3s.1

Retrieve the IMPORTANT_TOKEN from the master node with this command:

$ cat /var/lib/rancher/k3s/server/node-token

This token will be used to connect the agent node to the master node. Copy it.
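
As a small convenience (optional, and just one way to do it), you can capture the token into a shell variable on the master using the same path:

$ IMPORTANT_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
$ echo "$IMPORTANT_TOKEN"    # confirm the token was read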

Connect agent node to master node

Ensure that the node can communicate with the master. After that, invoke this command as the root user:

$ curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_NODE_IP:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -

Paste your IMPORTANT_TOKEN into this command.

In this case the MASTER_NODE_IP is 10.156.0.13.

Output of this command should look like this:

[INFO]  Finding latest release
[INFO]  Using v0.10.2 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

Test

Invoke this command on the master node to check whether the agent connected successfully:

$ kubectl get nodes

The node you added earlier should be visible here:

NAME    STATUS   ROLES    AGE     VERSION
k3s     Ready    master   15m     v1.16.2-k3s.1
k3s-2   Ready    <none>   3m19s   v1.16.2-k3s.1

The above output confirms that the provisioning has completed correctly.

EDIT1: From this point you can deploy pods and expose them to the public IP space.
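
For example, a minimal way to try that out (the nginx image, deployment name, and NodePort service type here are arbitrary illustrations, not part of the original setup):

# Create a simple deployment and expose it via a NodePort on every node:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=NodePort --port=80

# Check which port was assigned, then open http://NODE_PUBLIC_IP:NODE_PORT
$ kubectl get service nginx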

EDIT2:

You can connect the K3S master and worker nodes over a public IP network, but there are some prerequisites.

Prerequisites:

  • The master node needs to have port 6443/TCP open
  • Ensure that the master node has a reserved static IP address
  • Ensure that firewall rules are configured to allow access only from the worker nodes' IP addresses (static IP addresses for the nodes help with that); see the sketch after this list
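
A minimal sketch of such a firewall rule, assuming the host uses ufw (on a cloud VM the provider's security groups may be the right place for this instead):

# Allow one worker's public IP to reach the k3s API server port; repeat per worker:
$ sudo ufw allow from WORKER_NODE_PUBLIC_IP to any port 6443 proto tcp

# Verify the rules:
$ sudo ufw status numbered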

Provisioning of master node

The deployment of the master node is the same as above. The only difference is that you need to get its public IP address.

Your master node does not need to show its public IP in the output of commands like:

  • $ ip a
  • $ ifconfig

Provisioning worker nodes

The deployment of worker nodes differs only in that the master node's IP address is changed from the private one to the public one. Invoke this command as the root user:

$ curl -sfL https://get.k3s.io | K3S_URL=https://PUBLIC_IP_OF_MASTER_NODE:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -

Testing the cluster

To ensure that the nodes are connected properly, invoke this command:

$ kubectl get nodes

The output should be something like this:

NAME    STATUS   ROLES    AGE   VERSION
k3s-4   Ready    <none>   68m   v1.16.2-k3s.1
k3s-1   Ready    master   69m   v1.16.2-k3s.1
k3s-3   Ready    <none>   69m   v1.16.2-k3s.1
k3s-2   Ready    <none>   68m   v1.16.2-k3s.1

All of the nodes should be visible here.
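
As an aside (not part of the original answer), to double-check which addresses the cluster recorded for each node, which is what the original question asks about, one more standard command is useful:

# The wide output adds INTERNAL-IP and EXTERNAL-IP columns for every node:
$ kubectl get nodes -o wide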

-- Dawid Kruk
Source: StackOverflow