Can join the cluster, but unable to fetch kubeadm-config

3/15/2019

I am following the answer here, step 6, to build my own local Minikube cluster with a single master and 2 worker nodes.

The master is named minikube.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.13.3

Log in to the Minikube console with minikube ssh.

Then check the IP addresses with ifconfig:

$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:0E:E5:B4:9C
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:eff:fee5:b49c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18727 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21337 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1621416 (1.5 MiB)  TX bytes:6858635 (6.5 MiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:04:9E:5F
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe04:9e5f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:139646 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11964 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:202559446 (193.1 MiB)  TX bytes:996669 (973.3 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:10:7A:A5
          inet addr:192.168.99.105  Bcast:192.168.99.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe10:7aa5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2317 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2231 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:197781 (193.1 KiB)  TX bytes:199788 (195.1 KiB)

Therefore my Minikube IP address is 192.168.99.105.

On my VM node, I have checked that both machines are using the same networks. The networks are:

  1. NAT

  2. Host-only Adapter, Name: vboxnet0

Here is the nmap proof that no firewall is blocking the connection port.

I execute kubeadm join to join the cluster. Using the exact output from the CLI is even worse, because the printed command points to localhost; when the joining node executes it, it calls itself, which is wrong, and the terminal then shows a timeout error.

kubeadm join 192.168.99.105:8443 --token 856tch.tpccuji4nnc2zq5g --discovery-token-ca-cert-hash sha256:cfbb7a0f9ed7fca018b45fdfecb753a88aec64d4e46b5ac9ceb6d04bbb0a46a6
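A workaround sketch (not an official fix): when the printed join command points at localhost, rewrite the endpoint to the master's host-only IP before running it on the worker. The substitution below uses the token, hash, and master IP from this question, and assumes the printed command referenced localhost:8443.

```shell
# Join command as printed by the CLI (assumed to reference localhost:8443)
JOIN_CMD='kubeadm join localhost:8443 --token 856tch.tpccuji4nnc2zq5g --discovery-token-ca-cert-hash sha256:cfbb7a0f9ed7fca018b45fdfecb753a88aec64d4e46b5ac9ceb6d04bbb0a46a6'

# The reachable master address found via ifconfig above
MASTER_IP=192.168.99.105

# Replace the localhost endpoint with the master's host-only IP
FIXED_CMD=$(printf '%s\n' "$JOIN_CMD" | sed "s/localhost:8443/$MASTER_IP:8443/")
printf '%s\n' "$FIXED_CMD"
```

Running the rewritten command on the worker then targets the master instead of the worker itself.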

kubeadm gives me localhost back!

Surely enough, I did not get any new node:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   104m   v1.13.3

Questions:

  1. How can I make kubeadm use the IP address I give on the CLI?

  2. How can I prevent localhost from coming back during the process?

-- Sarit
kubeadm
kubectl
kubernetes
localhost
minikube

2 Answers

3/16/2019

In step 2, you should run this command:

kubeadm token create --print-join-command

That should provide the exact syntax you need to add a worker node to your cluster. Don't change anything.
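For reference, the printed line has the general form kubeadm join <apiserver>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>. The line below is illustrative only, assembled from the values in the question rather than real command output:

```shell
# Illustrative shape of the --print-join-command output
# (endpoint, token and hash taken from the question, not real output)
echo 'kubeadm join 192.168.99.105:8443 --token 856tch.tpccuji4nnc2zq5g --discovery-token-ca-cert-hash sha256:cfbb7a0f9ed7fca018b45fdfecb753a88aec64d4e46b5ac9ceb6d04bbb0a46a6'
```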

-- tenbosch
Source: StackOverflow

3/21/2019

This seems to be an issue with the current Minikube code, which I guess has changed since the post was made. Take a look at https://github.com/kubernetes/minikube/issues/3916. I've managed to join a second node by DNATting 127.0.0.1:8443 to the original minikube master.

Just for the record, I added an /etc/rc.local on the second node with the following (replace LOCAL_IF, MASTER_IP and WORKER_IP with sensible data):

#!/bin/sh
# Allow locally-destined (127.0.0.1) traffic to be rerouted off this interface
echo 1 > /proc/sys/net/ipv4/conf/<LOCAL_IF>/route_localnet
# Redirect traffic aimed at 127.0.0.1:8443 to the real master
/sbin/iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --destination-port 8443 \
    -j DNAT --to-destination <MASTER_IP>:8443
# Rewrite the source so replies come back to this worker
/sbin/iptables -t nat -A POSTROUTING -p tcp -s 127.0.0.1 -d <MASTER_IP> \
    --dport 8443 -j SNAT --to <WORKER_IP>
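As a sketch of what that file ends up containing once the placeholders are filled in, the generator below substitutes concrete values so the rules can be reviewed before installing them. LOCAL_IF=eth1 and WORKER_IP=192.168.99.106 are assumptions for this setup; MASTER_IP is the Minikube address from the question.

```shell
# Assumed values: eth1 and 192.168.99.106 are hypothetical for this setup
LOCAL_IF=eth1
MASTER_IP=192.168.99.105
WORKER_IP=192.168.99.106

# Build the concrete /etc/rc.local content; redirect to the file to install it
RC_LOCAL=$(cat <<EOF
#!/bin/sh
echo 1 > /proc/sys/net/ipv4/conf/$LOCAL_IF/route_localnet
/sbin/iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 8443 -j DNAT --to-destination $MASTER_IP:8443
/sbin/iptables -t nat -A POSTROUTING -p tcp -s 127.0.0.1 -d $MASTER_IP --dport 8443 -j SNAT --to $WORKER_IP
EOF
)
printf '%s\n' "$RC_LOCAL"
```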

But the problems did not end there. Installing Flannel with:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

worked (after allocating node CIDRs via the controller manager), but my second node somehow had a different kubelet installation that used cni as its network plugin, and it ended up creating a new bridge (cni0) that clashed with the Docker network.

There are many things that have to work together for this to fly.

-- Carlos Mendioroz
Source: StackOverflow