kubernetes join failed hostname "" a DNS-1123

7/21/2017

I am new to Kubernetes, so some of my questions may be basic.

My setup: 2 VMs (running Ubuntu 16.04.2)

Kubernetes version: 1.7.1 on both the master node (kube4local) and the slave node (kube5local).

My steps:

1. On both the master and slave nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker package (docker.io).
2. On the master node, ran kubeadm init:
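Step 1 can be sketched roughly as follows (a hedged sketch for Ubuntu 16.04; the repository-setup lines are assumptions based on the standard kubeadm install instructions of that era, only the package names come from the question):

```shell
# Add the Kubernetes apt repository (assumed setup; run as root).
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list

# Install Docker and the Kubernetes packages named in the question.
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
```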

vagrant@kube4local:~$ sudo kubeadm init  
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube4local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1051.552012 seconds
[token] Using token: 3c68b6.8c3f8d5a0a29a3ac
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443

vagrant@kube4local:~$ mkdir -p $HOME/.kube
vagrant@kube4local:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kube4local:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@kube4local:~$ sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

On the Slave Node:

Note: a basic ping test works, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.

Ran the join command. Output of the join command on the slave node:

vagrant@kube5local:~$ sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
        hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

Why do I get this error? My /etc/hosts is correct:

[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
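The empty hostname in those warnings suggests kubeadm could not determine the node's hostname at all, regardless of what is in /etc/hosts. A quick way to see what the preflight check sees (an illustrative diagnostic, not from the question; the regex is the one quoted in the fatal error):

```shell
# kubeadm derives the node name from the system hostname; if the hostname is
# empty or unresolvable, the preflight checks above fail.
name="$(hostname)"
echo "hostname: '${name}'"

# Check resolvability, as kubeadm's preflight lookup does.
getent hosts "${name}" || echo "hostname does not resolve"

# Validate against the DNS-1123 subdomain regex quoted in the error message.
dns1123='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
if printf '%s\n' "${name}" | grep -Eq "${dns1123}"; then
  echo "valid DNS-1123 subdomain"
else
  echo "invalid or empty hostname"
fi
```

If the first line prints an empty string, the problem is the node's hostname configuration, not /etc/hosts.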

Output of status commands on the master node:

vagrant@kube4local:~$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443

vagrant@kube4local:~$ sudo kubectl get nodes
NAME         STATUS    AGE       VERSION
kube4local   Ready     26m       v1.7.1

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Output of ifconfig on Master Node(kube4local):

vagrant@kube4local:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:3a:c4:00:50
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 08:00:27:19:2c:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:260314 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58921 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:334293914 (334.2 MB)  TX bytes:3918136 (3.9 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:b8:ef:b6
          inet addr:192.168.56.104  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36412 (36.4 KB)  TX bytes:25999 (25.9 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:19922 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19922 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1996565 (1.9 MB)  TX bytes:1996565 (1.9 MB)

Output of /etc/hosts on Master Node(kube4local):

vagrant@kube4local:~$ cat /etc/hosts
192.168.56.104 kube4local   kube4local
192.168.56.105 kube5local   kube5local
127.0.1.1       vagrant.vm      vagrant
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Output of ifconfig on Slave Node(kube5local):

vagrant@kube5local:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:bb:37:ab:35
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 08:00:27:19:2c:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:163514 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39792 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:207478954 (207.4 MB)  TX bytes:2660902 (2.6 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:6a:f0:51
          inet addr:192.168.56.105  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:195 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30463 (30.4 KB)  TX bytes:26737 (26.7 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Output of /etc/hosts on Slave Node (kube5local):

vagrant@kube5local:~$ cat /etc/hosts
192.168.56.104 kube4local   kube4local
192.168.56.105 kube5local   kube5local
127.0.1.1       vagrant.vm      vagrant
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
-- nat

1 Answer

7/21/2017

Nat, this is a bug in version v1.7.1. You can use version v1.7.0, or skip the pre-flight check:

kubeadm join --skip-preflight-checks --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
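If you would rather address the root cause than skip the preflight checks, making sure the slave node reports a valid hostname before joining should also clear the empty-hostname errors (a suggestion beyond the original answer; the node name is the one from the question):

```shell
# On the slave node, set a valid DNS-1123 hostname before running kubeadm join.
# (hostnamectl is available on systemd-based Ubuntu 16.04.)
sudo hostnamectl set-hostname kube5local
hostname   # should now print kube5local
```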

You can refer to this thread for more details:

kubernets v1.7.1 kubeadm join hostname "" could not be reached error

-- sfgroups
Source: StackOverflow