Kubernetes master node is not receiving internal IP

3/28/2019

I have followed the official guides and started a simple 3-node cluster, but the command kubectl get nodes -o wide prints this result:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP  
node2   Ready    master   12h   v1.13.4   <none>
node3   Ready    <none>   12h   v1.13.4   192.168.1.47 
node4   Ready    <none>   12h   v1.13.4   192.168.1.48

Please note the INTERNAL-IP of node2 (which is the master node): it is <none>.

Because of that, pods running on node2 do not receive an IP either, even though all of them are system pods (they run with hostNetwork and therefore inherit the node's address).

Environment:

  1. Network: VirtualBox bridged adapter, no NAT whatsoever
  2. Network plugin: Flannel
  3. OS: Ubuntu 18.04 LTS
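For reference, the INTERNAL-IP column is whatever address the kubelet on that node registers, and it can be pinned explicitly with the kubelet's --node-ip flag. A minimal override on a kubeadm-installed Ubuntu node would look like the sketch below (the path is the one kubeadm's systemd drop-in sources on Debian/Ubuntu, and 192.168.1.46 is an assumed placeholder — substitute node2's actual bridged-adapter IP):

```shell
# /etc/default/kubelet -- sourced by the kubeadm kubelet systemd unit on Ubuntu
# Pin the address this kubelet advertises as the node's InternalIP.
# 192.168.1.46 is a placeholder; use node2's real bridged-adapter address.
KUBELET_EXTRA_ARGS=--node-ip=192.168.1.46
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart kubelet`. This is only a sketch of the usual workaround for a kubelet picking no (or the wrong) interface, not a confirmed fix for this cluster.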

Update

Here is the output of kubectl get pods -n kube-system as requested in comments:

NAME                            STATUS          IP             NODE 
coredns-86c58d9df4-d2dv7        Running      10.244.0.52    node2
coredns-86c58d9df4-zwmzg        Running      10.244.0.51    node2
etcd-node2                      Running      <none>         node2
kube-apiserver-node2            Running      <none>         node2
kube-controller-manager-node2   Running      <none>         node2
kube-flannel-ds-amd64-5dpr9     Running      192.168.1.47   node3
kube-flannel-ds-amd64-97h5q     Running      <none>         node2
kube-flannel-ds-amd64-zwlxh     Running      192.168.1.48   node4
kube-proxy-4qlpc                Running      <none>         node2
kube-proxy-c28q9                Running      192.168.1.48   node4
kube-proxy-ntdxj                Running      192.168.1.47   node3
kube-scheduler-node2            Running      <none>         node2

Pods on the master are also getting <none>.

Also, I have created a gist for kubectl describe node node2 here.

-- SHM
flannel
kubernetes

0 Answers