Kubernetes with flannel: CNI config uninitialized

11/12/2018

I am new to Kubernetes and am trying to set up a Kubernetes cluster on local machines: bare metal, no OpenStack, no MAAS or anything like that.

After running kubeadm init ... on the master node and kubeadm join ... on the slave nodes, and after applying flannel on the master, I get this message from the slaves:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Can anyone tell me what I have done wrong or which steps I have missed? Should flannel be applied to all the slave nodes as well? If yes, they do not have an admin.conf...

Thanks a lot!

PS. None of the nodes have internet access. That means all files have to be copied manually via ssh.

-- Matthias
cni
flannel
kubernetes

3 Answers

11/12/2018

Flannel is usually deployed as a DaemonSet, meaning it runs on all worker nodes.
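One way to confirm this (assuming the standard kube-flannel manifest, which names the DaemonSet kube-flannel-ds and labels its pods app=flannel; names may differ in other manifest versions) is to check that a flannel pod is running on every node:

```shell
# The DESIRED/READY counts of the DaemonSet should match the number of nodes.
kubectl -n kube-system get daemonset kube-flannel-ds

# List the flannel pods with the node each one is scheduled on.
kubectl -n kube-system get pods -o wide -l app=flannel
```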

-- Bal Chua
Source: StackOverflow

11/12/2018

The problem was the missing internet connection. After loading the Docker images manually onto the worker nodes, they appeared ready.

Unfortunately, I did not find a helpful error message about this.
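For air-gapped nodes, the images can be pulled on a machine that does have internet access, exported, and copied over ssh. A rough sketch (the image tag, user and hostname are placeholders for illustration):

```shell
# On a machine with internet access: pull the image and export it to a tar archive.
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker save quay.io/coreos/flannel:v0.10.0-amd64 -o flannel.tar

# Copy the archive to the air-gapped worker node.
scp flannel.tar user@worker-node:/tmp/

# On the worker node: import the image into the local Docker daemon.
docker load -i /tmp/flannel.tar
```

The same save/scp/load round trip applies to the other images kubeadm needs (kube-proxy, pause, etc.).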

-- Matthias
Source: StackOverflow

1/22/2019

I think this problem is caused by kubeadm initializing CoreDNS before flannel is installed, so it throws "network plugin is not ready: cni config uninitialized".
Solution:
1. Install flannel with kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
2. Reset the CoreDNS pods:
kubectl -n kube-system delete pod coredns-xx-xx
3. Then run kubectl -n kube-system get pods to see if it works.

If you see the error "cni0" already has an IP address different from 10.244.1.1/24, follow this:

ifconfig  cni0 down
brctl delbr cni0
ip link delete flannel.1
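After deleting the stale bridge and VXLAN interface, a common follow-up (assuming a systemd-managed node running Docker) is to restart the container runtime and kubelet so flannel recreates cni0 with the subnet it was actually assigned:

```shell
# Restart docker and kubelet; the CNI plugin will recreate cni0
# with the correct subnet on the next pod sandbox setup.
systemctl restart docker
systemctl restart kubelet
```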

If you see the error "Back-off restarting failed container", you can get the log with:

root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".

Then check the file /etc/resolv.conf on the failed node; if the nameserver points at localhost, there will be a forwarding loop. Change it to:

#nameserver 127.0.1.1
nameserver 8.8.8.8
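After changing /etc/resolv.conf, the CoreDNS pods have to be restarted so they pick up the corrected upstream. A sketch (the label selector below matches the default CoreDNS deployment; adjust it if your cluster labels the pods differently):

```shell
# Delete the CoreDNS pods; the Deployment recreates them, and the new
# pods read the corrected /etc/resolv.conf from the node.
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```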
-- dahohu527
Source: StackOverflow