Kubernetes Worker Node in Status NotReady

5/1/2018

I have been trying to set up a K8s cluster on a set of Raspberry Pis. Here is a link to my GitHub page that describes the whole setup:

https://github.com/joesan/plant-infra/blob/master/pi/README.md

I'm now stuck at the last step, joining my worker nodes to the master. I issued the join command on the worker nodes, but when I then check the nodes on the master, I see the following:

pi@k8s-master-01:~ $ kubectl get nodes
NAME            STATUS     ROLES     AGE       VERSION
k8s-master-01   Ready      master    56m       v1.9.6
k8s-worker-01   NotReady   <none>    26m       v1.9.6
k8s-worker-02   NotReady   <none>    6m        v1.9.6

The question is: do I need to install a container network like Weave on the worker nodes as well?

Here is the log file from the worker node:

pi@k8s-worker-02:~ $ journalctl -u kubelet
-- Logs begin at Thu 2016-11-03 17:16:42 UTC, end at Tue 2018-05-01 11:35:54 UTC. --
May 01 11:27:28 k8s-worker-02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 01 11:27:30 k8s-worker-02 kubelet[334]: I0501 11:27:30.995549     334 feature_gate.go:226] feature gates: &{{} map[]}
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.005491     334 controller.go:114] kubelet config controller: starting controller
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.005584     334 controller.go:118] kubelet config controller: validating combination of defaults and flags
May 01 11:27:31 k8s-worker-02 kubelet[334]: W0501 11:27:31.052134     334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.084480     334 server.go:182] Version: v1.9.6
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.085670     334 feature_gate.go:226] feature gates: &{{} map[]}
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.092807     334 plugins.go:101] No cloud provider specified.
May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.110132     334 certificate_store.go:130] Loading cert/key pair from ("/var/lib/kubelet/pki/kubelet-client.crt", "/var/lib/
May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.905417     334 machine.go:194] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no suc
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.911993     334 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914203     334 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914272     334 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914895     334 container_manager_linux.go:266] Creating device plugin manager: false
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.919031     334 kubelet.go:291] Adding manifest path: /etc/kubernetes/manifests
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.919197     334 kubelet.go:316] Watching apiserver
May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.935754     334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https:/
May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.937449     334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:480: Failed to list *v1.Node: Get https://192.16
May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.937492     334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:471: Failed to list *v1.Service: Get https://192
May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.948764     334 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back t
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.949871     334 kubelet.go:577] Hairpin mode set to "hairpin-veth"
May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.951008     334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.952122     334 client.go:80] Connecting to docker on unix:///var/run/docker.sock
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.952976     334 client.go:109] Start docker client with request timeout=2m0s
May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.959045     334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.971616     334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.971765     334 docker_service.go:232] Docker cri networking managed by cni
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.002411     334 docker_service.go:237] Docker Info: &{ID:25GN:65LU:UXAR:CUUY:DOQH:ST4A:IQOE:PIDR:BKYC:UVJH:LI5H:HQSG Contai
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.002766     334 docker_service.go:250] Setting cgroupDriver to cgroupfs
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.058142     334 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.098202     334 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.04.0-ce, apiVersion: 1.37.0
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.110512     334 server.go:755] Started kubelet
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.112242     334 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.114014     334 server.go:129] Starting to listen on 0.0.0.0:10250
May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.114962     334 kubelet.go:1281] Image garbage collection failed once. Stats initialization may not have completed yet: fai
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.133665     334 server.go:299] Adding debug handlers to kubelet server.
May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.141790     334 event.go:209] Unable to write event: 'Post https://192.168.0.101:6443/api/v1/namespaces/default/events: dia
May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.175654     334 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.175765     334 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.176241     334 volume_manager.go:247] Starting Kubelet Volume Manager

Any idea as to why my worker nodes show up as NotReady?

EDIT: I traced the error with the kubectl describe nodes command:

Name:               k8s-worker-02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-worker-02
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Tue, 01 May 2018 11:26:50 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Tue, 01 May 2018 11:40:17 +0000   Tue, 01 May 2018 11:26:43 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Tue, 01 May 2018 11:40:17 +0000   Tue, 01 May 2018 11:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 01 May 2018 11:40:17 +0000   Tue, 01 May 2018 11:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            False   Tue, 01 May 2018 11:40:17 +0000   Tue, 01 May 2018 11:26:43 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported

How can I solve this?

-- sparkr
kubernetes

5 Answers

5/1/2018

I managed to fix this! This is how I did it:

$ sudo nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

After that, I commented out the line that contained KUBELET_NETWORK_ARGS.
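
For reference, the line in question looks roughly like this in kubeadm-generated drop-ins of that era (flags quoted from memory, so treat this as a sketch rather than my exact file):

# Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"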

Then I rebooted the system, and now the node shows up in Ready status!
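
A full reboot shouldn't strictly be necessary; reloading the systemd units and restarting the kubelet ought to pick up the change too:

sudo systemctl daemon-reload
sudo systemctl restart kubelet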

-- sparkr
Source: StackOverflow

5/2/2018

You can also install the Flannel CNI plugin as below:

# build the CNI plugin binaries from source (v0.5.2) and install them
git clone https://github.com/containernetworking/cni
cd cni
git checkout v0.5.2
./build.sh
cp bin/* /opt/cni/bin
# create the directory where the CNI network config will live
mkdir -p /etc/cni/net.d

Download kube-flannel.yml and kube-flannel-rbac.yml from the coreos/flannel repository's Documentation directory.
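
If the original link is gone, something like this should fetch them (raw URLs assumed from that Documentation directory; verify against the repository before use):

curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml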

Then apply both manifests, and your node should be Ready:

kubectl apply -f kube-flannel.yml -f kube-flannel-rbac.yml
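
To verify, you can watch the flannel pods come up and the node flip to Ready (this assumes the stock manifest's DaemonSet name, kube-flannel-ds; adjust if yours differs):

kubectl get pods -n kube-system -o wide   # the kube-flannel-ds pods should reach Running on every node
kubectl get nodes                         # the worker should report Ready once flannel is up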

-- Aditya Pawaskar
Source: StackOverflow

1/31/2019
  1. On your master node, go to /etc/cni/net.d.
  2. In that folder you will find a CNI config file.
  3. Copy that file into /etc/cni/net.d on your worker node (see the sketch after this list).
  4. Your worker node will be Ready in 1 to 2 minutes.
  5. If this is not working, add a comment.
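
A minimal sketch of step 3, assuming SSH access as the pi user and a Weave config file named 10-weave.conf (a hypothetical name; use whatever file you actually find on the master):

# run on the master: copy the CNI config to the worker's temp directory
scp /etc/cni/net.d/10-weave.conf pi@k8s-worker-02:/tmp/
# then move it into place on the worker with root privileges
ssh pi@k8s-worker-02 'sudo mkdir -p /etc/cni/net.d && sudo mv /tmp/10-weave.conf /etc/cni/net.d/'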
-- jaya rohith
Source: StackOverflow

7/29/2018

I had the same issue, and like some people I have the kiss of death when it comes to installs on standard, perfectly normal equipment, so NONE of the suggestions anywhere helped until I rejoined the worker nodes to the master.

My install is on three physical machines: one master and two workers. All needed reboots.

I wasn't expecting it to work, but it did. It probably won't work for you, but if nothing else works, maybe give this a shot. You will need your join token, which you probably don't have, but I'll show you how to get it so you DON'T have to go to another set of pages and search for it. On the master, run:

sudo kubeadm token list

Copy the TOKEN field data. The output looks like this (no, that's not my real one):

TOKEN                    TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
ow3v08ddddgmgzfkdkdkd7   18h   2018-07-30T12:39:53-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

THEN join the cluster by running this on each worker; the master node IP is the real IP address of your master machine:

sudo kubeadm join --token <YOUR TOKEN HASH> <MASTER_NODE_IP>:6443 --discovery-token-unsafe-skip-ca-verification
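
If the token has expired, or the worker was joined once before, this should get you a clean second attempt (standard kubeadm subcommands; --print-join-command is available in newer kubeadm releases):

# on the worker, if it was joined before: wipe the old state first
sudo kubeadm reset
# on the master: mint a fresh token and print the matching join command
sudo kubeadm token create --print-join-command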
-- texasdave
Source: StackOverflow

5/2/2018

Try the solutions below and see if any one of them helps:

  1. Check your firewalld status. If it is running, stop it.
  2. Check your kube-dns status. Sometimes it may be down or throwing errors.
  3. Try to reload systemd and restart your kubelet (see the command sketch after this list).
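
For reference, the corresponding commands (standard systemctl and kubectl invocations; the k8s-app=kube-dns label is the usual default and may differ in your cluster):

# 1. check firewalld, and stop it if it is running
sudo systemctl status firewalld
sudo systemctl stop firewalld
# 2. check that the kube-dns pods are Running and not restarting
kubectl get pods -n kube-system -l k8s-app=kube-dns
# 3. reload systemd and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet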
-- Dinesh
Source: StackOverflow