I'm getting the error below while installing a Kubernetes cluster on Ubuntu 18.04. The Kubernetes master is ready, and I'm using Flannel as the pod network. I'm trying to add my first node to the cluster using the join command.
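The join command I run on the worker has the usual kubeadm form (the real token and hash are omitted here):

sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

After the join, kubectl describe node on the master shows the new node stuck in NotReady with these conditions: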
Type              Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----              ------  -----------------                ------------------               ------                      -------
MemoryPressure    False   Wed, 11 Dec 2019 05:43:02 +0000  Wed, 11 Dec 2019 05:38:47 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure      False   Wed, 11 Dec 2019 05:43:02 +0000  Wed, 11 Dec 2019 05:38:47 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure       False   Wed, 11 Dec 2019 05:43:02 +0000  Wed, 11 Dec 2019 05:38:47 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready             False   Wed, 11 Dec 2019 05:43:02 +0000  Wed, 11 Dec 2019 05:38:47 +0000  KubeletNotReady             Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Update:
I noticed the following on the worker node:
root@worker02:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-12-11 06:47:41 UTC; 27s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 14247 (kubelet)
    Tasks: 14 (limit: 2295)
   CGroup: /system.slice/kubelet.service
           └─14247 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driv
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.085292 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kuber
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.086115 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-nbss2" (UniqueName
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.087975 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubern
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088104 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kube
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088153 14247 reconciler.go:156] Reconciler: start to sync state
Dec 11 06:47:45 worker02 kubelet[14247]: E1211 06:47:45.130889 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:48 worker02 kubelet[14247]: E1211 06:47:48.134042 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:50 worker02 kubelet[14247]: E1211 06:47:50.538096 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:53 worker02 kubelet[14247]: E1211 06:47:53.131425 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:56 worker02 kubelet[14247]: E1211 06:47:56.840529 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Please let me know how to fix this.
I had the same issue, but in my case I already had a running k8s cluster when the CSINodeInfo error suddenly appeared on some of my k8s nodes, and those nodes dropped out of the cluster. After searching Google for several days, I finally found the answer in Node cannot join #86094.
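That issue traces the error to a version skew between the kubelet and the API server (for example, a v1.17 kubelet joining a v1.16 control plane, where the v1 CSINode API does not exist yet). A quick way to check whether you have the same skew:

kubectl version --short   # on the master: compare client and server versions
kubelet --version         # on the affected node: should not be newer than the API server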
Just editing /var/lib/kubelet/config.yaml to add:
featureGates:
  CSIMigration: false
... at the end of the file seemed to allow the cluster to start as expected.
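After adding those lines, restart the kubelet on each affected node so it re-reads the config (paths and service name assume a standard kubeadm install):

sudo systemctl restart kubelet
systemctl status kubelet    # the CSINodeInfo errors should stop and the node should go Ready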
This link covers only a cluster with a single master node. If you want to add worker nodes, you need to specify the IP and hostname of each machine in the /etc/hosts file of your master node, as shown below. Then initialize your Kubernetes master. Once it has started, join your worker nodes to the master. Make sure you install kubectl and Docker on your worker nodes. If you want the master to only manage the Kubernetes cluster, skip step 26 of the shared link.
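For example, the master's /etc/hosts could look like this (the IP addresses and hostnames here are placeholders, not values from any real setup):

192.168.0.10   master01
192.168.0.11   worker01
192.168.0.12   worker02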