kubectl get nodes shows NotReady

10/29/2018

I have installed a two-node Kubernetes 1.12.1 cluster on cloud VMs, both behind an internet proxy. Each VM has a floating IP associated for SSH access; kube-01 is the master and kube-02 is a worker node. Before running kubeadm init I executed:

no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01

but I am getting the following status from kubectl get nodes:

NAME      STATUS     ROLES    AGE   VERSION
kube-01   NotReady   master   89m   v1.12.1
kube-02   NotReady   <none>   29s   v1.12.2

Am I missing any configuration? Do I need to add 192.168.0.153 and 192.168.0.25 to the respective VMs' /etc/hosts files?
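For context, the proxy variables were exported in the shell along these lines (a sketch; the http_proxy/https_proxy values here are placeholders, only the no_proxy line is the one quoted above):

```shell
# Placeholder proxy endpoint -- the real value is environment-specific
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# Exclude local addresses and both cluster nodes from the proxy
export no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01
```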

-- Sandeep Nag
kubeadm
kubectl
kubernetes

3 Answers

10/29/2018

Try this:

Your CoreDNS pods are stuck in the Pending state. Check which networking plugin you have used and make sure the proper add-ons are installed.

Check the Kubernetes troubleshooting guide:

https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-or-kube-dns-is-stuck-in-the-pending-state

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Install the required add-ons from those pages, and then check:

kubectl get pods -n kube-system
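To see why the CoreDNS pods are stuck Pending, describing them usually shows the reason in the Events section at the bottom of the output (kubeadm deploys CoreDNS with the k8s-app=kube-dns label):

```shell
# Show status and scheduling events for the CoreDNS pods;
# with no CNI plugin installed, the events typically report
# that the node is not ready / no network is available.
kubectl -n kube-system describe pods -l k8s-app=kube-dns
```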
-- Javeed Shakeel
Source: StackOverflow

2/6/2020

On the off chance it might be the same for someone else, in my case, I was using the wrong AMI image to create the nodegroup.

-- user1394
Source: StackOverflow

10/29/2018

It looks like a pod network is not installed on your cluster yet. You can install Weave Net, for example, with the command below:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.
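A quick way to confirm that Weave Net came up on every node (the name=weave-net label is the one applied by the manifest above):

```shell
# One weave-net pod should be Running per node
kubectl -n kube-system get pods -l name=weave-net -o wide

# Shortly afterwards the nodes should flip to Ready
kubectl get nodes
```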

You can also install a different pod network of your choice (see the Kubernetes add-ons list linked in the other answer).

After this, run:

$ kubectl describe nodes

and verify that everything looks fine, like below:

Conditions:
  Type              Status
  ----              ------
  OutOfDisk         False
  MemoryPressure    False
  DiskPressure      False
  Ready             True
Capacity:
 cpu:       2
 memory:    2052588Ki
 pods:      110
Allocatable:
 cpu:       2
 memory:    1950188Ki
 pods:      110

Next, SSH to the node that is not ready and inspect the kubelet logs. The most likely errors relate to certificates and authentication.

On systemd-based systems, you can use journalctl to check for kubelet errors:

$ journalctl -u kubelet
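A couple of standard journalctl variants that help narrow the search down:

```shell
# Follow the kubelet log live while reproducing the problem
journalctl -u kubelet -f

# Only messages from the current boot, at error priority or above
journalctl -u kubelet -b -p err
```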
-- Shashank Pai
Source: StackOverflow