How to fix kubernetes taint node.kubernetes.io/not-ready: NoSchedule

5/10/2021

I am trying to run a local development Kubernetes cluster in the Docker Desktop context, but the node keeps getting the following taint: node.kubernetes.io/not-ready:NoSchedule.

Manually removing the taint, i.e. kubectl taint nodes --all node.kubernetes.io/not-ready-, doesn't help, because it comes back right away.
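For reference, this is a sketch of the remove-and-watch loop I used (it assumes kubectl is pointed at the docker-desktop context and requires a live cluster, so it is environment-dependent):

```shell
# Remove the not-ready taint from all nodes
# (the trailing "-" after the taint key deletes the taint)
kubectl taint nodes --all node.kubernetes.io/not-ready-

# Watch the node's taint keys; with this issue the taint
# reappears within seconds of being removed
kubectl get node docker-desktop \
  -o jsonpath='{.spec.taints[*].key}{"\n"}' --watch
```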

The output of kubectl describe node is:

Name:               docker-desktop
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=docker-desktop
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 May 2021 11:00:31 +0100
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  docker-desktop
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 May 2021 16:14:19 +0100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 16:11:05 +0100   KubeletNotReady              PLEG is not healthy: pleg was last seen active 6m22.485400578s ago; threshold is 3m0s
Addresses:
  InternalIP:  192.168.65.4
  Hostname:    docker-desktop
Capacity:
  cpu:                5
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             18954344Ki
  pods:               110
Allocatable:
  cpu:                5
  ephemeral-storage:  56453061334
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             18851944Ki
  pods:               110
System Info:
  Machine ID:                 f4da8f67-6e48-47f4-94f7-0a827259b845
  System UUID:                d07e4b6a-0000-0000-b65f-2398524d39c2
  Boot ID:                    431e1681-fdef-43db-9924-cb019ff53848
  Kernel Version:             5.10.25-linuxkit
  OS Image:                   Docker Desktop
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.6
  Kubelet Version:            v1.19.7
  Kube-Proxy Version:         v1.19.7
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                1160m (23%)      1260m (25%)
  memory             1301775360 (6%)  13288969216 (68%)
  ephemeral-storage  0 (0%)           0 (0%)
  hugepages-1Gi      0 (0%)           0 (0%)
  hugepages-2Mi      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  NodeNotReady             86m (x2 over 90m)    kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  NodeReady                85m (x3 over 5h13m)  kubelet     Node docker-desktop status is now: NodeReady
  Normal  Starting                 61m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  61m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  61m (x8 over 61m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    61m (x7 over 61m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     61m (x8 over 61m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  Starting                 60m                  kube-proxy  Starting kube-proxy.
  Normal  NodeNotReady             55m                  kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  Starting                 49m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  49m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     49m (x7 over 49m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  49m (x8 over 49m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    49m (x8 over 49m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  Starting                 48m                  kube-proxy  Starting kube-proxy.
  Normal  NodeNotReady             41m                  kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  Starting                 37m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  37m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     37m (x7 over 37m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    37m (x8 over 37m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientMemory  37m (x8 over 37m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  Starting                 36m                  kube-proxy  Starting kube-proxy.
  Normal  NodeAllocatableEnforced  21m                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 21m                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  21m (x8 over 21m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     21m (x7 over 21m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                6m16s (x2 over 14m)  kubelet     Node docker-desktop status is now: NodeReady
  Normal  NodeNotReady             3m16s (x3 over 15m)  kubelet     Node docker-desktop status is now: NodeNotReady

The allocated resources are quite significant, because the cluster is large as well. Docker Desktop resource settings:

CPUs: 5
Memory: 18 GB
Swap: 1 GB
Disk image size: 60 GB

Machine: Mac Core i7, 32 GB RAM, 512 GB SSD

I can see that the problem is with PLEG, but I need to understand what is causing the Pod Lifecycle Event Generator to report an error: is it insufficient allocated node resources, or something else?
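For anyone debugging the same thing, here is a sketch of how one might narrow it down from the host, without shell access to the Docker Desktop VM (these commands assume a running cluster; kubectl top additionally assumes metrics-server is installed):

```shell
# List pods across all namespaces sorted by restart count,
# to spot crash-looping or unstable workloads
kubectl get pods -A \
  --sort-by='.status.containerStatuses[0].restartCount'

# Show a snapshot of live container resource usage; containers
# pinning the CPU can slow the container-runtime calls that
# PLEG's relist depends on, tripping the 3m0s health threshold
docker stats --no-stream

# Re-check node conditions for the PLEG message and its timing
kubectl describe node docker-desktop | grep -A 6 'Conditions:'
```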

Any ideas?

-- Timothy
docker
kubernetes

1 Answer

5/10/2021

In my case the problem was a few extremely resource-hungry pods, so I had to scale down some deployments to get a stable environment.
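A sketch of the scale-down (the deployment name is hypothetical, and kubectl top assumes metrics-server is available):

```shell
# Find the heaviest pods by memory usage
kubectl top pods -A --sort-by=memory

# Scale a resource-hungry deployment down to a single replica
kubectl scale deployment my-heavy-app --replicas=1
```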

-- Timothy
Source: StackOverflow