There is no ephemeral-storage resource on a Kubernetes worker node

12/2/2020

I tried to set up a Kubernetes worker node on an arm64 board. The worker node never changed from NotReady to Ready status.

I checked the node's Conditions with the command below:

$ kubectl describe nodes

...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletNotReady              [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
...

Capacity:
  cpu:     8
  memory:  7770600Ki
  pods:    110
Allocatable:
  cpu:     8
  memory:  7668200Ki
  pods:    110
...

The worker node does not seem to have the ephemeral-storage resource, which appears to be why this message is logged:

"container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage"

However, the root filesystem is mounted on / as follows:

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/root             23602256   6617628  15945856  30% /
devtmpfs               3634432         0   3634432   0% /dev
tmpfs                  3885312         0   3885312   0% /dev/shm
tmpfs                  3885312    100256   3785056   3% /run
tmpfs                  3885312         0   3885312   0% /sys/fs/cgroup
tmpfs                   524288     25476    498812   5% /tmp
tmpfs                   524288       212    524076   1% /var/volatile
tmpfs                   777060         0    777060   0% /run/user/1000
/dev/sde4               122816     49088     73728  40% /firmware
/dev/sde5                65488       608     64880   1% /bt_firmware
/dev/sde7                28144     20048      7444  73% /dsp

How can the ephemeral-storage resource be detected on a Kubernetes worker node?
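For reference, the kubelet derives the node's ephemeral-storage capacity from the filesystem backing its root directory (/var/lib/kubelet unless --root-dir is set), so checking what that filesystem looks like is a reasonable first step. A minimal check, assuming the default kubelet root directory:

```shell
# The kubelet (via cAdvisor) computes ephemeral-storage capacity from the
# filesystem that backs its root directory (/var/lib/kubelet by default).
# Fall back to / if the directory does not exist on this machine.
df -k /var/lib/kubelet 2>/dev/null || df -k /
```

On a healthy node the resource then appears under Capacity, e.g. via `kubectl get node <name> -o jsonpath='{.status.capacity.ephemeral-storage}'` (node name is a placeholder).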

=======================================================================

I have added the full output of $ kubectl get nodes and $ kubectl describe nodes:

$ kubectl get nodes
NAME             STATUS     ROLES    AGE     VERSION
raas-linux       Ready      master   6m25s   v1.19.4
robot-dd9f6aaa   NotReady   <none>   5m16s   v1.16.2-dirty
$
$ kubectl describe nodes
Name:               raas-linux
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=raas-linux
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"a6:a1:0b:43:38:29"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.3.106
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 04 Dec 2020 09:54:49 +0900
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  raas-linux
  AcquireTime:     <unset>
  RenewTime:       Fri, 04 Dec 2020 10:00:19 +0900
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 04 Dec 2020 09:55:14 +0900   Fri, 04 Dec 2020 09:55:14 +0900   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:55:19 +0900   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.3.106
  Hostname:    raas-linux
Capacity:
  cpu:                8
  ephemeral-storage:  122546800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8066548Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  112939130694
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7964148Ki
  pods:               110
System Info:
  Machine ID:                 5aa3b32d7e9e409091929e7cba2d558b
  System UUID:                a930a228-a79a-11e5-9e9a-147517224400
  Boot ID:                    4e6dd5d2-bcc4-433b-8c4d-df56c33a9442
  Kernel Version:             5.4.0-53-generic
  OS Image:                   Ubuntu 18.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.10
  Kubelet Version:            v1.19.4
  Kube-Proxy Version:         v1.19.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                  ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-f9fd979d6-h7hd5               100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
  kube-system                 coredns-f9fd979d6-hbkbl               100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
  kube-system                 etcd-raas-linux                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-apiserver-raas-linux             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-controller-manager-raas-linux    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-flannel-ds-k8b2d                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m9s
  kube-system                 kube-proxy-wgn4l                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
  kube-system                 kube-scheduler-raas-linux             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m20s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  100m (1%)
  memory             190Mi (2%)  390Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From        Message
  ----    ------                   ----   ----        -------
  Normal  Starting                 5m20s  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m20s  kubelet     Node raas-linux status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m20s  kubelet     Node raas-linux status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m20s  kubelet     Node raas-linux status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m20s  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m8s   kube-proxy  Starting kube-proxy.
  Normal  NodeReady                5m     kubelet     Node raas-linux status is now: NodeReady


Name:               robot-dd9f6aaa
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=robot-dd9f6aaa
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 04 Dec 2020 09:55:58 +0900
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  robot-dd9f6aaa
  AcquireTime:     <unset>
  RenewTime:       Fri, 04 Dec 2020 10:00:16 +0900
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletNotReady              [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
Addresses:
  InternalIP:  192.168.3.102
  Hostname:    robot-dd9f6aaa
Capacity:
  cpu:     8
  memory:  7770620Ki
  pods:    110
Allocatable:
  cpu:     8
  memory:  7668220Ki
  pods:    110
System Info:
  Machine ID:                 de6c58c435a543de8e13ce6a76477fa0
  System UUID:                de6c58c435a543de8e13ce6a76477fa0
  Boot ID:                    d0999dd7-ab7d-4459-b0cd-9b25f5a50ae4
  Kernel Version:             4.9.103-sda845-smp
  OS Image:                   Kairos - Smart Machine Platform 1.0
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.16.2-dirty
  Kube-Proxy Version:         v1.16.2-dirty
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                     ------------  ----------  ---------------  -------------  ---
  kube-system                 kube-flannel-ds-9xc6n    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m21s
  kube-system                 kube-proxy-4dk7f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (1%)  100m (1%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type    Reason                   Age    From     Message
  ----    ------                   ----   ----     -------
  Normal  Starting                 4m22s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 4m10s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m59s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m48s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  Starting                 3m37s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m25s  kubelet  Starting kubelet.
  Normal  Starting                 3m14s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m3s   kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 3m3s   kubelet  Starting kubelet.
  Normal  Starting                 2m52s  kubelet  Starting kubelet.
  Normal  Starting                 2m40s  kubelet  Starting kubelet.
  Normal  Starting                 2m29s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 2m18s  kubelet  Starting kubelet.
  Normal  Starting                 2m7s   kubelet  Starting kubelet.
  Normal  Starting                 115s   kubelet  Starting kubelet.
  Normal  NodeHasNoDiskPressure    104s   kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  Starting                 104s   kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  104s   kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 93s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 82s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  82s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 71s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 59s    kubelet  Starting kubelet.
  Normal  Starting                 48s    kubelet  Starting kubelet.
  Normal  Starting                 37s    kubelet  Starting kubelet.
  Normal  Starting                 26s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  25s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 15s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  14s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 3s     kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3s     kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3s     kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
-- dofmind
kubernetes
ready
storage
worker

2 Answers

1/11/2021

Step 1: kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt

Step 2: kubectl delete -f mutating.txt

Step 3: Restart the node

Step 4: You should see that the node is Ready

Step 5: Reinstall the mutatingwebhookconfiguration
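The steps above, collected into one sketch (mutating.txt is just a local backup file name; the restore in the last step assumes the exported YAML applies cleanly):

```shell
# Back up every mutating webhook configuration, then delete them
kubectl get mutatingwebhookconfigurations -o yaml > mutating.txt
kubectl delete -f mutating.txt

# Reboot the affected node, then verify it reports Ready
kubectl get nodes

# Restore the webhook configurations afterwards
kubectl apply -f mutating.txt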

-- RITESH SANJAY MAHAJAN
Source: StackOverflow

12/10/2020
  1. Delete the /etc/docker/daemon.json file and reboot
  2. Install the CNI plugin binaries into the /opt/cni/bin directory: https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz
-- dofmind
Source: StackOverflow