kubelet.service: Main process exited, code=exited, status=255/n/a

3/6/2019

I am setting up a test cluster following these instructions: https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/ and

https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/. Unfortunately, when I check my nodes, the following occurs:

kubectl get no
NAME                        STATUS     ROLES     AGE       VERSION
pccshost2.lan.proficom.de   NotReady   <none>    19h       v1.10.3
pccshost3.lan.proficom.de   NotReady   <none>    19h       v1.10.3

As far as I can tell, this problem is connected with the kubelet.service not running on the master node:

systemctl status kubelet.service

kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2019-03-06 10:38:30 CET; 32min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 14057 ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KU>
 Main PID: 14057 (code=exited, status=255)
      CPU: 271ms

Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Consumed 271ms CPU time
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Service RestartSec=100ms expired, scheduling restart.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: Stopped Kubernetes Kubelet Server.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Consumed 271ms CPU time
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Start request repeated too quickly.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: Failed to start Kubernetes Kubelet Server.
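
The status output alone does not show why the kubelet keeps exiting with 255; the actual error message lands in the journal. Pulling the most recent kubelet entries with plain journalctl usually reveals it:

journalctl -xeu kubelet.service

On kubelet v1.8 and later the culprit is typically an error along the lines of "running with swap on is not supported, please disable swap", which is what the answers below address.
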
kubectl describe node

 Normal  Starting                 9s    kubelet, pccshost2.lan.proficom.de  Starting kubelet.
  Normal  NodeHasSufficientDisk    9s    kubelet, pccshost2.lan.proficom.de  Node pccshost2.lan.proficom.de status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  9s    kubelet, pccshost2.lan.proficom.de  Node pccshost2.lan.proficom.de status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    9s    kubelet, pccshost2.lan.proficom.de  Node pccshost2.lan.proficom.de status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     9s    kubelet, pccshost2.lan.proficom.de  Node pccshost2.lan.proficom.de status is now: NodeHasSufficientPID
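
For the NotReady status itself, the Conditions section of the same describe output is usually more telling than the events: a Ready condition of False or Unknown comes with a reason and message. One way to filter it (using a node name from the output above):

kubectl describe node pccshost2.lan.proficom.de | grep -A 10 Conditions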

Can somebody give me advice on what is happening here and how I can fix it? Thanks

-- Roger
fedora
kubelet
kubernetes

3 Answers

10/7/2019

I ran into the same issue, and found a solution here.

Essentially, I had to run the following commands:

swapoff -a
kubeadm reset
kubeadm init
systemctl status kubelet

Then I simply had to follow the on-screen instructions. My setup used weave-net for the pod network, so I also had to run kubectl apply -f weave-net.yaml.
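
One caveat: swapoff -a only disables swap until the next reboot, after which the kubelet fails again the same way. A minimal sketch for making it permanent by commenting out the swap entries in /etc/fstab (review the file first; the pattern assumes GNU sed and a whitespace-separated fstab):

swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab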

-- Xelron
Source: StackOverflow

3/2/2020

I had the same issue; I could not start the kubelet service on the master node.

Running the commands below fixed my problem:

$ sudo swapoff -a

$ sudo systemctl restart kubelet.service

$ systemctl status kubelet
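
Before the restart it is worth confirming that swap is actually off: swapon --show should print nothing, and free -h should report zero swap.

$ swapon --show
$ free -h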

-- shreyas.k
Source: StackOverflow

3/6/2019

Solved the problem with kubelet by adding --fail-swap-on=false to KUBELET_ARGS= in the kubelet config file. But the problem with the nodes stays the same: status NotReady.
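
For anyone following the same Fedora guide, the file in question is typically /etc/kubernetes/kubelet; a rough sketch of the change (the exact path and any pre-existing flags depend on the install):

# /etc/kubernetes/kubelet (path assumed from the Fedora manual setup)
KUBELET_ARGS="--fail-swap-on=false"

# restart so the new flag is picked up
systemctl restart kubelet.service

Note that --fail-swap-on=false only suppresses the swap check; disabling swap as in the other answers is the cleaner fix, and a node that stays NotReady with a running kubelet usually points at the pod network (flannel in this setup) rather than the kubelet itself.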

-- Roger
Source: StackOverflow