Node status changes to unknown on a high resource requirement pod

11/12/2018

I have a Jenkins deployment pipeline that uses the Kubernetes plugin. With the plugin I create a slave pod that builds a Node.js application using Yarn. The CPU and memory requests and limits for the pod are set.
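For context, the requests and limits are set on the slave container roughly like this (the values below are illustrative, not my exact numbers):

    # container spec of the Jenkins slave pod (illustrative values)
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"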

When the Jenkins master schedules the slave, sometimes (I haven't spotted a pattern so far) the pod makes the entire node unreachable and the node's status changes to Unknown. On closer inspection in Grafana, CPU and memory usage seem to be well within range, with no visible spike. The only spike is in disk I/O, which peaks at about 4 MiB.

I am not sure whether that is why the node is unable to report itself as a cluster member. I need help with a few things here:

a) How can I diagnose, in depth, the reasons for the node leaving the cluster?

b) If the reason is disk IOPS, are there any default requests or limits for IOPS at the Kubernetes level?

PS: I am using EBS (gp2)

-- Arpit Goyal
aws-ebs
jenkins-plugins
kubernetes

2 Answers

11/12/2018

As per the docs, for the node's 'Ready' condition:

True if the node is healthy and ready to accept pods, False if the node is not healthy and is not accepting pods, and Unknown if the node controller has not heard from the node in the last node-monitor-grace-period (default is 40 seconds)
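A quick way to see what the node is currently reporting, and when the kubelet last heartbeated, is to print its conditions (the node name is a placeholder):

    $ kubectl get node <node-name> \
        -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.lastHeartbeatTime}{"\n"}{end}'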

It would seem that when you run your workloads, your kube-apiserver doesn't hear from your node (kubelet) within 40 seconds. There could be multiple reasons; some things you can try:

  • To see the 'Events' on your node, run:

    $ kubectl describe node <node-name>
  • To check whether there is anything unusual in your kube-apiserver logs. On your active master run:

    $ docker logs <container-id-of-kube-apiserver>
  • To check whether there is anything unusual in your kube-controller-manager logs when your node goes into the 'Unknown' state. On your active master run:

    $ docker logs <container-id-of-kube-controller-manager>
  • Increase the --node-monitor-grace-period option of your kube-controller-manager. You can add it to the command line in /etc/kubernetes/manifests/kube-controller-manager.yaml and restart the kube-controller-manager container (see the manifest sketch after this list).

  • When the node is in the 'Unknown' state, can you SSH into it and see if you can reach the kube-apiserver? Check both the <master-ip>:6443 and the kubernetes.default.svc.cluster.local:443 endpoints (curl examples below).
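For the grace-period change, the flag goes in the command section of the static pod manifest; a minimal sketch (other flags omitted, the 60s value is just an example):

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --node-monitor-grace-period=60s   # raised from the 40s default
        # ...keep the other existing flags as they are

For the connectivity check from the node, something along these lines works (-k skips certificate verification; any HTTP response, even a 401/403, means the API server is reachable; the second name only resolves where cluster DNS is available):

    $ curl -k https://<master-ip>:6443/healthz
    $ curl -k https://kubernetes.default.svc.cluster.local:443/healthz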

-- Rico
Source: StackOverflow

5/15/2019

Considering that the node was previously working and only recently stopped showing the Ready status, restart your kubelet service. Just SSH into the affected node and execute:

/etc/init.d/kubelet restart
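On nodes where the kubelet runs as a systemd service instead of a SysV init script, the equivalent would be:

    $ sudo systemctl restart kubelet
    $ sudo systemctl status kubelet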

Back on your master node, run kubectl get nodes to check whether the node is Ready again.

-- Prateek Sen
Source: StackOverflow