Kubernetes can't detect unhealthy node

8/20/2018

I am shutting down my k8s node manually to see if this affects the master.

After the shutdown I check the status of the nodes:

kubectl get nodes
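
The Ready condition and its last heartbeat time can also be inspected in more detail with something like this (the node name is a placeholder):

kubectl describe node <node-name>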

The node that went down is still shown as Ready in the Status column. As a consequence, k8s still tries to schedule pods on this node but cannot. Even worse, it doesn't reschedule those pods onto other, healthy nodes.

After a while (5-10 minutes) k8s notices that the node is gone.

Is this expected behavior? If not, how can I fix it?

I did some research to find out how k8s checks node health, but I couldn't find anything useful.

-- Barry Scott
kubernetes
kubernetes-health-check

1 Answer

8/21/2018

I found the problem myself.

I was cutting the connection at the network layer with firewall rules. Since the kubelet had opened its session before the new deny rules were added, that established session kept working and the node was still seen as Ready. Because it was Ready, the node kept receiving traffic, but that traffic was blocked by the new rules since those connections had no open session.

So this inconsistency only happens when you cut connectivity by changing firewall rules, not when the node actually goes down.
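
To illustrate, this is a sketch of the kind of ruleset that causes it, not my exact rules; the master IP is a placeholder:

# Typical stateful ruleset: established sessions are accepted first,
# so a DROP appended afterwards only hits new connections.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s <master-ip> -j DROP

# The kubelet's existing connection to the API server still matches the
# ESTABLISHED rule, so status updates keep flowing and the node stays Ready,
# while new connections toward the node are dropped.

# To really cut the node off, the DROP has to come before the ESTABLISHED
# rule (or the node has to be shut down for real):
iptables -I INPUT 1 -s <master-ip> -j DROP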

-- Barry Scott
Source: StackOverflow