Kubernetes: Why would the scheduler move a pod

2/20/2018

I have an app deployed via a Deployment with replicas set to 1. The scheduler keeps moving the pod to different nodes:

I0220 08:28:44.884808 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"production", Name:"app1-production-77c79bdc85-ddjfb", UID:"109fa057-1618-11e8-bfb0-005056946b20", APIVersion:"v1", ResourceVersion:"6017223", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned app1-production-77c79bdc85-ddjfb to node2

type is Normal and reason is Scheduled. What does "Scheduled" mean? Is there any way to find out exactly why it rescheduled the pod?

Also, if I wanted this pod to stay on a node for a long period of time - Statefulset is my friend, correct?

-- matt
kubernetes

2 Answers

2/21/2018

Alright, so I looked at the logs from the scheduler:

kubectl logs kube-scheduler-master2 -n kube-system

and found the previous pod's rescheduling event. I was then able to describe that pod, and the output contained the reason:

Status:         Failed
Reason:         Evicted
Message:        The node was low on resource: nodefs.

Low disk space!
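For anyone else hitting this, a sketch of the commands I used to dig out the reason (the pod name below is from my cluster; substitute your own):

```shell
# Describe the evicted pod -- the Status/Reason/Message fields show why
kubectl describe pod app1-production-77c79bdc85-ddjfb -n production

# Evicted pods also leave events behind; filtering by reason is a quick check
kubectl get events -n production --field-selector reason=Evicted
```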

I don't know how long K8s keeps that record (it's unavailable to me now, but it stuck around long enough to help at least :)

-- matt
Source: StackOverflow

2/20/2018

My guess would be that the kubelet is evicting the pod for some reason, and the Deployment's self-healing then kicks in: the scheduler places a replacement pod to recover. Try to find out why the kubelet is evicting your Pod. A StatefulSet will not help you here at all, as it is specifically designed to retain things like network identity and name without needing to land on the same physical node (which can disappear at any time in a typical cloud setup).
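To find the eviction reason, checking the node conditions is a good start; a sketch (node2 is the node from your log, substitute as needed):

```shell
# DiskPressure=True or MemoryPressure=True under Conditions means the
# kubelet is actively evicting pods from that node
kubectl describe node node2

# Node-level events also record eviction thresholds being crossed
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node
```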

-- Radek 'Goblin' Pieczonka
Source: StackOverflow