How long does kube-controller-manager take to migrate a pod from a shut-down node to another healthy node

6/20/2016

My /etc/kubernetes/config is as below:

KUBE_LOGTOSTDERR="--logtostderr=false"
KUBE_LOG_LEVEL="--v=5"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080 --log-dir=/var/log/kubernetes --stderrthreshold=1"

My /etc/kubernetes/controller-manager is like this:

KUBE_CONTROLLER_MANAGER_ARGS="--port=10252 --node-monitor-grace-period=10s --pod-eviction-timeout=10s --cluster-name=op-k8s"
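
To confirm that the running kube-controller-manager actually picked these flags up, its live command line can be checked; a quick sketch, assuming a standard install where the process is named kube-controller-manager:

ps -o args= -C kube-controller-manager    # prints the flags the process was really started with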

I created one deployment, "dep1", which has only one pod, and this pod was deployed on a healthy node, "test1". Then, on test1, I ran the following to simulate a node-shutdown failure:

systemctl stop kubelet.service; systemctl stop kube-proxy.service; systemctl stop docker
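
While doing this, the failover can be watched from the master; a minimal sketch (the deployment and node names are the ones above):

kubectl get nodes -w           # watch test1 go from Ready to NotReady
kubectl get pods -o wide -w    # watch dep1's pod be replaced on another node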

After about 30s, node test1 went into NotReady status (per kubectl get node on the master machine). But it took about five minutes for deployment dep1's pod to be transferred to another node. So I have two questions:

1. How can I control how long it takes, after a node shuts down, for the pod on that node to be transferred to another healthy node?

2. --pod-eviction-timeout=10s seems useless here, because with the kubelet down nobody can delete that pod. Thanks!

-- workhardcc
docker
kubernetes
kubernetes-health-check

1 Answer

6/21/2016

It's not that --node-monitor-grace-period=10s and --pod-eviction-timeout=10s are useless; the point is that controller-manager didn't load those parameters! I use the command /bin/systemctl restart kube-controller-manager.service to start kube-controller-manager, and /usr/lib/systemd/system/kube-controller-manager.service is as below:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/bin/kube-controller-manager --port=10252 --master=http://127.0.0.1:8080 
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

And /etc/kubernetes/controller-manager is as below:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --node-monitor-period=5s --pod-eviction-timeout=5m0s"

If I add those parameters directly to /usr/lib/systemd/system/kube-controller-manager.service, like:

ExecStart=/usr/bin/kube-controller-manager --port=10252 --master=http://127.0.0.1:8080 --node-monitor-grace-period=10s --node-monitor-period=5s --pod-eviction-timeout=10s
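
One detail when editing the unit file directly: systemd has to reload it before a restart picks up the new command line, e.g.:

systemctl daemon-reload
systemctl restart kube-controller-manager.service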

It works! So I don't know why controller-manager didn't load the config file /etc/kubernetes/controller-manager.
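
A likely reason: EnvironmentFile= only defines environment variables, and systemd puts them on the command line only where ExecStart= actually references them. The ExecStart= shown above never expands $KUBE_CONTROLLER_MANAGER_ARGS (or the variables from /etc/kubernetes/config), so those files are read but their values never reach the process. A sketch of an ExecStart= that would use both files, with the variable names taken from the files above (the exact layout may differ between packages):

ExecStart=/usr/bin/kube-controller-manager --port=10252 \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS

With that in place (plus a daemon-reload and restart), the flags in /etc/kubernetes/controller-manager should take effect without editing the unit file again.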

-- workhardcc
Source: StackOverflow