I have created a Kubernetes cluster with kubeadm, following this official tutorial. Each of the control plane components (kube-apiserver, kube-controller-manager, kube-scheduler) runs as a pod. I learned that kube-scheduler uses a set of default scheduling policies (defined here) when it is created by kubeadm. These default policies are a subset of all available policies (listed here).
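For concreteness, here is my own sketch of what a policy file with a different list might look like; the predicate and priority names are just a sample from the documented set, not a recommendation:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"},
    {"name": "MatchNodeSelector"},
    {"name": "HostName"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```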
How can I restart the kube-scheduler pod with a new configuration (a different policy list)?
The kube-scheduler is a static pod managed by the kubelet on the master node, so updating its manifest file (/etc/kubernetes/manifests/kube-scheduler.yaml) will trigger the kubelet to restart the kube-scheduler with the new configuration.
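For example, on scheduler versions that still support the legacy policy API, you can point the scheduler at a policy file by editing the command in the manifest. A minimal sketch, assuming you saved the policy file as /etc/kubernetes/scheduler-policy.json (that path is my own choice, not something kubeadm creates); keep all other kubeadm-generated fields as they are:

```yaml
# Relevant parts of /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --policy-config-file=/etc/kubernetes/scheduler-policy.json  # new flag
    - --use-legacy-policy-config=true  # needed on some versions to read the file
    name: kube-scheduler
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler-policy.json  # make the file visible inside the pod
      name: scheduler-policy
      readOnly: true
  volumes:
  - name: scheduler-policy
    hostPath:
      path: /etc/kubernetes/scheduler-policy.json
      type: File
```

As soon as a file under /etc/kubernetes/manifests changes, the kubelet kills the old static pod and starts a new one; you can confirm with `kubectl get pods -n kube-system` and by checking the scheduler's logs.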