Changing the default behavior of Kubernetes

9/17/2018

I have set up a K8S cluster (1 master and 2 slaves) using Kubeadm on my laptop.

  • Deployed 6 replicas of a pod; 3 of them were scheduled on each of the slaves.
  • Shut down one of the slaves.
  • It took ~6 minutes for those 3 pods to be rescheduled on the running node (see the watch commands below).
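
For anyone reproducing this, a minimal way to watch the failover is something like the following (each command in its own terminal):

    # Watch the node go NotReady and the pods get rescheduled
    kubectl get nodes -w
    kubectl get pods -o wide -w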

Initially, I thought it had something to do with the K8S setup. After some digging, I found out it's because of the defaults in K8S for the Controller Manager and the Kubelet, as mentioned here. It made sense. I checked the K8S documentation on where to change the configuration properties and also checked the configuration files on the cluster nodes, but couldn't figure it out.

kubelet: node-status-update-frequency=4s (from 10s)
controller-manager: node-monitor-period=2s (from 5s)
controller-manager: node-monitor-grace-period=16s (from 40s)
controller-manager: pod-eviction-timeout=30s (from 5m)
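
For context, these timings roughly account for the ~6 minutes observed: a dead node is only marked NotReady after node-monitor-grace-period, and its pods are evicted only after a further pod-eviction-timeout (ignoring scheduling and image-pull time):

    defaults:  40s + 5m  ≈ 5m40s before eviction even starts
    proposed:  16s + 30s ≈ 46s

Note that node-monitor-grace-period is expected to be N times the kubelet's node-status-update-frequency (here 16s = 4 × 4s).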

Could someone point out what needs to be done to make the above-mentioned configuration changes permanent, and what the different options for doing so are?

-- Praveen Sripati
configuration
kubernetes

1 Answer

9/17/2018

For the kubelet, change this file on all of your nodes:

/var/lib/kubelet/kubeadm-flags.env

Add the option at the end of, or anywhere on, this line:

KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf --node-status-update-frequency=4s"  <== add this flag (4s per the question; the default is 10s)
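
After the kubelet restart mentioned at the end of this answer, one way to confirm the flag was picked up (the grep pattern is just illustrative):

    ps -ef | grep kubelet | grep -o 'node-status-update-frequency=[^ ]*'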

For the kube-controller-manager, change the following file on the master:

/etc/kubernetes/manifests/kube-controller-manager.yaml

In this section:

  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --allocate-node-cidrs=true
    - --cloud-provider=aws
    - --cluster-cidr=192.168.0.0/16
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    - --node-monitor-period=2s  <== add this line (2s per the question; the default is 5s)
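
The other two controller-manager settings from the question can presumably be added to the same command list in the same way (both are standard kube-controller-manager flags):

    - --node-monitor-grace-period=16s
    - --pod-eviction-timeout=30s

Since this manifest defines a static pod, the kubelet watching /etc/kubernetes/manifests should recreate the kube-controller-manager pod on its own when the file changes.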

On your master, do a sudo systemctl restart docker. On all your nodes, do a sudo systemctl restart kubelet.

The new configs should then take effect.
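
To double-check that the controller manager is now running with the new flags (the component label is what kubeadm sets on its static pods):

    kubectl -n kube-system get pods -l component=kube-controller-manager -o yaml | grep -E 'node-monitor|pod-eviction'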

Hope it helps.

-- Rico
Source: StackOverflow