Configuring kubelet on live and future nodes

3/15/2019

I have a live Kubernetes cluster, version 1.12, in which I need to change the default pod hard eviction values for every kubelet. I've read through https://kubernetes.io/docs/setup/independent/kubelet-integration/ but it falls short for my particular use case: it seems to implicitly assume a static set of nodes in the cluster. In my case I have a cluster autoscaler managing several AWS Auto Scaling groups, so I need a way to reconfigure the kubelet on each live node as well as on any future nodes that are dynamically started (via kubeadm join).

My thought is to manually edit the kubelet-config-1.12 ConfigMap to change the eviction thresholds, then update the live nodes using the method in the article listed above.
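
Roughly, the edit I have in mind would look like this (the evictionHard keys are from the KubeletConfiguration v1beta1 API; the values are just placeholders for whatever thresholds I end up choosing):

$ kubectl -n kube-system edit configmap kubelet-config-1.12

Then, inside the kubelet key of that ConfigMap:

evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"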

Is there any issue with manually editing the kubelet-config-1.12 ConfigMap? Will the edits get carried over to the kubelet-config-1.13 ConfigMap when the cluster is upgraded to that version?

Or if anyone has a better solution I'd like to hear it.

-- Michael Albers
kubernetes

2 Answers

4/19/2019

In addition to what aurelius wrote in his answer:

The DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but some additional steps are needed to actually turn dynamic kubelet configuration on.

As mentioned in the documentation (though it is easily missed):

  • The Kubelet’s --dynamic-config-dir flag must be set to a writable directory on the Node.

and from kubelet -h:

--dynamic-config-dir string
The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health.
The Kubelet will create this directory if it does not already exist.
The path may be absolute or relative; relative paths start at the Kubelet's current working directory.
Providing this flag enables dynamic Kubelet configuration.
The DynamicKubeletConfig feature gate must be enabled to pass this flag; this gate currently defaults to true because the feature is beta.

The best place to set this flag on Ubuntu is /etc/default/kubelet:

KUBELET_EXTRA_ARGS=--dynamic-config-dir=/var/lib/kubelet-dynamic

Restart the kubelet service after that:

$ sudo systemctl restart kubelet

$ ps aux | grep kubelet
root      8610  4.1  1.1 1115992 90652 ?       Ssl  14:57   0:46 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf  
--kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml  
--cgroup-driver=cgroupfs --network-plugin=cni  
--pod-infra-container-image=k8s.gcr.io/pause:3.1  
--dynamic-config-dir=/var/lib/kubelet-dynamic

After that, the kubelet creates a directory tree under this path to maintain its checkpoints:

$ sudo tree /var/lib/kubelet-dynamic/
/var/lib/kubelet-dynamic/
└── store
    ├── checkpoints
    │   └── 009e03e7-62ad-11e9-9043-42010a9c0003
    │       └── 12399979
    │           └── kubelet
    └── meta
        ├── assigned
        └── last-known-good

From this point on, everything should work as described in the documentation.
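
For example, once a node has been pointed at a ConfigMap via Node.Spec.ConfigSource, you can check which configuration the kubelet actually adopted (the node name below is a placeholder):

$ kubectl get node <node-name> -o jsonpath='{.status.config}'

The assigned and lastKnownGood entries reported there correspond to the meta/assigned and meta/last-known-good checkpoints in the tree above.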

-- VAS
Source: StackOverflow

4/19/2019

It seems like what you are looking for is already available; you can find it in the official documentation.

The basic workflow for configuring a Kubelet in a live cluster is as follows:

  • Write a YAML or JSON configuration file containing the Kubelet’s configuration.
  • Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
  • Update the Kubelet’s corresponding Node object to use this ConfigMap.
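
As a rough sketch, those steps translate to kubectl commands like the following (my-kubelet-config, kubelet-config.yaml and NODE_NAME are placeholders, not fixed names):

$ kubectl -n kube-system create configmap my-kubelet-config \
    --from-file=kubelet=kubelet-config.yaml \
    --append-hash

$ kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"my-kubelet-config-<hash>\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"

where <hash> is the suffix that --append-hash appended to the ConfigMap name.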

There are some limitations, though:

While it is possible to change the configuration by updating the ConfigMap in-place, this causes all Kubelets configured with that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps as immutable by convention, aided by kubectl’s --append-hash option, and incrementally roll out updates to Node.Spec.ConfigSource.

For your autoscaling nodes it would have to be confirmed whether they pick up the updated ConfigMap by default, but even if they do not, it can probably be achieved with some tinkering. I can try to confirm that soon if this answer does not solve your problem.
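
One quick way to verify on a freshly scaled-up node: kubeadm join downloads the kubelet-config-1.12 ConfigMap and writes it to /var/lib/kubelet/config.yaml (the kubeadm default path), so the edited thresholds should show up there:

$ sudo grep -A 4 'evictionHard:' /var/lib/kubelet/config.yaml

If the values match what you put in the ConfigMap, new nodes are picking the change up.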

-- aurelius
Source: StackOverflow