https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ Hi, I ran into trouble following this tutorial. It recommends reconfiguring the kubelet through a ConfigMap, but when I use "kubectl edit node" to modify the kubelet configuration, nothing changes even though the output says "node *** edited", and the ConfigOK condition never shows up in the node status. For what it's worth, the ConfigMap and the corresponding Role and RoleBinding are all created successfully. Is there anything I have missed, or does this tutorial need to be updated? I have tried many times and always get stuck at the "Observe that the Node begins using the new configuration" step. Do I need to put the node into the system:nodes group, or is there something else I missed? Thanks!
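PS: would checking the RBAC side with something like the following be a reasonable way to rule that out? (The ConfigMap and node names here are placeholders, and it assumes my admin user is allowed to impersonate.)
kubectl auth can-i get configmap/my-node-config -n kube-system --as=system:node:<node-name> --as-group=system:nodes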
For a node brought up by kubeadm, dynamic kubelet configuration (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/) does not work as of now (v1.17.0).
Also see https://github.com/kubernetes/kubernetes/issues/67580
There is an option to enable this via kubeadm:
kubeadm alpha kubelet config enable-dynamic --node-name azuretest-2 --kubelet-version 1.17.0
[kubelet] Enabling Dynamic Kubelet Config for Node "azuretest-2"; config sourced from ConfigMap "kubelet-config-1.17" in namespace kube-system
[kubelet] WARNING: The Dynamic Kubelet Config feature is beta, but off by default. It hasn't been well-tested yet at this stage, use with caution.
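As far as I understand, the kubelet also has to be started with --dynamic-config-dir for any config source to be applied at all. Whether kubeadm adds that flag for you here is an assumption on my part, so it is worth checking on the node (path is the kubeadm default):
grep dynamic-config-dir /var/lib/kubelet/kubeadm-flags.env
# if it is missing, append something like --dynamic-config-dir=/var/lib/kubelet/dynamic-config
# to KUBELET_KUBEADM_ARGS in that file, then:
systemctl restart kubelet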
However, this does not work. Even after enabling it as above and then setting the config source with
kubectl edit node ${NODE_NAME}
....
spec:
  configSource:
    configMap:
      kubeletConfigKey: kubelet
      name: my-node-config-dtghm9gbd2
      namespace: kube-system
the node still reports no active config:
[centos@azuretest-1 ~]$ kubectl get no ${NODE_NAME} -o json | jq '.status.config'
null
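The kubelet normally logs why it did or did not pick up a dynamic config source, so inspecting the node is a reasonable next step (journald assumed, as in the systemd setup below):
journalctl -u kubelet | grep -i dynamic
kubectl describe node ${NODE_NAME}   # the Events section at the bottom may also mention kubelet config problems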
What worked: manually editing /var/lib/kubelet/kubeadm-flags.env on every worker node to add the flag you want (see https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ for the available flags), then restarting the kubelet:
[root@azuretest-2 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --container-runtime=remote --container-runtime-endpoint=/var/run/containerd/containerd.sock --resolv-conf=/etc/resolv.conf --max-pods=700"
systemctl daemon-reload
systemctl restart kubelet
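After the restart, the --max-pods change above should be visible in the node's reported capacity (node name taken from the example above):
kubectl get node azuretest-2 -o jsonpath='{.status.capacity.pods}'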