Getting "patched (no change)" when patching Kubernetes (AWS EKS) nodes

4/16/2020

My goal is to override the default kubelet configuration in the running cluster from

"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,

to

 "imageGCHighThresholdPercent": 60,
 "imageGCLowThresholdPercent": 40,

One possible option is to apply a patch to each node.

I'm using the following command to get the kubelet config via kubectl proxy:

curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-20-135.eu-west-1.compute.internal/proxy/configz" | python3 -m json.tool

The output is

{
  "kubeletconfig": {

    ...

    "imageGCHighThresholdPercent": 85,
    "imageGCLowThresholdPercent": 80,

    ...
  }
}
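To see just these two fields, the same endpoint can also be filtered with jq (assuming jq is installed):

curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-20-135.eu-west-1.compute.internal/proxy/configz" \
  | jq '.kubeletconfig | {imageGCHighThresholdPercent, imageGCLowThresholdPercent}'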

Here is the command I'm using to update these two values:

kubectl patch node ip-172-31-20-135.eu-west-1.compute.internal -p '{"kubeletconfig":{"imageGCHighThresholdPercent":60,"imageGCLowThresholdPercent":40}}'

Unfortunately, kubectl returns

node/ip-172-31-20-135.eu-west-1.compute.internal patched (no change)

As a result, the change has no effect.

Any thoughts on what I'm doing wrong?

Thanks

-- Denis Voloshin
kubectl
kubernetes

2 Answers

4/17/2020

Since you are using EKS, you have to configure this through the Amazon Machine Image (AMI) of your worker nodes. An AMI provides the information required to launch an instance: you must specify an AMI when you launch an instance, you can launch multiple instances from a single AMI when you need multiple instances with the same configuration, and you can use different AMIs when you need instances with different configurations.

First, create the folder /var/lib/kubelet and put a kubeconfig template file into it with the following content:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: CERTIFICATE_AUTHORITY_FILE
    server: MASTER_ENDPOINT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"

Then create the template file /etc/systemd/system/kubelet.service with the following content:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
   --address=0.0.0.0 \
   --authentication-token-webhook \
   --authorization-mode=Webhook \
   --allow-privileged=true \
   --cloud-provider=aws \
   --cluster-dns=DNS_CLUSTER_IP \
   --cluster-domain=cluster.local \
   --cni-bin-dir=/opt/cni/bin \
   --cni-conf-dir=/etc/cni/net.d \
   --container-runtime=docker \
   --max-pods=MAX_PODS \
   --node-ip=INTERNAL_IP \
   --network-plugin=cni \
   --pod-infra-container-image=602401143452.dkr.ecr.REGION.amazonaws.com/eks/pause-amd64:3.1 \
   --cgroup-driver=cgroupfs \
   --register-node=true \
   --kubeconfig=/var/lib/kubelet/kubeconfig \
   --feature-gates=RotateKubeletServerCertificate=true \
   --anonymous-auth=false \
   --client-ca-file=CLIENT_CA_FILE \
   --image-gc-high-threshold=60 \
   --image-gc-low-threshold=40
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

You have to add the flags --image-gc-high-threshold and --image-gc-low-threshold and specify the proper values.

--image-gc-high-threshold int32    The percent of disk usage after which image garbage collection is always run. (default 85)
--image-gc-low-threshold int32     The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. (default 80)
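After writing the unit file, reload systemd and restart kubelet; the new values can then be verified through the same configz endpoint the question uses. A sketch (replace <node-name> with your own node):

sudo systemctl daemon-reload
sudo systemctl restart kubelet.service

kubectl proxy --port=8001 &
curl -sSL "http://localhost:8001/api/v1/nodes/<node-name>/proxy/configz" \
  | python3 -m json.tool | grep -i imagegc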

Please take a look at eks-worker-node-ami.

-- MaggieO
Source: StackOverflow

4/16/2020

Patching the node object is not working because those configurations are not part of the node object.

The way to achieve this is to update the kubelet config file on the Kubernetes nodes and restart the kubelet process. systemctl status kubelet should tell you whether kubelet was started with a config file, and where that file is located.

root@kind-control-plane:/var/lib/kubelet# systemctl status kubelet
  kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/kind/systemd/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2020-04-14 08:43:14 UTC; 2 days ago
     Docs: http://kubernetes.io/docs/
 Main PID: 639 (kubelet)
    Tasks: 20 (limit: 2346)
   Memory: 59.6M
   CGroup: /docker/f01f57e1ef7aa7a1a8197e0e79be15415c580da33a7d048512e22418a88e0317/system.slice/kubelet.service
           └─639 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --node-ip=172.17.0.2 --fail-swap-on=false

As can be seen above, in a cluster set up by kubeadm, kubelet was started with a config file located at /var/lib/kubelet/config.yaml.
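The config file path can also be read directly from the running process, which avoids squinting at wrapped status output (a convenience sketch, assuming a standard /proc layout):

tr '\0' '\n' < /proc/$(pgrep -o kubelet)/cmdline | grep -- '--config'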

Edit that config file to add:

imageGCHighThresholdPercent: 60
imageGCLowThresholdPercent: 40

Restart kubelet using systemctl restart kubelet.service.

If the cluster's kubelet was not started with a config file, create a new config file and pass it to kubelet at startup; a minimal sketch follows.
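Here is what such a file might look like, assuming the v1beta1 kubelet config API (the field names match the configz output in the question):

cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 60
imageGCLowThresholdPercent: 40
EOF

kubelet would then be started with --config=/var/lib/kubelet/config.yaml in addition to its existing flags.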

-- Arghya Sadhu
Source: StackOverflow