K8s cluster is healthy, but kubelet displays an unusual message repeatedly every minute

2/10/2022

My k8s-env1 cluster is on-premises and running well. I can create/get/describe/delete any k8s object.

However, I found that kubelet logs the following messages every minute, while they do not appear in my other cluster, k8s-env2. Are these messages OK on k8s-env1?

Feb 10 10:39:09 k8s-env1 kubelet[15461]: I0210 10:39:09.808611   15461 kubelet_getters.go:172] status for pod kube-controller-manager-k8s-env1 updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  }]    192.168.1.2 192.168.1.2 [{192.168.1.2}] 2022-02-10 09:32:58 +0800 CST [] [{kube-controller-manager {nil &ContainerStateRunning{StartedAt:2022-02-10 09:40:36 +0800 CST,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2022-02-10 09:23:08 +0800 CST,FinishedAt:2022-02-10 09:38:13 +0800 CST,ContainerID:docker://1cc6e402be458374d365b6e379e7205267279c4da554c2207baca11cc1609be9,}} true 202 k8s.gcr.io/kube-controller-manager:v1.16.15 docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:da7ac5487dc7b6eddfb4fbdf39af92bc065416a7dac147a937a39aff72716fe9 docker://7d0aa1e7ae3e3463347d68c644f45f97933ca819a4231b79a7fedcb5f8792dc6 0xc00189beb6}] Burstable []}
Feb 10 10:39:09 k8s-env1 kubelet[15461]: I0210 10:39:09.808770   15461 kubelet_getters.go:172] status for pod kube-scheduler-k8s-env1 updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-02-10 09:32:58 +0800 CST  }]    192.168.1.2 192.168.1.2 [{192.168.1.2}] 2022-02-10 09:32:58 +0800 CST [] [{kube-scheduler {nil &ContainerStateRunning{StartedAt:2022-02-10 09:40:40 +0800 CST,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2022-02-10 09:23:06 +0800 CST,FinishedAt:2022-02-10 09:38:14 +0800 CST,ContainerID:docker://94860e938310683e1a478d681256e649c00ba74570e70963b76804f60480b7a0,}} true 201 k8s.gcr.io/kube-scheduler:v1.16.15 docker-pullable://k8s.gcr.io/kube-scheduler@sha256:d9156baf649cd356bad6be119a62cf137b73956957604275ab8e3008bee96c8f docker://d626ea52253994ca2ee7d5b61ead84dacb0b99fd8f21b21268d92d53451e09af 0xc001c91189}] Burstable []}
Feb 10 10:39:09 k8s-env1 kubelet[15461]: I0210 10:39:09.808814   15461 kubelet_getters.go:172] status for pod kube-apiserver-k8s-env1 updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-01-17 00:27:28 +0800 CST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-01-28 13:17:18 +0800 CST  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-01-17 00:38:58 +0800 CST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-01-17 00:27:28 +0800 CST  }]    192.168.1.2 192.168.1.2 [{192.168.1.2}] 2022-01-17 00:27:28 +0800 CST [] [{kube-apiserver {nil &ContainerStateRunning{StartedAt:2022-02-10 09:40:33 +0800 CST,} nil} {nil nil &ContainerStateTerminated{ExitCode:137,Signal:0,Reason:Error,Message:,StartedAt:2022-02-10 09:23:05 +0800 CST,FinishedAt:2022-02-10 09:38:24 +0800 CST,ContainerID:docker://fcc0d48a9398656adc3e071b37f0f502ab45f0730e0d9ad51401db2b856fe1f3,}} true 16 k8s.gcr.io/kube-apiserver:v1.16.15 docker-pullable://k8s.gcr.io/kube-apiserver@sha256:58075c15f5978b4b73e58b004bb3a1856ad58a9061ac3075ef860207ba00ac75 docker://95a526b063d00b5eb497dc3280a0cf4610fec31a072685eb7279f5207dcc27b1 0xc00156e74c}] Burstable []}
Feb 10 10:39:10 k8s-env1 kubelet[15461]: I0210 10:39:10.064116   15461 kubelet_network_linux.go:141] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
-- Spark1231
kubelet
kubernetes

1 Answer

2/10/2022

In theory this shouldn't affect any functionality, but to get rid of the message you can try installing an iptables version (1.6.2 or later) that supports the `--random-fully` flag. Something like the following should do.

Run the following on the node where kubelet is running, and reboot once done:

        # Remove the existing (older) iptables packages first
        apt remove --purge -y iptables
        apt autoremove -y
        # clean-install is the apt helper found in Kubernetes' Debian-based
        # images; on a plain Debian host, `apt install -y` works the same way.
        # The pinned 1.6.2-1.1~bpo9+1 builds come from stretch-backports.
        clean-install libip4tc0=1.6.2-1.1~bpo9+1 \
            libip6tc0=1.6.2-1.1~bpo9+1 \
            libiptc0=1.6.2-1.1~bpo9+1 \
            libxtables12=1.6.2-1.1~bpo9+1 \
            iptables=1.6.2-1.1~bpo9+1
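Before and after the upgrade, you can confirm whether the node's iptables is new enough. A minimal sketch, assuming a helper function name of my own choosing and using `sort -V` for the version comparison (the 1.6.2 threshold comes from the answer above):

```shell
# Returns success (0) when the given iptables version is at least 1.6.2,
# the first release supporting --random-fully in MASQUERADE rules.
supports_random_fully() {
    need=1.6.2
    have=$1
    # sort -V orders version strings numerically; if $need sorts first,
    # then $have is greater than or equal to $need.
    [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]
}

supports_random_fully 1.8.7  && echo "1.8.7: --random-fully available"
supports_random_fully 1.4.21 || echo "1.4.21: too old, upgrade iptables"
```

On the node itself you would feed in the real version, e.g. `supports_random_fully "$(iptables --version | awk '{print $2}' | tr -d v)"` (iptables prints something like `iptables v1.8.7`).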
-- Varadharajan Raghavendran
Source: StackOverflow