Trouble understanding dmesg output on nodes of a Kubernetes cluster on ESX VMs

1/21/2020

I configured a Kubernetes test cluster running on VMs on an ESX host, deployed with Kubespray. In the configuration file, I told Kubespray to set up the cluster with Calico as the CNI, using the default CIDR.

When I access a VM's console through vSphere, or run dmesg on my VMs, I see this output on my master:

[2866556.027837] IPVS: rr: TCP 10.233.13.12:443 - no destination available
[2866556.857464] IPVS: rr: TCP 10.233.13.12:443 - no destination available
[2866557.029471] IPVS: rr: TCP 10.233.13.12:443 - no destination available
[2866688.881160] IPVS: __ip_vs_del_service: enter
[2866689.018851] IPVS: __ip_vs_del_service: enter
[2866689.023030] IPVS: __ip_vs_del_service: enter
[2866689.188072] IPVS: __ip_vs_del_service: enter
[2866689.416153] IPVS: __ip_vs_del_service: enter
[2866689.420334] IPVS: __ip_vs_del_service: enter
[2866692.005599] IPVS: __ip_vs_del_service: enter
[2866692.010260] IPVS: __ip_vs_del_service: enter
[2866692.257045] IPVS: __ip_vs_del_service: enter
[2866692.265034] IPVS: __ip_vs_del_service: enter
[2866692.267455] IPVS: __ip_vs_del_service: enter
[2866692.267493] IPVS: __ip_vs_del_service: enter
[2866916.815472] IPVS: rr: TCP 10.233.49.127:443 - no destination available
[2866916.820841] IPVS: rr: TCP 10.233.49.127:443 - no destination available
[2866916.823418] IPVS: rr: TCP 10.233.49.127:443 - no destination available
[2866916.824167] IPVS: rr: TCP 10.233.49.127:443 - no destination available
[2866916.826243] IPVS: rr: TCP 10.233.49.127:443 - no destination available

and this output on my worker:

[1207664.350374] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207664.422584] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207667.108560] net_ratelimit: 13 callbacks suppressed
[1207667.108567] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207667.217235] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207667.274593] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207667.331658] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207668.218597] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207668.334613] IPVS: rr: TCP 10.233.3.61:8080 - no destination available
[1207675.500914] IPVS: rr: TCP 10.233.49.141:8086 - no destination available
[1207676.502566] IPVS: rr: TCP 10.233.49.141:8086 - no destination available
[1207676.628377] IPVS: rr: TCP 10.233.49.141:8086 - no destination available
[1208009.456587] blk_update_request: I/O error, dev fd0, sector 0
[1208009.924355] blk_update_request: I/O error, dev fd0, sector 0
[1208058.699578] blk_update_request: I/O error, dev fd0, sector 0
[1208240.706522] IPVS: Creating netns size=2048 id=289
[1208241.432437] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[1208241.445496] IPv6: ADDRCONF(NETDEV_UP): cali6ef7aa1f11f: link is not ready
[1208241.447406] IPv6: ADDRCONF(NETDEV_CHANGE): cali6ef7aa1f11f: link becomes ready
[1208241.447469] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

I really have trouble understanding these logs. The ones with IPVS: rr messages seem to be linked to Calico, and every IP address corresponds to a service IP in the cluster. But I didn't configure any nodes under [calico-rr] in my inventory file, since that section is optional and only exists to improve BGP communication in large clusters.

[all]
m1 ansible_host=x.x.x.x ip=x.x.x.x
m2 ansible_host=x.x.x.x ip=x.x.x.x
w1 ansible_host=x.x.x.x ip=x.x.x.x
w2 ansible_host=x.x.x.x ip=x.x.x.x
w3 ansible_host=x.x.x.x ip=x.x.x.x

[kube-master]
m1
m2

[etcd]
m1
m2
w1

[kube-node]
w1
w2
w3

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

From what I understand, this output appears during the configuration of new pods and services, for example when I apply the YAML files to install Linkerd. Is it linked to the readiness probes? Do the messages keep appearing until the service / pods are ready?
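One way to check that theory from the master (a sketch; it assumes kubectl access, and the service name below is a placeholder to substitute) is to correlate the IPs from dmesg with Services and their Endpoints:

```shell
# "no destination available" means IPVS knows the virtual service (the
# ClusterIP) but has no real servers behind it, i.e. the Kubernetes
# Service currently has no ready endpoints.

# Which Service owns one of the IPs seen in dmesg?
kubectl get svc --all-namespaces | grep 10.233.13.12

# Does it have ready endpoints yet? An empty ENDPOINTS column means the
# pods are not passing their readiness probes, so IPVS has nothing to
# forward to.
SVC=linkerd-web   # placeholder: substitute the service found above
kubectl get endpoints --all-namespaces | grep "$SVC"
```

If the endpoints show up a few seconds later, the messages were indeed just pods coming up.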

The real problem is that these logs spam the console in vSphere, and I really don't know how to get rid of them.
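Since these are kernel messages, one way to keep them off the vSphere console without losing them entirely is to lower the console log level (a sketch, assuming CentOS 7; the exact syslog level of the IPVS lines can vary, so lower the number until they stop; they stay readable via dmesg and journalctl -k):

```shell
# Current levels: console, default, minimum, boot-time default.
cat /proc/sys/kernel/printk

# Restrict the console to critical messages and above (a console level
# of 3 prints only levels 0-2: emerg, alert, crit). The messages still
# go to the kernel ring buffer and to journald.
sudo dmesg -n 3

# Persist across reboots (assumption: the node honors /etc/sysctl.d):
echo 'kernel.printk = 3 4 1 7' | sudo tee /etc/sysctl.d/20-printk.conf
sudo sysctl -p /etc/sysctl.d/20-printk.conf
```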

I searched other threads for more information, but what I found didn't help much.

UPDATE:

I have more insight into the IPVS: rr errors: https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/ They are linked to the IPVS load-balancing mode used by kube-proxy.
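The IPVS tables that kube-proxy programs can also be inspected directly on a node (a sketch; it assumes the ipvsadm package is installed). Note that the "rr" in the dmesg lines is IPVS's round-robin scheduler, unrelated to the [calico-rr] route-reflector group in the inventory:

```shell
# List every IPVS virtual service with its scheduler ("rr") and the
# real servers (pod endpoints) behind it.
sudo ipvsadm -Ln

# Show a single virtual service, using one of the IPs from dmesg.
# If no "->" real-server lines appear under it, IPVS has no
# destination for that VIP and logs "no destination available".
sudo ipvsadm -Ln -t 10.233.13.12:443
```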

But I still don't know how to keep these logs off the ESX console in vSphere.

UPDATE 2:

For the Kubernetes installation with Kubespray, I just followed the guide below and changed the inventory file as described above.

VM OS: CentOS 7.7 (1908)

Kubernetes version: 1.16.3

Kubespray version: release-2.12

Kubespray Getting Started Guide: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md

-- Ryctus