Failed to make sure ip set: &{{KUBE-NODE-PORT-TCP ...} exist, error: error creating ipset KUBE-NODE-PORT-TCP, error: exit status 2

7/4/2018

kubernetes version: v1.11.0

I'm running kube-proxy in ipvs mode and I'm getting these errors:

Jul 03 21:55:08 docker02 kube-proxy[13003]: E0703 21:55:08.316098   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-NODE-PORT-TCP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport TCP port for masquerade purpose} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-NODE-PORT-TCP, error: exit status 2
Jul 03 21:55:13 docker02 kube-proxy[13003]: E0703 21:55:13.205413   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-NODE-PORT-UDP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport UDP port for masquerade purpose} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-NODE-PORT-UDP, error: exit status 2
Jul 03 21:55:18 docker02 kube-proxy[13003]: E0703 21:55:18.233756   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-LOAD-BALANCER-LOCAL hash:ip,port inet 1024 65536 0-65535 Kubernetes service load balancer ip + port with externalTrafficPolicy=local} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-LOAD-BALANCER-LOCAL, error: exit status 2
Jul 03 21:55:23 docker02 kube-proxy[13003]: E0703 21:55:23.256248   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-CLUSTER-IP hash:ip,port inet 1024 65536 0-65535 Kubernetes service cluster ip + port for masquerade purpose} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-CLUSTER-IP, error: exit status 2
Jul 03 21:55:28 docker02 kube-proxy[13003]: E0703 21:55:28.271973   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-LOAD-BALANCER-SOURCE-CIDR hash:ip,port,net inet 1024 65536 0-65535 Kubernetes service load balancer ip + port + source cidr for packet filter purpose} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-LOAD-BALANCER-SOURCE-CIDR, error: exit status 2
Jul 03 21:55:33 docker02 kube-proxy[13003]: E0703 21:55:33.285863   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-LOAD-BALANCER-SOURCE-CIDR hash:ip,port,net inet 1024 65536 0-65535 Kubernetes service load balancer ip + port + source cidr for packet filter purpose} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-LOAD-BALANCER-SOURCE-CIDR, error: exit status 2
Jul 03 21:55:36 docker02 kube-proxy[13003]: I0703 21:55:36.485507   13003 proxier.go:701] Stale udp service kube-system/kube-dns:dns -> 10.254.0.2
Jul 03 21:55:36 docker02 kube-proxy[13003]: E0703 21:55:36.535070   13003 ipset.go:156] Failed to make sure ip set: &{{KUBE-NODE-PORT-LOCAL-UDP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport UDP port with externalTrafficPolicy=local} map[] 0xc4205e5e40} exist, error: error creating ipset KUBE-NODE-PORT-LOCAL-UDP, error: exit status 2

Following the source code, I constructed the command manually, e.g.:

sudo ipset create KUBE-LOAD-BALANCE-LOCAL hash:ip,port family inet hashsize 1024 maxelem 65535 -exist

and it succeeds, and I can then inspect the ipset, e.g.:

[k8s@docker02 ds]$ sudo ipset list
Name: KUBE-LOAD-BALANCE-LOCAL
Type: hash:ip,port
Revision: 2
Header: family inet hashsize 1024 maxelem 65535
Size in memory: 16528
References: 0
Members:

I have no idea what is causing this problem.

-- HikoQiu
kube-proxy
kubernetes

1 Answer

7/5/2018

After kube-proxy starts in ipvs mode, the appropriate ipset entries are created automatically.

It looks like an ipset named KUBE-LOAD-BALANCE-LOCAL already exists on the system when you try to run kube-proxy in ipvs mode.

Try deleting the KUBE-LOAD-BALANCE-LOCAL ipset entry, and after that run kube-proxy in ipvs mode again.

To delete the ipset entry, you can use the command below:

ipset destroy KUBE-LOAD-BALANCE-LOCAL
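If several of the KUBE-* sets are stale, the same cleanup can be applied to all of them at once. A minimal sketch, assuming ipset is installed, you have root, and kube-proxy is stopped while you run it:

```shell
# Destroy every ipset whose name starts with KUBE-, so that kube-proxy
# can recreate them cleanly on its next start.
# "ipset list -n" prints set names only, one per line; the loop is a
# harmless no-op if no matching sets exist on this host.
for set in $(ipset list -n 2>/dev/null | grep '^KUBE-'); do
    ipset destroy "$set"
done
```

Note that `ipset destroy` refuses to remove a set that is still referenced by iptables rules, so flush any rules pointing at the set first.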

Update:

@DaveMcNeill is right. This is a known bug on RedHat/CentOS systems.

It has been fixed in Kubernetes by the commit below:

fix ipset creation fails on centos. issue 65461

In this case, you should wait for a release that includes this commit, or use another OS (Debian, for example).
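Since the bug is environment-specific, it can help to record which ipset build the node is running and compare it against the versions discussed in the issue thread. A trivial check, guarded so it does not fail on hosts without ipset:

```shell
# Print the installed ipset version (and its kernel protocol version)
# for comparison with the issue report; fall back to a note if the
# ipset binary is not on PATH.
ipset version 2>/dev/null || echo "ipset not installed"
```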

-- Akar
Source: StackOverflow