Containers with IPv6 addresses can't connect to the outside in a k8s/Calico environment

5/12/2020

I am trying to test IPv6 connectivity in a Kubernetes environment and installed the Calico network plugin. The issue is that containers can't ping the IPv6 gateway or the other addresses of the cluster nodes. The versions are Kubernetes v1.18.2 and Calico v1.12 (I also tried v1.13). The configuration is as follows:

CentOS 7, kernel 4.4 (upgraded)
IPv6 forwarding is enabled:
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
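
For reference, one way to apply these settings persistently on CentOS 7 (the file name under /etc/sysctl.d/ is arbitrary):

cat <<'EOF' > /etc/sysctl.d/99-ipv6-forwarding.conf
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
EOF
sysctl --system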

Calico config:

[root@k8s-master-01 ~]# calicoctl get ipp -owide
NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
default-ipv4-ippool   10.244.0.0/16   true   Never      Never       false      all()      
default-ipv6-ippool   fc00:f00::/24   true   Never      Never       false      all()      
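
For reference, the v6 pool above corresponds to an IPPool resource roughly like the following sketch (the NAT column in the output maps to natOutgoing):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  cidr: fc00:f00::/24
  natOutgoing: true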

Within the pod, I can see that an IPv6 address is allocated:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet 10.244.36.196  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::a8c6:c1ff:fe61:258c  prefixlen 64  scopeid 0x20<link>
        inet6 fc00:fd8:4bce:9a48:4ab7:a333:5ec8:c684  prefixlen 128  scopeid 0x0<global>
        ether aa:c6:c1:61:25:8c  txqueuelen 0  (Ethernet)
        RX packets 23026  bytes 3522721 (3.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24249  bytes 3598501 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-worker-01 ~]# ip -6 route show
fc00:fd8:4bce:9a48:4ab7:a333:5ec8:c684 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::ecee:eeff:feee:eeee dev eth0 metric 1024 pref medium

I also captured traffic with tcpdump on the host and can see some ICMP requests coming in on the cali66e9f9aafee interface, but there appears to be no further processing. I checked ip6tables and saw that no packets reach the masquerade chain:

[root@k8s-worker-01 ~]# ip6tables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    80 cali-PREROUTING  all      *      *       ::/0                 ::/0                 /* cali:6gwbT8clXdHdC1b1 */

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 791 packets, 63280 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-OUTPUT  all      *      *       ::/0                 ::/0                 /* cali:tVnHkvAo15HuiPy0 */

Chain POSTROUTING (policy ACCEPT 791 packets, 63280 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-POSTROUTING  all      *      *       ::/0                 ::/0                 /* cali:O3lYWMrLQYEMJtB5 */

Chain cali-OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-fip-dnat  all      *      *       ::/0                 ::/0                 /* cali:GBTAv2p5CwevEyJm */

Chain cali-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-fip-snat  all      *      *       ::/0                 ::/0                 /* cali:Z-c7XtVd2Bq7s_hA */
  796 63680 cali-nat-outgoing  all      *      *       ::/0                 ::/0                 /* cali:nYKhEzDlr11Jccal */

Chain cali-PREROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    80 cali-fip-dnat  all      *      *       ::/0                 ::/0                 /* cali:r6XmIziWUJsdOK6Z */

Chain cali-fip-dnat (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain cali-fip-snat (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain cali-nat-outgoing (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all      *      *       ::/0                 ::/0                 /* cali:Ir_z2t1P6-CxTDof */ match-set cali60masq-ipam-pools src ! match-set cali60all-ipam-pools dst
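
One way to narrow this down further (a sketch): confirm that the IP set referenced by that MASQUERADE rule actually contains the v6 pool, and watch the rule's counter while pinging from a pod.

ipset list cali60masq-ipam-pools          # should list fc00:f00::/24
watch -n1 'ip6tables -t nat -vnL cali-nat-outgoing'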

I have tried many times without success. Did I miss something?

regards

-- techer
calico
containers
ipv6
kubernetes
networking

2 Answers

5/28/2020

I had exactly the same issue with a similar CentOS 7 setup.

Besides following the instructions on the Calico website and making sure that all nodes had IPv6 forwarding enabled, the solution was to set the environment variable CALICO_IPV6POOL_NAT_OUTGOING to true for install-cni in the initContainers section and for calico-node in the containers section.

In my case I also had to set IP_AUTODETECTION_METHOD to the actual interface that carries the public v6 IP address. A sketch of both changes is shown below.
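
A minimal excerpt of what that looks like in the calico.yaml manifest (the interface name eth0 is an assumption; use whichever interface holds your public v6 address):

      initContainers:
        - name: install-cni
          env:
            # Enable NAT for the IPv6 pool at CNI install time
            - name: CALICO_IPV6POOL_NAT_OUTGOING
              value: "true"
      containers:
        - name: calico-node
          env:
            # Same variable on the main calico-node container
            - name: CALICO_IPV6POOL_NAT_OUTGOING
              value: "true"
            # Pin detection to the interface with the public v6 IP
            # ("eth0" is an assumption; adjust to your host)
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0"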

I also explicitly added --proxy-mode=iptables to the kube-proxy parameters (I'm not sure whether that is the default); see the sketch below.
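
On a kubeadm cluster, the equivalent setting lives in the KubeProxyConfiguration (stored in the kube-proxy ConfigMap in kube-system); a minimal sketch:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Pin the proxy mode explicitly rather than relying on the default
mode: "iptables"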

I hope this helps.

-- Kimses
Source: StackOverflow

5/18/2020

Enabling IPv6 on your cluster isn't as simple as what you did. Just configuring IPv6 in your network isn't going to work with Kubernetes.

The first and most important point here is that IPv4/IPv6 dual-stack is an alpha feature. As with any alpha feature, it may not work as expected.

Please go through this document to better understand the prerequisites for making it work in your cluster (you have to add a feature gate; a sketch follows below).
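
For illustration, assuming a kubeadm-based cluster on v1.18 (where IPv6DualStack is alpha), the gate would be enabled roughly like this at cluster creation; note the kubelets need the same feature gate as well:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  # Alpha in v1.18; must be enabled on the control-plane components
  IPv6DualStack: true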

There is also some work being done to make it possible to bootstrap a Kubernetes cluster with dual-stack using kubeadm, but it's not usable yet and there is no ETA for it.

There are some examples of IPv6 and dual-stack setups with other networking plugins in this repository.

This project serves two primary purposes: (i) study and validate IPv6 support in Kubernetes and associated plugins, and (ii) provide a dev environment for implementing and testing additional functionality (e.g. dual-stack).

-- mWatney
Source: StackOverflow