cilium configuration in IPv6 mode

1/21/2019

I am using Cilium with Kubernetes 1.12 in direct routing mode, and it is working fine with IPv4. We are using the cilium/cilium:no-routes image together with cloudnativelabs/kube-router to advertise the routes through BGP.
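
For context, the Cilium side of this setup runs in direct routing mode, i.e. with tunnelling disabled, and leaves route installation to kube-router (hence the no-routes image). A minimal sketch of the agent flag involved, with the rest of the DaemonSet args omitted:

cilium-agent --tunnel=disabled   # other agent/kvstore flags unchanged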

Now I would like to configure the same in an IPv6-only Kubernetes cluster. However, I found that the kube-router pod is crashing and not creating the route entries for the --pod-network-cidr.

Following are the lab details:

  • Master node: IPv6 private IP fd0c:6493:12bf:2942::ac18:1164
  • Worker node: IPv6 private IP fd0c:6493:12bf:2942::ac18:1165
  • The public IPs of both nodes are IPv4, as I don't have public IPv6 addresses.

The IPv6-only K8s cluster is created as follows:

master:

sudo kubeadm init --kubernetes-version v1.13.2 --pod-network-cidr=2001:2::/64 --apiserver-advertise-address=fd0c:6493:12bf:2942::ac18:1164 --token-ttl 0

worker:

sudo kubeadm join [fd0c:6493:12bf:2942::ac18:1164]:6443 --token 9k9sdq.el298rka0sjqy0ha --discovery-token-ca-cert-hash sha256:b830c22dc21561c9e9287275ecc675ec6de012662fabde3bd1aba03be66562eb

kubectl get nodes -o wide
NAME      STATUS     ROLES    AGE   VERSION   INTERNAL-IP                      EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
master    NotReady   master   38h   v1.13.2   fd0c:6493:12bf:2942::ac18:1164   <none>        Ubuntu 18.10   4.18.0-13-generic   docker://18.6.0
worker1   Ready      <none>   38h   v1.13.2   fd0c:6493:12bf:2942::ac18:1165   <none>        Ubuntu 18.10   4.18.0-10-generic   docker://18.6.0

The master node is NotReady because the CNI is not configured yet, so the coredns pods are not up yet.

Now install Cilium in IPv6 mode.
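
Besides the steps below, the Cilium agent itself also has to be switched to IPv6. A minimal sketch of the DaemonSet args this would involve, assuming a Cilium release that exposes --enable-ipv4/--enable-ipv6 (older releases used --disable-ipv4 instead, so the exact flag names are version dependent):

cilium-agent \
  --enable-ipv4=false \
  --enable-ipv6=true \
  --tunnel=disabled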

1. Run etcd on the master node.

sudo docker run -d --network=host \
--name "cilium-etcd" k8s.gcr.io/etcd:3.2.24 \
etcd -name etcd0 \
-advertise-client-urls http://[fd0c:6493:12bf:2942::ac18:1164]:4001 \
-listen-client-urls http://[fd0c:6493:12bf:2942::ac18:1164]:4001 \
-initial-advertise-peer-urls http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-listen-peer-urls http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-initial-cluster-state new

Here fd0c:6493:12bf:2942::ac18:1164 is the master node's IPv6 address.
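
To confirm etcd is reachable on the IPv6 address, the etcdctl bundled in the same image can be used (cluster-health is the v2-API health command in etcd 3.2; the endpoint matches step 1):

sudo docker exec cilium-etcd etcdctl \
    --endpoints=http://[fd0c:6493:12bf:2942::ac18:1164]:4001 cluster-health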

2. sudo mount bpffs /sys/fs/bpf -t bpf
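
Optionally, the BPF mount can be made persistent across reboots with an fstab entry (same mount as above):

echo "bpffs /sys/fs/bpf bpf defaults 0 0" | sudo tee -a /etc/fstab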

3. Run kube-router.
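
kube-router is run here only for its BGP/routing function, with the parts that would overlap with Cilium switched off. A sketch of the container args I would expect in the kube-router DaemonSet (values are assumptions based on the standard kube-router flags, not copied verbatim from my manifest):

kube-router \
  --run-router=true \
  --run-firewall=false \
  --run-service-proxy=false \
  --enable-cni=false \
  --enable-pod-egress=false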

Expected result:

Kube-router adds a routing entry for the pod CIDR of each of the other nodes in the cluster, with the node's public IP set as the gateway. The following result is obtained for IPv4: a routing entry is created on node-1 for node-2 (public IP 10.40.139.196, pod CIDR 10.244.1.0/24), where the device is the interface the public IP is bound to.

$ ip route show

10.244.1.0/24 via 10.40.139.196 dev ens4f0.116 proto 17

Note: For the IPv6-only Kubernetes cluster, --pod-network-cidr=2001:2::/64.
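
So for IPv6 the expectation is an analogous entry in ip -6 route on one node pointing at the other node's address, something like the line below (the per-node pod CIDR and the interface are only illustrative; whatever range kube-controller-manager actually allocates out of 2001:2::/64 would appear there):

$ ip -6 route show
2001:2:0:1::/80 via fd0c:6493:12bf:2942::ac18:1165 dev ens4f0.116 proto 17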

Actual result:

kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-86c58d9df4-g7nvf         0/1     ContainerCreating   0          22h
coredns-86c58d9df4-rrtgp         0/1     ContainerCreating   0          38h
etcd-master                      1/1     Running             0          38h
kube-apiserver-master            1/1     Running             0          38h
kube-controller-manager-master   1/1     Running             0          38h
kube-proxy-9xb2c                 1/1     Running             0          38h
kube-proxy-jfv2m                 1/1     Running             0          38h
kube-router-5xjv4                0/1     CrashLoopBackOff    15         73m
kube-scheduler-master            1/1     Running             0          38h
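
The immediate crash reason should be visible in the kube-router pod itself; for completeness, it can be pulled with the usual commands (pod name taken from the listing above):

kubectl -n kube-system logs kube-router-5xjv4 --previous
kubectl -n kube-system describe pod kube-router-5xjv4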

Question:

Can kube-router use the private IPv6 addresses that are used by the Kubernetes cluster, instead of the public IPs, which in our case are IPv4?

-- user2639661
cilium
cni
kubernetes

0 Answers