VPN to access cluster services/pods: cannot ping anything except the OpenVPN server

12/1/2019

I'm trying to set up a VPN to access my cluster's workloads without exposing public endpoints.

The service is deployed using the OpenVPN Helm chart, on a Kubernetes cluster managed by Rancher v2.3.2, with two changes:

  • replaced the L4 load balancer with simple service discovery
  • edited the ConfigMap to allow TCP to go through the load balancer and reach the VPN (see the sketch below)
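
(A sketch of what the second bullet usually amounts to, assuming the TCP path goes through ingress-nginx; the tcp-services ConfigMap mechanism is real, but the namespace and service name below are assumptions, not values taken from this cluster:)

# hypothetical: map external TCP port 443 to the openvpn service
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "443": "default/openvpn:443"
EOF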

What does and doesn't work:

  • OpenVPN client can connect successfully
  • Cannot ping public servers
  • Cannot ping Kubernetes services or pods
  • Can ping the OpenVPN pod's cluster IP, 10.42.2.11

My files

vars.yml

---
replicaCount: 1
nodeSelector:
  openvpn: "true"   # run the VPN pod only on nodes labelled openvpn=true
openvpn:
  # Rancher's default cluster CIDRs: pods in 10.42.0.0/16, services in 10.43.0.0/16
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
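
(For reference, a minimal way to apply this values file, assuming Helm 3 and the then-current stable/openvpn chart; the release name is an assumption:)

helm install openvpn stable/openvpn -f vars.yml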

The connection works, but I can't reach any IP inside the cluster; the only IP I can reach is the OpenVPN pod's cluster IP.
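
(Given that only the VPN endpoint answers, a first thing worth checking is whether the pod is set up to forward and masquerade tunnel traffic at all; <openvpn-pod> is a placeholder, and this assumes iptables is available in the chart's image:)

# the chart normally installs a MASQUERADE rule for the tunnel subnet;
# its absence would explain packets never coming back
kubectl exec -it <openvpn-pod> -- iptables -t nat -L POSTROUTING -n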

openvpn.conf:

server 10.240.0.0 255.255.0.0
verb 3

key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem

key-direction 0
keepalive 10 60
persist-key
persist-tun

proto tcp
port 443
dev tun0
status /tmp/openvpn-status.log

user nobody
group nogroup

push "route 10.42.2.11 255.255.255.255"

push "route 10.42.0.0 255.255.0.0"


push "route 10.43.0.0 255.255.0.0"



push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"

client.ovpn

client
nobind
dev tun

remote xxxx xxx tcp
CERTS CERTS

dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net

I don't really know how to debug this.

I'm using Windows.
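
(One server-side way to narrow this down, assuming tcpdump exists in the OpenVPN image; <openvpn-pod> is a placeholder: ping a pod IP from the client and watch both interfaces. ICMP visible on tun0 but never on eth0 would point at forwarding being disabled inside the pod.)

# watch the tunnel side
kubectl exec -it <openvpn-pod> -- tcpdump -ni tun0 icmp
# then the cluster side, in a second terminal
kubectl exec -it <openvpn-pod> -- tcpdump -ni eth0 icmp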

Output of the route command on the client:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         livebox.home    255.255.255.255 U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     256    0        0 eth0
192.168.1.17    0.0.0.0         255.255.255.255 U     256    0        0 eth0
192.168.1.255   0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth0
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth1
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth1
0.0.0.0         10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.2.11      10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.43.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.240.0.1      10.240.0.5      255.255.255.255 U     0      0        0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U     256    0        0 lo  
127.0.0.1       0.0.0.0         255.255.255.255 U     256    0        0 lo  
127.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 lo  
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  

And finally, ifconfig:

eth0:
        inet 192.168.1.17  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2a01:cb00:90c:5300:603c:f8:703e:a876  prefixlen 64  scopeid 0x0<global>
        inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2  prefixlen 128  scopeid 0x0<global>
        inet6 fe80::603c:f8:703e:a876  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:d8:61:31:22:32  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.240.0.6  netmask 255.255.255.252  broadcast 10.240.0.7
        inet6 fe80::b9cf:39cc:f60a:9db2  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:ff:42:04:53:4d  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 1500
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0xfe<compat,link,site,host>
        loop  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
-- ogdabou
kubernetes
rancher
vpn

2 Answers

4/26/2020

For anybody looking for a working sample, this goes into your OpenVPN deployment, alongside your container definition:

initContainers:
- name: openvpn-sidecar
  image: busybox
  # enable packet forwarding in the pod's network namespace before
  # the OpenVPN container starts
  command:
  - sysctl
  args:
  - -w
  - net.ipv4.ip_forward=1
  securityContext:
    privileged: true   # sysctl -w needs a privileged container
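
(This works because net.ipv4.ip_forward is a namespaced sysctl, so the initContainer flips it for the pod's own network namespace; it has to run privileged because Kubernetes does not treat ip_forward as a safe sysctl. Once the pod restarts you can verify it; <openvpn-pod> is a placeholder:)

# should print 1 after the initContainer has run
kubectl exec -it <openvpn-pod> -- cat /proc/sys/net/ipv4/ip_forward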
-- John Fedoruk
Source: StackOverflow

12/4/2019

I don't know if it is the RIGHT answer, but I got it to work by adding a sidecar to my pods that sets net.ipv4.ip_forward=1, which solved the issue.

-- ogdabou
Source: StackOverflow