How can I enable external access to my Kubernetes service via the master with Calico on GCP?

3/7/2019

I have a Kubernetes cluster with one master and one worker, with Calico deployed from here with no changes to the manifests. The master has an internal IP address of 10.132.0.30 and I am trying to expose my service (running on the worker) on the master as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  externalIPs: [10.132.0.30]
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

curl http://10.132.0.30 from the master works as expected, but curling the master's external IP address from my laptop hangs, even though I can see the incoming connections with tcpdump:

# tcpdump -i eth0 dst 10.132.0.30 and dst port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
22:52:16.692059 IP cpc102582-walt20-2-0-cust85.13-2.cable.virginm.net.62882 > fail5-luke-master.c.jetstack-luke.internal.http: Flags [S], seq 3014275997, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 181016377 ecr 0,sackOK,eol], length 0
...

Running tcpdump on other interfaces suggests that packets are reaching nginx but not returning (cali3561fc118c0 is the interface of my nginx Pod on the worker, in the root network namespace, and 192.168.1.4 is the Pod's assigned IP):

# tcpdump -i cali3561fc118c0 dst 192.168.1.4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali3561fc118c0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:00:23.626170 IP 192.168.0.1.65495 > 192.168.1.4.http-alt: Flags [S], seq 2616662679, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 181480911 ecr 0,sackOK,eol], length 0
...
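As a further check (a sketch I am adding here, not part of the original post), one could also capture traffic leaving the Pod on the same interface; if SYNs arrive but no SYN-ACKs head back toward the client, the return path is being dropped or routed asymmetrically:

```shell
# Interface name and Pod IP taken from the capture above.
# SYN-ACKs from 192.168.1.4 that never reach the client would
# indicate an asymmetric or blocked return path.
tcpdump -ni cali3561fc118c0 src 192.168.1.4 and tcp port 80
```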

I guess there are many possible problems but is there anything obvious I am missing?

EDIT: I have followed the advice in the Calico docs here, with no luck.

Kubernetes version: 1.13.4

-- dippynark
google-cloud-platform
kubernetes
project-calico

1 Answer

3/8/2019

I hadn't set --cluster-cidr on kube-proxy. Setting it meant that kube-proxy knew to masquerade external traffic; otherwise the return path would be asymmetric (replies were always bouncing off-node to the worker): https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L841-L845
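For reference, a sketch of where this setting lives (assuming a kubeadm-style cluster; the exact mechanism depends on how kube-proxy was deployed): the flag corresponds to the clusterCIDR field of the KubeProxyConfiguration stored in the kube-proxy ConfigMap, and must match the cluster's actual pod network CIDR (192.168.0.0/16 is Calico's default, consistent with the 192.168.1.4 Pod IP above):

# kubectl -n kube-system edit configmap kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "192.168.0.0/16"  # must match the pod network CIDR

After editing, the kube-proxy Pods need to be restarted to pick up the change.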

-- dippynark
Source: StackOverflow