Set up a Kubernetes worker node behind NAT

11/12/2018

I have set up a Kubernetes cluster using kubeadm.

Environment

  1. Master node installed on a PC with a public IP.
  2. Worker node behind NAT (its interface has a local internal IP, but it needs to be reached from outside using the public IP).

Status

The worker node is able to join the cluster, and when I run

kubectl get nodes

the status of the node is Ready.

Kubernetes can deploy and run pods on that node.
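
For reference, the addresses the control plane has recorded for each node can be inspected with:

kubectl get nodes -o wide

which shows, among other things, an INTERNAL-IP and EXTERNAL-IP column per node.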

Problem

The problem is that I'm not able to access the pods deployed on that node. For example, if I run

kubectl logs <pod-name>

where <pod-name> is the name of a pod deployed on the worker node, I get this error:

Error from server: Get https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage: dial tcp 192.168.0.17:10250: i/o timeout

because it is trying to use the local IP 192.168.0.17, which is not accessible externally.
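
To double-check which address is being dialed, the node's registered addresses can be printed with something like this (the node name below is just a placeholder):

kubectl get node <worker-node-name> -o jsonpath='{.status.addresses}'

If the only entry there is an InternalIP of 192.168.0.17, that matches the address in the error above.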

I noticed that the node has this annotation:

flannel.alpha.coreos.com/public-ip: 192.168.0.17

So I tried to modify the annotation, setting the external IP like this:

flannel.alpha.coreos.com/public-ip: <my_external_ip>

and I can see that the node is correctly annotated, but kubectl still uses 192.168.0.17.
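
For completeness, the annotation was changed with something along these lines (the node name below is just a placeholder):

kubectl annotate node <worker-node-name> flannel.alpha.coreos.com/public-ip=<my_external_ip> --overwrite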

Is there something else that I have to set up on the worker node or in the cluster configuration?

-- Davide
kubernetes

1 Answer

11/13/2018

There were a metric boatload of related questions in the sidebar, and I'm about 90% certain this is a FAQ, but I can't be bothered to triage the duplicates.

Is there something else that I have to set up on the worker node or in the cluster configuration?

No, that situation is not a misconfiguration of your worker Node, nor of your cluster configuration. It is just a side effect of the way Kubernetes handles Pod-centric traffic. It does mean that if you choose to go forward with that setup, you will not be able to use kubectl exec nor kubectl logs (and I think port-forward, too), since those commands are not served by the API server alone; they require a direct connection to the kubelet port on the Node which hosts the Pod you are interacting with. That's primarily to offload that traffic from traveling through the API server, but it can also become a scaling issue if you have a sufficiently large number of exec/log/port-forward/etc. commands happening simultaneously, since TCP ports are not infinite.
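
To make that concrete, the URL in the error message above is the kubelet's own containerLogs endpoint; the failing request is roughly equivalent to running the following against port 10250 on the Node (depending on your kubelet authentication settings it may also require a client certificate or bearer token):

curl -k https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage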

I think it is theoretically possible to have your workstation join the overlay network, since by definition it's not related to the outer network, but I don't have a ton of experience with trying to get an overlay to play nice-nice with NAT, so that's the "theoretically" part.

I have personally gotten WireGuard to work across NAT, meaning you could VPN into your Node's network, but it took some gear turning and is likely more trouble than it's worth.
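
For what it's worth, the NAT-crossing part of that WireGuard setup mostly comes down to having the peer behind NAT keep the tunnel alive; a minimal sketch of the worker-side config, assuming the other end is reachable at some public endpoint (every key, address, and port below is a placeholder), looks roughly like:

[Interface]
PrivateKey = <worker-private-key>
Address = 10.200.0.2/24

[Peer]
PublicKey = <other-end-public-key>
Endpoint = <public-endpoint>:51820
AllowedIPs = 10.200.0.0/24
PersistentKeepalive = 25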

-- mdaniel
Source: StackOverflow