Kubernetes on VPS with public and private NICs

9/13/2018

I have 3 VPS that each have 2 NICs, one public and one private. I want cluster communication to use the private subnet, but to expose containers on the public one. When I configure the cluster using --apiserver-advertise-address with the private IP, the nodes still all show their public IPs when running kubectl get pods --all-namespaces -o wide.

Output from command:

NAMESPACE         NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE          NOMINATED NODE
heptio-sonobuoy   sonobuoy                             1/3       Error     0          1d        10.244.2.2       k8s-worker2   <none>
kube-system       calico-node-47j4q                    2/2       Running   0          1d        95.179.192.7     k8s-worker1   <none>
kube-system       calico-node-8ttn6                    2/2       Running   2          1d        45.76.143.32     k8s-master    <none>
kube-system       calico-node-dh2d9                    2/2       Running   0          1d        95.179.192.128   k8s-worker2   <none>
kube-system       coredns-78fcdf6894-cjf6p             1/1       Running   1          1d        10.244.0.11      k8s-master    <none>
kube-system       coredns-78fcdf6894-q6zzb             1/1       Running   1          1d        10.244.0.12      k8s-master    <none>
kube-system       etcd-k8s-master                      1/1       Running   1          1d        45.76.143.32     k8s-master    <none>
kube-system       kube-apiserver-k8s-master            1/1       Running   2          1d        45.76.143.32     k8s-master    <none>
kube-system       kube-controller-manager-k8s-master   1/1       Running   2          1d        45.76.143.32     k8s-master    <none>
kube-system       kube-proxy-j58cv                     1/1       Running   0          1d        95.179.192.128   k8s-worker2   <none>
kube-system       kube-proxy-pbnpl                     1/1       Running   1          1d        45.76.143.32     k8s-master    <none>
kube-system       kube-proxy-z7cmm                     1/1       Running   0          1d        95.179.192.7     k8s-worker1   <none>
kube-system       kube-scheduler-k8s-master            1/1       Running   2          1d        45.76.143.32     k8s-master    <none>
-- Luke
kubernetes

2 Answers

9/21/2018

Usually kubelet and apiserver listen on all interfaces, so advertising on the "public" interface works out of the box:

tcp6       0      0 :::10250      :::*        LISTEN      -  # kubelet
tcp6       0      0 :::6443       :::*        LISTEN      -  # kubeapi 
tcp6       0      0 :::30000      :::*        LISTEN      -  # NodePort service
tcp6       0      0 :::10256      :::*        LISTEN      -  # kubeproxy
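
You can check this on your own nodes, for example:

    sudo ss -tlnp | grep -E ':6443|:10250|:10256'   # listening TCP sockets, numeric, with owning process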

You may need to restrict access to the cluster on the edge security appliance if you use public IP addresses for the cluster nodes.
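
If you have no separate edge appliance, host firewall rules on each node can serve the same purpose. A rough sketch with iptables, assuming your private subnet is 10.0.0.0/24 (substitute your own range):

    # restrict the apiserver and kubelet ports to the private subnet
    # (10.0.0.0/24 is a placeholder for your own range)
    iptables -A INPUT -p tcp --dport 6443  -s 10.0.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 10250 -s 10.0.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 6443,10250 -j DROP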

Inside the cluster, traffic between the apiserver and the cluster nodes goes over the subnet specified by the apiserver option --apiserver-advertise-address.
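
For example, assuming the master's private address is 10.0.0.10 (a placeholder), you would initialise the control plane and join the workers against that address:

    # on the master; 10.0.0.10 is a placeholder for its private address
    kubeadm init --apiserver-advertise-address=10.0.0.10

    # on each worker, using the token and CA hash printed by kubeadm init
    kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>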

The following part of the answer is about how the kubelet selects an IP address to represent the node.
You didn't mention your cluster version, so I've picked v1.11, which is what I have on my cluster right now:

There is an issue on GitHub related to this kubelet behaviour: kubelet reports wrong IP address #44702

At the end of the discussion, yujuhong explained why this happens:

kubelet uses the IP address reported by the cloud provider if it exists, or the first non-loopback ipv4 address (code here) if there is no cloud provider. In addition, it could be overwritten by kubelet flags.

I've updated the links in the quote to v1.11. Here is what is mentioned in the code comments for v1.11:

    // 1) Use nodeIP if set
    // 2) If the user has specified an IP to HostnameOverride, use it
    // 3) Lookup the IP from node name by DNS and use the first valid IPv4 address.
    //    If the node does not have a valid IPv4 address, use the first valid IPv6 address.
    // 4) Try to get the IP from the network interface used as default gateway

The kubelet options mentioned in the code comments, copied from the kubelet documentation:

  1. --node-ip string - IP address of the node. If set, kubelet will use this IP address for the node
  2. --hostname-override string - If non-empty, will use this string as identification instead of the actual hostname.
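
So in your case the simplest fix is option 1: set --node-ip on each node to its private address. A sketch for a kubeadm-based install, assuming the node's private address is 10.0.0.11 (a placeholder) and the kubelet drop-in file is /etc/default/kubelet (it is /etc/sysconfig/kubelet on RPM-based distributions):

    # 10.0.0.11 is a placeholder for this node's private address
    echo 'KUBELET_EXTRA_ARGS=--node-ip=10.0.0.11' | sudo tee /etc/default/kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
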
-- VAS
Source: StackOverflow

9/13/2018

Check the routes on your nodes. You can list them like this:

 ip route # or
 netstat -r

If your nodes joined the cluster using the master's private address, you should be fine and all your Kubernetes traffic between nodes and masters should flow over your private network.
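
To double-check from a worker, assuming the master's private address is 10.0.0.10 (a placeholder), you can confirm which interface is used to reach it and that the connections to the apiserver go to the private address:

    # 10.0.0.10 is a placeholder for the master's private address
    ip route get 10.0.0.10      # shows the interface and source IP used to reach the master
    ss -tnp | grep ':6443'      # established connections to the apiserver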

Hope it helps.

-- Rico
Source: StackOverflow