Exposing a bare-metal Kubernetes cluster to the internet

1/14/2019

I am trying to set up my own single-node Kubernetes cluster on a bare-metal dedicated server. I am not that experienced in DevOps, but I need some services deployed for my own project. I already did a cluster setup with Juju and conjure-up Kubernetes over LXD, and the cluster is running fine.

$ juju status

Model                         Controller                Cloud/Region         Version  SLA          Timestamp
conjure-canonical-kubern-3b3  conjure-up-localhost-db9  localhost/localhost  2.4.3    unsupported  23:49:09Z

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active      1  easyrsa                jujucharms  195  ubuntu
etcd                   3.2.10   active      3  etcd                   jujucharms  338  ubuntu
flannel                0.10.0   active      2  flannel                jujucharms  351  ubuntu
kubeapi-load-balancer  1.14.0   active      1  kubeapi-load-balancer  jujucharms  525  ubuntu  exposed
kubernetes-master      1.13.1   active      1  kubernetes-master      jujucharms  542  ubuntu
kubernetes-worker      1.13.1   active      1  kubernetes-worker      jujucharms  398  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.213.117.66                   Certificate Authority connected.
etcd/0*                   active    idle   1        10.213.117.171  2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        10.213.117.10   2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        10.213.117.238  2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.213.117.123  443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   5        10.213.117.172  6443/tcp        Kubernetes master running.
  flannel/1*              active    idle            10.213.117.172                  Flannel subnet 10.1.83.1/24
kubernetes-worker/0*      active    idle   7        10.213.117.136  80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle            10.213.117.136                  Flannel subnet 10.1.27.1/24

Entity  Meter status  Message
model   amber         user verification pending

Machine  State    DNS             Inst id        Series  AZ  Message
0        started  10.213.117.66   juju-b03445-0  bionic      Running
1        started  10.213.117.171  juju-b03445-1  bionic      Running
2        started  10.213.117.10   juju-b03445-2  bionic      Running
3        started  10.213.117.238  juju-b03445-3  bionic      Running
4        started  10.213.117.123  juju-b03445-4  bionic      Running
5        started  10.213.117.172  juju-b03445-5  bionic      Running
7        started  10.213.117.136  juju-b03445-7  bionic      Running

I also deployed a Hello World application that outputs a greeting on port 8080 inside the pod, and nginx-ingress to route traffic to that service based on the requested host.

NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-696b6b59bd-fznwr   1/1     Running   1          176m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/example-service   NodePort    10.152.183.53   <none>        8080:30450/TCP   176m
service/kubernetes        ClusterIP   10.152.183.1    <none>        443/TCP          10h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world   1/1     1            1           176m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-world-696b6b59bd   1         1         1       176m
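
The ingress rule behind this is roughly the following (a minimal sketch rather than the exact manifest I applied; it assumes the rule simply maps the test host testhost.com to example-service on port 8080, and the resource name is just illustrative):

kubectl apply -f - <<'EOF'
# Sketch of the ingress rule: route requests for testhost.com to the
# example-service shown above (extensions/v1beta1 matches this 1.13 cluster).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - host: testhost.com
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 8080
EOF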

When I do curl localhost I get connection refused, as expected, which still looks fine since the service is not exposed outside the cluster. When I curl kubernetes-worker/0 at its public address 10.213.117.136 on port 30450 (which I got from kubectl get all):

$ curl 10.213.117.136:30450
Hello Kubernetes!

Everything works like a charm (which is expected). And when I do:

curl -H "Host: testhost.com" 10.213.117.136
Hello Kubernetes!

It works like a charm again! That means the ingress controller is successfully routing traffic on port 80 to the correct service based on the host rule. At this point I am 100% sure the cluster works as it should.

Now I am trying to access this service externally over the internet. When I load <server_ip>, obviously nothing loads, as the worker lives inside its own LXD subnet. Therefore I was thinking of forwarding port 80 from the server's eth0 to this IP, so I added this rule to iptables:

sudo iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 10.213.117.136

(For the sake of the example, let's route everything, not only port 80.) Now when I open http://<server_ip> on my computer, it loads!
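
A slightly less blunt variant of the same idea, still hard-coding the worker IP, would be something like this (a sketch; it only redirects HTTP/HTTPS arriving on eth0):

# Make sure the host forwards packets (usually already enabled with LXD).
sudo sysctl -w net.ipv4.ip_forward=1

# Redirect only HTTP/HTTPS arriving on the public interface to the worker.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.213.117.136:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.213.117.136:443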

So the real question is how to do this in production? Should I set up this forwarding rule in iptables? Is that a normal approach or a hacky solution, and is there something "standard" which I am missing? The thing is, adding this rule with a static worker-node IP makes the cluster completely static: the IP will eventually change, and if I remove or add worker units it will stop working. I was thinking about writing a script which obtains this IP address from juju, like this:

$ juju status kubernetes-worker/0 --format=json | jq '.machines["7"]."dns-name"'
"10.213.117.136"

and adds it to iptables, which is a more acceptable solution than a hardcoded IP, but I still feel it's tricky and there must be a better way.
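
Roughly, such a refresh script would look like this (a sketch; it assumes a single worker unit and naively flushes the whole nat PREROUTING chain before re-adding the rule):

#!/usr/bin/env bash
# Sketch of the refresh-script idea: look up the worker's current address
# from juju and repoint the DNAT rule at it. Assumes a single worker unit.
set -euo pipefail

WORKER_IP=$(juju status kubernetes-worker/0 --format=json \
  | jq -r '.machines | to_entries | .[0].value."dns-name"')

sudo iptables -t nat -F PREROUTING
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination "${WORKER_IP}:80"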

As a last idea I could run HAProxy outside of the cluster, directly on the machine, and just forward traffic to all available workers. This might also work, but I still don't know what the correct solution is and what is usually used in this case. Thank you!
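
The HAProxy idea would boil down to something like this on the host (a sketch; it assumes HAProxy is installed on the machine, and the worker address is the current juju-assigned one, so it would still need to be kept in sync):

# Sketch of the HAProxy idea: listen on the host's port 80 and spread
# traffic across the worker nodes (only one exists here right now).
cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
frontend http-in
    bind *:80
    default_backend k8s-workers

backend k8s-workers
    balance roundrobin
    server worker0 10.213.117.136:80 check
EOF
sudo systemctl reload haproxy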

-- Milos Mosovsky
bare-metal-server
juju
kubernetes

1 Answer

1/14/2019

So the real question is how to do this in production?

The normal way to do this in a production system is to use a Service.

The simplest case is when you just want your application to be accessible from outside on your node(s). In that case you can use a Service of type NodePort. This creates the iptables rules necessary to forward traffic from the host IP address to the pod(s) backing the service.
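
For example, the NodePort Service already present in your question would look roughly like this (a sketch; the port numbers are taken from your kubectl output, but the selector label is assumed):

kubectl apply -f - <<'EOF'
# Sketch of a NodePort Service: expose port 8080 of the hello-world pods
# on port 30450 of every node. The selector label is an assumption.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30450
EOF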

If you have a single node (which is not recommended in production!), you're ready at this point.

If you have multiple nodes in your Kubernetes cluster, all of them would be configured by Kubernetes to provide access to the service (your clients could use any of them to reach it). However, you'd still have to solve the problem of how clients learn which nodes are available to be contacted...

There are several ways to handle this:

  • use a protocol understood by the client to publish the currently available IP addresses (for example DNS),

  • use a floating (failover, virtual, HA) IP address managed by some software on your Kubernetes nodes (for example pacemaker/corosync), and direct the clients to this address,

  • use an external load-balancer, configured separately, to forward traffic to some of the operating nodes,

  • use an external load-balancer, configured automatically by Kubernetes via a cloud provider integration (by using a Type LoadBalancer Service), to forward traffic to some of the operating nodes (sketched below).
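
For illustration, the last option only differs on the Service side; it would look something like this (a sketch with an illustrative name; on bare metal the external IP stays pending unless some component implements the load-balancer integration):

kubectl apply -f - <<'EOF'
# Sketch of a LoadBalancer-type Service. Without a cloud provider (or an
# equivalent bare-metal integration) the external IP remains <pending>,
# but a NodePort is still allocated as a fallback.
apiVersion: v1
kind: Service
metadata:
  name: example-service-lb
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
EOF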

-- Laszlo Valko
Source: StackOverflow