How to expose a Kubernetes service on-prem using 443/80

4/29/2019

Is it possible to expose a Kubernetes service on ports 443/80 on-premises?

I know some ways to expose services in Kubernetes:

1. NodePort - The default port range is 30000-32767, so we cannot access the service on 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.

2. Host network - Forces the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns, etc.

3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we run Kubernetes on-premises, we cannot use this option. MetalLB, which lets you create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.

Do you know of any other way to expose a service in Kubernetes on ports 443/80 on-premises? I'm looking for a "Kubernetes solution", not an external reverse proxy in front of the cluster.

Thanks.

-- Shainberg
kubernetes
kubernetes-ingress

4 Answers

4/29/2019

The idea of a hostNetwork proxy is actually not bad; the OpenShift Router uses that approach, for example. You dedicate two or three nodes to run the proxy and use DNS load balancing in front of them.

And you can still use kube-dns with hostNetwork by setting dnsPolicy: ClusterFirstWithHostNet, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
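
A minimal sketch of such a proxy pod, assuming an nginx image and an illustrative node label for the dedicated proxy nodes:

apiVersion: v1
kind: Pod
metadata:
  name: edge-proxy                        # illustrative name
spec:
  hostNetwork: true                       # bind directly to the node's 80/443
  dnsPolicy: ClusterFirstWithHostNet      # keep cluster DNS despite hostNetwork
  nodeSelector:
    node-role.kubernetes.io/proxy: ""     # assumed label on the two or three proxy nodes
  containers:
    - name: proxy
      image: nginx:1.15                   # any reverse proxy image would do
      ports:
        - containerPort: 80
        - containerPort: 443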

-- Vasily Angapov
Source: StackOverflow

10/11/2019

You are probably running a kubeadm on-premises Kubernetes setup with an nginx ingress controller on Unix/Linux hosts and can't safely expose ports in the restricted system port range (0-1023).

You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or use existing load balancers if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. F5 LBs).

Then you can configure the load balancers to forward your 443/80 requests to ports 30443/30080 on your cluster nodes, which are handled by your cluster's ingress controller.
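
A sketch of the Kubernetes side of that setup: a NodePort Service pinned to 30080/30443 in front of the ingress controller. The namespace and label selector below are assumptions; adjust them to match your controller deployment.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx                  # assumed namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080                       # load balancer forwards :80 here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443                       # load balancer forwards :443 here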

-- Martin Peter
Source: StackOverflow

4/29/2019

IMHO ingress is the best way to do this on prem.

We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.

Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress.

Here's an example of the daemonset manifest:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true  # use the node's network namespace so 80/443 bind directly on the host
      containers:
        - name: nginx-ingress-lb
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          ports:
            - name: http
              hostPort: 80
              containerPort: 80
              protocol: TCP
            - name: https
              hostPort: 443
              containerPort: 443
              protocol: TCP
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
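
For completeness, here is a minimal sketch of the per-app Ingress mentioned above; the hostname, service name, and port are placeholders:

apiVersion: extensions/v1beta1        # Ingress API group used by controllers of this vintage
kind: Ingress
metadata:
  name: my-app                        # placeholder name
spec:
  rules:
    - host: my-app.example.com        # DNS record pointing at your cluster nodes
      http:
        paths:
          - backend:
              serviceName: my-app     # placeholder backend Service
              servicePort: 80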
-- switchboard.op
Source: StackOverflow

4/29/2019

Use an ingress controller as the entrypoint to services in the Kubernetes cluster. Run the ingress controller on port 80 or 443. You need to define ingress rules for each backend service that you want to access from outside. The ingress controller then routes clients to the services based on the paths defined in the ingress rules.

If you need to allow access over HTTPS, then you need to obtain TLS certificates for your DNS names, load them into Secrets, and reference them in the ingress rules.
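
A hedged sketch of such an ingress rule, assuming a Secret named my-app-tls holding the certificate and key, plus placeholder hostnames, paths, and services:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes the nginx ingress controller
spec:
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls             # Secret containing tls.crt and tls.key
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /api                   # placeholder path
            backend:
              serviceName: api-service   # placeholder backend Service
              servicePort: 8080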

The most popular one is the nginx ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.

-- P Ekambaram
Source: StackOverflow