Expose a deployment via a NodePort service on DigitalOcean Kubernetes

6/19/2019

I'm implementing a solution in Kubernetes for several clients, and I want to monitor my clusters with Prometheus. However, because this can scale quickly and I want to reduce costs, I will use Prometheus federation to scrape the different Kubernetes clusters, which means I need to expose my Prometheus deployment.

I already have this working with a LoadBalancer service exposing my Prometheus deployment, but that approach adds an extra expense to my infrastructure (a DigitalOcean load balancer).

Is it possible to do this with a NodePort service, exposing a port on my cluster, something like this:

XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090

where my master Prometheus could use this URL to scrape all the "slave" Prometheus instances?
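
For context, this is roughly the kind of federation scrape job I have in mind on the master (a minimal sketch; the `match[]` selector and scrape interval are just examples, not my real config):

```
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'      # federation endpoint of the slave Prometheus
    params:
      'match[]':
        - '{job=~".+"}'            # example selector: pull all job series
    static_configs:
      - targets:
          - 'XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090'
```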

I have already tried this, but I can't reach the port on my cluster; something is blocking it. I even deleted my firewall to make sure nothing was interfering with this setup, but still nothing.

This is my service:

```
kubectl describe service my-nodeport-service
Name:                     my-nodeport-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nodeport-service","namespace":"default"},"spec":{"ports":[{"na...
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.245.162.125
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30800/TCP
Endpoints:                10.244.2.220:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```
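
For reference, this is roughly the manifest behind that service, reconstructed from the describe output above (a sketch, not the exact applied file):

```
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80          # ClusterIP port
      targetPort: 80    # container port
      nodePort: 30800   # port opened on every node
```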


Can anybody help me please?

-- Luís Serra
digital-ocean
kubernetes
kubernetes-service

1 Answer

7/3/2019

You can set up host XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090 to act as your load balancer with Nginx.

Try setting up an Nginx TCP load balancer.

Note: you will be using the Nginx stream module, and if you want to use open-source Nginx rather than Nginx Plus, you might have to compile your own Nginx with the --with-stream option.

Example config file:

```
events {
    worker_connections  1024;
}

stream {
    # Backend pool; connections are distributed round-robin by default
    upstream stream_backend {
        server dhcp-180.example.com:446;
        server dhcp-185.example.com:446;
        server dhcp-186.example.com:446;
        server dhcp-187.example.com:446;
    }

    # Listen on TCP 446 and proxy to the pool
    server {
        listen     446;
        proxy_pass stream_backend;
    }
}
```

After running Nginx, you should see results like the following.


The host lb.example.com acts as the load balancer with Nginx.

In this example Nginx is configured to use round-robin, so every new connection ends up at a different host/container.

Note: the container hostname is the same as the node hostname; this is due to hostNetwork.
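
For illustration, a minimal pod spec with hostNetwork enabled might look like this (the pod name and image are placeholders, not part of the setup described above):

```
# With hostNetwork: true the pod shares the node's network namespace,
# so its hostname matches the node and its ports bind directly on the node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork-example
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```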

There are some drawbacks to this solution:

  • defining hostNetwork reserves the host’s port(s) for all the containers running in the pod
  • with a single load balancer you have a single point of failure
  • every time a node is added to or removed from the cluster, the load balancer configuration has to be updated

This way, one could set up a Kubernetes cluster to route ingress/egress TCP connections from/to outside the cluster.

Useful post: load-balancer-tcp.

NodePort documentation: nodePort.

-- MaggieO
Source: StackOverflow