Does a NodePort service load balance requests between deployments?

9/27/2018

So I am setting up an entire stack on Google Cloud and I have several components that need to talk with each other, so I came up with the following flow:

Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)

The Ingress divides the requests nicely among my 2 Apache instances, but the Apache instances don't divide their requests nicely among my 2 App pods.

The services (Apache and App) are both NodePort services.

What I am trying to achieve is that the services (Apache and App) load balance the requests they receive among their linked deployments. I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.

App service yaml looks like this:

apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081
-- darkownage
google-cloud-platform
kubernetes

1 Answer

9/27/2018

If you are going through the clusterIP and kube-proxy is using the default iptables proxy mode, the NodePort service will pick a backend at random (Kubernetes 1.1 or later). In Kubernetes 1.0 and earlier, the default was the userspace proxy mode, which does round robin. If you want to control this behavior, you can use the ipvs proxy mode.
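The practical difference between the two behaviors can be sketched with a tiny simulation (the pod names are made up; real kube-proxy implements this with iptables/IPVS rules, not Python):

```python
import random
from itertools import cycle

# Two hypothetical backend pods behind the Service.
backends = ["preprocessor-pod-1", "preprocessor-pod-2"]

def iptables_mode_pick(rng, backends):
    """iptables proxy mode: each new connection picks a backend at random."""
    return rng.choice(backends)

def round_robin_picker(backends):
    """userspace mode (<= 1.0) / ipvs 'rr' scheduler: strict rotation."""
    return cycle(backends)

rng = random.Random(42)  # seeded only so the demo is repeatable
random_picks = [iptables_mode_pick(rng, backends) for _ in range(6)]

rr = round_robin_picker(backends)
rr_picks = [next(rr) for _ in range(6)]

print("iptables-style:", random_picks)  # may favor one pod for a while
print("round-robin:  ", rr_picks)       # strictly alternates
```

With only two backends and a handful of connections, the random pick can easily look uneven, which matches the behavior described in the question.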

When I say clusterIP I mean the IP address that is only understood by the cluster such as the one below:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
http-svc     NodePort    10.109.87.179    <none>        80:30723/TCP     5d
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          69d

When you specify NodePort, it also acts as a mesh across all of your cluster nodes. In other words, all the nodes in your cluster will listen on their external IP on that particular port; however, you'll only get a response directly from your application if a pod happens to be running on that particular node. So you can potentially set up an external load balancer that points its backends at that specific NodePort, and traffic would be forwarded according to a health check on the port.
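If you do want deterministic round robin rather than the iptables mode's random pick, kube-proxy can be switched to ipvs mode. A minimal sketch of the relevant part of a KubeProxyConfiguration (assuming kube-proxy reads its configuration from a file or ConfigMap, and that the nodes have the required IPVS kernel modules loaded):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin; IPVS also supports other schedulers such as "lc" (least connection)
```

kube-proxy needs to be restarted for a mode change to take effect.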

I'm not sure about your case: is it possible that you are not going through the clusterIP?

-- Rico
Source: StackOverflow