I am trying to set up access to Pods on multiple Nodes with a single Service yaml. The Pods all have the same label (say, label: app), but are distributed across several Nodes instead of on a single Node.
As far as I know, I can set up a Service to forward access to a Pod through a NodePort, like:
spec:
  type: NodePort
  selector:
    label: app
  ports:
  - targetPort: 5000
    nodePort: 30000
where accessing port 30000 on a node forwards to port 5000 on the pod.
If I have pods on multiple nodes, is there a way a client can access a single endpoint, e.g. the Service itself, to get any pod in round-robin? Or does a client need to access a set of pods on a specific node, using that node's IP, as in xx.xx.xx.xx:30000?
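For reference, the sketch above is missing the Service-level port field (which is required) and the list syntax under ports. A complete, valid version might look like this (the Service name and the choice of 5000 for the Service port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # placeholder name
spec:
  type: NodePort
  selector:
    label: app
  ports:
  - port: 5000       # required: the Service's own cluster port
    targetPort: 5000
    nodePort: 30000
```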
Although LoadBalancer is undeniably the recommended solution (especially in a cloud environment), it's worth mentioning that NodePort also has load-balancing capabilities.
The fact that you're accessing your NodePort Service on a particular node doesn't mean that you can only reach Pods that have been scheduled on that particular node.
As you can read in the NodePort Service specification:
Each node proxies that port (the same port number on every Node) into your Service.
So by accessing port 30080 on one particular node, your request does not go directly to some random Pod scheduled on that node. It is proxied to the Service object, which is an abstraction that spans all nodes. This is probably the key point here: your NodePort Service isn't tied in any way to the node whose IP you use to access your pods.
Therefore a NodePort Service is able to route client requests to Pods across the whole cluster. (Strictly speaking, with kube-proxy's default iptables mode the backend is picked randomly rather than in strict round robin, but over many requests the load evens out across pods.)
You can verify this easily using the following Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo $MY_NODE_NAME > /usr/share/nginx/html/index.html"]
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
This will let you check which node served your HTTP request. You may additionally need to scale this Deployment a bit to make sure that all nodes are used:
kubectl scale deployment nginx-deployment --replicas=9
Then verify that your pods are scheduled on different nodes:
kubectl get pods -o wide
List all your nodes:
kubectl get nodes -o wide
and pick the IP address of a node that you want to use to access your pods.
Now you can expose the Deployment by running:
kubectl expose deployment nginx-deployment --type NodePort --port 80 --target-port 80
or, if you want to specify the port number yourself (e.g. as 30080), apply the following NodePort Service definition, as kubectl expose doesn't allow you to specify the exact nodePort value:
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
Then try to access your pods, exposed via the NodePort Service, using the IP of the previously chosen node. You may need to try both normal and private/incognito modes, or even a different browser (a simple refresh may not work, since the browser can reuse a kept-alive connection to the same backend), but eventually you will see that different requests land on pods scheduled on different nodes.
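Instead of refreshing a browser, a quick shell loop makes the distribution visible. This is a sketch: NODE_IP is a placeholder you must replace with a real node IP, and it assumes the nodePort 30080 used above.

```shell
# Send 10 requests to the NodePort and count how often each node name
# (written into index.html by the init container) comes back.
NODE_IP="${NODE_IP:-192.0.2.10}"   # placeholder - set to a real node IP
for i in $(seq 1 10); do
  curl -s --max-time 1 "http://$NODE_IP:30080" || true
done | sort | uniq -c
```

Because each curl call opens a fresh connection, the counts should show responses from pods on several different nodes.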
Keep in mind that if you decide to use NodePort, you won't be able to use well-known ports. Strictly speaking it is feasible, as you may change the default port range (30000-32767) to something like 1-1024 in the kube-apiserver configuration using the --service-node-port-range option, but it's not recommended as it might lead to unexpected issues.
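For illustration only (again, not recommended): on a kubeadm-managed control plane the flag would be added to the kube-apiserver static Pod manifest, typically /etc/kubernetes/manifests/kube-apiserver.yaml (the path assumes a kubeadm setup):

```yaml
# Excerpt of the kube-apiserver static Pod manifest - only the relevant flag shown
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1-1024   # default is 30000-32767
```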
If you are looking for a single entry point to your application and it is running in cloud infrastructure, then you can use a LoadBalancer Service (instead of NodePort), which will assign an external IP to your Service that can be used to access it from external systems.
spec:
  ports:
  - name: httpsPort
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    label: app
  type: LoadBalancer
If you have multiple Services within the same cluster that need to be accessed from external systems, then you can use an Ingress.
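A minimal sketch of such an Ingress (the host, Service name and port are placeholders, and the networking.k8s.io/v1 API assumes a reasonably recent cluster with an Ingress controller installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # placeholder name
spec:
  rules:
  - host: app.example.com      # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder - an existing Service in the cluster
            port:
              number: 443
```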