Kubernetes Max number of pods with NodePort

9/29/2017

I am deploying multiple pods, each one has to be exposed. I learned that I can expose those using a service with type NodePort.

The problem I am facing here is that a port is assigned per pod, cluster-wide. This limits the number of pods I can expose to roughly 65,000 (the number of Linux ports).
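For context, a minimal NodePort Service looks like the sketch below (the names, labels, and port numbers are illustrative, not from the original question). The key point is that the `nodePort` is reserved on every node in the cluster, not just the node running the pod, which is why the allocation is cluster-wide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-pod-svc          # illustrative name
spec:
  type: NodePort
  selector:
    app: my-pod             # assumed label on the pod to expose
  ports:
  - port: 80                # service port inside the cluster
    targetPort: 8080        # container port on the pod
    nodePort: 30001         # reserved on ALL nodes, not only the pod's node
```

Note also that by default Kubernetes only allocates NodePorts from a configurable range (30000–32767 unless changed via the API server's `--service-node-port-range` flag), which is considerably smaller than the full port space.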

I wonder if there is a way to expose a pod with a port only on the worker node on which it is running, e.g. 10.10.0.30:30001. That way, port 30001 would remain available on the other worker nodes for other pods. I understand that if the pod were to die and be re-scheduled elsewhere, it would have to be allocated a free port again.

Is this even possible? Otherwise, is there an alternative way to expose a large number of pods (200,000+)?

Thanks!

-- Nuriel Shem-Tov
kubernetes

2 Answers

9/29/2017

You can use `externalIPs` in the Service YAML, as below. Here `x.x.x.x` can be the IP of any Kubernetes node; port 80 will be exposed on that IP, forwarding to container port 8080:

```yaml
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: nginx
  type: LoadBalancer
  externalIPs:
  - x.x.x.x
```

-- Pawan Kumar
Source: StackOverflow

9/29/2017

There are already limits on the supported configurations: 100 pods per node, and 150,000 pods per cluster. See the docs about building large clusters.

What you're trying to achieve isn't possible at the moment.

-- Robert Lacok
Source: StackOverflow