I have a very simple web server (a Node.js app) that listens on port 8080. That server goes inside a container. Only a few of these servers fit on a machine, because they allocate a lot of memory.
I want to deploy multiple instances of that container (say around 100, maybe more) and have them exposed on the same externally visible IP address, at different ports. So say ip_address:10314, ip_address:12605, ip_address:23040, etc.
Can this kind of thing be architected in Kubernetes?
You can use a `NodePort` service to expose the containers, if you don't need the service to also provide load balancing (which it sounds like you don't). Note that `nodePort` values must fall within the cluster's service node port range, which defaults to 30000-32767. Here's an example YAML config for a `NodePort` service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service-name
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31112
  selector:
    run: name-of-app
```
(If you want to expose an app that runs on a port other than 80 inside the container, change `port` and `targetPort`.)
Rather than hard-coding everything, you could use an automated build tool, or a set of shell scripts, to populate the `nodePort` and `run` fields automatically. That would let you avoid port collisions.
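As a minimal sketch of that idea (the base port, instance count, and `name-of-app` label are placeholders, not anything specific to your cluster), a shell loop can stamp out one Service manifest per instance, deriving each `nodePort` from a distinct index so they can't collide with each other:

```shell
#!/bin/sh
# Generate one NodePort Service manifest per app instance.
# BASE_PORT, COUNT, and the selector label are assumptions; adjust for your cluster.
# Keep BASE_PORT + COUNT within the service node port range (30000-32767 by default).
BASE_PORT=31000
COUNT=3   # e.g. 100 in your case

i=0
while [ "$i" -lt "$COUNT" ]; do
  cat <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-service-$i
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: $((BASE_PORT + i))
  selector:
    run: name-of-app-$i
---
EOF
  i=$((i + 1))
done
```

You could pipe the output straight to `kubectl apply -f -` to create all the Services in one go.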
Well, you can use a reverse proxy such as Nginx as a frontend for all your containers. Every request is sent to Nginx, which forwards it to the right container based on the port, host name, or URL of the incoming request.
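As a rough sketch of the port-based variant (the listen ports match the examples from the question, and the `container-1`/`container-2` upstream addresses are made-up placeholders), each externally visible port gets its own `server` block that proxies to one container:

```nginx
# One server block per externally visible port; each proxies to one container.
# The upstream hostnames below are illustrative placeholders.
http {
    server {
        listen 10314;
        location / {
            proxy_pass http://container-1:8080;
        }
    }
    server {
        listen 12605;
        location / {
            proxy_pass http://container-2:8080;
        }
    }
}
```

This keeps you free to use arbitrary external ports, since Nginx itself isn't bound by Kubernetes' node port range.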
You can use a Service of type `LoadBalancer` in front of each Pod, with one container per Pod. Then you control the `nodePort` like this:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-001
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    nodePort: 31001
  selector:
    app: pod-001
```
The `nodePort` is exposed on every node and should work the way you expect. Scaling up would then require you to create a new Service+(Deployment)+Pod+Container combination for each instance, and you'd have to manually make sure that the `nodePort` doesn't collide with those of other Services.
Cheers