How to run multiple stateless containers in one pod?

11/2/2017

I have a couple of tiny containers with a very small memory footprint and little traffic. I think it's overkill and too expensive to have a separate pod for each of them.

I currently deploy containers by simply pushing Docker images to the OpenShift Online Container Registry. OpenShift rebuilds and deploys the application as soon as a new image arrives. It works fine, but I just can't find a way to make OpenShift accept multiple images/containers for the same application/pod.

Does anyone know how to run multiple containers in one application/pod?

-- Rotareti
docker
kubernetes
openshift

2 Answers

11/3/2017

I don't know what kind of disadvantages you have in mind when creating multiple pods. The overhead of a Pod vs a Container is negligible.

But putting multiple applications into a single pod clearly has disadvantages:

  • if you want to restart a single container, you need to restart all of them
  • you cannot scale the containers separately, so you cannot run a different number of replicas per service (for HA or load distribution)
  • you have to distinguish the services by port, since service discovery works per pod
    • i.e. with separate pods, multiple HTTP services could each be mapped to port 80 and reached as http://fooservice and http://barservice instead of http://uberpod:8001 and http://uberpod:8002 (see the Service sketch after this list)
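
As a minimal sketch of that per-pod setup (the name fooservice and the ports are the example values from the list above; the selector label app: foo is an assumption), a Service can expose the foo pod on port 80 while its container keeps listening on 8001:

apiVersion: v1
kind: Service
metadata:
  name: fooservice        # clients reach it as http://fooservice
spec:
  selector:
    app: foo              # assumed label on the foo pod
  ports:
  - port: 80              # port clients use
    targetPort: 8001      # port the container actually listens on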

Again, there is almost no overhead of having multiple Pods.

I have no idea how the Kubernetes integration in OpenShift works, but with plain Kubernetes YAML files you could just add another container to the container list:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:             # every entry in this list runs in the same pod
  - name: foo
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
  - name: bar             # a second container, sharing the pod's network and lifecycle
    image: mycontainer:latest
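
Such a file can be created with kubectl create -f pod.yaml (or oc create -f pod.yaml on OpenShift); when the pod is managed by a Deployment or DeploymentConfig, the same container list sits under spec.template.spec.containers.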
-- svenwltr
Source: StackOverflow

1/5/2018

Given that each pod or service consumes an IP address, and that in Kubernetes there is a recommended limit of 10 pods per CPU core, it is definitely a good idea to have multiple containers per pod for microservices.

And in OpenShift Online, your namespace will have a limit on the number of pods.

In addition, you can have multiple services for one pod, but I do not recommend it, as each service also uses an IP address (and a certificate, if you need one).

You can manage the memory of your containers by putting a resource request and limit on each of them. If you do not set a limit, the container configures itself with default values; if you do set limits, the container's initialization should take them into account (this is the case for the containers provided by Red Hat).
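
A minimal sketch of such requests and limits on a single container (the pod and image names and the numbers are placeholders; pick values that fit your workload):

apiVersion: v1
kind: Pod
metadata:
  name: tiny-pod
spec:
  containers:
  - name: tiny-service
    image: mycontainer:latest
    resources:
      requests:              # reserved for the container at scheduling time
        memory: "32Mi"
        cpu: "50m"
      limits:                # hard ceiling enforced at runtime
        memory: "64Mi"
        cpu: "100m"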

For scaling: you should group your containers into pods in a smart way according to your scaling needs. In some cases this may mean one container per pod, but not when a container consumes only 0.0001% of a CPU. If you need scaling, it usually means you need much more CPU than 10% of a core.
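
For illustration, assuming each service is wrapped in its own Deployment (names are placeholders), the replica count always applies to the whole pod template, so every container in the pod is scaled together:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 3               # scales the whole pod, i.e. all containers in it together
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: mycontainer:latest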

To restart a container you may need to restart the pod, but not always, and it can be transparent. Why would you restart a container?

  • A bug in the container? A liveness probe can take care of this for you, and it works at the container level.
  • When upgrading the container's image, a trigger can restart the pod, which is transparent to the application's users with the help of a readiness probe (see the probe sketch below).

    So I think this is not a sufficient argument for requiring one pod per container.
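
A minimal sketch of those two checks, assuming an HTTP application (the endpoints /healthz and /ready and port 8080 are placeholders); both probes are declared per container in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: foo
    image: mycontainer:latest
    livenessProbe:            # a failing liveness probe restarts only this container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
    readinessProbe:           # the pod receives traffic only while this succeeds
      httpGet:
        path: /ready
        port: 8080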

-- Marc Jadoul
Source: StackOverflow