Kubernetes pods - why are there sometimes multiple processes?

3/29/2019

My understanding is that, if a pod is configured to have one container, that container will run its "main" process as PID 1 and that's pretty much it. My pods have only one container each, and they very often have multiple processes running (always copies of the same process) - why does this happen?

On one cluster I have:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.3  0.4 276668 76076 ?        Ssl  16:50   0:48 python manage.py drive_consumer_worker
root        19  0.0  0.0  34432  2756 ?        Rs   20:28   0:00 ps aux

On another cluster (running the same Deployment), I have:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  1.2  0.3 1269420 104388 ?      Ssl  Mar16 240:12 python manage.py drive_consumer_worker
root        26  0.0  0.2 1044312 84160 ?       S    Mar16   0:01 python manage.py drive_consumer_worker
root        30  0.0  0.0  34440  2872 ?        Rs   20:30   0:00 ps aux

As you can see, the memory size is significant enough to indicate that it's a "real" process, but I don't know what to do to continue debugging. I also don't see any pattern between the number of pod replicas defined and the process count.
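One way to dig further (a sketch, assuming a Linux container with a procps-style ps available) is to ask ps for parent PIDs, which shows which process spawned which:

```shell
# List PID, parent PID, and full command line for every visible process.
# A worker spawned by the main process will report PID 1 (or another
# python PID) as its PPID, rather than being an independent process.
ps -o pid,ppid,args
```

Run via kubectl exec against the pod, this makes it obvious whether the extra python process is a child of PID 1.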

Snippet from the deployment definition:

      containers:
      - args:
        - newrelic-admin run-program python manage.py drive_consumer_worker
        command:
        - /bin/bash
        - -c

What is going on here?

-- s g
kubernetes

1 Answer

3/29/2019

It really depends on the parent process: if it doesn't spawn any children, then PID 1 is all you'll have in the container. In this case, it looks like python manage.py drive_consumer_worker is spawning child processes, so whether (and how many) extra processes appear in the container is up to the application itself, not Kubernetes.
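To illustrate the point, here is a minimal, hypothetical sketch of a parent process forking one worker child, the way a consumer command might. Both processes show up in ps aux as copies of the same command line, much like PIDs 1 and 26 in the output above:

```python
import multiprocessing
import os
import time

def consume():
    # Stand-in for real worker logic. The forked child inherits the
    # parent's command line, so `ps` shows a second identical
    # "python ..." entry while it runs.
    time.sleep(0.2)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=consume)
    worker.start()
    print("parent pid:", os.getpid())
    print("child pid:", worker.pid)
    worker.join()
```

If the application (or a library it uses, such as a task-queue or agent wrapper) does something like this internally, the extra processes are expected and harmless; the process count is controlled by the application's own configuration, not by the Deployment's replica count.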

-- Rico
Source: StackOverflow