Run containers which intentionally exit periodically

8/20/2019

How can I have Kubernetes automatically restart a container that purposefully exits in order to pick up new data from environment variables?

I have a container running on a Kubernetes cluster which operates as follows:

  • Container starts, polls for work
  • If it receives a task, it does some work
  • It polls for work again, until ...
  • ... the container has been running for longer than a set period, at which point it exits instead of polling for more work.

It needs to be continually restarted because it uses environment variables that are populated from Kubernetes Secrets, which are periodically refreshed by another process.

I've tried a Deployment, but it doesn't seem like the right fit: the Pods end up in CrashLoopBackOff, which means the worker is restarted with an increasing back-off delay and so runs less and less often.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-fonky-worker
  labels:
    app: my-fonky-worker

spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-fonky-worker
  template:
    metadata:
      labels:
        app: my-fonky-worker
    spec:
      containers:
      - name: my-fonky-worker-container
        image: my-fonky-worker:latest
        env:
          - name: NOTSOSECRETSTUFF
            value: cats_are_great
          - name: SECRETSTUFF
            valueFrom:
              secretKeyRef:
                name: secret-name
                key: secret-key

I've also tried a CronJob, but that seems a bit hacky, as it could leave the container in a stopped state for several seconds between runs.

-- naxxfish
kubernetes

3 Answers

8/20/2019

What I see as a solution for this would be to run your container as a CronJob, but don't use startingDeadlineSeconds as your container killer.

It runs on its schedule.

In your container, have it poll for work N times; after N polls it exits with code 0.
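A minimal sketch of that approach, reusing the names from the question; the schedule, concurrency settings and env wiring below are illustrative assumptions, not something from the original post:

apiVersion: batch/v1beta1          # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: my-fonky-worker
spec:
  schedule: "*/10 * * * *"         # start a fresh worker every 10 minutes (illustrative)
  concurrencyPolicy: Forbid        # don't start a new run while the previous one is still polling
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never     # the worker exits 0 on its own after N polls
          containers:
          - name: my-fonky-worker-container
            image: my-fonky-worker:latest
            env:
            - name: SECRETSTUFF
              valueFrom:
                secretKeyRef:
                  name: secret-name
                  key: secret-key

If the schedule interval roughly matches how long one run takes, the gap between the old worker exiting and the new one starting stays small.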

-- Josh Beauregard
Source: StackOverflow

8/21/2019

If I understood correctly, there are two problems in your example:

  1. Restarting the container
  2. Keeping the Secret values up to date

In order to keep your secrets up to date, you should consider using Secrets as described in Amit Kumar Gupta's comment and mounting them as a volume instead of exposing them as environment variables; a minimal example is sketched below.
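This sketch shows just the containers/volumes part of the Pod template, reusing the Secret name from the question; the mount path is an illustrative assumption:

      containers:
      - name: my-fonky-worker-container
        image: my-fonky-worker:latest
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets      # the worker reads the refreshed value from a file instead of an env var
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-name      # same Secret the question referenced via secretKeyRef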

As for the restart problem, it depends on the container's exit code, as described by garlicFrancium.

From another point of view, you can use an init container that waits for new tasks and a main container that processes those tasks according to your requirements, or create a job scheduler.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: complete
  name: complete
spec:
  replicas: 1
  selector:
    matchLabels:
      app: complete
  template:
    metadata:
      labels:
        app: complete
    spec:
      hostname: c1
      containers:
      # main container: simulates work by sleeping a random 15-30 seconds, then exits 0
      - name: complete
        command: 
        - "bash"
        args:
        - "-c"
        - "wa=$(shuf -i 15-30 -n 1)&& echo $wa && sleep $wa"
        image: ubuntu
        imagePullPolicy: IfNotPresent
        resources: {}
      initContainers:
      # init container: stands in for waiting for new tasks before the main container starts
      - name: wait-for
        image: ubuntu
        command: ['bash', '-c', 'sleep 30']
      restartPolicy: Always

Please note:

  • When a Secret that is already consumed in a volume is updated, the projected keys are eventually updated as well. The kubelet checks whether the mounted Secret is fresh on every periodic sync, but it uses its local cache to get the current value of the Secret. The type of the cache is configurable via the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct: it can be propagated via watch (the default), ttl-based, or simply redirect all requests directly to the kube-apiserver. As a result, the total delay from the moment the Secret is updated to the moment new keys are projected to the Pod can be as long as the kubelet sync period + the cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals the watch propagation delay, the ttl of the cache, or zero, respectively).
  • A container using a Secret as a subPath volume mount will not receive Secret updates.


-- Hanx
Source: StackOverflow

8/20/2019

As @Josh said, you need to exit with exit 0, otherwise it will be treated as a failed container! Here is the reference.
According to the first example there ("Pod is running and has one Container. Container exits with success."), if your restartPolicy is set to Always (which is the default, by the way) then the container will be restarted. The Pod status still shows Running, but if you inspect the Pod you can see that the container has been restarted.

It needs to be continually restarted because it uses environment variables that are populated from Kubernetes Secrets, which are periodically refreshed by another process.

I would take a different approach to this: mount the config map as explained here, which automatically refreshes the mounted ConfigMap's data (ref). Note: keep in mind the "kubelet sync period (1 minute by default) + ttl of ConfigMaps cache (1 minute by default) in kubelet" when reasoning about how quickly the ConfigMap data is refreshed in the Pod.
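A minimal sketch of that, assuming a ConfigMap named my-fonky-config that the refreshing process writes to; the name and mount path are illustrative, not from the original post:

apiVersion: v1
kind: Pod
metadata:
  name: my-fonky-worker
spec:
  containers:
  - name: my-fonky-worker-container
    image: my-fonky-worker:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config           # files under this path are refreshed automatically; avoid subPath mounts
  volumes:
  - name: config-volume
    configMap:
      name: my-fonky-config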

-- garlicFrancium
Source: StackOverflow