Gracefully killing containers and pods in Kubernetes on a Spark exception

5/19/2020

What is the recommended way to gracefully kill the container and driver pods in Kubernetes when an application fails or throws an exception? Currently, when my application runs into an exception, my pods and executors continue to run, and I noticed that my container doesn't get killed unless an explicit exit 1 is used. For some reason my Spark application doesn't produce an exit 1 status or cause a SIGTERM signal to be sent to the container or pod. I tried adding the following to the YAML spec based on recommendations, but the driver and executor pods still don't terminate:

spec:
  terminationGracePeriodSeconds: 0
  driver:
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/bash
          - -c
          - touch /var/run/killspark && sleep 65
-- horatio1701d
apache-spark
kubernetes

1 Answer

5/19/2020

The preStop lifecycle hook you added won't have any effect, since it is only triggered

before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others

https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

I suspect what you really have to figure out is why your container's main process keeps running despite the exception.
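One common pattern for that (a sketch, not something the original answer provides; the run_app and job names are hypothetical) is to wrap the driver's entry point so any uncaught exception propagates a non-zero exit code, which lets Kubernetes mark the driver pod as Failed. In a real PySpark application you would also call spark.stop() in the finally block to release executors:

```python
import sys


def run_app(job):
    """Run the application's main logic and translate failures into an exit code.

    `job` stands in for the real Spark driver logic (hypothetical here).
    Returning 1 on an uncaught exception is what makes the container exit
    non-zero, so Kubernetes can mark the driver pod as Failed.
    """
    try:
        job()
        return 0
    except Exception as exc:
        # Log the failure and signal a non-zero exit to the container runtime
        print(f"application failed: {exc}", file=sys.stderr)
        return 1
    finally:
        # spark.stop()  # in a real PySpark app: shut down executors first
        pass


# Usage in the driver entry point:
#   if __name__ == "__main__":
#       sys.exit(run_app(main))
```

With this wrapper the driver process itself terminates on failure, which is what actually ends the container, rather than relying on a lifecycle hook.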

-- Fritz Duchardt
Source: StackOverflow