When scheduling a Spark job on Kubernetes, can we use different node pools for the driver and executor pods?
I have gone through the documentation for Spark 2.4.4, but it only accepts a single node selector config that applies to all pods. I was planning to use preemptible nodes in a GKE node pool, but I'm concerned that if the node running the driver goes down, the driver will not be re-run on a new instance and the whole job will fail.
The current setting, as per the documentation, is spark.kubernetes.node.selector.[labelKey].
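For illustration, a minimal spark-submit invocation using that single selector might look like the sketch below (the API server address, image name, and pool name "default-pool" are placeholders, not values from my setup); since the selector is cluster-wide, both the driver and executor pods are scheduled onto the same node pool:

```
# Spark 2.4.4: one node selector shared by driver and executor pods.
# cloud.google.com/gke-nodepool is the label GKE sets on nodes to identify their pool;
# "default-pool" is a placeholder pool name.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.node.selector.cloud.google.com/gke-nodepool=default-pool \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
```

What I would like is a way to point the driver at an on-demand pool and the executors at the preemptible pool, but there appears to be no separate driver/executor variant of this setting in 2.4.4.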
Unfortunately, this is not yet supported, but you can follow the PR which proposes that feature.