Can we schedule the Spark driver and executor on different node pools on GKE?

11/1/2019

When scheduling a Spark job on Kubernetes, can we use different node pools for the driver and executor pods?

I have gone through the documentation for Spark 2.4.4, but it only accepts a single node selector config. I was planning to use preemptible nodes in a GKE node pool, but I worry that if the node running the driver goes down, the driver will not be re-run on a new instance and the whole job will fail.

The current setting, as per the documentation, is `spark.kubernetes.node.selector.[labelKey]`.
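For reference, a minimal sketch of how that single selector is used in Spark 2.4.x (the cluster endpoint, image name, and the `pool=spark-pool` label are placeholder assumptions, not from the question):

```shell
# Assumes a GKE cluster whose target node pool is labeled pool=spark-pool.
# In Spark 2.4.x the node selector below is applied to BOTH the driver and
# executor pods -- there is no separate driver/executor selector setting.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.node.selector.pool=spark-pool \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
```

Because the same selector applies to every pod, pointing it at a preemptible pool would expose the driver pod to preemption as well, which is the concern described above.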

-- Durgesh Choudhary
apache-spark
kubernetes

1 Answer

11/1/2019

Unfortunately, this is not yet supported, but you can follow the PR that proposes this feature.

-- Aliaksandr Sasnouskikh
Source: StackOverflow