Resuming a Spark job on failure of the driver pod in k8s
3/23/2020
Suppose the driver pod dies for some reason while a Spark job is running. The spark-operator / k8s restart policy then creates a new driver pod. Will the new driver resume the job from where it left off, or restart it from the beginning? How does this work?
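For context, this is the kind of setup I mean — a minimal `SparkApplication` with a restart policy, assuming the spark-on-k8s-operator `v1beta2` CRD (the name, image, class, and jar path below are placeholders):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-spark-job                     # placeholder name
spec:
  type: Scala
  mode: cluster
  image: my-registry/spark:3.0.0         # placeholder image
  mainClass: com.example.Main            # placeholder class
  mainApplicationFile: local:///opt/app/job.jar  # placeholder jar
  restartPolicy:
    type: OnFailure          # operator re-creates the driver on failure
    onFailureRetries: 3
    onFailureRetryInterval: 10
```

With `type: OnFailure`, the operator re-submits the application when the driver fails, which is the scenario in question.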