How do I solve this error when running Apache Spark 2.3.0 on Kubernetes with a jar from a remote source?

3/29/2018

Following the instructions here I have been trying to submit a Spark job to minikube, using a remote URL:

```
minikube start

bin/spark-submit \
  --master k8s://https://192.168.99.100:8443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=<default-spark-k8s-image-build> \
  --conf spark.kubernetes.namespace=spark \
  <https://remote-location-with-spark-example-jar>
```

The pod fails, and when I describe it I get the error **configmaps "spark-pi-ad386ea0f7e4333dbd2a0ad705e94d66-init-config" not found**:

```
Type     Reason                 Age  From               Message
----     ------                 ---- ----               -------
Normal   Scheduled              50s  default-scheduler  Successfully assigned spark-pi-ad386ea0f7e4333dbd2a0ad705e94d66-driver to minikube
Warning  FailedMount            49s  kubelet, minikube  MountVolume.SetUp failed for volume "spark-init-properties" : configmaps "spark-pi-ad386ea0f7e4333dbd2a0ad705e94d66-init-config" not found
Normal   SuccessfulMountVolume  49s  kubelet, minikube  MountVolume.SetUp succeeded for volume "download-jars-volume"
Normal   SuccessfulMountVolume  49s  kubelet, minikube  MountVolume.SetUp succeeded for volume "download-files-volume"
Normal   SuccessfulMountVolume  49s  kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-4ghj8"
Normal   SuccessfulMountVolume  49s  kubelet, minikube  MountVolume.SetUp succeeded for volume "spark-init-properties"
Normal   Pulled                 49s  kubelet, minikube  Container image "timg-spark/spark:latest" already present on machine
Normal   Created                49s  kubelet, minikube  Created container
Normal   Started                48s  kubelet, minikube  Started container
Normal   Pulled                 43s  kubelet, minikube  Container image "timmeh/spark:latest" already present on machine
Normal   Created                43s  kubelet, minikube  Created container
Normal   Started                43s  kubelet, minikube  Started container
```

However, there is no mention in the docs of creating any ConfigMaps, and because the name of the ConfigMap isn't known until you run spark-submit, I can't create one in advance to get more information.
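The best I can do to get more information is watch the namespace while spark-submit runs and pull the init container's logs once the driver pod exists. A rough sketch (this assumes the `spark` namespace from the command above, and that the init container is named `spark-init`, which I believe is the Spark 2.3 default, though I haven't confirmed it):

```shell
# In a second terminal, watch for the generated ConfigMap as spark-submit creates resources
kubectl get configmaps -n spark --watch

# Once the driver pod appears, dump its events and the init container's logs
kubectl describe pod -n spark -l spark-role=driver
kubectl logs -n spark <driver-pod-name> -c spark-init
```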

For now my plan is to work around this by baking the jar files into the Spark Docker image, but if anyone knows more about why this is failing, that'd be great!
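For reference, the workaround I have in mind looks roughly like this: extend the stock Spark image with the jar, then submit with a `local://` URL, which tells Spark the jar is already inside the container rather than something the init container needs to download. The image name and jar path here are placeholders for my setup:

```shell
# Hypothetical Dockerfile for the custom image:
#   FROM <default-spark-k8s-image-build>
#   COPY spark-examples_2.11-2.3.0.jar /opt/spark/examples/jars/

bin/spark-submit \
  --master k8s://https://192.168.99.100:8443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=<image-with-baked-in-jar> \
  --conf spark.kubernetes.namespace=spark \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```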

-- Timmeh
apache-spark
kubernetes
