I have successfully installed a Kubernetes cluster, which I can verify with:
C:\windows\system32>kubectl cluster-info
Kubernetes master is running at https://<ip>:<port>
KubeDNS is running at https://<ip>:<port>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I am now trying to run the SparkPi example with the Spark distribution I downloaded from https://spark.apache.org/downloads.html:
spark-submit --master k8s://https://192.168.99.100:8443 ^
  --deploy-mode cluster ^
  --name spark-pi ^
  --class org.apache.spark.examples.SparkPi ^
  --conf spark.executor.instances=2 ^
  --conf spark.kubernetes.container.image=gettyimages/spark ^
  c:\users\<username>\Desktop\spark-2.4.0-bin-hadoop2.7\examples\jars\spark-examples_2.11-2.4.0.jar
I am getting this error:
Error: Master must either be yarn or start with spark, mesos, local
Run with --help for usage help or --verbose for debug output
I tried versions 2.4.0 and 2.3.3. I also tried
spark-submit --help
to see what it reports for the --master option. This is what I get:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
So, despite the documentation on running Spark workloads in Kubernetes [https://spark.apache.org/docs/latest/running-on-kubernetes.html], my spark-submit does not even seem to recognise the k8s:// value for master, although it is listed among the possible master URLs [https://spark.apache.org/docs/latest/submitting-applications.html#master-urls].
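If I am reading the docs right, a spark-submit build with Kubernetes support should show k8s:// in that same help line; the entry below is reconstructed from the master URLs page, not copied from my own output:

--master MASTER_URL         spark://host:port, mesos://host:port, yarn, k8s://https://host:port, or local.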
Any ideas? What could I be missing here?
Thanks
UPDATE: The issue was that CMD was picking up a previously installed spark-submit (version 2.2), even though I was running the command from the bin directory of the new Spark installation. Since the k8s:// master was only introduced in Spark 2.3, the older binary rejected it with exactly the error above.
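For anyone who runs into the same thing, this is a quick way to see which binary CMD actually resolves; the <username> path is the same placeholder as in my command above:

:: List every spark-submit on the search path; the first entry is what CMD runs
where spark-submit

:: Print the version of whichever spark-submit resolves first
spark-submit --version

:: Bypass PATH entirely by invoking the downloaded distribution with its full path
c:\users\<username>\Desktop\spark-2.4.0-bin-hadoop2.7\bin\spark-submit --version

Once the 2.4.0 binary was the one being invoked, the k8s:// master was accepted.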