I am reading the blog and tried to run the code.
$ kubectl get po
NAME                            READY     STATUS    RESTARTS   AGE
spark-master-668325562-w369p    1/1       Running   0          23s
spark-worker-1868749523-xt7hg   1/1       Running   0          23s
Now the Spark cluster is running well on a local Kubernetes cluster created by Minikube. I am trying to submit a Spark job to it with the following command:
spark-2.1.1-bin-hadoop2.7/bin$ ./spark-submit --master spark://<spark-master>:7077 /home/me/workspace/myproj/myproj.jar
How do I find the spark-master IP? I just followed the steps above and cannot find any tutorial on how to find/set the spark-master IP.
Can anyone explain it? Thanks
UPDATE
I tried the following IPs, but they failed.
$ minikube ip
192.168.42.55
$ kubectl get svc
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kubernetes     10.0.0.1     <none>        443/TCP             3h
spark-master   10.0.0.175   <none>        8080/TCP,7077/TCP   42m
Error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at NetworkScanCounter$.main(network-scan-counter.scala:68)
at NetworkScanCounter.main(network-scan-counter.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
kubectl get po -o wide
will show the pod IP address. You need to expose the master with a NodePort service; after that you can reach it at <minikube-ip>:<node-port>, as sketched below.
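A minimal sketch, assuming the master was created as a Deployment named spark-master (the service name spark-master-nodeport and the node port 31234 are illustrative, not fixed values):

# Expose the master's port 7077 on a port of the Minikube node.
# A new service name is needed because a spark-master service already exists.
$ kubectl expose deployment spark-master --type=NodePort \
    --name=spark-master-nodeport --port=7077

# Look up the node port Kubernetes assigned, e.g. 7077:31234/TCP.
$ kubectl get svc spark-master-nodeport

# Submit against the Minikube node IP and that node port.
$ ./spark-submit --master spark://$(minikube ip):31234 \
    /home/me/workspace/myproj/myproj.jar

Note that pods inside the cluster can keep using the service DNS name, spark://spark-master:7077; the NodePort is only needed to reach the master from outside the cluster.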
The latest Apache Spark, 2.3.0, supports Kubernetes natively.
$ spark-submit --master k8s://https://127.0.0.1:8443 \
    --name cfe8 \
    --deploy-mode cluster \
    --class com.yyy.Application \
    --conf spark.executor.instances=4 \
    --conf spark.kubernetes.container.image=anantpukale/spark_app:1.2 \
    local://CashFlow-0.0.2-SNAPSHOT-shaded.jar
Points to be noted:
1. As of now, Spark 2.3.0 supports only cluster deploy mode.
2. The application jar has to be in HDFS, in the Docker image, or at any remote location accessible over HTTP. In the above command, the local:// scheme indicates that the jar is local to the Docker container.
3. The IP and port passed to --master come from kubectl cluster-info, as sketched after this list.
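For example, on Minikube the output looks roughly like the following (the address 192.168.99.100:8443 is illustrative; use whatever address your cluster reports), and that URL goes after the k8s:// prefix:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443

$ spark-submit --master k8s://https://192.168.99.100:8443 ...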
Once you trigger the above command, you can see the pod and service being created on the Kubernetes dashboard.
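If you prefer the command line over the dashboard, the same thing can be watched with plain kubectl (<driver-pod-name> is a placeholder for whatever name the pod listing shows):

# Watch the driver and executor pods come up as the job runs.
$ kubectl get pods -w

# Follow the driver's log once its pod is running.
$ kubectl logs -f <driver-pod-name>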