Spark kubernetes client mode (separate driver pod) setup

4/27/2020

I'm trying to get a Spark-on-Kubernetes setup working where the Spark driver runs in its own separate pod (client mode) and uses the SparkSession.builder mechanism to bootstrap the cluster (not spark-submit).

I'm working from this:

https://spark.apache.org/docs/latest/running-on-kubernetes.html

Here is the code used by the driver to bootstrap the cluster:

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("k8s://https://kubernetes.default.svc:32768")         // Kubernetes API endpoint used to request executor pods
  .appName("test")
  .config("spark.driver.host", "sparkrunner-0")                 // hostname executors use to reach back to the driver
  .config("spark.driver.port", "7077")                          // fixed driver RPC port, matching the pod/service definitions below
  .config("spark.driver.blockManager.port", "7078")             // fixed block-manager port, matching the pod/service definitions below
  .config("spark.kubernetes.container.image", "spark-alluxio")  // container image used for the executor pods
  .config("fs.alluxio.impl", "alluxio.hadoop.FileSystem")       // register the alluxio:// filesystem
  .config("fs.alluxio-ft.impl", "alluxio.hadoop.FaultTolerantFileSystem")
  .getOrCreate
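
For completeness, the client-mode section of that page also mentions a few settings I haven't set here. A rough sketch of how they could be layered onto the same builder; the executor count and the "spark" service-account name are placeholders, not values from my actual setup:

// Sketch only -- extra client-mode settings from the running-on-kubernetes docs.
val sparkSessionSketch = SparkSession.builder
  .master("k8s://https://kubernetes.default.svc:32768")
  .appName("test")
  .config("spark.executor.instances", "2")                                     // how many executor pods to request
  .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")  // service account allowed to create executor pods
  .config("spark.kubernetes.driver.pod.name", "sparkrunner-0")                 // the pod this driver runs in, for owner references
  .getOrCreate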

The container image (spark-alluxio) was built by adding the Alluxio client library to a binary Spark distribution (2.4.2).
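
To show how the Alluxio settings are meant to be used: once the session is up, the idea is to read alluxio:// paths directly, roughly like this (the master hostname and file path are placeholders; 19998 is Alluxio's default master RPC port):

// Placeholder URI -- "alluxio-master" and the file name are not from my real setup.
val lines = sparkSession.read.textFile("alluxio://alluxio-master:19998/data/sample.txt")
lines.show(10)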

Here is the Kubernetes YAML used to deploy the driver program, which runs inside a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sparkrunner
  labels:
    app: sparkrunner
spec:
  selector:
    matchLabels:
      app: sparkrunner
  serviceName: sparkrunner
  replicas: 1
  template:
    metadata:
      labels:
        app: sparkrunner
    spec:
      containers:
        - name: sparkrunner
          image: "rb/sparkrunner:latest"
          imagePullPolicy: Never
          ports:
            - name: application 
              containerPort: 9100
            - name: driver-rpc-port
              containerPort: 7077
            - name: blockmanager
              containerPort: 7078

And here is the Kubernetes YAML for the services that sit on top of the driver program:

# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: sparkrunner
spec:
  ports:
  - name: driver-rpc-port
    protocol: TCP 
    port: 7077
    targetPort: 7077
  - name: blockmanager
    protocol: TCP 
    port: 7078
    targetPort: 7078
  clusterIP: None
  selector:
    app: sparkrunner

---

# Client service for connecting to any spark instance.
apiVersion: v1
kind: Service
metadata:
  name: sparkdriver
spec:
  type: NodePort
  ports:
  - name: sparkdriver
    port: 9100
  selector:
    app: sparkrunner
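
The headless service is there to give the executors a stable DNS name to dial back to the driver on; inside the cluster the StatefulSet pod should also resolve under its fully-qualified name, so an equivalent way of setting the driver host would be something like this (the default namespace is an assumption):

// Fully-qualified headless-service name of the StatefulSet pod; "default" is an
// assumption about the namespace everything is deployed into.
val driverHost = "sparkrunner-0.sparkrunner.default.svc.cluster.local"

val sessionWithDns = SparkSession.builder
  .master("k8s://https://kubernetes.default.svc:32768")
  .appName("test")
  .config("spark.driver.host", driverHost)            // must be resolvable from the executor pods
  .config("spark.driver.port", "7077")
  .config("spark.driver.blockManager.port", "7078")
  .config("spark.kubernetes.container.image", "spark-alluxio")
  .getOrCreate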

When I deploy this to the cluster, the driver starts, but when it attempts to acquire executors it fails with a socket timeout, presumably because the executors can't connect back to the driver, or vice versa?

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/26 20:24:39 INFO SparkContext: Running Spark version 2.4.2
20/04/26 20:24:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/26 20:24:40 INFO SparkContext: Submitted application: test
20/04/26 20:24:40 INFO SecurityManager: Changing view acls to: root
20/04/26 20:24:40 INFO SecurityManager: Changing modify acls to: root
20/04/26 20:24:40 INFO SecurityManager: Changing view acls groups to: 
20/04/26 20:24:40 INFO SecurityManager: Changing modify acls groups to: 
20/04/26 20:24:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
20/04/26 20:24:41 INFO Utils: Successfully started service 'sparkDriver' on port 7077.
20/04/26 20:24:41 INFO SparkEnv: Registering MapOutputTracker
20/04/26 20:24:41 INFO SparkEnv: Registering BlockManagerMaster
20/04/26 20:24:41 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/26 20:24:41 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/26 20:24:41 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e8aa33ba-26d2-421d-9957-9cba1c9a3b9f
20/04/26 20:24:41 INFO MemoryStore: MemoryStore started with capacity 1150.2 MB
20/04/26 20:24:41 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/26 20:24:41 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/26 20:24:41 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://sparkrunner-0:4040
20/04/26 20:24:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7078.
20/04/26 20:24:53 INFO NettyBlockTransferService: Server created on sparkrunner-0:7078
20/04/26 20:24:53 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/26 20:24:53 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, sparkrunner-0, 7078, None)
20/04/26 20:24:53 INFO BlockManagerMasterEndpoint: Registering block manager sparkrunner-0:7078 with 1150.2 MB RAM, BlockManagerId(driver, sparkrunner-0, 7078, None)
20/04/26 20:24:53 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, sparkrunner-0, 7078, None)
20/04/26 20:24:53 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, sparkrunner-0, 7078, None)
20/04/26 20:24:53 WARN WatchConnectionManager: Exec Failure
java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at okhttp3.internal.platform.Platform.connectSocket(Platform.java:129)
    at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:246)
    at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:166)
    at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
    at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
    at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
    at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at io.fabric8.kubernetes.client.utils.BackwardsCompatibilityInterceptor.intercept(BackwardsCompatibilityInterceptor.java:119)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:68)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:107)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
    at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:254)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:200)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

From this I can't really tell what's failing - is the issue with the service definitions or with the driver itself? I've tried fiddling with selectors and hostnames, but nothing seems to work.

-- user7654493
apache-spark
docker
kubernetes

1 Answer

4/29/2020

After more poking and prodding, I found that the address I was using for the Kubernetes API service was incorrect:

k8s://https://kubernetes.default.svc:32768

I had taken this from kubectl cluster-info, but my minikube instance may be reporting that address incorrectly (or it may be a proxy intended for external access). When I replaced it with this:

k8s://https://10.96.0.1:443

which is the in-cluster address of the Kubernetes API server, things started to work.
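
To avoid hard-coding the service IP, the same address can also be built from the environment variables Kubernetes injects into every pod (KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT). Roughly:

// Derive the master URL from the in-cluster API env vars instead of a fixed IP;
// the fallbacks only matter when running outside a pod.
val apiHost = sys.env.getOrElse("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc")
val apiPort = sys.env.getOrElse("KUBERNETES_SERVICE_PORT", "443")
val master  = s"k8s://https://$apiHost:$apiPort"   // e.g. k8s://https://10.96.0.1:443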

-- user7654493
Source: StackOverflow