Spark job on Kubernetes fails without specific error

4/20/2020

I'm trying to deploy a Spark job on Kubernetes, using kubectl apply -f <config_file.yml> (after building a Docker image based on the Dockerfile below). The pod is successfully created on K8s, then quickly stops with a Failed status. Nothing in the logs helps me understand where the error comes from. Other jobs have been successfully deployed on the K8s cluster using the same Dockerfile and config file.
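
For reference, the driver logs and pod events shown further down can be pulled with the usual kubectl commands (a sketch; the resource and pod names come from the config and events below):

kubectl -n spark get sparkapplication myapp
kubectl -n spark describe sparkapplication myapp
kubectl -n spark logs myapp-driver
kubectl -n spark describe pod myapp-driver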

The Spark job is supposed to read data from a Kafka topic, parse it, and output it to the console.
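
For context, a minimal sketch of what such a job typically looks like in Java with Structured Streaming (a hypothetical reconstruction, not the actual application code; the broker address and topic name are placeholders, and the spark-sql-kafka-0-10 connector is assumed to be bundled in myapp.jar):

// Hypothetical reconstruction of spark.jobs.app.streaming.Main (the mainClass
// named in config_file.yml below); broker and topic names are placeholders.
package spark.jobs.app.streaming;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class Main {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("myapp")
                .getOrCreate();

        // Subscribe to the Kafka topic; records arrive as binary key/value columns.
        Dataset<Row> records = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "kafka-broker:9092") // placeholder
                .option("subscribe", "my-topic")                        // placeholder
                .load();

        // Cast the payload to strings and print each micro-batch to the console.
        StreamingQuery query = records
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .start();

        query.awaitTermination();
    }
}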

Any idea what might be causing the job to fail?

Dockerfile, built and pushed using docker build --rm -f "Dockerfile" -t xxxxxxxx:80/apache/myapp-test . && docker push xxxxxxxx:80/apache/myapp-test :

FROM xxxxxxxx:80/apache/spark:v2.4.4-gcs-prometheus

#USER root

ADD myapp.jar /jars

RUN adduser --no-create-home --system spark

RUN chown -R spark /prometheus /opt/spark

USER spark

config_file.yml :

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: myapp
  namespace: spark
  labels:
    app: myapp-test
    release: spark-2.4.4
spec:
  type: Java
  mode: cluster
  image: "xxxxxxxx:80/apache/myapp-test"
  imagePullPolicy: Always
  mainClass: spark.jobs.app.streaming.Main
  mainApplicationFile: "local:///jars/myapp.jar"
  sparkVersion: "2.4.4"
  restartPolicy:
    type: OnFailure
    onFailureRetries: 5
    onFailureRetryInterval: 30
    onSubmissionFailureRetries: 0
    onSubmissionFailureRetryInterval: 0
  driver:
    cores: 1
    memory: "1G"
    labels:
      version: 2.4.4
  monitoring:
    exposeDriverMetrics: true
    exposeExecutorMetrics: true
    prometheus:
      jmxExporterJar: "/prometheus/jmx_prometheus_javaagent-0.11.0.jar"
      port: 8090
  imagePullSecrets:
  - xxx

Logs :

++ id -u
+ myuid=100
++ id -g
+ mygid=65533
+ set +e
++ getent passwd 100
+ uidentry='spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin'
+ set -e
+ '[' -z 'spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin' ']'
+ SPARK_K8S_CMD=driver
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '' ']'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' '' == 2 ']'
+ '[' '' == 3 ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=192.168.225.14 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class spark.jobs.app.streaming.Main spark-internal
20/04/20 09:27:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Pod events as shown with kubectl describe pod :

Events:
  Type     Reason            Age                From                             Message
  ----     ------            ----               ----                             -------
  Normal   Scheduled         15m                default-scheduler                Successfully assigned spark/myapp-driver to xxxxxxxx.preprod.local
  Warning  FailedMount       15m                kubelet, xxxxxxxx.preprod.local  MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "myapp-1587388343593-driver-conf-map" not found
  Warning  DNSConfigForming  15m (x4 over 15m)  kubelet, xxxxxxxx.preprod.local  Search Line limits were exceeded, some search paths have been omitted, the applied search line is: spark.svc.cluster.local svc.cluster.local cluster.local preprod.local
  Normal   Pulling           15m                kubelet, xxxxxxxx.preprod.local  Pulling image "xxxxxxxx:80/apache/myapp-test"
  Normal   Pulled            15m                kubelet, xxxxxxxx.preprod.local  Successfully pulled image "xxxxxxxx:80/apache/myapp-test"
  Normal   Created           15m                kubelet, xxxxxxxx.preprod.local  Created container spark-kubernetes-driver
  Normal   Started           15m                kubelet, xxxxxxxx.preprod.local  Started container spark-kubernetes-driver 
-- Flxnt
apache-spark
docker
java
kubernetes

1 Answer

4/20/2020

You have to review conf/spark-env.(sh|cmd).

Start by configuring logging.

Spark uses log4j for logging. You can configure it by adding a log4j.properties file in the conf directory. One way to start is to copy the existing log4j.properties.template located there.
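
For example, from the root of the Spark distribution (a sketch assuming the default conf layout):

cp conf/log4j.properties.template conf/log4j.properties

The template's defaults send everything to the console: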

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
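
In a spark-on-k8s-operator setup like the one above, the driver only picks this file up if it can actually find it, so it has to be shipped with the image and referenced explicitly. A sketch of one way to wire it in (the /opt/spark/log4j.properties path and the extra Dockerfile line are assumptions, not part of the original setup):

# Dockerfile: ship the properties file alongside the application jar
ADD log4j.properties /opt/spark/log4j.properties

# config_file.yml: point driver and executors at it via sparkConf
spec:
  sparkConf:
    "spark.driver.extraJavaOptions": "-Dlog4j.configuration=file:///opt/spark/log4j.properties"
    "spark.executor.extraJavaOptions": "-Dlog4j.configuration=file:///opt/spark/log4j.properties"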
-- Hitham S. AlQadheeb
Source: StackOverflow