I submitted the sample Spark job (SparkPi, which ships with the Spark distribution) to a Kubernetes cluster, and it failed with java.net.UnknownHostException: kubernetes.default.svc. It would be very helpful if you could help me fix this issue.
My environment: Spark 2.3.3 and a Kubernetes cluster built with kubeadm on a clean Ubuntu machine (installation steps below).
How to reproduce my issue:
$ kubectl cluster-info
Kubernetes master is running at https://10.128.0.10:6443
KubeDNS is running at https://10.128.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ bin/spark-submit \
--master k8s://https://10.128.0.10:6443 \
--deploy-mode cluster \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=yohei1126/spark:v2.3.3 \
--class org.apache.spark.examples.SparkPi \
--name spark-pi \
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar
error log:
$ kubectl logs spark-pi-67ed1ddda23e32799371677bf1e795c4-driver
...
2019-06-24 08:40:16 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler
cannot be instantiated
...
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation:
[get] for kind: [Pod] with name: [spark-pi-67ed1ddda23e32799371677bf1e795c4-driver]
in namespace: [default] failed.
...
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
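The stack trace suggests the driver pod cannot resolve the in-cluster API server name at all. One way to check whether cluster DNS works from inside a pod (the pod name dns-test is just an arbitrary choice; busybox:1.28 is used because nslookup in newer busybox images is known to be unreliable):
$ kubectl run dns-test --rm -it --restart=Never \
    --image=busybox:1.28 -- nslookup kubernetes.default.svc
If this lookup also fails, the problem is in cluster DNS (kube-dns/CoreDNS) or the pod network rather than in the Spark configuration.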
How I installed k8s on a clean Ubuntu:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
$ apt-get update && apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl
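As a sanity check (not part of my original notes), the installed versions can be confirmed with:
$ kubeadm version
$ kubectl version --client
$ kubelet --version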
I also installed docker-ce, since kubeadm requires a container runtime.
$ sudo apt update
$ sudo apt install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt update
$ sudo apt install -y docker-ce
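To verify Docker works before kubeadm uses it, the standard hello-world image can be run:
$ sudo docker run --rm hello-world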
How I initialized the cluster:
$ sudo kubeadm init --pod-network-cidr=10.128.0.0/20
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl taint nodes test-k8s node-role.kubernetes.io/master:NoSchedule-
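After applying flannel and removing the master taint, the node should report Ready and the DNS pods should be Running. This can be confirmed with (in kubeadm clusters the CoreDNS pods carry the k8s-app=kube-dns label):
$ kubectl get nodes
$ kubectl get pods -n kube-system -l k8s-app=kube-dns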
How I created the Docker image:
$ wget http://ftp.meisei-u.ac.jp/mirror/apache/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
$ tar zxvf spark-2.3.3-bin-hadoop2.7.tgz
$ cd spark-2.3.3-bin-hadoop2.7
$ sudo bin/docker-image-tool.sh -r yohei1126 -t v2.3.3 build
$ sudo bin/docker-image-tool.sh -r yohei1126 -t v2.3.3 push
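To confirm the image was built and pushed correctly, it can be listed locally and pulled back from the registry:
$ sudo docker images yohei1126/spark
$ sudo docker pull yohei1126/spark:v2.3.3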
I found a way to build a single-node Kubernetes cluster on Ubuntu 18.04 LTS using minikube.
$ sudo apt-get update
$ sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo snap install kubectl --classic
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube
$ sudo install minikube /usr/local/bin
$ sudo minikube start --vm-driver=none --cpus 4 --memory 8192
$ sudo mv /root/.kube /root/.minikube $HOME
$ sudo chown -R $USER $HOME/.kube $HOME/.minikube
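Before creating the service account, it is worth confirming the cluster came up:
$ sudo minikube status
$ kubectl get nodes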
$ kubectl create serviceaccount spark
$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
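Whether the spark service account actually has the permissions Spark needs can be probed with kubectl's built-in authorization check:
$ kubectl auth can-i get pods --as=system:serviceaccount:default:spark -n default
$ kubectl auth can-i create pods --as=system:serviceaccount:default:spark -n default
Both should print yes with the edit cluster role bound above.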
$ kubectl cluster-info
Kubernetes master is running at https://10.128.0.11:6443
KubeDNS is running at https://10.128.0.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ sudo apt-get install openjdk-8-jdk -y
$ wget https://www-us.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
$ tar zxvf spark-2.3.3-bin-hadoop2.7.tgz
$ cd spark-2.3.3-bin-hadoop2.7
$ bin/spark-submit \
--master k8s://https://10.128.0.11:6443 \
--deploy-mode cluster \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=yohei1126/spark:v2.3.3 \
--class org.apache.spark.examples.SparkPi \
--name spark-pi \
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar
2019-07-02 08:57:56 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
pod name: spark-pi-e39a8e8f7faf3c9fa861ae024e93b742-driver
namespace: default
labels: spark-app-selector -> spark-d0860239ee0f4118aeb8fee83bd00fa2, spark-role -> driver
pod uid: 01e8f4c0-ae85-4252-92f7-11dbdd2e2b0d
creation time: 2019-07-02T08:57:10Z
service account name: spark
volumes: spark-token-bnm7w
node name: minikube
start time: 2019-07-02T08:57:10Z
container images: yohei1126/spark:v2.3.3
phase: Succeeded
status: [ContainerStatus(containerID=docker://c8c7584c7b704b8f2321943967f84d58267a9ca9d1e1852c2ac9eafb76816dc1, image=yohei1126/spark:v2.3.3, imageID=docker-pullable://yohei1126/spark@sha256:d3524f24fe199dcb78fd3e1d640261e5337544aefa4aa302ac72523656fe2af1, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://c8c7584c7b704b8f2321943967f84d58267a9ca9d1e1852c2ac9eafb76816dc1, exitCode=0, finishedAt=Time(time=2019-07-02T08:57:56Z, additionalProperties={}), message=null, reason=Completed, signal=null, startedAt=Time(time=2019-07-02T08:57:21Z, additionalProperties={}), additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
2019-07-02 08:57:56 INFO LoggingPodStatusWatcherImpl:54 - Container final statuses:
Container name: spark-kubernetes-driver
Container image: yohei1126/spark:v2.3.3
Container state: Terminated
Exit code: 0
2019-07-02 08:57:56 INFO Client:54 - Application spark-pi finished.
2019-07-02 08:57:56 INFO ShutdownHookManager:54 - Shutdown hook called
2019-07-02 08:57:56 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-9cbaba30-9277-4bc9-9c55-8acf78711e1d
$ kubectl get pods
NAME                                               READY   STATUS      RESTARTS   AGE
spark-pi-e39a8e8f7faf3c9fa861ae024e93b742-driver   0/1     Completed   0          43m
$ kubectl logs spark-pi-e39a8e8f7faf3c9fa861ae024e93b742-driver
...
Pi is roughly 3.141395706978535
...
Can you share the kube-system log entries from Stackdriver Logging, following this documentation [1]? I have seen the same issue before, and it was related to a 403 permissions error or an I/O timeout.
You can also try re-creating the node pool; that can fix the issue.
[1] https://cloud.google.com/monitoring/kubernetes-engine/legacy-stackdriver/logging
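If the cluster is on GKE with legacy Stackdriver Logging, the kube-system entries can also be read from the command line. A sketch, assuming the legacy resource model (resource.type="container" with the namespace_id label; adjust the filter if your cluster uses the newer k8s_container model):
$ gcloud logging read \
    'resource.type="container" AND resource.labels.namespace_id="kube-system"' \
    --limit=20 --format=json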