Spark executors not able to access Ignite nodes inside Kubernetes cluster

6/21/2018

I am connecting my Spark job to an existing Ignite cluster. I use a service account named spark for it. My driver is able to access the Ignite pods, but my executors are not.

This is what the executor log looks like:

Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://35.192.214.68/api/v1/namespaces/default/endpoints/ignite

I guess it's due to missing privileges. Is there a way to explicitly specify a service account for the executors as well?
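For reference, this is roughly how the service account is currently specified for the driver (a minimal Java sketch; the spark account name is the one mentioned above, and the master URL and other Kubernetes settings are assumed to be supplied by spark-submit):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class IgniteSparkJob {
    public static void main(String[] args) {
        // This property only configures the driver pod's service account;
        // "spark" is the account name used in this setup.
        SparkConf conf = new SparkConf()
                .setAppName("ignite-spark-job")
                .set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark");

        // Master and deploy mode are expected to come from spark-submit.
        SparkSession session = SparkSession.builder().config(conf).getOrCreate();

        // ... job logic that talks to the Ignite cluster ...

        session.stop();
    }
}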

Thanks in advance.

-- wadhwasahil
apache-spark
ignite
kubernetes

1 Answer

6/22/2018

A similar issue was discussed here.

Most likely, you need to grant more permissions to the service account that is used for running Ignite.

For example, you can create one more role and bind it to the service account:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
rules:
- apiGroups:
  - ""
  resources: # The resources this role can access
  - pods
  - endpoints
  verbs: # The actions this role can perform on them
  - get
  - list
  - watch

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: default

Also, if your namespace is not default, you need to update it in the YAML files and specify it in the TcpDiscoveryKubernetesIpFinder configuration.
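For example, with a programmatic client configuration, the namespace and the name of the Kubernetes service that exposes the Ignite pods can be set on the IP finder directly (a minimal Java sketch; "ignite" matches the endpoints name from the error in your question, while "my-namespace" is a placeholder for your actual namespace):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteClientStart {
    public static void main(String[] args) {
        // Point the IP finder at the Kubernetes service and namespace of the Ignite pods.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setServiceName("ignite");     // Kubernetes service that exposes the Ignite pods
        ipFinder.setNamespace("my-namespace"); // must match the namespace used in the RBAC YAML

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setClientMode(true)
                .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        Ignition.start(cfg);
    }
}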

-- Roman Guseinov
Source: StackOverflow