Implement Dask workers on Kubernetes, overriding the Helm chart's default values

9/15/2021

I'm trying to deploy Dask Distributed on Kubernetes using Helm. The deployment itself works fine, but I need to customize it as described here. What I need is for the workers to read and write files on a mounted volume, with all workers sharing the same volume.

The example says that the values below go in values.yaml, but if the chart's worker template doesn't reference any volume-related variables, then setting them in values.yaml can't have any effect. Any ideas on how to implement this, or is it simply not possible with this chart?
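As I understand Helm, a value is only rendered if some template consumes it, so something along these lines would have to exist in the chart's worker Deployment template for my settings to be picked up. This is a hypothetical fragment I wrote to illustrate the point, not the chart's actual template, and the file name dask-worker-deployment.yaml is a guess:

# Hypothetical excerpt of templates/dask-worker-deployment.yaml.
# Without lines like these, worker.volumeMounts and volumes in
# values.yaml are silently ignored.
      containers:
        - name: dask-worker
          volumeMounts:
{{ toYaml .Values.worker.volumeMounts | indent 12 }}
      volumes:
{{ toYaml .Values.volumes | indent 8 }}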

This is the values.yaml that does not work:

worker:
  replicas: 3
  volumeMounts:            # mount the shared volume into every worker
    - mountPath: "/mount/path"
      name: mypd
  env:
    - name: EXTRA_CONDA_PACKAGES
      value: numba xarray -c conda-forge
    - name: EXTRA_PIP_PACKAGES
      value: s3fs dask-ml --upgrade

jupyter:
  enabled: false

volumes:                   # volume definition that the mount refers to
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-2
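
For completeness, the claim referenced above looks roughly like this (a sketch: the storage size is a placeholder, and I'm assuming ReadWriteMany access because several worker pods need to mount the same volume at once):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-2
spec:
  accessModes:
    - ReadWriteMany      # assumed: multiple worker pods share the volume
  resources:
    requests:
      storage: 10Gi      # placeholder size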
-- ps0604
dask
dask-distributed
kubernetes

0 Answers