I have a local cluster running on minikube, with Spring Cloud Data Flow deployed according to this tutorial. At this point I'm able to create a Kubernetes task on the SCDF dashboard and launch it. However, I have a task that reads a file from the file system, and I would like it to read that file from a shared NFS directory mounted in the pod.
I have an NFS server configured and running in another virtual machine, and a persistent volume created in my Kubernetes cluster that points to that NFS host. When launching the task, I provide the following deployer properties.
deployer.job-import-access-file.kubernetes.volumes=[
    {
        name: accessFilesDir,
        persistentVolumeClaim: {
            claimName: 'apache-volume-claim'
        }
    },
    {
        name: processedFilesDir,
        persistentVolumeClaim: {
            claimName: 'apache-volume-claim'
        }
    }
]

deployer.job-import-access-file.kubernetes.volumeMounts=[
    {
        name: 'accessFilesDir',
        mountPath: '/data/apache/access'
    },
    {
        name: 'processedFilesDir',
        mountPath: '/data/apache/processed'
    }
]
nfs-volume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-apache-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 10.255.254.10
    path: '/var/nfs/apache'
nfs-volume-claim.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: apache-volume-claim
  namespace: default
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
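For reference, a quick way to confirm that the claim actually binds to the NFS-backed volume before launching the task (a sketch, assuming kubectl is pointed at the minikube cluster) is:

kubectl apply -f nfs-volume.yaml
kubectl apply -f nfs-volume-claim.yaml
# the PVC should report STATUS "Bound" against nfs-apache-volume
kubectl get pv nfs-apache-volume
kubectl get pvc apache-volume-claim -n default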
Application Dockerfile
FROM openjdk:8-jdk-alpine
COPY target/job-import-access-file-0.1.0.jar /opt/job-import-access-file-0.1.0.jar
VOLUME ["/data/apache/access", "/data/apache/processed"]
ENTRYPOINT ["java","-jar","/opt/job-import-access-file-0.1.0.jar"]
I expect my task to read files from the mounted directory, but the directory is empty: the mount point exists inside the container, yet the files on the NFS share never show up there.
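A way to see what the task container actually gets at those paths (a sketch; <task-pod> is a placeholder for the launched task's pod name) is:

kubectl get pods
# show which volumes were mounted into the container
kubectl describe pod <task-pod> | grep -A5 Mounts
# list the directory contents from inside the container
kubectl exec <task-pod> -- ls -la /data/apache/access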
Looks like the actual issue in your case is the name of the volumes you specified in the configuration properties. Since K8s doesn't allow uppercase letters in resource names (see here), you need to use lowercase for your name values (currently accessFilesDir and processedFilesDir).
I tried passing similar settings on minikube (without the NFS mounting) just to see whether the task launch propagates the volume and volume-mount K8s deployer properties, and they seem to work fine:
dataflow:>task create a1 --definition "timestamp"
dataflow:>task launch a1 --properties "deployer.timestamp.kubernetes.volumes=[{name: accessfilesdir, persistentVolumeClaim: { claimName: 'apache-volume-claim' }},{name: processedfilesdir, persistentVolumeClaim: { claimName: 'apache-volume-claim' }}],deployer.timestamp.kubernetes.volumeMounts=[{name: 'accessfilesdir', mountPath: '/data/apache/access'},{name: 'processedfilesdir', mountPath: '/data/apache/processed'}]"
This resulted in the following configuration when I describe the pod (kubectl describe pod) of the launched task:
Volumes:
  accessfilesdir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  apache-volume-claim
    ReadOnly:   false
  processedfilesdir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  apache-volume-claim
    ReadOnly:   false
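Applied to your original job-import-access-file task, the same properties with lowercase volume names (a sketch that keeps your claim name and mount paths) would be:

deployer.job-import-access-file.kubernetes.volumes=[{name: accessfilesdir, persistentVolumeClaim: {claimName: 'apache-volume-claim'}},{name: processedfilesdir, persistentVolumeClaim: {claimName: 'apache-volume-claim'}}]
deployer.job-import-access-file.kubernetes.volumeMounts=[{name: 'accessfilesdir', mountPath: '/data/apache/access'},{name: 'processedfilesdir', mountPath: '/data/apache/processed'}]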