PersistentVolumeClaim workspace for Jenkins slave

4/28/2020

I'm trying to keep my workspace in a PersistentVolumeClaim by using the kubernetes-plugin.

I've created a PV and a PVC, and my files used to be stored on the local disk. This pipeline worked fine before, but now the workspaces are no longer created on the local disk.
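
For reference, the PV and PVC I applied look roughly like this. Apart from the claim name jenkins-slave-pvc, which matches the claimName in my pod template below, the names, hostPath, size and access mode here are only illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-slave-pv            # illustrative name
spec:
  capacity:
    storage: 5Gi                    # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/jenkins-workspace   # illustrative local-disk path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-slave-pvc           # matches claimName in the pod template below
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                  # illustrative size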

Here is my pipeline. Any idea why it doesn't work?

def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven
    command:
    - sleep
    args:
    - infinity
    volumeMounts:
      - name: workspace-volume
        mountPath: /home/jenkins/agent
    workingDir: "/home/jenkins/agent"
  volumes:
    - name: "workspace-volume"
      persistentVolumeClaim:
        claimName: "jenkins-slave-pvc"
        readOnly: false
"""


pipeline {
    agent none
    stages {
        stage ('maven') {
            agent { 
                kubernetes {
                    yaml podTemplate 
                    defaultContainer 'maven' 
                } 
            }
            stages {
                stage('Nested 1') {                  
                    steps {
                        sh "touch Nested1 && mvn -version"
                    }
                }
                stage('Nested 2') {                  
                    steps {
                        sh "mvn -version 2 && touch Nested2 "
                    }
                }
            }
        }
    }
}

Now Jenkins always mounts the volume like this:

volumeMounts:
 - mountPath: "/home/jenkins/agent"
   name: "workspace-volume"
   readOnly: false
volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"

My question is: how can I override the default emptyDir with my persistentVolumeClaim so that it is used as the workspace-volume?
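
For clarity, what I expect the rendered pod spec to contain instead is something like this (taken from the persistentVolumeClaim section of my pod template above):

volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:          # instead of the emptyDir Jenkins injects
      claimName: "jenkins-slave-pvc"
      readOnly: false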

-- airdata
jenkins
jenkins-declarative-pipeline
jenkins-kubernetes
jenkins-pipeline
kubernetes

0 Answers