Offline agent when creating a Kubernetes pod template using Jenkins scripted pipeline

10/4/2018

I have created a shared library and it has a groovy file named myBuildPlugin.groovy:

def label = "worker-${UUID.randomUUID().toString()}"

podTemplate(label: label, yaml: """
          apiVersion: v1
          kind: Pod
          metadata:
            name: my-build
          spec:
            containers:
              - name: jnlp
                image: dtr.myhost.com/test/jenkins-build-agent:latest
                ports:
                  - containerPort: 8080
                  - containerPort: 50000
                resources:
                  limits:
                    cpu : 1
                    memory : 1Gi
                  requests:
                    cpu: 200m
                    memory: 256Mi
                env:
                  - name: JENKINS_URL
                    value: http://jenkins:8080
              - name: mongo
                image: mongo
                ports:
                  - containerPort: 8080
                  - containerPort: 50000
                  - containerPort: 27017
                resources:
                  requests:
                    cpu: 200m
                    memory: 256Mi
                  limits:
                    cpu: 1
                    memory: 512Mi
            imagePullSecrets:
            - name: dtrsecret""")
        {
            node(label) {
                pipelineParams.step1.call([label : label])
            }
        }

When I use myBuildPlugin in my project as below, the log shows it waits forever for an executor. Looking at Jenkins, I can see the agent is being created, but for some reason Jenkins can't talk to it via port 50000 (or perhaps the pod can't talk to the agent!)

Later I removed the yaml and used the following code instead:

podTemplate(label: 'mypod', cloud: 'kubernetes', containers: [
        containerTemplate(
                name: 'jnlp',
                image: 'dtr.myhost.com/test/jenkins-build-agent:latest',
                ttyEnabled: true,
                privileged: false,
                alwaysPullImage: false,
                workingDir: '/home/jenkins',
                resourceRequestCpu: '1',
                resourceLimitCpu: '100m',
                resourceRequestMemory: '100Mi',
                resourceLimitMemory: '200Mi',
                envVars: [
                        envVar(key: 'JENKINS_URL', value: 'http://jenkins:8080'),
                ]
        ),
        containerTemplate(name: 'maven', image: 'maven:3.5.0', command: 'cat', ttyEnabled: true),
        containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true)
],
        volumes: [
                emptyDirVolume(mountPath: '/etc/mount1', memory: false),
                hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
        ],
        imagePullSecrets: [ 'dtrsecret' ],
)
        {
            node(label) {
                pipelineParams.step1.call([label : label])
            }
        }

Still no luck. Interestingly, if I define all these containers in the Jenkins configuration, things work smoothly. This is my configuration:

[screenshot: Kubernetes cloud configuration in Jenkins]

and this is the pod template configuration:

[screenshot: pod template configuration in Jenkins]

It appears that if I change the label to something other than jenkins-jenkins-slave, the issue happens. This is the case even if the template is defined via Jenkins' configuration page. If that's the case, how am I supposed to create multiple pod templates for different types of projects?
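What I'd like to end up with is roughly the following (the labels, images, and container names here are only illustrative, not my real configuration):

```groovy
// Two separate pod templates, selected per project by label.
podTemplate(label: 'maven-build', containers: [
        containerTemplate(name: 'maven', image: 'maven:3.5.0', command: 'cat', ttyEnabled: true)
]) {
    node('maven-build') {
        // Maven-based projects would run here
    }
}

podTemplate(label: 'node-build', containers: [
        containerTemplate(name: 'node', image: 'node:8', command: 'cat', ttyEnabled: true)
]) {
    node('node-build') {
        // Node.js-based projects would run here
    }
}
```

But as described above, any label other than jenkins-jenkins-slave never gets an executor.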

Just today, I also tried to use pod inheritance as below without any success:

def label = 'kubepod-test'
podTemplate(label : label, inheritFrom : 'default',
        containers : [
                containerTemplate(name : 'mongodb', image : 'mongo', command : '', ttyEnabled : true)
        ]
)
        {
            node(label) {


            }

        }

Please help me with this issue. Thanks.

-- xbmono
jenkins
jenkins-pipeline
kubernetes

1 Answer

10/4/2018

There's something iffy about your pod configuration: your Jenkins and Mongo containers can't both use port 50000. Containers within a pod share the same network namespace, and therefore the same port space, so each container should declare only the unique ports it actually listens on.
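A sketch of what the container section of your YAML could look like with the duplicated ports removed (only jnlp keeps the JNLP port; mongo declares just its own port):

```yaml
spec:
  containers:
    - name: jnlp
      image: dtr.myhost.com/test/jenkins-build-agent:latest
      ports:
        - containerPort: 50000   # JNLP tunnel back to the Jenkins master
    - name: mongo
      image: mongo
      ports:
        - containerPort: 27017   # MongoDB's own port only
```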

In this case it looks like port 50000 is needed to set up the JNLP tunnel to the Jenkins agent. Keep in mind that the Jenkins Kubernetes plugin might also be doing other things, such as setting up a Kubernetes Service or relying on the internal Kubernetes DNS.

In the second example, I don't even see port 50000 exposed.
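If you want to declare the port in the containerTemplate form, the Kubernetes plugin has a portMapping helper for that; something along these lines (check your plugin version for the exact signature):

```groovy
containerTemplate(
        name: 'jnlp',
        image: 'dtr.myhost.com/test/jenkins-build-agent:latest',
        ttyEnabled: true,
        ports: [portMapping(name: 'jnlp', containerPort: 50000)]
)
```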

-- Rico
Source: StackOverflow