How to define pod with yaml in jenkinsci/kubernetes-plugin

4/4/2018

I am using the jenkinsci/kubernetes-plugin to run dynamic agents in a Kubernetes cluster, and so far everything has been going great, except for when I try to use the feature for defining slave pods in YAML format.

Unfortunately, when I attempt to use this feature things go bad. I changed my Jenkins pipeline script from this:

def label = "kubernetes"
podTemplate(label: label,
  containers: [containerTemplate(name: 'jnlp', image: 'artifactory.baorg.com:5001/sum/coreimage:1', ttyEnabled: true, label: label)],
  imagePullSecrets: [ 'ad-artifactory-cred' ],
  ) {
  node(label) {
    stage('Core') {
      container(name: 'jnlp') {
          stage('building program') {
            sh "echo hello world"
        }
      }
    }
  }
}

To this:

def label = "kubernetes"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    label: label
  spec:
    containers:
    - name: jenkins-slave
      image: artifactory.baorg.com:5001/sum/coreimage:1
      tty: true
"""
) {
  node(label) {
    stage('Core') {
      container(name: 'jnlp') {
          stage('building program') {
            sh "echo hello world"
        }
      }
    }
  }
}

When the pipeline script is written in the former way, everything works as expected: the slave container is created, the image is pulled if it doesn't already exist in the cluster, and the job runs fine. Unfortunately, when I take these same settings and attempt to codify them in YAML format, the configuration doesn't even seem to be read.

But when I change the configuration to YAML, the job attempts to pull an image named "jenkins/jnlp-slave:alpine" instead of the one I specify, and times out because my cluster doesn't have access to the internet (index.docker...). The reason it pulls this image is due to a bug in the plugin that occurs when the slave container isn't named "jnlp" (this is not related to my issue anyway).

The important observation is that the YAML isn't being accepted or recognized for some reason, and I'm not sure why. Is it because of some bad formatting? Or is it some known issue with this plugin (which I find hard to believe)? I already checked for stray tabs in the YAML and found none.

-- mudejar
jenkins
jenkins-pipeline
jenkins-plugins
jenkins-slave
kubernetes

1 Answer

4/26/2018

Based on a quick look at your pipeline DSL and YAML spec, the following snippet is what a direct translation of your DSL would look like (untested). Note also that in your YAML, spec: is indented under metadata:; it must be a top-level key alongside metadata:, or the pod spec will not be recognized.

apiVersion: v1
kind: Pod
metadata:
  labels:
    label: label
spec:
  containers:
  - name: jnlp
    image: artifactory.baorg.com:5001/sum/coreimage:1
    tty: true
  imagePullSecrets:
  - name: ad-artifactory-cred

In your original configuration, you named your container "jnlp", which tells the plugin not to launch its default jnlp container and to use yours instead. In your YAML version, you used the name "jenkins-slave", thereby indicating to the plugin that you want the default jnlp container (jenkins/jnlp-slave:alpine) to be launched in the pod alongside your "jenkins-slave" container.
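For reference, a minimal, untested sketch of the full pipeline with those fixes applied (container renamed to jnlp, spec: moved to the top level, and the imagePullSecrets from your original DSL carried over) might look like this:

def label = "kubernetes"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    label: label
spec:
  containers:
  # Naming the container "jnlp" replaces the plugin's default agent container
  - name: jnlp
    image: artifactory.baorg.com:5001/sum/coreimage:1
    tty: true
  imagePullSecrets:
  - name: ad-artifactory-cred
"""
) {
  node(label) {
    stage('Core') {
      container(name: 'jnlp') {
        stage('building program') {
          sh "echo hello world"
        }
      }
    }
  }
}

With the container named jnlp, the plugin should run your image as the agent container instead of trying to pull jenkins/jnlp-slave:alpine from Docker Hub.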

As for why the pull fails, this is likely a network configuration issue (firewall or proxy), as indicated in the events. If you have access to the node, try running docker pull jenkins/jnlp-slave:alpine manually to debug.

-- abn
Source: StackOverflow