Managing multi-pod integration tests with Kubernetes and Jenkins

5/23/2018

I am trying to set up a testing framework for my Kubernetes cluster using Jenkins and the Jenkins Kubernetes plugin.

I can get Jenkins to provision pods and run basic unit tests, but it is less clear how I can run tests that involve coordination between multiple pods.

Essentially I want to do something like this:

podTemplate(label: 'pod1', containers: [containerTemplate(...)]) {
    node('pod1') {
        container('container1') {
            // start service 1
        }
    }
}
podTemplate(label: 'pod2', containers: [containerTemplate(...)]) {
    node('pod2') {
        container('container2') {
            // start service 2
        }
    }
}
stage('Run test') {
    node {
        sh 'run something that causes service 1 to query service 2'
    }
}

I have two main problems:

  1. Pod lifecycle: As soon as the block following the podTemplate exits, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met? (See the sketch after this list.)

  2. ContainerTemplate from Docker image: I am using a Docker image to provision the containers inside each Kubernetes pod, but the files that should be inside those images do not seem to be visible/accessible inside the 'container' blocks, even though the installed environments and dependencies are correct for the repo. How do I actually get the service defined in the Docker image to run in a Jenkins-provisioned pod?
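For problem 1, a rough sketch of the behavior I am after (illustrative only; the image names are placeholders, and nesting the blocks like this is an assumption about the plugin's scripted syntax, not a confirmed solution — nested node blocks should hold both pods open until the innermost block exits):

podTemplate(label: 'pod1', containers: [containerTemplate(name: 'container1', image: 'service1-image', ttyEnabled: true, command: 'cat')]) {
    podTemplate(label: 'pod2', containers: [containerTemplate(name: 'container2', image: 'service2-image', ttyEnabled: true, command: 'cat')]) {
        node('pod1') {
            node('pod2') {
                // both pods are alive at this point
                sh 'run something that causes service 1 to query service 2'
            }
        }
    }
}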

-- mstorkson
docker
jenkins
kubernetes

1 Answer

12/7/2018

It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking about, though perhaps not as neatly as I would have liked.

The solution to multi-service tests ended up being simply a pod template built on the Google Cloud SDK image, with a service-account credential plus a secret key assigned to that worker so that it can run kubectl commands on the cluster.

Dockerfile for the worker (replace the "X"s with the desired versions):

FROM google/cloud-sdk:alpine

# Install some utility packages.
RUN apk add --no-cache \
  git \
  curl \
  bash \
  openssl

# Install a specific version of kubectl.
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl &&\
  chmod +x ./kubectl &&\
  mv ./kubectl /usr/local/bin/kubectl

# Install Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh &&\
  chmod 700 get_helm.sh && ./get_helm.sh --version vX.XX.X
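Building and pushing that image might look like this (the tag below is just the placeholder referenced by the pipeline; substitute your own registry path):

docker build -t your-docker-repo-here .
docker push your-docker-repo-here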

Then, in the Groovy pipeline:

pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gcloud
    image: your-docker-repo-here
    command:
    - cat
    tty: true
"""
    }
  }
  environment {
      GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install --name something somerepo/somechart'
        }
      }
    }
  }
}

Now that I have access to both helm and kubectl commands, I can bring pods and services up and down at will. This still doesn't solve the problem of using the pods' internal context to access files, but it at least gives me a way to run integration tests.
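For example, an integration-test stage in the same pipeline could bring both services up, wait for them, run the test, and tear everything down again (a sketch only; the chart names, deployment names, and test script are placeholders, and the Helm 2 flags match the Tiller setup noted below):

stage('Integration test') {
  steps {
    container('gcloud') {
      // Bring up both services under test (placeholder charts).
      sh 'helm install --name service1 somerepo/service1-chart'
      sh 'helm install --name service2 somerepo/service2-chart'
      // Block until both deployments report ready.
      sh 'kubectl rollout status deployment/service1'
      sh 'kubectl rollout status deployment/service2'
      // Run whatever causes service 1 to query service 2.
      sh './run-integration-tests.sh'
      // Clean up so the next run starts fresh.
      sh 'helm delete --purge service1 service2'
    }
  }
}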

NOTE: For this to work properly, you will need a Kubernetes service account whose name matches the one passed to serviceAccount above (here, helm), and Google credentials stored in the Jenkins credentials store under the ID used above (here, google-creds). For the helm commands to work, Tiller must be installed on your Kubernetes cluster. Also, do not change the name of the env key GOOGLE_APPLICATION_CREDENTIALS, as the gcloud/gsutil tools look for that environment variable.
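Creating that service account and wiring it up might look like this (a sketch; cluster-admin is deliberately broad and should be narrowed for real use, and 'default' is an assumption about the namespace where Jenkins spawns its agent pods):

# Service account the agent pod will run as (matches serviceAccount 'helm' above).
kubectl create serviceaccount helm

# Give it rights to manage cluster resources (narrow this role in practice).
kubectl create clusterrolebinding helm --clusterrole=cluster-admin --serviceaccount=default:helm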

-- mstorkson
Source: StackOverflow