Running E2E tests in a Kubernetes cluster

7/14/2018

We have a fairly simple FE-BE combo that we deploy to a K8s cluster (Java + Spring Boot for the BE, a static React-based web app for the FE). We're also working on various E2E scenarios that exercise the whole system (using Nightmare.js).

To make running E2E tests easier in our CI pipeline, I'd like to run the tests themselves in K8s as well. For example, a build on one of the projects would update the images and trigger the E2E job, which would (for example) install the Helm chart into a unique namespace and then run the E2E tests there. One benefit I see is that the cluster can stay completely private, with no need for public domain names or any other exposure to the outside world.

What I can't get my head around yet is how to actually run the tests in this setup. One thing I'm considering is Kubernetes Jobs, but I'd like someone to validate that. Also, I'm not quite sure how to collect logs and metrics for each run: something like Prometheus and Elasticsearch on the cluster will work, of course, but I also need to forward the results to the CI/CD pipeline somehow.

Bottom line, what I need is to see the whole picture in my head, more than any technical aspects of it.

Thanks in advance!

-- Ilya Ayzenshtok
e2e-testing
kubernetes
nightmare

2 Answers

7/23/2018

Helm does have the idea of a test, which can be used to run test commands. But it sounds like your question is more about how to get the tests to run inside a container. The example in the Helm docs shows a single shell command running in a container. In your case, your tests might be implemented in Java or with Protractor; you'd then build a container image containing your test code and invoke a command in it to run the suite (e.g. mvn verify).
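A minimal sketch of what that could look like as a Helm test pod, assuming the chart path, image name and command below (all placeholders for your own setup):

```yaml
# templates/tests/e2e-test.yaml -- a pod that Helm runs on `helm test <release>`
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-e2e-test"
  annotations:
    "helm.sh/hook": test-success   # marks this pod as a Helm test
spec:
  containers:
    - name: e2e
      image: myregistry/e2e-tests:latest   # hypothetical image with your test code baked in
      command: ["mvn", "verify"]           # or however your suite is invoked
  restartPolicy: Never
```

The pod's exit status is what helm test reports, so your pipeline gets a plain pass/fail signal per release.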

Helm also has post-install hooks, which could be another interesting way to invoke tests. You'd need to configure your install to ensure your services are all fully up before the hook runs.
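Sketching that variant: the hook Job below assumes a backend Service named backend exposing a Spring Boot health endpoint, plus a hypothetical test image; adjust all of it to your chart.

```yaml
# A Job that Helm runs once, after `helm install` completes
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install-e2e"
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      initContainers:
        # Crude readiness gate: block until the backend answers its health check
        - name: wait-for-backend
          image: busybox
          command: ["sh", "-c", "until wget -qO- http://backend:8080/actuator/health; do sleep 5; done"]
      containers:
        - name: e2e
          image: myregistry/e2e-tests:latest   # hypothetical image with the Nightmare.js suite
          command: ["npm", "run", "e2e"]       # however your suite is invoked
      restartPolicy: Never
```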

Your idea of creating a new namespace and deploying into it sounds quite similar to the Jenkins X concept of creating a preview environment and testing against it for each pull request. Or perhaps you're thinking of just running tests in a dedicated staging/test namespace. Either way, you may want to look at Jenkins X. The way it uses build pods might be of interest, as they execute pipeline steps from within the cluster. I think GitLab has a similar concept. But I appreciate you may have already chosen your CI/CD solution.

Or you might well look at all this and decide that it's easier to run the tests from outside the cluster. That means overcoming some security hurdles, but it lets you initiate the tests without having to containerise them. I suspect the right choice will depend on your test technologies, your security setup and your CI/CD.

-- Ryan Dawson
Source: StackOverflow

7/14/2018

You should deploy and run the tests on every git push to either project, so naturally this should be part of your CI.

One way it can work is that on git push you build your image, then use your Helm chart to deploy the entire stack with this new image, then spin up another container to run the E2E tests. Depending on how you deploy, you may have to tell your E2E container where your deployment is.
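That "another container" maps naturally onto a Kubernetes Job, which also validates the idea from the question. A minimal sketch, where the namespace, image and service names are all placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-tests
  namespace: e2e-run-1234          # the per-run namespace the pipeline created
spec:
  backoffLimit: 0                  # fail fast; one attempt per CI run
  template:
    spec:
      containers:
        - name: e2e
          image: myregistry/e2e-tests:latest   # hypothetical test image
          env:
            - name: TARGET_URL     # tell the suite where the deployment lives
              value: "http://frontend.e2e-run-1234.svc.cluster.local"
      restartPolicy: Never
```

The CI side can then block on kubectl wait --for=condition=complete job/e2e-tests and pull the output back with kubectl logs, which covers forwarding results to the pipeline without exposing anything publicly.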

Another way is to have a front end deployment that points to an exposed test deployment of your backend. Let's call the backend one the test deployment and the front end one the integrated test deployment. Code-wise the front end is master, but it points to a backend that may not work at all. On a git push to the backend, you build and push the container, set the image on the test backend deployment, wait for the rollout, and run your tests against the integrated test front end. And you have to do roughly the same thing the other way around.

-- Lev Kuznetsov
Source: StackOverflow