CICD Jenkins with Docker Container, Kubernetes & Gitlab

10/7/2019

I have a workflow where

1) Team A & Team B would push changes to an app to a private GitLab (running in a Docker container) on Server A.
2) Their app should contain a Dockerfile or docker-compose.yml.
3) GitLab should trigger a Jenkins build (Jenkins runs in a Docker container, also on Server A) and execute the usual build things like tests.
4) Jenkins should build a new Docker image and deploy it.

Question: If Team A needs packages like Maven and npm to build web applications, but Team B needs other packages like C++ etc., how do I solve this issue?

Because I don't think it is right for my Jenkins container to have all these packages (mvn, npm, yarn, C++ etc.) installed and then execute the Jenkins build.

I was thinking that Team A should get a container with the packages it needs installed, and similarly for Team B.

I want to make use of Docker, Kubernetes, Jenkins and Gitlab. (As much container technology as possible)

Please advise me on the workflow. Thank you

-- Ninja Dude
devops
docker
gitlab
jenkins
kubernetes

2 Answers

10/7/2019

Whoa, that's a big one and there are many ways to solve this challenge. Here are 2 approaches which you can apply:

Solve it by using different types of Jenkins Slaves

In the long run you should consider running Jenkins workloads on slaves. This is not only desirable for your use case but also scales much better under higher workloads, because in the worst case heavy workloads can kill your Jenkins master. See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference. When defining slaves in Jenkins (e.g. with the EC2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got: one for Java, one for Node, you name it.

These defined slaves can then be used within your jobs by checking the "Restrict where this project can be run" option and entering the label of your slave.
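In a pipeline job, the equivalent of that checkbox is pinning the agent by label. A minimal declarative sketch, assuming you have defined a slave labelled maven-slave that carries the Java/Node toolchain (the label and the build command are placeholders, not something from the question):

pipeline {
    // Team A's pipeline runs only on slaves carrying the 'maven-slave' label;
    // Team B would point its own Jenkinsfile at e.g. a 'cpp-slave' label instead.
    agent { label 'maven-slave' }
    stages {
        stage('build & test') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}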


Solve it by doing everything within the Docker Environment

The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, which you'll then push to your private Docker registry (all big cloud providers have such container registry services) and download at deploy time. This would save you the pain of installing a new technology stack every time it changes.
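If you go down this road, one way to keep the per-team toolchains out of the Jenkins container itself is the Docker Pipeline plugin's docker agent, which runs the build steps inside a container that each team picks for itself. A minimal sketch, assuming Jenkins can talk to a Docker daemon; the image tags are placeholders:

pipeline {
    // Every step runs inside a throwaway Maven container, so the Jenkins
    // container needs no mvn/npm/g++ of its own; Team B could swap in
    // an image such as 'gcc:9' in its own Jenkinsfile.
    agent {
        docker { image 'maven:3-jdk-11' }
    }
    stages {
        stage('test') {
            steps {
                sh 'mvn -B test'
            }
        }
    }
}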

I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.

  1. Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
FROM node:lts-alpine
# the base image above is just an example; pick your desired Node.js base from Docker Hub
MAINTAINER devops@yourcompany.biz

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install --silent

# I recommend using a .dockerignore file so you don't copy unnecessary stuff into your image
COPY . .

CMD [ "npm", "start"]
  2. Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your Docker image like this:
stage('build docker image') {
    ...
    steps {
        script {
            docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
            docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
                docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
            }
        }
    }
}
  3. Do your deployment (in my case it's done via Ansible) by pulling and running the previously pushed image in your deployment scripts (see the sketch below).
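Since you also want Kubernetes in the mix, the deployment step can be as simple as rolling the freshly pushed tag out to the cluster. A minimal sketch of such a deploy stage, assuming a Deployment named your-app already exists and kubectl on the Jenkins side has credentials for the cluster (both are assumptions, not part of the setup above):

stage('deploy') {
    steps {
        // 'your-app' (Deployment and container name) and the image path are placeholders
        sh "kubectl set image deployment/your-app your-app=your/registry/your-image:${env.BRANCH_NAME}-latest"
        sh "kubectl rollout status deployment/your-app"
    }
}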

If you define your questions in a little more detail, you'll get better answers.

-- Patrick Pötz
Source: StackOverflow

10/7/2019

I would like to share a developer's perspective, which is different from the "operations-centric" state of mind presented in the question.

For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, the built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".

For example, mvn, which you've mentioned, has great dependency resolution capabilities and can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for this answer.

So you don't need to maintain dependencies yourself, but, as a Jenkins maintainer, you should provide the technical ability to actually build the product: that means running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy that image to some image repository if you wish ;).

So developers, who use these build technologies and maintain their own build scripts (declarative ones in the case of Maven, or something like makefiles in the case of C++), should be able to run their own tools.

Now, this picture doesn't contradict containerization:

The Jenkins image can contain maven/make/npm, really just a small number of tools needed to run the build. The actual build scripts can be part of the application's source code base (maintained in Git).

So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results, and then, as a separate step or from Maven itself, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual DevOps needs.
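A rough Jenkinsfile sketch of that flow, assuming the pipeline is defined in the repository itself (so checkout scm works), a Maven project with Surefire test reports, and placeholder registry values:

pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                checkout scm   // pulls the source from the GitLab repo that triggered the build
            }
        }
        stage('build & test') {
            steps {
                sh 'mvn -B clean package'   // Maven resolves the dependencies and runs the tests
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'   // publish the test results in Jenkins
                }
            }
        }
        stage('image') {
            steps {
                script {
                    // registry URL, credentials ID and image name are placeholders
                    docker.withRegistry('https://yourregistryurl.com', 'registry-credentials-id') {
                        docker.build('your/registry/your-app').push(env.BRANCH_NAME + '-latest')
                    }
                }
            }
        }
    }
}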

Note that during mvn package, Maven will download all the dependencies (3rd-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.

-- Mark Bramnik
Source: StackOverflow