I have Jenkins running in my GKE cluster and I am trying to deploy code from my GitHub repository to the same cluster. I want to execute Docker commands inside a pod that uses the "docker:19" image. My pipeline configuration is in the Jenkinsfile in my repository and I'm running a "Pipeline from SCM" build. However, the console output of the build shows that pods are being created and terminated over and over.
Build logs
Started by user Aayush
Obtained Jenkinsfile from git https://github.com/AayushPathak/fullstack-app-devops/
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained buildPod.yaml from git https://github.com/AayushPathak/fullstack-app-devops/
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: default/multi-crud-39-lx36n-8src2-xrqmf
default/multi-crud-39-lx36n-8src2-xrqmf Container docker was terminated (Exit Code: 0, Reason: Completed)
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘multi-crud_39-lx36n’
Created Pod: default/multi-crud-39-lx36n-8src2-d8fr9
default/multi-crud-39-lx36n-8src2-d8fr9 Container docker was terminated (Exit Code: 0, Reason: Completed)
Created Pod: default/multi-crud-39-lx36n-8src2-rf4vw
default/multi-crud-39-lx36n-8src2-rf4vw Container docker was terminated (Exit Code: 0, Reason: Completed)
Created Pod: default/multi-crud-39-lx36n-8src2-ndh14
default/multi-crud-39-lx36n-8src2-ndh14 Container docker was terminated (Exit Code: 0, Reason: Completed)
Created Pod: default/multi-crud-39-lx36n-8src2-nbj0f
default/multi-crud-39-lx36n-8src2-nbj0f Container docker was terminated (Exit Code: 0, Reason: Completed)
...
Here's how Jenkins is set up inside the cluster
kubectl setup
aayush_pathak15@cloudshell:~/continuous-deployment-on-kubernetes/jenkins (multi-crud)$ kubectl get svc
NAME                                            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
cd-jenkins                                      ClusterIP      10.3.248.115   <none>         8080/TCP                     2d22h
cd-jenkins-agent                                ClusterIP      10.3.243.156   <none>         50000/TCP                    2d22h
kubernetes                                      ClusterIP      10.3.240.1     <none>         443/TCP                      2d22h
my-release-ingress-nginx-controller             LoadBalancer   10.3.241.83    34.122.66.93   80:31844/TCP,443:30350/TCP   2d4h
my-release-ingress-nginx-controller-admission   ClusterIP      10.3.241.55    <none>         443/TCP                      2d4h
I believe I have set up the Kubernetes cloud in Jenkins correctly.
Below are my Jenkinsfile and the pod YAML in which I want to execute my build.
Jenkinsfile
pipeline {
  environment {
    SHA = sh(returnStdout: true, script: "git rev-parse HEAD")
  }
  agent {
    kubernetes {
      idleMinutes 5
      yamlFile 'buildPod.yaml'
    }
  }
  stages {
    stage('test') {
      steps {
        container('docker') {
          sh 'docker build -t aayushpathak/frontend-test -f ./client/Dockerfile.dev ./client'
          sh 'docker run aayushpathak/frontend-test -e CI=true npm test'
        }
      }
    }
    stage('build-push-production-images') {
      steps {
        container('docker') {
          sh 'docker build -t aayushpathak/frontend-test -f ./client/Dockerfile.dev ./client'
          sh 'docker run aayushpathak/frontend-test -e CI=true npm test'
        }
      }
    }
    stage('deploy') {
      environment {
        GC_HOME = '$HOME/google-cloud-sdk/bin'
        GC_KEY = credentials('jenkins-secret')
      }
      steps {
        container('docker') {
          sh("rm -r -f /root/google-cloud-sdk")
          sh("curl https://sdk.cloud.google.com | bash > /dev/null;")
          sh("${GC_HOME}/gcloud components update kubectl")
          sh("${GC_HOME}/gcloud auth activate-service-account --key-file=${GC_KEY}")
          sh("${GC_HOME}/kubectl apply -f k8s")
          sh("${GC_HOME}/kubectl set image deployments/server-deployment server=aayushpathak/fullstack-server:${SHA}")
          sh("${GC_HOME}/kubectl set image deployments/client-deployment client=aayushpathak/fullstack-client:${SHA}")
        }
      }
    }
  }
}
buildPod.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker:19
    imagePullPolicy: Always
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
I am not familiar with Jenkins; however, there might be 2 issues.
Issue 1
From the Kubernetes point of view, the YAML is incorrect. When I used your YAML, I got an error like the one below:
$ kubectl apply -f tst.yaml
error: error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "", Namespace: "default"
After adding metadata.name, it worked.
apiVersion: v1
kind: Pod
metadata:
  name: docker-container
spec:
  containers:
  - name: docker
    image: docker:19
    imagePullPolicy: Always
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
$ kubectl apply -f docker.yaml
pod/docker-container created
Issue 2
Your pod is getting restarted over and over because a container exits when its main process exits. Since your pod does nothing, it is created and then, as there is nothing more to do, it terminates almost instantly.
Output:
$ kubectl get po
NAME               READY   STATUS             RESTARTS   AGE
docker-container   0/1     CrashLoopBackOff   4          2m32s
To keep the pod running you can, for example, use a sleep command or sleep infinity. More details and options can be found in the documentation.
apiVersion: v1
kind: Pod
metadata:
  name: docker-container
spec:
  containers:
  - name: docker
    image: docker:19
    command: ["/bin/sh"]        # To run a command inside the container
    args: ["-c", "sleep 3600"]  # Specified sleep command
    imagePullPolicy: Always
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
$ kubectl apply -f docker-sleep.yaml
pod/docker-container created
Output:
$ kubectl get po
NAME               READY   STATUS    RESTARTS   AGE
docker-container   1/1     Running   0          2m15s
As your logs contain entries like:
Created Pod: default/multi-crud-39-lx36n-8src2-d8fr9
default/multi-crud-39-lx36n-8src2-d8fr9 Container docker was terminated (Exit Code: 0, Reason: Completed)
I'd say the root cause is Issue 2: the docker container doesn't have anything to do, so its main process finishes and the container exits.
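Applied back to your buildPod.yaml, a minimal sketch of that fix could look like the spec below. This is an assumption-based example, not a verified setup: it keeps your hostPath mount of the host's Docker socket and keeps the container name docker, since that is what the container('docker') blocks in your Jenkinsfile refer to.

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker:19
    imagePullPolicy: Always
    # Keep the main process alive so the container does not complete immediately
    command: ["/bin/sh"]
    args: ["-c", "sleep 3600"]
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock

With the container kept alive, the agent pod should stay Running long enough for Jenkins to execute the build steps inside it instead of the pod completing and being recreated in a loop.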