I'm running a Jenkins instance on GCE inside a Docker container and would like to execute a multibranch pipeline from this Jenkinsfile and GitHub. I'm following the GCE Jenkins tutorial for this. Here is my Jenkinsfile:
node {
  def project = 'xxxxxx'
  def appName = 'gceme'
  def feSvcName = "${appName}-frontend"
  def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

  checkout scm

  sh("echo Build image")
  stage 'Build image'
  sh("docker build -t ${imageTag} .")

  sh("echo Run Go tests")
  stage 'Run Go tests'
  sh("docker run ${imageTag} go test")

  sh("echo Push image to registry")
  stage 'Push image to registry'
  sh("gcloud docker push ${imageTag}")

  sh("echo Deploy Application")
  stage "Deploy Application"
  switch (env.BRANCH_NAME) {
    // Roll out to canary environment
    case "canary":
      // Change deployed image in canary to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/canary/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out to production
    case "master":
      // Change deployed image in production to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/production/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out a dev environment
    default:
      // Create namespace if it doesn't exist
      sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
      // Don't use public load balancing for development branches
      sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
      echo 'To access your environment run `kubectl proxy`'
      echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
  }
}
I always get a "docker: not found" error:
[apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ] Running shell script
+ docker build -t eu.gcr.io/xxxxx/apiservice:master.1 .
/var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: 2: /var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: docker: not found
What do I have to change to make Docker work inside Jenkins?
You need the Docker client installed in the Jenkins agent image used for that node, e.g. cloudbees/java-with-docker-client.
You also need the host's Docker socket (/var/run/docker.sock) mounted into the agent container.
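A minimal sketch of starting the Jenkins container that way (the container name, port mappings, volume name, and the jenkins/jenkins:lts image tag are placeholders for whatever you actually run):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

With the socket mounted, the docker CLI inside the container talks to the host's daemon, so images built by the pipeline end up on the host. The jenkins user inside the container typically also needs permission on the socket, e.g. membership in the group that owns /var/run/docker.sock.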
That looks like DinD (Docker-in-Docker), which this recent issue points out as problematic.
See "Using Docker-in-Docker for your CI or testing environment? Think twice."
That same issue recommends running the container in privileged mode.
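That only applies if you really run a Docker daemon inside the Jenkins container (true DinD) instead of mounting the host socket; in that case the outer container must be started privileged. A minimal sketch, assuming a hypothetical jenkins-dind image that bundles a daemon:

docker run -d --privileged --name jenkins-dind -p 8080:8080 jenkins-dind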
And make sure the Docker container in which Jenkins executes the build actually has the docker binary installed, for instance by extending the official image.
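A minimal sketch, assuming a Debian-based jenkins/jenkins:lts base image and using jenkins-with-docker as a placeholder tag (only the client is strictly needed when the host socket is mounted, but the docker.io package is the simplest way to get the binary):

cat > Dockerfile <<'EOF'
FROM jenkins/jenkins:lts
USER root
# Install the docker binary so pipeline steps like `docker build` resolve
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF
docker build -t jenkins-with-docker .

Run this image (with the socket mount shown above) in place of the stock Jenkins image, or use it as the agent image for the node that executes the pipeline.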