This question is somewhat related to one of my previous questions, which gives a clearer idea of what I am trying to achieve. This question is about an issue I ran into while trying to achieve the task described in that previous question.
I am trying to test whether kubectl works from within the Jenkins container. When I start up my Jenkins container, I use the following command:
# $(which kubectl): bind the Docker host's kubectl binary to /usr/local/bin/kubectl in the container
# ~/.kube: the Docker host's kube config directory, bound to $HOME/.kube inside the container
docker run \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v $(which kubectl):/usr/local/bin/kubectl \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -v ~/.kube:/home/root/.kube \
  --group-add 998 \
  -p 8080:8080 -p 50000:50000 \
  -d --name jenkins jenkins/jenkins:lts
The container starts up, and I can log in, create jobs, and run pipeline scripts without issue. I created a pipeline script just to check whether I can access my cluster, like this:
pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                sh "kubectl cluster-info"
            }
        }
    }
}
When running this job, it fails with the following error:
+ kubectl cluster-info // this is the step
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
Thanks!
I'm not sure why you have:

-v $(which kubectl):/usr/local/bin/kubectl -v ~/.kube:/home/jenkins/.kube

/usr/local/bin/kubectl is the kubectl binary itself, and ~/.kube:/home/jenkins/.kube should be the mount that places the cluster context file, i.e. the kubeconfig, where the kubectl binary looks for it.
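If it helps, a quick sanity check of both mounts from the host might look like this (just a sketch; it assumes the container is named jenkins, as in the docker run command above):

docker exec jenkins which kubectl            # expected: /usr/local/bin/kubectl
docker exec jenkins ls /home/jenkins/.kube   # expected: the kubeconfig file(s) from the host's ~/.kube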
First, you should make sure that the kubeconfig is mounted into the container at /home/jenkins/.kube and is accessible to the kubectl binary. With the appropriate volume mounts in place, you can verify this by opening a session in the Jenkins container with docker container exec -it jenkins /bin/bash and testing with kubectl get svc.
Before you run that verification test, make sure the KUBECONFIG environment variable is set in the session:

export KUBECONFIG=/home/jenkins/.kube/kubeconfig

and that your pipeline code wraps the kubectl steps in:

withEnv(["KUBECONFIG=$HOME/.kube/kubeconfig"]) {
    // Your stuff here
}

If it works in the exec session, it should work in the pipeline as well.
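For illustration, the question's pipeline could be adapted along these lines (a sketch, assuming the kubeconfig is mounted at /home/jenkins/.kube and that the file is actually named kubeconfig; in many setups the file in ~/.kube is simply called config, so adjust the path accordingly):

pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                // Point kubectl at the mounted cluster context file for these steps only
                withEnv(["KUBECONFIG=/home/jenkins/.kube/kubeconfig"]) {
                    sh "kubectl cluster-info"
                    sh "kubectl get svc"
                }
            }
        }
    }
}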
I would personally recommend creating a custom Docker image for Jenkins that contains the kubectl binary and the other utilities necessary for working with a Kubernetes cluster (such as aws-iam-authenticator for AWS EKS IAM-based authentication). This creates isolation between your host system binaries and your Jenkins binaries. Below is the Dockerfile I'm using, which contains helm, kubectl and aws-iam-authenticator.
# This Dockerfile contains Helm, Docker client-only, aws-iam-authenticator, kubectl with Jenkins LTS.
FROM jenkins/jenkins:lts
USER root
ENV VERSION v2.9.1
ENV FILENAME helm-${VERSION}-linux-amd64.tar.gz
ENV HELM_URL https://storage.googleapis.com/kubernetes-helm/${FILENAME}
ENV KUBE_LATEST_VERSION="v1.11.0"
# Install the latest Docker CE binaries
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce && \
    curl -o /tmp/${FILENAME} ${HELM_URL} && \
    tar -zxvf /tmp/${FILENAME} -C /tmp && \
    mv /tmp/linux-amd64/helm /bin/helm && \
    rm -rf /tmp/linux-amd64 && \
    curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && \
    chmod +x /usr/local/bin/kubectl && \
    curl -L https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator -o /usr/local/bin/aws-iam-authenticator && \
    chmod +x /usr/local/bin/aws-iam-authenticator
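With an image like this, the kubectl and docker binary mounts from the original docker run command are no longer needed. A rough sketch of building and running it (the my-jenkins-k8s tag is just an example; the remaining flags are taken from the question, and KUBECONFIG is still handled as described above):

# Build the custom image (the tag is only an example)
docker build -t my-jenkins-k8s .

# Run it; only the Jenkins home, the kubeconfig directory and the Docker socket still need to be mounted
docker run \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add 998 \
  -p 8080:8080 -p 50000:50000 \
  -d --name jenkins my-jenkins-k8s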