I'm trying to use the Jenkins/Kubernetes plugin to orchestrate docker slaves with Jenkins.
I'm using this plugin: https://github.com/jenkinsci/kubernetes-plugin
My problem is that all the slaves are offline so the job can't execute:
I have tried this on my local box using minikube, and on a K8s cluster hosted by our ops group. I've tried both Jenkins 1.9 and Jenkins 2, and I always get the same result. The screenshots are from Jenkins 1.642.4 on K8s v1.2.0.
Here is my configuration... note that when I click 'test connection' I get a success. Also note I didn't need any credentials (this is the only difference I can see vs the documented example).
The Jenkins log shows the following over and over:
INFO: Waiting for slave to connect (11/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:06 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
INFO: Waiting for slave to connect (12/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:07 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
INFO: Waiting for slave to connect (13/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:08 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
When I run kubectl get events, I see this:
24s 24s 1 docker-6b3c2ff27dad3 Pod Normal Scheduled {default-scheduler } Successfully assigned docker-6b3c2ff27dad3 to 96.xxx.xx.159
24s 23s 2 docker-6b3c2ff27dad3 Pod Warning MissingClusterDNS {kubelet 96.xxx.xx.159} kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Pulled {kubelet 96.xxx.xx.159} Container image "jenkinsci/jnlp-slave" already present on machine
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Created {kubelet 96.xxx.xx.159} Created container with docker id 82fcf1bd0328
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Started {kubelet 96.xxx.xx.159} Started container with docker id 82fcf1bd0328
Any ideas?
UPDATE: more log info as suggested by csanchez
➜ docker git:(master) ✗ kubectl get pods --namespace default -o wide
NAME READY STATUS RESTARTS AGE NODE
docker-6bb647254a2a4 1/1 Running 0 1m 96.x.x.159
➜ docker git:(master) ✗ kubectl log docker-6bafbac10b392
Jul 20, 2016 6:45:10 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to 96.x.x.159:50000 (retrying:10)
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
I'll have to look into what this port 50000 is used for.
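For reference, a quick way to test that connection by hand from inside the slave pod (the pod name is from my run above; this uses bash's built-in /dev/tcp in case the image lacks nc):

$ kubectl exec docker-6bb647254a2a4 -- bash -c 'echo > /dev/tcp/96.x.x.159/50000 && echo open'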
I know this is an old post, but none of the above answers solved my issue with offline Jenkins agents. Anyway, I managed to solve this issue myself, and I will leave the solution here!
Note 1: My Kubernetes cluster is running locally on 3 VMs using Hyper-V, and I do not use Nginx!
Note 2: The Jenkins master is running in a Kubernetes pod!
Note 3: You can check out my git repository here: https://github.com/RazvanSebastian/Kubernetes_Cluster/tree/master/3_jenkins_setup
First, create a dedicated namespace for Jenkins:

$ kubectl create namespace jenkins
Then create a ServiceAccount, plus a Role and RoleBinding that let Jenkins manage agent pods:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
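Save the manifests above to a file (jenkins-rbac.yaml is just an example name) and apply it:

$ kubectl apply -f jenkins-rbac.yaml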
A NodePort service will let the Jenkins master be accessible from outside the Kubernetes cluster:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30123
    name: ui
  selector:
    app: master
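After applying this service, a quick sanity check from any machine that can reach a cluster node (replace <node-ip> with one of your VM IPs); you should get an HTTP response from the Jenkins UI:

$ curl -I http://<node-ip>:30123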
A ClusterIP service will work as a discovery service for the internal Jenkins slaves. By default the master listens on port 50000 for the inbound agents!
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
  namespace: jenkins
spec:
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
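Once applied, you can confirm the discovery service has picked up the master pod; the ENDPOINTS column should list the pod IP with port 50000:

$ kubectl get endpoints jenkins-discovery -n jenkins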
Finally, the Deployment for the Jenkins master:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master
  template:
    metadata:
      labels:
        app: master
    spec:
      serviceAccountName: jenkins
      containers:
      - image: jenkins/jenkins:lts
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
        - containerPort: 50000
          name: jnlp-port
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
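Apply the Deployment like the other manifests (again, the filename is just an example) and wait for the rollout to finish:

$ kubectl apply -f jenkins-deployment.yaml
$ kubectl rollout status deployment/jenkins -n jenkins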
OBSERVATION 1: The Jenkins master container has to expose both ports: 8080 (the Jenkins UI port) and 50000 (for the inbound agents). If you do not expose port 50000, the Jenkins slaves will show up as offline.
OBSERVATION 2: The serviceAccountName: jenkins line in the Deployment template grants Jenkins the authority to create/delete Kubernetes resources.
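You can verify that the binding took effect by impersonating the service account (run this as a cluster admin); it should print "yes" if the Role and RoleBinding above are correct:

$ kubectl auth can-i create pods --as=system:serviceaccount:jenkins:jenkins -n jenkins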
$ kubectl get all -o wide --namespace=jenkins
Note: Copy the IP of the Jenkins pod; we will need it to set up Jenkins.
$ kubectl cluster-info | grep master
Now we have to configure Jenkins from the UI. I will append the images, but you can also check the git repo :)
7.1. Install the plugins (Manage Jenkins -> Manage Plugins -> Available): Kubernetes and SSH Agent
7.2. Set up the master node (Manage Jenkins -> Manage Nodes and Clouds -> Tool Icon):
7.3. Set up the Kubernetes plugin (Manage Jenkins -> Configure System -> the Cloud section at the bottom of the page); a summary of the expected field values follows these steps
7.4. Make sure that when you are creating a Jenkins job, the label restriction is set like this:
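For reference, these are the values I would expect in the cloud configuration, based on the services defined above (adjust the names if yours differ):

Kubernetes URL:        https://kubernetes.default (the in-cluster API endpoint)
Kubernetes Namespace:  jenkins
Jenkins URL:           http://jenkins.jenkins.svc.cluster.local:8080
Jenkins tunnel:        jenkins-discovery.jenkins.svc.cluster.local:50000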
I just want to add a bit more explanation to the above answers for newbies.
While exposing the Jenkins UI, you also need to expose the internal port 50000. Here is a simple service for a Jenkins deployment:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
  - port: 8080
    name: "http"
    nodePort: 30000
    targetPort: 8080
  - port: 50000
    name: "slave"
    nodePort: 30010
    targetPort: 50000
  selector:
    app: jenkins
For external access to the Jenkins UI, nodePort is used in the above configuration. I'm exposing port 8080 on nodePort 30000 (the Jenkins UI will now be available at node_ip:30000) and exposing pod port 50000 on nodePort 30010.
Once the svc is created:
$ kubectl get svc -n jenkins
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins 10.233.5.94 <nodes> 8080:30000/TCP,50000:30010/TCP 23m
Now add jenkins_ip:30010 as the Jenkins Tunnel.
When running Jenkins in Kubernetes, the service name is resolvable by both the Jenkins master and the slaves. The best way to configure this is to use the internal DNS and set the Jenkins URL to:
http://jenkins:8080
(assuming you called your service jenkins, and your port on the service is 8080)
No tunnel is required.
The benefit of this approach is that it will survive restarts of your Jenkins without reconfiguration.
A secondary benefit is that you do not have to expose Jenkins to the outside world, thus limiting security risks.
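A quick way to confirm the service name resolves from inside the cluster (assuming a service named jenkins in a jenkins namespace, as in the earlier answers; busybox is used purely as a throwaway test pod):

$ kubectl run dns-test --rm -it --image=busybox --restart=Never -n jenkins -- nslookup jenkins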
Thanks to @csanchez I have the solution.
The problem was that I am running the Jenkins server in Kubernetes, and I didn't specify a fixed agent port (I let it be picked automatically), so changing the config for the Jenkins tunnel solved it.
A better solution is to have the port fixed, as suggested; I'm making that change next.
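For anyone looking for the concrete change: the official jenkins/jenkins image honors the JENKINS_SLAVE_AGENT_PORT environment variable, so pinning the agent port in a Deployment would look roughly like this (a sketch, assuming a Deployment like the one shown in the longer answer above):

        env:
        - name: JENKINS_SLAVE_AGENT_PORT   # honored by the official jenkins/jenkins image
          value: "50000"                   # must match the port exposed by the discovery service

With the port pinned, the tunnel (or discovery service) can always point at port 50000.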
You need to expose both ports 8080 and 50000, as described in the plugin example config: https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/kubernetes/jenkins.yml