Hi, I recently deployed csanchez's jenkins-kubernetes plugin on a local Kubernetes cluster (https://github.com/jenkinsci/kubernetes-plugin). This also means I used the provided jenkins-local.yml and service-local.yml. The deployment went well and everything is set up. However, when I try to run multiple jobs at once, the jobs wait in the queue and only one executor is spawned. Each job executes a shell script that prints "hello x friend" and then calls "sleep" for 1m or 30s.
Is there a certain criterion by which the plugin will spawn multiple containers? Is it supposed to spawn a container (as long as it doesn't surpass the container cap) for each job in the queue?
Jenkins build: 1.642.2
Kubernetes plugin: 0.6
Kubernetes: 1.2
The Kubernetes plugin points to the internal Jenkins master at container0ip:8080
The container cap is set to 5
The Docker image deployed is jenkins/jnlp-slave
Edit
When there are multiple jobs in the queue, sometimes more than one executor comes up. After reading the logs of the containers that die, all of them die because they cannot connect to containerip:8080/tcpSlaveAgentListener/.
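One way to narrow this down is to check whether the JNLP listener endpoint is reachable from inside the cluster at all. A minimal sketch, using the placeholder address from the question (containerip is an assumption standing in for your real master address):

```shell
# Build the listener URL the dying agents are trying to reach.
JENKINS_URL="http://containerip:8080"
LISTENER_URL="$JENKINS_URL/tcpSlaveAgentListener/"
echo "$LISTENER_URL"

# From a pod in the same cluster (e.g. via kubectl exec), a healthy
# master answers this endpoint with HTTP 200 and JNLP port headers:
#   curl -sI "$LISTENER_URL"
```

If the curl from inside a pod fails while it works from the master itself, the problem is the address the plugin hands to the agents, not Jenkins itself.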
Try adding this to your /etc/sysconfig/jenkins:
-Dhudson.slaves.NodeProvisioner.MARGIN=1
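Concretely, the flag goes into the JENKINS_JAVA_OPTIONS line of that file. A sketch, working on a temporary copy (the existing headless option is an assumption; on a real box you would edit /etc/sysconfig/jenkins directly):

```shell
# Make a throwaway copy of a typical sysconfig line for demonstration.
CONF=$(mktemp)
echo 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true"' > "$CONF"

# Splice the provisioner flag inside the existing quoted option string.
sed 's/^JENKINS_JAVA_OPTIONS="\(.*\)"$/JENKINS_JAVA_OPTIONS="\1 -Dhudson.slaves.NodeProvisioner.MARGIN=1"/' \
    "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"

cat "$CONF"
```

After editing the real file, restart Jenkins so the JVM picks up the new option.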
This helped back when I was using Mesos, and I kept it around for Kubernetes.
There is additional configuration needed for Jenkins to spawn agents ASAP; see https://github.com/jenkinsci/kubernetes-plugin#over-provisioning-flags
In short, add the Jenkins startup parameters
-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
and restart the Jenkins server.
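For example, the flags could be passed on the java command line or appended to JENKINS_JAVA_OPTIONS, depending on how Jenkins is launched (the paths and service name below are assumptions, adjust for your install):

```shell
# The two over-provisioning flags from the plugin README, bundled for reuse.
OPTS="-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
echo "$OPTS"

# Either launch Jenkins directly with them:
#   java $OPTS -jar jenkins.war
# or append them to JENKINS_JAVA_OPTIONS in /etc/sysconfig/jenkins,
# then restart the service:
#   sudo systemctl restart jenkins
```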