How to prevent nodes from restarting in a Kubernetes Deployment

2/28/2017

I'm trying to port an existing framework for benchmarking distributed systems to Kubernetes. One of the framework's features is fault injection (removing or adding nodes in the network). I cannot use Deployments, because when a node finishes its work the pod restarts automatically, which you cannot prevent in a Deployment since it is designed for workloads that never terminate.
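For reference, this is roughly what I tried (names and image are placeholders). As far as I can tell, a Deployment's pod template only accepts `restartPolicy: Always`, so a finished container is always restarted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bench-nodes        # hypothetical name
spec:
  replicas: 200
  selector:
    matchLabels:
      app: bench-node
  template:
    metadata:
      labels:
        app: bench-node
    spec:
      restartPolicy: Always   # the only value a Deployment accepts;
                              # Never / OnFailure are rejected on creation
      containers:
      - name: node
        image: bench/node:latest   # hypothetical benchmark image
```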

Naturally, I turned to Jobs. My initial plan was to set the number of completions and the parallelism to the same value (e.g. 200), so that all the pods run together and, once they finish, are not restarted.
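A minimal sketch of that plan (names and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: bench-job          # hypothetical name
spec:
  completions: 200   # total number of pods that must run to completion
  parallelism: 200   # run all of them at the same time
  template:
    spec:
      restartPolicy: Never   # finished pods stay finished
      containers:
      - name: node
        image: bench/node:latest   # hypothetical benchmark image
```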

However, with Kubernetes Jobs you cannot scale the total number of pods, only the parallelism. I want to be able to scale up the total: if a Job has 200 completions and a parallelism of 200, I want to go to 205 completions and 205 parallelism, while keeping `restartPolicy: Never`.

Is there another resource in Kubernetes I could use, other than Deployments or Jobs?

-- jocelynthode
docker
kubernetes
scale

0 Answers