OpenShift: trigger pod restarts sequentially

2/5/2018

My application loads data during startup, so I need to restart the application whenever the data changes.

The data is loaded from an Oracle schema and can be changed by another application.

If the data changes, the application becomes only partially functional and needs to be restarted.

Requirement: the restart should happen automatically and without downtime (an old pod should only be killed once a new one passes its readiness check).

How can this requirement be fulfilled?

Notes:

  1. I would really like to use a liveness probe that checks a health-check URL. Issue: AFAIK the liveness probe kills a pod as soon as the check fails, so all pods would be killed simultaneously, which leads to downtime during startup (see the probe sketch after this list).
  2. The desired behavior can be achieved with a rolling deployment. However, I don't want to trigger it manually.
  3. For simplicity, I don't want to implement reloading data while the pod is running: it can load data only during startup. If a pod is not fully functional, it should be killed and recreated.
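
For reference, a minimal probe configuration of the kind described in note 1 might look like the sketch below; the container name, image, port 8080, and /healthz path are assumptions, not taken from the question. A failing readiness probe only removes the pod from the service endpoints, while a failing liveness probe restarts the container, which is why relying on liveness alone can take all pods down at once.

```yaml
# Hypothetical container spec: names, port, and /healthz path are assumptions.
spec:
  containers:
  - name: myapp                 # assumed container name
    image: myapp:latest         # assumed image
    readinessProbe:             # failing => pod removed from Service endpoints, not killed
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    livenessProbe:              # failing => container is restarted by the kubelet
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 30
      failureThreshold: 3
```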
-- idobr
kubernetes
openshift

1 Answer

2/5/2018

Two ways I can think of:

  - Use StatefulSets: the pods are started in order and terminated in reverse order.
  - Use a Deployment with spec.strategy.type = RollingUpdate and set maxUnavailable to 0 (with maxSurge at least 1), so an old pod is only removed after its replacement passes the readiness check. The relevant field is:

.spec.strategy.rollingUpdate.maxUnavailable
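
A minimal Deployment sketch along those lines, assuming a hypothetical app named myapp and a readiness probe like the one the question describes; the replica count, labels, image, port, and health-check path are placeholders:

```yaml
# Hypothetical Deployment: names, labels, image, and replica count are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never take an old pod down before a replacement is ready
      maxSurge: 1            # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        readinessProbe:
          httpGet:
            path: /healthz   # assumed health-check URL
            port: 8080
```

Note that the strategy alone does not start a rollout; a rollout begins when the pod template changes (for example, a new image tag or an updated annotation), which a script or job could perform when the data changes. On OpenShift, a DeploymentConfig can also be rolled with oc rollout latest.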

-- Baltazar Chua
Source: StackOverflow