Helm deployments with readinessProbe

9/18/2018

I am using helm in my CI to upgrade deployments with newer versions of charts.

helm upgrade --wait --install .

Expected behavior: the --wait flag should wait until the readinessProbe defined in the new chart succeeds. See also: https://docs.helm.sh/helm/#helm-upgrade

However, it does not wait; it simply rolls out the new chart even while the readinessProbe is failing.

This results in a failed new release and a killed old one.

It has nothing to do with https://github.com/helm/helm/issues/3173: the readinessProbe is executed properly and fails, but Helm simply does not wait for it.

Has anybody faced issues like this? Thanks!

-- LeonG
kubernetes
kubernetes-helm

1 Answer

9/24/2018

The issue was fixed by setting the following strategy in the Deployment resource's YAML:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

Kubernetes Deployment Documentation:

Note: The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 1 and .spec.replicas to 1 so if you haven’t cared about setting those parameters, your Deployment can have 100% unavailability by default! This will be fixed in Kubernetes in a future version.
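For context, here is a minimal sketch of how that strategy fits into a complete Deployment alongside a readinessProbe. All names, the image, the port, and the /healthz path are hypothetical placeholders; with maxUnavailable: 0, the old pod is only terminated once a new pod has passed its readinessProbe, which is what makes helm upgrade --wait block on a failing probe instead of killing the old release.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # bring up at most one extra pod during the rollout
      maxUnavailable: 0         # never remove an old pod before a new one is ready
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # hypothetical image tag
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz    # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

With this configuration, a new pod that never becomes ready blocks the rollout, and --wait times out with a failed upgrade rather than silently replacing the working release.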

-- LeonG
Source: StackOverflow