How to deploy a release after changing the configuration?

5/9/2019

I had successfully released jhub in my cluster. I then changed the config to pull a different Docker image, as stated in the documentation.

This time, while running the same old command:

# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub

helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE  \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml

where the jupyter-hub-config.yaml file is:

proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156

I get the following problem:

UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases

I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub, then ran the same upgrade command again, only to hit the same error.
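For reference, the cleanup commands I ran were:

kubectl delete ns/jhub
helm delete --purge jhub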

I read a few GitHub issues and found that this error usually means either that the YAML file is invalid or that adding the --force flag helps. However, neither of those applies in my case.
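For completeness, the --force variant suggested in those issues would look roughly like this (it did not help here):

helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml \
  --force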

I want to get this release deployed and also to learn how to update an existing release's configuration.

Note: As described in the aforementioned documentation, a PVC is created as part of the release.
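For reference, it can be listed with (assuming the jhub namespace):

kubectl get pvc --namespace jhub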

-- Aviral Srivastava
devops
jupyter
jupyterhub
kubernetes
kubernetes-helm

1 Answer

7/2/2019

After changing my kubeconfig, the following solution worked for me:

helm init --tiller-namespace=<ns> --upgrade

This works with kubectl 1.10.0 and Helm 2.3.0. I believe it upgrades Tiller to a version compatible with the Helm client.
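You can check afterwards that the client and Tiller (server) versions match with:

helm version --tiller-namespace=<ns>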

Don't forget to set the KUBECONFIG variable before using this command - that step by itself may solve your issue if you didn't do it after changing your kubeconfig.

export KUBECONFIG=<*.kubeconfig>

In my case the cluster.server field in the config had changed, while I left the context.name and current-context fields the same as in the previous config; I'm not sure whether that matters. I hit the same issue on the first attempt to deploy a new release with Helm, but after the first successful deploy it was enough to change the KUBECONFIG variable. I hope this helps.
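To see which cluster.server and current-context your client is actually using, you can inspect the active kubeconfig, for example:

kubectl config current-context
kubectl config view --minify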

-- Vladimir
Source: StackOverflow