Helm Hook to run kubectl command

10/22/2019

I would like to run a kubectl command from a pre-upgrade Helm hook, but I can't seem to find any documentation on how to achieve this.

Do I have to create a docker image that contains kubectl in order to achieve this?

Or is there some way of achieving this without using a container?

I have a basic Helm hook which looks like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Chart.Name }}-change-pvc-hook
  labels:
    app: {{ .Chart.Name }}
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded, before-hook-creation
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app: {{ .Chart.Name }}
    spec:
      restartPolicy: Never
      containers:
        - name: pre-upgrade-change-pvc

If someone could explain how to run kubectl without a container, or how else I can achieve this, that would be great.

-- user3292394
kubectl
kubernetes
kubernetes-helm

3 Answers

10/31/2019

You can do it the way the Prometheus Operator does its cleanup (a pre-delete hook) in its Helm chart: prometheus operator kubectl usage

Basically, you can use the image k8s.gcr.io/hyperkube:v1.12.1, something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: somename-operator-cleanup
  namespace: somenamespace
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "3"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: someapp-operator
spec:
  template:
    metadata:
      name: somename-operator-cleanup
      labels:
        app: someapp
    spec:
    {{- if .Values.global.rbac.create }}
      serviceAccountName: {{ template "prometheus-operator.operator.serviceAccountName" . }}
    {{- end }}
      containers:
        - name: kubectl
          image: "k8s.gcr.io/hyperkube:v1.12.1"
          imagePullPolicy: "IfNotPresent"
          command:
          - /bin/sh
          - -c
          - |
              # your kubectl commands go here, for example:
              kubectl delete alertmanager   --all;
              kubectl delete prometheus     --all;
              kubectl delete prometheusrule --all;
              kubectl delete servicemonitor --all;
              sleep 10;
              kubectl delete crd alertmanagers.monitoring.coreos.com;
              kubectl delete crd prometheuses.monitoring.coreos.com;
              kubectl delete crd prometheusrules.monitoring.coreos.com;
              kubectl delete crd servicemonitors.monitoring.coreos.com;
              kubectl delete crd podmonitors.monitoring.coreos.com;
      restartPolicy: OnFailure
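
For the kubectl calls in the hook to be authorized, the Job's service account needs RBAC permissions (that is what serviceAccountName points to). A minimal sketch, assuming cluster-wide access and example names (for a real chart, scope the role down to the resources the hook touches):

# Sketch: service account and binding for the hook Job (names are examples)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: somename-operator-cleanup
  namespace: somenamespace
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: somename-operator-cleanup
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "2"
    "helm.sh/hook-delete-policy": hook-succeeded
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin  # bind a narrower ClusterRole in a real chart
subjects:
  - kind: ServiceAccount
    name: somename-operator-cleanup
    namespace: somenamespace

The lower hook weights make Helm create the service account and binding before the cleanup Job itself (which has weight 3).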

Another option is to curl the Kubernetes API directly, like here. Note that you need automountServiceAccountToken: true, and then you can use the Bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token.

You just need an image with curl for that. You can use zakkg3/opennebula-alpine-bootstrap for this.
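
The hook Job around such a curl call might look roughly like this (a sketch; names are placeholders, and the service account still needs RBAC for whatever objects the script creates):

apiVersion: batch/v1
kind: Job
metadata:
  name: somename-bootstrap
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: somename-bootstrap    # needs RBAC for the objects the script touches
      automountServiceAccountToken: true        # token ends up under /var/run/secrets/kubernetes.io/serviceaccount/
      restartPolicy: OnFailure
      containers:
        - name: bootstrap
          image: "zakkg3/opennebula-alpine-bootstrap"   # or any image with curl and a shell
          command:
            - /bin/sh
            - -c
            - |
              NAMESPACE=$( cat /var/run/secrets/kubernetes.io/serviceaccount/namespace )
              # your curl call against https://kubernetes.default.svc goes here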

For example, here I create a secret from files using curl instead of kubectl:

curl -s -X POST -k https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/secrets \
                -H "Authorization: Bearer $( cat /var/run/secrets/kubernetes.io/serviceaccount/token )" \
                -H "Content-Type: application/json" \
                -H "Accept: application/json" \
                -d "{ \"kind\": \"Secret\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"{{ include "opennebula.fullname" . }}-ssh-keys\", \"namespace\": \"${NAMESPACE}\" }, \"type\": \"Opaque\", \"data\": { \"authorized_keys\": \"$( cat opennebula-ssh-keys/authorized_keys | base64 | tr -d '\n' )\", \"config\": \"$( cat opennebula-ssh-keys/config | base64 | tr -d '\n' )\", \"id_rsa\": \"$( cat opennebula-ssh-keys/id_rsa | base64 | tr -d '\n' )\", \"id_rsa.pub\": \"$( cat opennebula-ssh-keys/id_rsa.pub | base64 | tr -d '\n' )\" } }" > /dev/null

Note that it's good practice to redirect the output to /dev/null, otherwise you will end up with this output in your log management system (ELK / Loki).

-- NicoKowe
Source: StackOverflow

10/31/2019

Do I have to create a docker image that contains kubectl in order to achieve this?

Yes, you have to create one, because container images are usually lightweight and contain only the most basic tools. You can build a container image that includes kubectl using a Dockerfile.
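
A minimal Dockerfile for such an image could look like this (a sketch; the base image and kubectl version are assumptions, pin them to match your cluster):

# Sketch: build a small image that contains kubectl (version is an example)
FROM alpine:3.10
RUN apk add --no-cache curl \
 && curl -fsSL -o /usr/local/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl
ENTRYPOINT ["kubectl"]

Alternatively, you can reuse an existing image that already ships kubectl, such as the hyperkube image shown in the other answer.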

A second option is to create your own mutating webhook which will modify the PVC (using a patch).

Mutating admission Webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults. After all object modifications are complete, and after the incoming object is validated by the API server, validating admission webhooks are invoked and can reject requests to enforce custom policies.

This way you could modify the PVC before Helm install creates the release.
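
The registration for such a webhook might look roughly like this (a sketch; the service name, path and caBundle are placeholders, and you still have to deploy the webhook server that returns the JSON patch):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pvc-mutator
webhooks:
  - name: pvc-mutator.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["persistentvolumeclaims"]
    clientConfig:
      service:
        namespace: default
        name: pvc-mutator
        path: /mutate
      caBundle: "<base64-encoded CA certificate>"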

-- PjoterS
Source: StackOverflow

10/23/2019

I am looking to do something similar with my Helm hooks.

While looking around I found this question: Running kubectl commands Helm post install

But it doesn't offer much; I am currently looking into Helm plugins: https://helm.sh/docs/related/#helm-plugins

Hope you find your answer.

-- Niall_Maher
Source: StackOverflow