I am trying to delete temporary pods and other artifacts using helm delete, and I want this helm delete to run on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, when I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args: ["delete", "--purge", "$(helm ls -a -q temppods.*)"
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig
I ran:
oc create -f ./mycron.yaml
This created the cronjob.
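To confirm the schedule was registered, I can list it with:
oc get cronjob cronbox -n mynamespace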
Every five minutes a pod is created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods whose names begin with temppods to be deleted.
What I see in the logs of the pod is:
Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53
The CronJob container spec is trying to delete a release named (literally):
$(helm ls -a -q temppods.*)
This release doesn't exist, and it fails helm's expected naming conventions.
The alpine/helm:2.9.1 container image has an entrypoint of helm. This means any arguments are passed directly to the helm binary via exec. No shell expansion ($()) occurs because there is no shell running.
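To make the difference concrete, a rough comparison (the second form assumes a shell such as sh is available, which it is in alpine-based images):

# exec form (what the CronJob args produce): no shell, so helm receives the
# literal string "$(helm ls -a -q temppods.*)" as a release name
helm delete --purge '$(helm ls -a -q temppods.*)'

# shell form: the command substitution runs first and its output becomes
# the release names handed to helm delete
sh -c 'helm delete --purge $(helm ls -a -q temppods.*)'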
To do what you are expecting, you can use sh, which is available in alpine-based images.
sh -uexc 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases'
In a Pod spec this translates to:
spec:
  containers:
  - name: cronbox
    command: ['sh']
    args:
    - '-uexc'
    - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
As a side note, helm is not the most reliable tool when clusters or releases get into vague states. Running multiple helm commands that interact with the same release at the same time usually spells disaster, and on the surface that seems likely here. It may be worth asking a separate question about other ways to achieve the process you are implementing.
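Putting it together, a sketch of the full CronJob with this change folded into your original manifest (same names, env, and mounts as in the question; note the service account is referenced via serviceAccountName on the pod template spec, which is where Kubernetes expects it):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cron-z
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            # run a shell so the $(...) command substitution actually happens
            command: ['sh']
            args:
            - '-uexc'
            - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig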