Helm appears to parse my chart differently depending on whether I use --dry-run --debug?

10/17/2018

So I was deploying a new cronjob today and got the following error:

Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]

Here's some output from running helm on the same chart, no changes made, but with the --debug --dry-run flags:
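(For reference, the dry-run invocation was along these lines; this is Helm 2, hence the Tiller heritage in the output, and the chart path here is a placeholder:

helm install --name acs-export-cronjob ./generic-job --debug --dry-run
)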

 NAME:   acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *

COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job

HOOKS:
MANIFEST:

---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
    spec:
    metadata:
        name: acs-export-cronjob
        labels:
        jobgroup: acs-export-jobs
        app: generic-job
        chart: "generic-job-0.1.0"
        release: "acs-export-cronjob"
        heritage: "Tiller"
    spec:
        template:
        metadata:
            labels:
            jobgroup: acs-export-jobs
            app: generic-job
            chart: "generic-job-0.1.0"
            release: "acs-export-cronjob"
            heritage: "Tiller"
            annotations:
            iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
        spec:
            restartPolicy: Never   #<----------this is not 'Always'!!
            serviceAccountName: acs-export-cronjob-sa
            tolerations:
            - key: sonic-node-group
            operator: Equal
            value: api
            effect: NoSchedule
            nodeSelector:
            sonic-node-group: api
            volumes:
            - name: config
            emptyDir: {}
            initContainers:
            - name: "get-users-vmargs-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
            volumeMounts:
            - mountPath: /config
                name: config
            - name: "get-users-yaml-appconfig-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
            volumeMounts:
            - mountPath: /config
                name: config
            containers:     #<--------this field is not missing!
            - image: <censored>.amazonaws.com/sonic/acs-export:latest
            imagePullPolicy: Always
            name: "users-batch"
            command:
            - "bash"
            - "-c"
            - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
            env:
            - name: FRENV
                value: "batch"
            - name: STACKNAME
                value: eu1-test
            - name: SPRING_PROFILES
                value: "export-job"
            - name: NAMESPACE
                valueFrom:
                fieldRef:
                    fieldPath: metadata.namespace
            volumeMounts:
            - mountPath: /config
                name: config
            resources:
                limit:
                cpu: 100m
                memory: 1Gi

If you paid attention, you may have noticed the line in the debug output that sets restartPolicy to Never (I added the comment after the fact), quite the opposite of the Always the error message claims.

You may also have noticed the line (again, the comment is mine) where the supposedly missing mandatory field containers is clearly specified, again in direct contradiction to the error message.

What's going on here?

-- Anders Martini
kubernetes
kubernetes-helm

2 Answers

10/17/2018

This may be due to a formatting error. Look at the examples here and here. The structure should be:

jobTemplate:  
    spec:  
      template:  
        spec:  
          restartPolicy: Never
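
Written out as a minimal, complete CronJob (names and image here are placeholders), the two fields from the error message sit at this level:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:                  # JobSpec
      template:            # PodTemplateSpec
        metadata:
          labels:
            app: example
        spec:              # PodSpec - restartPolicy and containers belong here
          restartPolicy: Never
          containers:
          - name: example
            image: example:latest
            command: ["echo", "hello"]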

In the provided output, however, template and the pod-level spec end up at the same indentation level, so restartPolicy is not nested where the API expects it:

jobTemplate:
       spec:
        template:
        spec:
            restartPolicy: Never   #<----------this is not 'Always'!!

The same goes for spec.jobTemplate.spec.template.spec.containers. Presumably Helm falls back to some default values instead of yours. You can also try generating the YAML file, converting it to JSON, and applying that.
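A rough sketch of that last suggestion, assuming the chart lives in ./generic-job and Python with PyYAML is available:

helm template ./generic-job -f values.yaml > rendered.yaml
python -c 'import sys, yaml, json; [print(json.dumps(d, indent=2)) for d in yaml.safe_load_all(sys.stdin)]' < rendered.yaml

Seeing the manifests as JSON makes the nesting unambiguous, so a stray metadata block or duplicated spec level stands out immediately; once it looks right you can kubectl apply the rendered file as usual.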

-- VKR
Source: StackOverflow

10/18/2018

Hah! Found it! It was a simple mistake, actually: I had an extra spec/metadata section under jobTemplate, so that level was duplicated. Removing one of the dupes fixed my issue.
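Boiled down, the broken manifest had this shape around jobTemplate:

jobTemplate:
    spec:
    metadata:
        name: acs-export-cronjob
        labels: ...
    spec:
        template: ...

while the fixed one keeps only the single spec/template pair:

jobTemplate:
   spec:
      template: ...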

I really wish Helm's error messages were more helpful.

The corrected chart looks like:

 NAME:   acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *

COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job

HOOKS:
MANIFEST:

---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
   spec:
      template:
         metadata:
            labels:
            jobgroup: acs-export-jobs
            app: generic-job
            chart: "generic-job-0.1.0"
            release: "acs-export-cronjob"
            heritage: "Tiller"
            annotations:
            iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
        spec:
            restartPolicy: Never   
            serviceAccountName: acs-export-cronjob-sa
            tolerations:
            - key: sonic-node-group
            operator: Equal
            value: api
            effect: NoSchedule
            nodeSelector:
            sonic-node-group: api
            volumes:
            - name: config
            emptyDir: {}
            initContainers:
            - name: "get-users-vmargs-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
            volumeMounts:
            - mountPath: /config
                name: config
            - name: "get-users-yaml-appconfig-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
            volumeMounts:
            - mountPath: /config
                name: config
            containers:     
            - image: <censored>.amazonaws.com/sonic/acs-export:latest
            imagePullPolicy: Always
            name: "users-batch"
            command:
            - "bash"
            - "-c"
            - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
            env:
            - name: FRENV
                value: "batch"
            - name: STACKNAME
                value: eu1-test
            - name: SPRING_PROFILES
                value: "export-job"
            - name: NAMESPACE
                valueFrom:
                fieldRef:
                    fieldPath: metadata.namespace
            volumeMounts:
            - mountPath: /config
                name: config
            resources:
                limit:
                cpu: 100m
                memory: 1Gi
-- Anders Martini
Source: StackOverflow