Error: YAML parse error on deployment.yaml: error converting YAML to JSON: yaml: line 50: mapping values are not allowed in this context

1/19/2022

I get an error when I install the Helm chart I created:

helm install -f values.yaml --dry-run testbb ./

I have adjusted the indentation to match valid YAML, and I have compared it against "kubectl get -o yaml" output many times, but it still fails.
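For reference, that parser error means a key: value pair appeared where the parser expected the rest of a plain scalar, typically because two mappings ended up on one line or a value was left unquoted. A minimal sketch with PyYAML (Helm uses Go's yaml library, which words the message "in this context" rather than "here", but it is the same error class):

```python
import yaml  # PyYAML; Go's yaml library phrases the message slightly differently

# Two key/value pairs collapsed onto one line: the second colon shows up
# where the parser expects the rest of a plain scalar value.
broken = "name: frontend-http defaultMode: 420"
try:
    yaml.safe_load(broken)
except yaml.YAMLError as err:
    print(err)  # reports "mapping values are not allowed here" at the second colon
```

Also note that Helm reports line numbers against the rendered manifest, so line 50 of the template and line 50 of the error may not be the same line; rendering with "helm template --debug" shows what the parser actually saw.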

Line 50 in the YAML file contains the volume name: frontend-http

Does anyone know how to solve this, please? Here is the entire YAML template file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "frontend-nginx.fullname" . }}
  labels:
    {{- include "frontend-nginx.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "frontend-nginx.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "frontend-nginx.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "frontend-nginx.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
      - name: {{ .Chart.Name }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 12 }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        volumeMounts:
          - mountPath: /usr/local/nginx/conf/nginx.conf
            name: {{ .Values.nginxconfig.nginxcm }}-volume
            subPath: nginx.conf
          - mountPath: /usr/local/nginx/html/app
            name: data-storage
      volumes: 
      - configMap:
          defaultMode: 420
          name: frontend-http
        name: frontend-http-volume
      {{- if .Values.persistentVolume.enabled }}
      - name: data-storage
        persistentVolumeClaim:
          claimName: {{ .Values.persistentVolume.existingClaim | default (include "frontend-nginx.fullname" .) }}
      {{- else }}
      - name: data-storage
        emptyDir: {}
      {{- end }}            
{{- if .Values.persistentVolume.mountPaths }}
{{ toYaml .Values.persistentVolume.mountPaths | indent 12 }}
{{- end }}            
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
-- Mustaine
helm3
kubernetes
kubernetes-helm

1 Answer

1/19/2022

Try swapping "configMap" and "name" so each volume entry starts with its "name" key:

      volumes: 
      - name: frontend-http-volume
        configMap:
          defaultMode: 420
          name: frontend-http
      {{- if .Values.persistentVolume.enabled }}
      - name: data-storage
        persistentVolumeClaim:
          claimName: {{ .Values.persistentVolume.existingClaim | default (include "frontend-nginx.fullname" .) }}
      {{- else }}
      - name: data-storage
        emptyDir: {}
      {{- end }} 
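A quick way to check a candidate fix without touching the cluster is to render the chart ("helm template testbb . -f values.yaml") and load the result. A minimal sketch with PyYAML, using the corrected volumes block inline as a stand-in for the real rendered file:

```python
import yaml

# Stand-in for `helm template` output; in practice, pipe the rendered
# manifest through yaml.safe_load_all and inspect the Deployment.
rendered = """
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      volumes:
      - name: frontend-http-volume
        configMap:
          defaultMode: 420
          name: frontend-http
"""
doc = yaml.safe_load(rendered)
volumes = doc["spec"]["template"]["spec"]["volumes"]
# Every volume entry must carry a name for the pod spec to be valid.
assert all("name" in v for v in volumes)
print([v["name"] for v in volumes])  # ['frontend-http-volume']
```

If the rendered manifest loads cleanly here, the remaining errors are Kubernetes schema issues rather than YAML syntax.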
-- KubePony
Source: StackOverflow