How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

9/24/2021

What am I trying to achieve?

We are using a self-hosted GitLab instance and GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) can read files stored on the cluster's node itself (read-only access is sufficient).

What have I tried?

  • Created a (hostPath) PersistentVolume (PV) on our cluster
  • Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim"); both objects are sketched below
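
For reference, a minimal sketch of the two manually created objects follows. Only the claim name "test-api-claim", the hostPath type, and the single-node setup are taken from the description above; the object names, node path, capacity, and storageClassName are placeholder assumptions.

    # Sketch only: names, path, capacity, and storageClassName are assumptions.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: test-api-pv              # assumed name
    spec:
      capacity:
        storage: 1Gi                 # assumed size
      accessModes:
        - ReadWriteOnce              # hostPath supports RWO; the pod can still mount it read-only
      storageClassName: manual       # assumed; must match the claim below
      hostPath:
        path: /data/shared           # assumed path on the node
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-api-claim           # this name is referenced later by the deployment
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: manual       # binds this claim to the PV above
      resources:
        requests:
          storage: 1Gi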

Now GitLab AutoDevops uses a default Helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.

Therefore I modified that chart's deployment.yaml template. As you can see in the following code snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.

{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
        - mountPath: /data
          name: test-pvc
      volumes:
      - name: test-pvc
        persistentVolumeClaim:
          claimName: test-api-claim

What is the problem?

Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:

Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key

Line 71 of the template refers to the valueFrom.secretKeyRef.name entry in the YAML:

    - name: POSTGRES_HOST
      valueFrom:
        secretKeyRef:
          name: app-postgres
          key: privateIP

The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble). I am not using tabs in the YAML file and I have double-checked the indentation.

Two questions

  • Could there be something wrong with my YAML?
  • Does anyone know of another solution to achieve the desired behavior (adding the PVC to the deployment so that the pods actually use it)?

General information

  • We use GitLab EE 13.12.11

  • For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7

Thanks in advance and have a nice day!

-- orerry13
gitlab-autodevops
gitlab-ci
kubernetes
persistent-volume-claims
persistent-volumes

1 Answer

3/16/2022

It seems that adding persistence is now supported in the default Helm chart.

Check the chart's pvc.yaml and deployment.yaml templates.
Given that, it should be enough to edit the values in .gitlab/auto-deploy-values.yaml to meet your needs. Check the defaults in values.yaml for more context; a sketch follows.
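
A minimal sketch of such a values file, assuming the persistence block that newer chart versions read from values.yaml (verify the exact key names against the values.yaml of your auto-deploy-image version; the mount path and size below are placeholders):

    # .gitlab/auto-deploy-values.yaml (sketch; key names assumed from the chart's values.yaml)
    persistence:
      enabled: true
      volumes:
        - name: data
          mount:
            path: /data              # assumed mount path inside the container
          claim:
            name: test-api-claim     # assumed: the claim name the chart should use
            accessMode: ReadWriteOnce
            size: 1Gi                # assumed size

Note that the chart's pvc.yaml template creates the claims itself, so the manually created PVC from the question may no longer be needed.

As for the parse error in the question: the added volumeMounts and volumes keys were placed after the template's final {{- end -}}, and the trailing dash in -}} strips the following newline and indentation, so the rendered manifest comes out malformed. Moving the added keys above that final {{- end -}}, keeping their original indentation, would most likely have made the original approach work:

            resources:
    {{ toYaml .Values.resources | indent 12 }}
            volumeMounts:
            - mountPath: /data
              name: test-pvc
          volumes:
          - name: test-pvc
            persistentVolumeClaim:
              claimName: test-api-claim
    {{- end -}}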

-- Jan Mikeš
Source: StackOverflow