DevOps CI/CD pipelines broken after Kubernetes upgrade to v1.22

2/10/2022

Present state

Kubernetes v1.22 dropped support for several v1beta1 APIs, including the extensions/v1beta1 Ingress. That broke our release pipeline and we are not sure how to fix it.
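
A quick way to confirm what the upgraded cluster actually serves (assuming kubectl access) is to ask the API server directly; on v1.22 the old extensions/v1beta1 and networking.k8s.io/v1beta1 Ingress versions should be gone:

# List served API versions and filter for the Ingress-related groups
kubectl api-versions | grep -E 'extensions|networking.k8s.io'

# Show which resources (including Ingress) the networking.k8s.io group still serves
kubectl api-resources --api-group=networking.k8s.io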

We use build pipelines to build .NET Core applications and push them to Azure Container Registry. Release pipelines then use Helm to upgrade the deployments in the cluster from that ACR. This is exactly how it looks:

Build pipeline:

  1. .NET: download, restore, build, test, publish
  2. Docker task v0: Build
  3. Docker task v0: Push to the ACR
  4. Publish artifact to Azure Pipelines

Release pipeline:

  1. Helm tool installer: install Helm v3.2.4 ("Check for latest version of Helm" unchecked) and install the newest kubectl ("Check for latest version" checked)
  2. Bash task:

az acr login --name <acrname>
az acr helm repo add --name <acrname>

  3. Helm upgrade task:
    • chart name: <acrname>/<chartname>
    • version: empty
    • release name: <servicename>

After the upgrade to Kubernetes v1.22 we are getting the following error in release step 3:

Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1".

What I've already tried

The error is pretty obvious, and the Helm compatibility table states clearly that I need to upgrade the release pipelines to use at least Helm v3.7.x. Unfortunately, in that version the OCI functionality (more on this shortly) is still experimental, so at least v3.8.x has to be used.
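
For context, OCI registry support in Helm v3.7.x and earlier sits behind an experimental flag, while from v3.8.0 onwards it is enabled by default; a minimal sketch of the difference:

# Helm <= 3.7: OCI commands only work with the experimental flag set
export HELM_EXPERIMENTAL_OCI=1
helm registry login <acrname>.azurecr.io

# Helm >= 3.8: OCI support is generally available, no flag needed
helm version --short   # expect v3.8.x or newer here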

Bumping helm version to v3.8.0

That makes release step 3 report:

Error: looks like "https://<acrname>.azurecr.io/helm/v1/repo" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "acrMetadata"

After reading the Microsoft tutorial on using Helm with ACR, I learned that the az acr helm commands are based on Helm v2 and are therefore deprecated, and that OCI artifacts should be used instead.
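
As an aside, the docs also describe a token-based login for OCI, in case you'd rather not keep a password in the pipeline; a sketch, assuming the Azure CLI is already authenticated (the all-zero GUID is the fixed username ACR expects with an access token):

# Exchange the Azure CLI login for an ACR access token and use it with helm
USER_NAME="00000000-0000-0000-0000-000000000000"
PASSWORD=$(az acr login --name <acrname> --expose-token --output tsv --query accessToken)
helm registry login <acrname>.azurecr.io --username $USER_NAME --password $PASSWORD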

Switching to OCI part 1

After reading that, I changed release step 2 to a one-liner:

helm registry login <acrname>.azurecr.io --username <username> --password <password>

That now gives me Login Succeeded in release step 2, but release step 3 fails with:

Error: failed to download "<acrname>/<reponame>".
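
At this point it may be worth checking whether the chart is visible as an OCI repository at all, since charts pushed through the old az acr helm workflow don't necessarily show up the same way; a quick check, assuming the Azure CLI is logged in:

# List the repositories the registry knows about and look for <reponame>
az acr repository list --name <acrname> --output table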

Switching to OCI part 2

I thought the Helm task might be incompatible with the new approach, so I removed release step 3 and decided to do the upgrade from the command line in step 2. Step 2 now looks like this:

helm registry login <acrname>.azurecr.io  --username <username> --password <password>
helm upgrade --install --wait -n <namespace> <deploymentName> oci://<acrname>.azurecr.io/<reponame> --version latest --values ./values.yaml

Unfortunately, that still gives me:

Error: failed to download "oci://<acrname>.azurecr.io/<reponame>" at version "latest"
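
Since OCI chart versions map to registry tags, --version latest can only resolve if a tag literally named latest exists; listing the tags shows which concrete versions are available (a sketch, assuming the chart was pushed as an OCI artifact):

# Show which chart versions (tags) exist for this repository
az acr repository show-tags --name <acrname> --repository <reponame> --output table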

Helm pull, export, upgrade instead of just upgrade

The next attempt was to split the helm upgrade into separate helm pull, helm export and helm upgrade steps, but

helm pull oci://<acrname>.azurecr.io/<reponame> --version latest

gives me:

Error: manifest does not contain minimum number of descriptors (2), descriptors found: 0

Changing docker build and docker push tasks to v2

I also tried changing the Docker tasks in the build pipeline to v2, but that didn't change anything at all.

-- Rychu
azure-container-registry
azure-devops
azure-pipelines
kubernetes
kubernetes-helm

2 Answers

2/14/2022

Just to complete the picture: the change to the ingress YAML in the chart definition mentioned by @wubbalubba wasn't the only thing I had to do to fix our pipelines:

  1. First, obviously, change the apiVersion to networking.k8s.io/v1 in the ingress YAML file inside the chart definition and increment the chart version. Then package the chart again and push it to the ACR:
helm package .
helm push .\generated-new-chart.tgz oci://<acrname>.azurecr.io/
  2. The next thing, learned from this guide, was to update, or rather simply remove, all the Helm-owned secrets and configmaps connected with my services:
kubectl delete secret -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
kubectl delete configmap -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
  3. Lastly, remove the Helm upgrade deployment step. A shell script took over its responsibility instead:
helm registry login $(ContainerRegistryUrl) --username $(ContainerRegistryUsername) --password $(ContainerRegistryPassword)
az aks get-credentials --resource-group $(Kubernetes__ResourceGroup) --name $(Kubernetes__Cluster)
helm upgrade --install --wait -n $(NamespaceName) $(ServiceName) oci://$(ContainerRegistryUrl)/services-generic-chart --version 2 -f ./values.yaml

Only then was I able to redeploy everything successfully.
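
Optional sanity checks after the redeploy (a sketch, reusing the pipeline variables from the script above):

# Confirm the release was upgraded and is in a deployed state
helm list -n $(NamespaceName)

# The ingress should now come back through the networking.k8s.io/v1 API
kubectl get ingress -n $(NamespaceName) -o yaml | grep apiVersion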

-- Rychu
Source: StackOverflow

2/10/2022

Have you tried changing the Ingress object's apiVersion to networking.k8s.io/v1 (or networking.k8s.io/v1beta1 on clusters older than 1.22)? Support for Ingress in both the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions is dropped in k8s 1.22.

Our ingress.yaml file in our Helm chart looks something like this to support multiple k8s versions. You can ignore the AWS-specific annotations since you're using Azure. Our chart has a global ingress.enablePathType value because, at the time the YAML file was written, the AWS Load Balancer Controller did not support pathType, so we set the value to false.

{{- if .Values.global.ingress.enabled -}}
{{- /* assumed for this excerpt: the first application's service name comes from values, mirroring applicationTwo below */ -}}
{{- $applicationOneServiceName := .Values.global.applicationOne.serviceName -}}
{{- $useV1Ingress := and (.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress") .Values.global.ingress.enablePathType -}}
{{- if $useV1Ingress -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: example-ingress
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
  annotations:
    {{- if .Values.global.ingress.group.enabled }}
    alb.ingress.kubernetes.io/group.name: {{ required "ingress.group.name is required when ingress.group.enabled is true" .Values.global.ingress.group.name }}
    {{- end }}
    {{- with .Values.global.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    # Add these tags to the AWS Application Load Balancer
    alb.ingress.kubernetes.io/tags: k8s.namespace/{{ .Release.Namespace }}={{ .Release.Namespace }}
spec:
  rules:
    - host: {{ include "my-chart.applicationOneServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $applicationOneServiceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ $applicationOneServiceName }}
              servicePort: http-grails
          {{- end }}
    - host: {{ include "my-chart.applicationTwoServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.global.applicationTwo.serviceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ .Values.global.applicationTwo.serviceName }}
              servicePort: http-grails
          {{- end }}
{{- end }}
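
One way to sanity-check which branch the template takes before deploying is to render the chart locally and tell Helm which capabilities to pretend the cluster has (a sketch; my-release and the value overrides are illustrative, and note the v1 branch also needs global.ingress.enablePathType set to true):

# Render as a 1.22 cluster that serves the v1 Ingress API
helm template my-release . \
  --kube-version 1.22.0 \
  --api-versions networking.k8s.io/v1/Ingress \
  --set global.ingress.enabled=true \
  --set global.ingress.enablePathType=true

# Render as an older cluster to confirm the v1beta1 fallback still works
helm template my-release . --kube-version 1.18.0 --set global.ingress.enabled=true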
-- wubbalubba
Source: StackOverflow