Unable to get Azure Key Vault integrated with Azure Kubernetes Service

2/14/2021

Stuck on getting this integration working. I'm following the documentation step-by-step.

The following is everything I have done starting from scratch, so if it isn't listed here, I haven't tried it (I apologize in advance for the long series of commands):

# create the resource group
az group create -l westus -n k8s-test

# create the azure container registry
az acr create -g k8s-test -n k8stestacr --sku Basic -l westus

# create the azure key vault and add a test value to it
az keyvault create --name k8stestakv --resource-group k8s-test -l westus
az keyvault secret set --vault-name k8stestakv --name SECRETTEST --value abc123

# create the azure kubernetes service
az aks create -n k8stestaks -g k8s-test --kubernetes-version=1.19.7 --node-count 1 -l westus --enable-managed-identity --attach-acr k8stestacr -s Standard_B2s

# switch to the aks context
az aks get-credentials -n k8stestaks -g k8s-test

# install helm charts for secrets store csi
helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --generate-name

# create role managed identity operator
az role assignment create --role "Managed Identity Operator" --assignee <k8stestaks_clientId> --scope /subscriptions/<subscriptionId>/resourcegroups/MC_k8s-test_k8stestaks_westus

# create role virtual machine contributor
az role assignment create --role "Virtual Machine Contributor" --assignee <k8stestaks_clientId> --scope /subscriptions/<subscriptionId>/resourcegroups/MC_k8s-test_k8stestaks_westus
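
In case it matters for the <k8stestaks_clientId> placeholder above: my understanding from the aad-pod-identity docs is that it should be the clientId of the cluster's kubelet identity, which (on a managed-identity cluster like this one) something like this should return:

# look up the clientId of the cluster's kubelet identity
az aks show -g k8s-test -n k8stestaks --query identityProfile.kubeletidentity.clientId -o tsv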

# install more helm charts
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
helm install pod-identity aad-pod-identity/aad-pod-identity

# create identity
az identity create -g MC_k8s-test_k8stestaks_westus -n TestIdentity  

# give the new identity a reader role for AKV
az role assignment create --role "Reader" --assignee <TestIdentity_principalId> --scope /subscriptions/<subscriptionId>/resourceGroups/k8s-test/providers/Microsoft.KeyVault/vaults/k8stestakv

# allow the identity to get secrets from AKV
az keyvault set-policy -n k8stestakv --secret-permissions get --spn <TestIdentity_clientId>
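
For completeness, the <TestIdentity_principalId> and <TestIdentity_clientId> placeholders above come from the identity created a couple of steps earlier; something like this should print them:

# look up the principalId and clientId of TestIdentity
az identity show -g MC_k8s-test_k8stestaks_westus -n TestIdentity --query principalId -o tsv
az identity show -g MC_k8s-test_k8stestaks_westus -n TestIdentity --query clientId -o tsv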

That is pretty much it for the az CLI commands. Everything up to this point executes with no errors, and in the portal I can see the role assignments on the MC_ group, the TestIdentity with read-only access to secrets, and so on.

After that, the documentation has you build secretProviderClass.yaml:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"                   
    useVMManagedIdentity: "false"             
    userAssignedIdentityID: ""                       
    keyvaultName: "k8stestakv"                
    cloudName: ""                               
    objects:  |
      array:
        - |
          objectName: SECRETTEST             
          objectType: secret                 
          objectVersion: ""                 
    resourceGroup: "k8s-test"     
    subscriptionId: "<subscriptionId>"         
    tenantId: "<tenantId>"       

And also the podIdentityBinding.yaml:

apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
    name: azureIdentity               
spec:
    type: 0                                 
    resourceID: /subscriptions/<subscriptionId>/resourcegroups/MC_k8s-test_k8stestaks_westus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/TestIdentity
    clientID: <TestIdentity_clientId>    
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
    name: azure-pod-identity-binding
spec:
    azureIdentity: azureIdentity     
    selector: azure-pod-identity-binding-selector
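
For context, the pod that would eventually consume the secret should look roughly like this, as far as I can tell from the CSI driver docs (the pod name and image are just placeholders; the label has to match the selector above and secretProviderClass has to match azure-kvname):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-secrets-test
  labels:
    # must match the selector in the AzureIdentityBinding
    aadpodidbinding: azure-pod-identity-binding-selector
spec:
  containers:
    - name: busybox
      image: busybox:1.29
      command: ["/bin/sleep", "10000"]
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          # must match the SecretProviderClass name
          secretProviderClass: "azure-kvname"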

Then apply the two files:

# this one executes fine
kubectl apply -f k8s/secret/secretProviderClass.yaml

# this one does not
kubectl apply -f k8s/identity/podIdentityBinding.yaml

Problem #1

With the last one I get:

unable to recognize "k8s/identity/podIdentityBinding.yaml": no matches for kind "AzureIdentity" in version "aadpodidentity.k8s.io/v1"
unable to recognize "k8s/identity/podIdentityBinding.yaml": no matches for kind "AzureIdentityBinding" in version "aadpodidentity.k8s.io/v1"
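
(I assume these errors mean the AzureIdentity / AzureIdentityBinding CRDs never got registered on the cluster; something like this should confirm whether they exist:)

kubectl get crds | grep aadpodidentity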

Not sure why, because the helm install pod-identity aad-pod-identity/aad-pod-identity command was successful. Looking at my Pods, however...

Problem #2

I've followed these steps three times, and every time the issue is the same: the aad-pod-identity-nmi-xxxxx pod will not launch:

$ kubectl get pods
NAME                                                              READY   STATUS             RESTARTS   AGE
aad-pod-identity-mic-7b4558845f-hwv8t                             1/1     Running            0          37m
aad-pod-identity-mic-7b4558845f-w8mxt                             1/1     Running            0          37m
aad-pod-identity-nmi-4sf5q                                        0/1     CrashLoopBackOff   12         37m
csi-secrets-store-provider-azure-1613256848-cjlwc                 1/1     Running            0          41m
csi-secrets-store-provider-azure-1613256848-secrets-store-m4wth   3/3     Running            0          41m
$ kubectl describe pod aad-pod-identity-nmi-4sf5q
Name:         aad-pod-identity-nmi-4sf5q
Namespace:    default
Priority:     0
Node:         aks-nodepool1-40626841-vmss000000/10.240.0.4
Start Time:   Sat, 13 Feb 2021 14:57:54 -0800
Labels:       app.kubernetes.io/component=nmi
              app.kubernetes.io/instance=pod-identity
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=aad-pod-identity
              controller-revision-hash=669df55fd8
              helm.sh/chart=aad-pod-identity-3.0.3
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Running
IP:           10.240.0.4
IPs:
  IP:           10.240.0.4
Controlled By:  DaemonSet/aad-pod-identity-nmi
Containers:
  nmi:
    Container ID:  containerd://5f9e17e95ae395971dfd060c1db7657d61e03052ffc3cbb59d01c774bb4a2f6a
    Image:         mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4
    Image ID:      mcr.microsoft.com/oss/azure/aad-pod-identity/nmi@sha256:0b4e296a7b96a288960c39dbda1a3ffa324ef33c77bb5bd81a4266b85efb3498
    Port:          <none>
    Host Port:     <none>
    Args:
      --node=$(NODE_NAME)
      --http-probe-port=8085
      --operation-mode=standard
      --kubelet-config=/etc/default/kubelet
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sat, 13 Feb 2021 15:34:40 -0800
      Finished:     Sat, 13 Feb 2021 15:34:40 -0800
    Ready:          False
    Restart Count:  12
    Limits:
      cpu:     200m
      memory:  512Mi
    Requests:
      cpu:     100m
      memory:  256Mi
    Liveness:  http-get http://:8085/healthz delay=10s timeout=1s period=5s #success=1 #failure=3
    Environment:
      NODE_NAME:         (v1:spec.nodeName)
      FORCENAMESPACED:  false
    Mounts:
      /etc/default/kubelet from kubelet-config (ro)
      /run/xtables.lock from iptableslock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from aad-pod-identity-nmi-token-8sfh4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  iptableslock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kubelet-config:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/default/kubelet
    HostPathType:  
  aad-pod-identity-nmi-token-8sfh4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  aad-pod-identity-nmi-token-8sfh4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  38m                    default-scheduler  Successfully assigned default/aad-pod-identity-nmi-4sf5q to aks-nodepool1-40626841-vmss000000
  Normal   Pulled     38m                    kubelet            Successfully pulled image "mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4" in 14.677657725s
  Normal   Pulled     38m                    kubelet            Successfully pulled image "mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4" in 5.976721016s
  Normal   Pulled     37m                    kubelet            Successfully pulled image "mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4" in 627.112255ms
  Normal   Pulling    37m (x4 over 38m)      kubelet            Pulling image "mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4"
  Normal   Pulled     37m                    kubelet            Successfully pulled image "mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.7.4" in 794.669637ms
  Normal   Created    37m (x4 over 38m)      kubelet            Created container nmi
  Normal   Started    37m (x4 over 38m)      kubelet            Started container nmi
  Warning  BackOff    3m33s (x170 over 38m)  kubelet            Back-off restarting failed container

I'm not sure whether the two problems are related, and I haven't been able to get the failing Pod to start up.
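
In case it helps, the describe output above doesn't include the actual error, and the container exits almost immediately after starting, so I assume something like this would surface it:

kubectl logs aad-pod-identity-nmi-4sf5q --previous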

Any suggestions here?

-- cjones
azure
azure-aks
azure-keyvault
kubernetes

1 Answer

2/17/2021

Looks like it is related to the default network plugin that AKS picks for you when you don't choose Advanced networking: kubenet.

This integration can still be done with kubenet by following the steps outlined here:

https://azure.github.io/aad-pod-identity/docs/configure/aad_pod_identity_on_kubenet/

If you are creating a new cluster, enable Advanced networking in the portal or pass the --network-plugin azure flag when creating it, for example:
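
# same create command as in the question, just with Azure CNI enabled (a sketch re-using your flags)
az aks create -n k8stestaks -g k8s-test --kubernetes-version=1.19.7 --node-count 1 -l westus --enable-managed-identity --attach-acr k8stestacr -s Standard_B2s --network-plugin azure

If you would rather keep kubenet, the page above walks through the extra steps; as I recall it boils down to installing the aad-pod-identity chart with an extra value (something like --set nmi.allowNetworkPluginKubenet=true, but check the page for the exact setting) and accepting the security caveats that come with kubenet.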

-- cjones
Source: StackOverflow