I created my AKS cluster in the Azure portal using the 'Create Kubernetes cluster' functionality and allowed it to create a new Service Principal.
I started to wonder about the expiry of the credentials this principal uses. Hoping to avoid Kubernetes losing its access to Azure when the credentials expire, I started looking at the account that had been created.
What I'm seeing if I run:
az ad app show --id <app Id>
... is the account manifest apart from the password expiry. I don't need to see the password itself, just when it expires.
passwordCredentials, however, is an empty array.
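For completeness, the dedicated credential subcommand shows the same thing; a sketch (the &lt;app-id&gt; placeholder is the principal's appId, and I'm assuming the expiry field is named endDateTime, as in newer Graph-backed CLI versions — older versions called it endDate):

```shell
# Requires az login; <app-id> is a placeholder for the service principal's appId
az ad sp credential list --id <app-id> \
  --query "[].{keyId:keyId, expires:endDateTime}" -o table
```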
What I was expecting to find was startDate and endDate properties, like the ones I see for accounts I create myself, and like the ones on the PasswordCredential class described here:
Is the AKS Cluster creation process doing something different when it creates its service principal credentials which means they don't expire? Am I just not allowed to see the detail? Is there something fundamental that I've misunderstood?
First of all, I need to explain the passwordCredentials property that you reference. It is a property of the App Registration's keys. When you create the AKS cluster, no key is created, so passwordCredentials shows as an empty array. If you create a key in the App Registration, it will show up like this:
In addition, when you deploy an AKS cluster, the generated password never expires. But don't worry: you can create a key for the App Registration in its settings and give it an expiry time. You can also reset the expiry time and the key's password.
But take care when you reset the password with the CLI command az ad sp credential reset. By default this command overwrites all the keys rather than just resetting the expiry time and password: it creates a new key and deletes every key created before. If you pass the --append parameter instead, it just creates an additional key alongside the existing ones.
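The two behaviours can be sketched like this (&lt;client-id&gt; is a placeholder, and --years, which sets the validity period, is an assumption based on current az releases):

```shell
# Default: replaces ALL existing keys with one new 1-year key
az ad sp credential reset --id <client-id> --years 1

# With --append: adds a new key while keeping the existing ones
az ad sp credential reset --id <client-id> --append --years 1
```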
You can take a look at the document Azure Kubernetes Service (AKS) with Azure AD. Hope this helps you.
I bumped into the same service principal expiry issue with AKS.
As a quick workaround, I created a new key using the Azure portal and manually updated all the AKS nodes (/etc/kubernetes/azure.json) with the new client secret, restarting them one by one. However, the master was (unsurprisingly) not updated with the new client secret, so newly scaled-up nodes came up with the expired client secret! (That was the issue.)
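To check which secret a node is actually using, you can read its cloud-provider config; here's a sketch against a simulated copy of the file (the aadClientSecret field name is an assumption from typical azure.json contents — on a real node you'd read /etc/kubernetes/azure.json with sudo):

```shell
# Simulated copy of a node's /etc/kubernetes/azure.json (hypothetical minimal content)
cat > azure.json <<'EOF'
{
  "aadClientId": "00000000-0000-0000-0000-000000000000",
  "aadClientSecret": "old-secret"
}
EOF

# Extract the secret currently configured on the node
secret=$(sed -n 's/.*"aadClientSecret": *"\([^"]*\)".*/\1/p' azure.json)
echo "$secret"
```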
30.01.2019: Got a response from Azure Support that they are adding a new option to the Azure CLI to update the service principal.
31.01.2019: Just upgraded my Azure CLI to check for the new feature; luckily it's there, and I updated my test cluster and it works!
az aks update-credentials --reset-service-principal --service-principal <client-id> --client-secret <secret>
Note: the client-id and client-secret should be created by you beforehand.
It basically updates the /etc/kubernetes/azure.json file on all the nodes and then reboots them one by one!
I tried it with a scale-up as well, and it works!
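The whole rotation can be sketched end to end (a sketch, not a definitive recipe: &lt;rg&gt; and &lt;cluster&gt; are placeholders for your resource group and cluster name, and the servicePrincipalProfile.clientId query path is an assumption based on az aks show output):

```shell
# Look up the cluster's current service principal id
SP_ID=$(az aks show -g <rg> -n <cluster> \
  --query servicePrincipalProfile.clientId -o tsv)

# Generate a fresh secret for that principal
# (note: without --append this replaces existing keys)
NEW_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv)

# Push the new secret to every node; AKS updates azure.json and
# reboots the nodes one by one
az aks update-credentials -g <rg> -n <cluster> \
  --reset-service-principal \
  --service-principal "$SP_ID" \
  --client-secret "$NEW_SECRET"
```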