I have an ACS Kubernetes cluster running on Azure VMSS. Recently I renewed my ACS service principal by adding the new key to /etc/kubernetes/azure.json on the master and worker nodes and restarting them, but new nodes created as part of scaling are not picking up the new service principal key.
Updating azure.json is not enough. To update your cluster with new credentials, use the az aks update-credentials command:
az aks update-credentials \
--resource-group myResourceGroup \
--name myAKSCluster \
--reset-service-principal \
--service-principal $SP_ID \
--client-secret $SP_SECRET
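A sketch of how you might populate the $SP_ID and $SP_SECRET variables used above; the az ad sp credential reset flags differ between CLI versions, so treat this as an assumption and verify against az ad sp credential reset --help:
# Read the current service principal client ID from the cluster:
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
--query servicePrincipalProfile.clientId -o tsv)
# Generate a new secret for that service principal and capture it:
SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)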
After that, the cluster autoscaler will use the updated service principal for new instances.
Update:
For an ACS cluster you have to manually update the service principal on each worker node, as sketched below.
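A minimal per-node sketch, assuming the default /etc/kubernetes/azure.json path, the standard aadClientSecret field, and a systemd-managed kubelet; <NEW_SECRET> is a placeholder for your new key:
# Run on each master and worker node:
sudo sed -i 's/"aadClientSecret": *"[^"]*"/"aadClientSecret": "<NEW_SECRET>"/' /etc/kubernetes/azure.json
sudo systemctl restart kubelet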
Alternatively, you can use the Custom Script Extension, which you can integrate with an Azure Resource Manager template, invoke through the Azure Virtual Machines REST API, or push with the Azure CLI, as in the sketch below.
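A hedged Azure CLI sketch using the Linux Custom Script Extension (publisher Microsoft.Azure.Extensions); the resource group, scale set name, and script URL are placeholders, and the hosted script would contain the same sed-and-restart commands shown above:
# Attach the extension to the scale set model so new instances run the script:
az vmss extension set \
--resource-group myResourceGroup \
--vmss-name myScaleSet \
--name CustomScript \
--publisher Microsoft.Azure.Extensions \
--protected-settings '{"fileUris": ["https://example.com/update-azure-json.sh"], "commandToExecute": "bash update-azure-json.sh"}'
# With a manual upgrade policy, also apply the updated model to existing instances:
az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids "*"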