We have an AKS cluster and sometimes we end up with an issue where a deployment needs a restart (e.g. cached data has been updated and needs refreshing, or the cache has become corrupt).
I've been using the approach of scaling the deployment to 0 and then scaling it back up using the commands below:
kubectl scale deployments/<deploymentName> --replicas=0
kubectl scale deployments/<deploymentName> --replicas=1
This does what I expect it to do, but it feels hacky, and it means the deployment has no running pods while the process is taking place.
What's a better approach to doing this, for either a specific deployment or for all deployments?
How to restart all deployments in a cluster (multiple namespaces):
kubectl get deployments --all-namespaces | tail -n +2 | awk '{ cmd=sprintf("kubectl rollout restart deployment -n %s %s", $1, $2) ; system(cmd) }'
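To see what the pipeline is doing without touching a cluster, here is a sketch of just the parsing stage, run against a captured sample of kubectl get deployments --all-namespaces output (the namespaces and deployment names are made up):

```shell
# Simulated output of: kubectl get deployments --all-namespaces
sample='NAMESPACE   NAME      READY   UP-TO-DATE   AVAILABLE   AGE
default     web-api   2/2     2            2           10d
ops         cache     1/1     1            1           3d'

# tail -n +2 drops the header row; awk then builds one restart command
# per line from column 1 (namespace) and column 2 (deployment name).
echo "$sample" | tail -n +2 \
  | awk '{ printf "kubectl rollout restart deployment -n %s %s\n", $1, $2 }'
```

In the real one-liner, awk passes each generated command to system() so it actually runs; printing them first like this is an easy way to dry-run the restart before committing to it.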
If your deployments use the RollingUpdate strategy, you can delete a pod and the ReplicaSet will replace it with a fresh one.
About the RollingUpdate strategy:
Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones.
RollingUpdate config:
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
maxSurge specifies the maximum number of Pods that can be created over the desired number of Pods.
maxUnavailable specifies the maximum number of Pods that can be unavailable during the update process.
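Both fields also accept percentages, in which case Kubernetes rounds maxSurge up and maxUnavailable down relative to the desired replica count. A small sketch of that arithmetic (the 4 replicas and 25% values are just an illustration):

```shell
replicas=4
surge_pct=25        # maxSurge: 25%, rounded UP
unavail_pct=25      # maxUnavailable: 25%, rounded DOWN

# ceil(replicas * pct / 100) using integer arithmetic
max_surge=$(( (replicas * surge_pct + 99) / 100 ))
# floor(replicas * pct / 100)
max_unavailable=$(( replicas * unavail_pct / 100 ))

echo "up to $((replicas + max_surge)) pods total, at least $((replicas - max_unavailable)) available"
```

With maxSurge: 1 and maxUnavailable: 0 as in the config above, the rollout never drops below the desired count: one extra pod is created, becomes ready, and only then is an old pod terminated.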
Delete the pod:
kubectl delete pod <pod-name>
Edit:
Also, you can trigger a rollout restart of the deployment, which will restart the pods but will also create a new revision of the deployment.
Ex: kubectl rollout restart deployments/<deployment-name>
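If you want the restart to block until the new pods are ready, kubectl rollout status can follow the restart. A small helper sketch (the function name is mine, and the ${KUBECTL:-kubectl} indirection is just an assumption to let you substitute a stub binary for dry runs):

```shell
# Restart a deployment and wait for the replacement pods to become ready.
# KUBECTL can be overridden (e.g. KUBECTL=echo for a dry run); defaults to kubectl.
restart_and_wait() {
  deployment=$1
  "${KUBECTL:-kubectl}" rollout restart "deployments/$deployment" || return 1
  "${KUBECTL:-kubectl}" rollout status "deployments/$deployment" --timeout=120s
}

# Usage: restart_and_wait <deployment-name>
```

Unlike the scale-to-zero approach in the question, this keeps the deployment serving traffic for the whole restart, subject to the RollingUpdate settings discussed above.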