Auto rollback Kubernetes deployment if the deployment fails in Azure DevOps

11/11/2019

We have a release pipeline in Azure DevOps that deploys a microservice to AKS and sends the microservice's log once it is deployed. We deploy using the kubectl task template with the command arguments "-f /home/admin/builds/$(build.buildnumber)/Myservice_Deployment.yml --record".

Here we noticed that the task does not wait for the existing pod to terminate and the new pod to be created; it just continues and finishes the job.

Our expected scenario (a sketch of the full flow follows this list):

1) Deploy the microservice using kubectl apply -f /home/admin/builds/$(build.buildnumber)/Myservice_Deployment.yml --record

2) Wait for the existing pod to terminate and ensure that the new pod is in Running status.

3) Once the new pod is in Running status, collect its log with the kubectl logs command and send it to the team.

4) If the new pod is not in Running state, roll back to the previous stable state.
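For reference, here is a minimal bash sketch of that flow, reusing the kubeconfig path and deployment name from the question; the timeout, log destination, and notification step are placeholders to adapt to your environment:

#!/bin/bash
set -euo pipefail

KCFG=/home/admin/kubernetes/Dev-kubeconfig
DEPLOYMENT=My-service
# $(build.buildnumber) is an Azure DevOps macro, substituted before the script runs.
MANIFEST="/home/admin/builds/$(build.buildnumber)/Myservice_Deployment.yml"

# 1) Apply the new manifest.
kubectl --kubeconfig "$KCFG" apply -f "$MANIFEST" --record

# 2) Block until the rollout finishes; give up after 10 minutes.
if kubectl --kubeconfig "$KCFG" rollout status "deployment/$DEPLOYMENT" --timeout=600s; then
  # 3) New pods are up: capture their logs to a date-stamped file.
  kubectl --kubeconfig "$KCFG" logs "deployment/$DEPLOYMENT" --all-containers=true \
    > "${DEPLOYMENT}_$(date +%Y-%m-%d).log"
else
  # 4) Rollout failed: revert to the previous revision and fail the pipeline task.
  kubectl --kubeconfig "$KCFG" rollout undo "deployment/$DEPLOYMENT"
  exit 1
fi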

I tried different shell scripts to achieve this in Azure DevOps, but didn't succeed. For example:

ATTEMPTS=0
ROLLOUT_STATUS_CMD="kubectl --kubeconfig /home/admin/kubernetes/Dev-kubeconfig rollout status deployment/My-service"
# Retry until the rollout reports success, or give up after 60 attempts.
until $ROLLOUT_STATUS_CMD || [ $ATTEMPTS -eq 60 ]; do
  $ROLLOUT_STATUS_CMD
  ATTEMPTS=$((ATTEMPTS + 1))   # was $((attempts + 1)); shell variables are case-sensitive
  sleep 10
done
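Note that kubectl rollout status already blocks until the rollout succeeds or fails, so the retry loop is not needed; with --timeout it also gives up on its own and returns a non-zero exit code the pipeline task can react to:

kubectl --kubeconfig /home/admin/kubernetes/Dev-kubeconfig rollout status deployment/My-service --timeout=600s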

I also need to get the log of the microservice using the kubectl logs command; the log file name should include the date, and the file needs to be shared over email.
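A sketch of that last step, reusing the kubeconfig and deployment name above; the mail command and address are placeholders, since the available client (mailx, sendmail, mutt, or a dedicated pipeline task) depends on the build agent:

# Capture the log to a date-stamped file.
LOGFILE="My-service_$(date +%Y-%m-%d).log"
kubectl --kubeconfig /home/admin/kubernetes/Dev-kubeconfig logs deployment/My-service > "$LOGFILE"
# Placeholder: replace the recipient and mail client with whatever your agent provides.
mail -s "My-service log $(date +%Y-%m-%d)" team@example.com < "$LOGFILE"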

-- Vowner
azure-devops
bash
kubernetes
linux
shell

1 Answer

11/11/2019

You have several questions mixed up in a single post, but you'd need to configure your deployment with a liveness probe for your desired behaviour to happen.

Reading: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command
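For illustration, the liveness probe from that page looks like this in the container spec (the command and timings are the docs' example values, to be adapted to your container):

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5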

-- 4c74356b41
Source: StackOverflow