Get output from Azure DevOps Deploy to Kubernetes task

1/27/2019

Setup description

I have the following scenario: I created a Build Pipeline in Azure DevOps and, after setting up my Kubernetes cluster, I want to get a specific pod name using kubectl. I am doing this via the "Deploy to Kubernetes" task (V1), which looks like this:

steps:
- task: Kubernetes@1
  displayName: 'Get pod name'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'

So the task runs successfully, and I want to get the output string of the above command. In the Pipeline visual designer it shows me an output variable named undefined.KubectlOutput that is being written to.

Problem statement

I have created a subsequent Bash script task directly after the above kubectl task. If I read the variable $KUBECTLOUTPUT or $UNDEFINED_KUBECTLOUTPUT, it just returns an empty string. What am I doing wrong? I just need the output of the previous command as a variable.
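For reference, my understanding from the task documentation is that the output variable has to be addressed through a reference name set on the step, and then read downstream as $(referenceName.KubectlOutput). A minimal sketch, assuming a placeholder step name getPodName (not something from my current pipeline):

steps:
- task: Kubernetes@1
  name: getPodName          # reference name used to address the output variable
  displayName: 'Get pod name'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'

- bash: |
    # read the previous step's output variable
    echo "Pod name: $(getPodName.KubectlOutput)"
  displayName: 'Use pod name'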

My goal with the action

I am trying to make sure that the application I deployed with a Helm chart in the previous step is up and running. In the next step I need to run some scripts inside the application pods (using kubectl exec), so I want to make sure that at least one pod hosting the app is up and running so that I can execute commands against it. In the meantime I realized that I can skip the checking step if I use the --wait flag when deploying the Helm chart, but I still have issues using kubectl from within the Bash script.
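For completeness, the --wait approach I mentioned would look roughly like this. This is a sketch only, assuming the release is called ca and the chart lives in ./hlf-ca; the kubectl wait line is an extra readiness check, not something from my current pipeline:

- bash: |
    # block until all resources of the release are ready (or the timeout hits)
    helm upgrade --install ca ./hlf-ca --wait --timeout 300
    # extra check: wait until the labelled pods report Ready
    kubectl wait --for=condition=ready pod -l "app=hlf-ca,release=ca" --timeout=120s
  displayName: 'Deploy chart and wait for readiness'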

-- Razvan
azure-devops
kubernetes

2 Answers

1/28/2019

After a couple of hours of different attempts at figuring out how Azure DevOps connects to the AKS cluster, I figured out that it is using an OAuth access token, as far as I can tell. One can access this token using the System.AccessToken variable (if the agent job is allowed access to the token; this is a configuration option and it is off by default). What I could not figure out is how to use this token with kubectl inside a script, so I have abandoned this path for now.

Also, the job is running on a hosted Ubuntu agent (as in Microsoft-hosted), so it might be avoiding downloading the config file for security reasons, even though Microsoft itself maintains that the agents are single-use VMs and that "the virtual machine is discarded after one use" (see the MS docs).

What works on the hosted agent (I would still recommend some encryption for production scenarios) is using Azure CLI commands to log in and get the cluster credentials:

az login
az aks get-credentials --resource-group=MyClusterDEV --name myCluster
kubectl […]
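One way to run these non-interactively on a hosted agent is to wrap them in an Azure CLI task, which handles the az login through the service connection. A rough sketch only: the subscription name reuses the anonymized connection from the question, and the final kubectl call is just a placeholder:

- task: AzureCLI@1
  displayName: 'Get AKS credentials and run kubectl'
  inputs:
    azureSubscription: 'Azure Pay-as-you-Go (anonymized)'
    scriptLocation: inlineScript
    inlineScript: |
      # the task logs in with the service connection, so no explicit az login is needed here
      az aks get-credentials --resource-group=MyClusterDEV --name myCluster
      kubectl get pods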

The alternative solution I used is to run the scripts on a local agent that already has the Kubernetes config file pre-configured. For this I simply created an additional agent job to run my scripts, so now I have:

  1. A general agent job (Hosted Ubuntu 16) doing the helm init and other basic setup tasks
  2. A local agent job (Windows) running more complex scripts against specific pods
-- Razvan
Source: StackOverflow

1/27/2019

This is what I've been using:

# locate the kubeconfig that the preceding kubectl task downloaded
config=`find . -name config`

# list the matching deployments as JSON, extract their names with jq,
# and update the container image on each of them
kubectl --kubeconfig $config get -n $(k8sEnv) deploy --selector=type=$(containerType) -o json | jq -r '.items[].metadata.name' \
  | xargs -L 1 -i kubectl --kubeconfig $config set -n $(k8sEnv) image deploy/{} containername=registry.azurecr.io/$(containerImage):$(BUILD.BUILDNUMBER) --record=true

This will find all the deployments with the specific label and run kubectl set on each one of them; you can adapt this to your needs easily. The only prerequisite is that you have a kubectl task before this task, so that your agent downloads the kubectl config from Azure DevOps (see the sketch after the path below).
The script above has to run in this directory:

/home/vsts/work/_temp/kubectlTask
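for example, something like this, with placeholder task inputs (adjust the endpoint, resource group and cluster for your setup):

steps:
- task: Kubernetes@1
  displayName: 'Download kubectl config'
  inputs:
    azureSubscriptionEndpoint: 'my-azure-connection (placeholder)'
    azureResourceGroup: myResourceGroup
    kubernetesCluster: myCluster
    command: get
    arguments: 'nodes'

- bash: |
    # the kubeconfig downloaded by the previous task sits in this directory
    config=`find . -name config`
    kubectl --kubeconfig $config get nodes
  workingDirectory: /home/vsts/work/_temp/kubectlTask
  displayName: 'Run kubectl against the downloaded config'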
-- 4c74356b41
Source: StackOverflow