I'm using Azure DevOps to handle PBIs, repos, PRs, and builds, but all of my infrastructure, including Kubernetes, is managed in AWS.
There's no documentation, nor an obvious "right and easy way", for deploying to AWS EKS using Azure DevOps tasks.
I found a solution that works for me, but it would be great to know how you solved this, or whether there are other approaches.
After some research and trial and error, I found a way to do it without messing around with shell scripts.
You just need to apply the manifest below to Kubernetes. It creates a ServiceAccount and binds it to a custom Role; that Role has permission to create and delete deployments and pods (tweak it if you also need permissions for other resources, such as services).
deploy-robot-conf.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-robot
automountServiceAccountToken: false
---
# Long-lived token for the service account, used by the Azure DevOps service connection
apiVersion: v1
kind: Secret
metadata:
  name: deploy-robot-secret
  annotations:
    kubernetes.io/service-account.name: deploy-robot
type: kubernetes.io/service-account-token
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deploy-robot-role
  namespace: default
rules: # ## Customize these to meet your requirements ##
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete"]
---
# Bind the Role to the service account
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: deploy-robot
    namespace: default
roleRef:
  kind: Role
  name: deploy-robot-role
  apiGroup: rbac.authorization.k8s.io
This gives Azure DevOps the minimum permissions it needs to deploy to the cluster.
Note: Tweak the rules in the Role to meet your needs, for instance by adding permissions for the services resource.
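You can then apply the manifest with kubectl (assuming it is saved as deploy-robot-conf.yaml and that you are targeting the default namespace):
kubectl apply -f deploy-robot-conf.yaml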
Then go to your release pipeline and create a Kubernetes service connection:
Fill in the fields and follow the steps to get the secret from the service account; remember the account is deploy-robot if you didn't change the YAML file.
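If it helps, this is roughly how you can get the values the connection form asks for, assuming the default namespace and the names from the manifest above (the form typically wants the cluster's server URL and the service account secret in JSON form):
# Server URL of the cluster
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Secret associated with the service account, as JSON
kubectl get secret deploy-robot-secret -n default -o json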
And then just use your Kubernetes service connection:
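If you use YAML pipelines instead of the classic release editor, a task referencing the connection looks roughly like this (a sketch; my-eks-connection and manifests/deployment.yaml are placeholders for your own connection name and manifest path):
- task: Kubernetes@1
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: my-eks-connection  # the service connection created above (placeholder name)
    namespace: default
    command: apply
    arguments: -f manifests/deployment.yaml       # placeholder path to your manifest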
Another option would be to use kubeconfig-based authentication, where the kubeconfig file can be obtained with the following AWS CLI command:
aws eks --region region update-kubeconfig --name cluster_name --kubeconfig ~/.kube/AzureDevOpsConfig
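You can then either paste that kubeconfig into a Kubernetes service connection that uses kubeconfig authentication, or point kubectl at it directly in a script step, roughly like this (the paths and manifest name are just examples):
export KUBECONFIG=~/.kube/AzureDevOpsConfig
kubectl get nodes                          # sanity check that the cluster is reachable
kubectl apply -f manifests/deployment.yaml # placeholder path to your manifest
Keep in mind that the kubeconfig generated by aws eks update-kubeconfig authenticates through the AWS CLI, so the build agent also needs the AWS CLI and valid AWS credentials available.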