I'm trying to create a Kubernetes Deployment with an associated ServiceAccount, which is linked to an AWS IAM role. This YAML produces the desired result, and the associated Deployment (included at the bottom) spins up correctly:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
  namespace: example
  annotations:
    eks.amazonaws.com/role-arn: ROLE_ARN
However, I would like to instead use the Terraform Kubernetes provider to create the ServiceAccount:
resource "kubernetes_service_account" "this" {
metadata {
name = "service-account2"
namespace = "example"
annotations = {
"eks.amazonaws.com/role-arn" = "ROLE_ARN"
}
}
}
Unfortunately, when I create the ServiceAccount this way, the ReplicaSet for my deployment fails with the error:
Error creating: Internal error occurred: Internal error occurred: jsonpatch add operation does not apply: doc is missing path: "/spec/volumes/0"
I have confirmed that it does not matter whether the Deployment is created via Terraform or kubectl; it will not work with the Terraform-created service-account2, but works fine with the kubectl-created service-account. Switching a deployment back and forth between service-account and service-account2 correspondingly makes it work or not work, as you might expect.
I have also determined that the eks.amazonaws.com/role-arn annotation is the relevant factor: ServiceAccounts that do not link back to an IAM role work regardless of whether they were created via Terraform or kubectl.
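(For context, my understanding is that this annotation is what tells the EKS pod identity webhook to mutate pods that use the ServiceAccount. Roughly, it injects credential configuration along the following lines; the values below are my assumption based on the webhook's documented defaults, not something I captured from my cluster:)
# Assumed webhook defaults: sts.amazonaws.com audience, 86400s token expiry
env:
  - name: AWS_ROLE_ARN
    value: ROLE_ARN
  - name: AWS_WEB_IDENTITY_TOKEN_FILE
    value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumes:
  - name: aws-iam-token
    projected:
      sources:
        - serviceAccountToken:
            audience: sts.amazonaws.com
            expirationSeconds: 86400
            path: token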
Using kubectl to describe the Deployment, ReplicaSet, ServiceAccount, and associated Secret, I don't see any obvious differences, though I will admit I'm not entirely sure what I might be looking for.
Here is a simple Deployment manifest that exhibits the problem:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: example
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      serviceAccountName: service-account # or "service-account2"
      containers:
        - name: nginx
          image: nginx:1.7.8
I had the same problem, and I solved it by specifying automount_service_account_token = true in the Terraform kubernetes_service_account resource.
Try creating the following service account:
resource "kubernetes_service_account" "this" {
metadata {
name = "service-account2"
namespace = "example"
annotations = {
"eks.amazonaws.com/role-arn" = "ROLE_ARN"
}
}
automount_service_account_token = true
}
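After terraform apply, the failing ReplicaSet should recover on its own, since the controller keeps retrying pod creation; if it doesn't, deleting the stuck ReplicaSet (the Deployment will recreate it) forces a fresh admission pass.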
Adding automountServiceAccountToken: true to the pod spec in your deployment should fix this error. Token automounting is usually enabled by default on service accounts, but Terraform defaults it to off. That matters because, without an automounted token, the pod spec has no volumes array at all, so when the webhook tries to jsonpatch-add its projected token volume at /spec/volumes/0 the path doesn't exist, which is exactly the error above. See this issue on the mutating webhook that adds the required environment variables to your pods: https://github.com/aws/amazon-eks-pod-identity-webhook/issues/17
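For example, in the Deployment from the question, the field would sit alongside serviceAccountName in the pod template (a minimal sketch; only this fragment of the spec is shown):
spec:
  template:
    spec:
      serviceAccountName: service-account2
      # Overrides the ServiceAccount-level setting for pods of this Deployment
      automountServiceAccountToken: true
      containers:
        - name: nginx
          image: nginx:1.7.8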