I am running a self-hosted Kubernetes cluster on AWS, and I want to pass AWS IAM roles to containers.
Right now, a single role is attached to all the worker nodes, and the containers running on those machines pick up credentials from the node's instance profile.
This creates an issue: the node role has to be a superset of all the permissions required by all the containers running in the cluster. So if one container needs access to an S3 bucket and another container needs access to SQS, I have to grant both permissions to the node role, which in turn gives every container in the cluster both permissions. That is not desirable.
So my question is: is there a way to pass different IAM roles to individual containers?
I know this can be done with access keys and secret keys, but I am looking for a way to do it without them.
If you run your containers in EKS, you can pass IAM roles to Pods by binding them to Kubernetes Service Accounts (IAM Roles for Service Accounts, IRSA); see here.
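A minimal sketch of that setup: the Service Account carries the `eks.amazonaws.com/role-arn` annotation, and any Pod using that Service Account gets credentials for just that role. The account ID and role name below are placeholders; the role must already exist with a trust policy for the cluster's OIDC provider.

```yaml
# Service Account annotated with the IAM role its Pods should assume
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: default
  annotations:
    # Placeholder ARN; replace with your own role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader
---
# Pod that runs under the annotated Service Account
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  serviceAccountName: s3-reader
  containers:
  - name: app
    image: amazon/aws-cli
    command: ["aws", "s3", "ls"]
```

Each workload gets only the permissions of its own role, so one Pod can have S3 access and another SQS access without either seeing the other's permissions.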
For self-managed clusters, there are third-party open-source projects solving the same issue, such as kiam and kube-aws-iam-controller.
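With kiam, for example, the role is requested per Pod via an annotation; kiam intercepts the Pod's calls to the EC2 metadata API and serves credentials for that role instead of the node's. This is a sketch assuming the kiam server and agent are already deployed, and the role name is a placeholder (kiam also requires the namespace to whitelist allowed roles via an `iam.amazonaws.com/permitted` annotation).

```yaml
# Pod requesting a specific IAM role through kiam
apiVersion: v1
kind: Pod
metadata:
  name: sqs-app
  annotations:
    # kiam serves credentials for this role via the metadata API
    iam.amazonaws.com/role: sqs-consumer   # placeholder role name
spec:
  containers:
  - name: app
    image: amazon/aws-cli
    command: ["aws", "sqs", "list-queues"]
```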