k8s cluster running locally in Minikube should have AWS credentials to access resources on AWS

4/10/2019

During development, I'm running a k8s cluster on my dev machine inside Minikube, with several services running in it.

The services need to access AWS resources, such as an S3 bucket. For that, the pods need to somehow get AWS credentials.

What are the options for authenticating the pods as an AWS user? Should I pass aws_access_key_id and aws_secret_access_key in the Docker env?

How would this work in production (inside k8s on EKS)? Is the node's role passed on to the pods?

-- Satumba
amazon-eks
amazon-iam
kubernetes
minikube

1 Answer

4/10/2019

A good way to authenticate locally is to create a Kubernetes Secret containing the AWS credentials and then reference it in the environment variables of your service's Deployment, e.g.:

env:
- name: AWS_ACCESS_KEY_ID      # standard variable name read by the AWS SDKs
  valueFrom:
    secretKeyRef:
      name: my-aws-secret
      key: access-key
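
The referenced Secret has to exist first. A minimal sketch of creating it with kubectl, assuming the names used above (my-aws-secret, access-key) plus a secret-key entry for a matching AWS_SECRET_ACCESS_KEY variable:

# Create the Secret the env vars reference; the key names are assumptions
kubectl create secret generic my-aws-secret \
  --from-literal=access-key=&lt;your-access-key-id&gt; \
  --from-literal=secret-key=&lt;your-secret-access-key&gt;

You would then add a second env entry for AWS_SECRET_ACCESS_KEY that points at the secret-key entry in the same way.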

In EKS, all pods on a node can use the node's IAM role by default. This is of course not ideal for production, as you most likely want a more restricted set of permissions for each specific pod. You can check out kube2iam as a project you can use to restrict the AWS capabilities of a single pod.
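
As an illustration, kube2iam assigns a role per pod via an annotation on the pod template; this is a sketch assuming kube2iam is already deployed in the cluster, and the role name is a placeholder for a role that trusts the node's instance role:

# Pod template metadata in the Deployment; role name is a placeholder
metadata:
  annotations:
    iam.amazonaws.com/role: role-for-my-service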

-- Blokje5
Source: StackOverflow