Access S3 bucket without running aws configure with kubernetes

1/24/2020

I have an S3 bucket with some sql scripts and some backup files using mysqldump.

I also have a .yaml file that deploys a fresh mariadb image.

As I'm not very experienced with Kubernetes yet, to restore one of those backups into the pod I currently have to bash into it, run the AWS CLI, enter my credentials, sync the bucket locally, and then run mysql < backup.sql

This obviously defeats the purpose of a fully automated deployment.

So, the question is: how can I securely configure this pod to access S3 from the moment it starts?

-- Ricardo Adão
amazon-s3
aws-cli
kubernetes

1 Answer

1/27/2020

I think you should consider mounting the S3 bucket inside the pod.

This can be achieved with, for example, s3fs-fuse.
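As a rough sketch, an s3fs-fuse mount from inside a container typically looks like this (the bucket name, mount point, and credential values below are placeholders, not taken from your setup):

```shell
# Credentials file in s3fs format: ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholder values)
echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Mount the bucket at an example mount point
mkdir -p /mnt/s3
s3fs my-backup-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs
```

Note that running s3fs inside a pod requires FUSE support, which usually means the container needs extra privileges (e.g. access to /dev/fuse); the two articles below cover that part.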

There are two nice articles, Mounting an S3 bucket inside a Kubernetes pod and Kubernetes shared storage with S3 backend; I recommend reading both to understand how this works.

You basically have to build your own image from a Dockerfile and supply the necessary S3 bucket info and AWS security credentials.
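To keep the credentials out of the image itself, one common approach is a Kubernetes Secret exposed to the container as environment variables. A minimal sketch, where the secret name and the placeholder values are my own assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-credentials   # example name
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "your-access-key-id"
  AWS_SECRET_ACCESS_KEY: "your-secret-access-key"
```

The pod can then pull these in with `envFrom: [{secretRef: {name: aws-s3-credentials}}]` under the container spec, so nothing sensitive is baked into the image or the pod manifest.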

Once you have the storage mounted, you will be able to call scripts from it in the following way:

apiVersion: v1
kind: Pod
metadata:
  name: test-world
spec:  # specification of the pod’s contents
  restartPolicy: Never
  containers:
  - name: hello
    image: debian
    command: ["/bin/sh","-c"]
    args: ["command one; command two && command three"]
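Applied to your case, the args could run the restore directly from the mounted bucket. A sketch, assuming the bucket is mounted at /mnt/s3, the MariaDB service is reachable as `mariadb`, and the database is called `mydb` (all three are assumptions):

```yaml
  containers:
  - name: restore
    image: mariadb
    command: ["/bin/sh","-c"]
    args: ["mysql -h mariadb -u root -p\"$MARIADB_ROOT_PASSWORD\" mydb < /mnt/s3/backup.sql"]
```

This way the restore happens automatically when the pod starts, with no manual aws configure step.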
-- Crou
Source: StackOverflow