I am working on a Flask application that communicates with Google Cloud Storage using the Python client library. Currently, in local development, I am using a service account key to authenticate the application and make these calls.
I am planning to build a Docker image of the application and deploy it on a Kubernetes cluster. My concern is: how should I provide the Google credentials?
I might be wrong here, but when I ran this Python file on a VM, it was able to create a new bucket without me supplying any credentials or a service account key:
# Imports the Google Cloud client library
from google.cloud import storage
# Instantiates a client
storage_client = storage.Client()
# The name for the new bucket
bucket_name = 'my-new-bucket'
# Creates the new bucket
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))
If I dockerize the same code into the Flask application and deploy it on a cluster, will it still pick up the default Google credentials? I would like to know the best practice for doing this on a Kubernetes cluster.
The best way is to deploy a Kubernetes Secret that holds the service account key:
apiVersion: v1
kind: Secret
metadata:
  name: google-application-credentials   # Secret names must be lowercase DNS-1123 names
type: Opaque
data:
  key.json: <base64-encoded contents of your service account key.json>
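If you prefer not to base64-encode the key by hand, a roughly equivalent sketch uses stringData, which Kubernetes encodes for you (the secret name and key.json file name here are my own choices and just need to match the volume below):

apiVersion: v1
kind: Secret
metadata:
  name: google-application-credentials
type: Opaque
stringData:
  # Paste the raw JSON key here; Kubernetes stores it base64-encoded in data
  key.json: |
    {
      "type": "service_account",
      ...
    }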
For the Pod/Deployment, mount that Secret as a volume:
volumes:
  - name: google-application-credentials
    secret:
      secretName: google-application-credentials
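Putting it together, a minimal Deployment sketch might look like the following (the app name, image path, and the mount path /var/secrets/google are assumptions; adjust them to your setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: gcr.io/your-project/flask-app:latest
          # Mount the key file from the Secret into the container
          volumeMounts:
            - name: google-application-credentials
              mountPath: /var/secrets/google
              readOnly: true
          # Point the client library at the mounted key file
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
      volumes:
        - name: google-application-credentials
          secret:
            secretName: google-application-credentials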
Then set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the container to the path of the mounted key file (as in the env section above). The Google Cloud Python client reads that variable automatically, so storage.Client() inside the container will authenticate with that key without any code changes; you can also read it yourself via os.environ['GOOGLE_APPLICATION_CREDENTIALS'] if you need the path.
Once you have built the image, push it to a container registry and reference it in the Deployment.
That should work.