I have a bash script that uses the gcloud command-line tool to perform maintenance operations. This script works fine. It is packaged in a Docker image based on google/cloud-sdk, and is executed directly through the container entrypoint.
The goal is to have it executed periodically through a Kubernetes CronJob. This works too.
I have currently not setup anything regarding authentication, so my script uses the Compute Engine default service account.
So far so good, however, I need to stop using this default service account, and switch to a separate service account, with an API key file. That's where the problems start.
My plan was to mount my API key in the container through a Kubernetes Secret, and then use the GOOGLE_APPLICATION_CREDENTIALS environment variable (documented here) to have it loaded automatically, with the following (simplified) configuration:
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-name
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: some-name
              image: some-image-path
              imagePullPolicy: Always
              env:
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: "/credentials/credentials.json"
              volumeMounts:
                - name: credentials
                  mountPath: /credentials
          volumes:
            - name: credentials
              secret:
                secretName: some-secret-name
```
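For completeness, the Secret referenced by `secretName` can be created directly from the downloaded key file. A minimal sketch, assuming the key was saved locally as `key.json` (the local file name is an assumption; the key inside the Secret must be named `credentials.json` to match the mount path above):

```shell
# Package the service-account key file into the Secret referenced
# by the CronJob manifest. The key is stored under the name
# credentials.json so it appears at /credentials/credentials.json
# once the Secret volume is mounted.
kubectl create secret generic some-secret-name \
  --from-file=credentials.json=key.json
```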
But apparently, the gcloud tool behaves differently from the programming-language SDKs, and ignores this environment variable completely.
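(A quick way to confirm which identity gcloud is actually using inside the container is `gcloud auth list`:)

```shell
# Lists the credentialed accounts known to gcloud and marks the active
# one. In this setup it still shows the Compute Engine default service
# account, confirming the env variable was ignored.
gcloud auth list
```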
The image documentation isn't much help either, since it only gives you a way to change the gcloud config location.
Moreover, I'm pretty sure that I'm going to need a way to provide some extra configuration to gcloud down the road (project, zone, etc…), so I guess my solution should give me the option to do so from the start.
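(For reference, that kind of extra configuration is normally applied with `gcloud config set`; a sketch of the commands I would eventually need, with placeholder values:)

```shell
# Set the default project and compute zone for subsequent gcloud
# commands. The values here are placeholders.
gcloud config set project my-project-id
gcloud config set compute/zone europe-west1-b
```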
I've found a few ways to work around the issue:
Change the entrypoint script of my image, so that it reads environment variables and performs the environment setup with gcloud commands:
That's the simplest solution, and the one that would allow me to keep my Kubernetes configuration the cleanest (each environment only differs by some environment variables). It requires however maintaining my own copy of the image I'm using, which I'd like to avoid if possible.
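A sketch of what that custom entrypoint could look like; the variable name and the original entrypoint path are placeholders, not part of the actual image:

```shell
#!/bin/bash
# Hypothetical custom entrypoint: activate the service account when a
# key file is provided, then hand over to the original entrypoint.
set -e

if [[ -n "${GOOGLE_APPLICATION_CREDENTIALS}" ]]; then
  gcloud auth activate-service-account \
    --key-file "${GOOGLE_APPLICATION_CREDENTIALS}"
fi

# Replace this shell with the image's original entrypoint,
# forwarding any arguments.
exec /path/to/original/entrypoint.sh "$@"
```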
Override the entrypoint of my image with a Kubernetes ConfigMap mounted as a file:
This option is probably the most convenient: use a separate ConfigMap for each environment, in which I can do whatever environment setup I want (such as gcloud auth activate-service-account --key-file /credentials/credentials.json). Still, it feels hacky, and is hardly readable compared to env variables.
Manually provide configuration files for gcloud (in /root/.config/gcloud):
I suppose this would be the cleanest solution; however, the configuration file syntax doesn't seem clearly documented, and I'm not sure how easy it would be to provide this configuration through a ConfigMap.
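(For what it's worth, the named-configuration files under that directory use a plain INI syntax; a sketch of what /root/.config/gcloud/configurations/config_default might contain, with placeholder values. Note that credentials themselves are stored separately from these properties, which is part of what makes this approach awkward for key files:)

```ini
[core]
account = some-account@my-project-id.iam.gserviceaccount.com
project = my-project-id

[compute]
zone = europe-west1-b
```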
As you can see, I found ways to work around my issue, but none of them satisfies me completely. Did I miss something?
For the record, here is the solution I finally used, although it's still a workaround in my opinion:
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-name
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: some-name
              image: some-image-path
              imagePullPolicy: Always
              command: ["/bin/bash", "/k8s-entrypoint/entrypoint.sh"]
              volumeMounts:
                - name: credentials
                  mountPath: /credentials
                - name: entrypoint
                  mountPath: /k8s-entrypoint
          volumes:
            - name: credentials
              secret:
                secretName: some-secret-name
            - name: entrypoint
              configMap:
                name: entrypoint
```
With the following ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: entrypoint
data:
  entrypoint.sh: |
    #!/bin/bash
    gcloud auth activate-service-account --key-file /credentials/credentials.json
    # Chainload the original entrypoint
    exec sh -c /path/to/original/entrypoint.sh
```