I deploy apps to Kubernetes running on Google Cloud from CI. The CI job uses a kubectl config that contains auth information (either stored directly in the VCS or templated from environment variables during the build).
CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets up the kubectl config, but the token expires after a few hours.
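For reference, the generated user entry looks roughly like this (output of kubectl config view; all values here are placeholders) - the GCP auth provider caches a short-lived access token together with its expiry:

    # kubectl config view (illustrative, values are placeholders)
    users:
    - name: gke_my-project_europe-west1-b_my-cluster
      user:
        auth-provider:
          name: gcp
          config:
            access-token: ya29.placeholder
            cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
            expiry: "2018-01-01T12:00:00Z"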
What are my options for getting a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
For those asking the same question and upvoting, this is my current solution:
For some time I treated key-file.json as an identity token, put it into the CI config, and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP Container Registry login.
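A minimal sketch of that CI step, assuming the key file is exposed to the build (paths, cluster, and project names are placeholders):

    # log in to GCP with the CI service account key
    gcloud auth activate-service-account --key-file=key-file.json

    # the same key file also logs Docker in to GCP Container Registry
    docker login -u _json_key --password-stdin https://gcr.io < key-file.json

    # generate the kubectl config for this build
    gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>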
This works fine, but using kubectl in CI is something of an antipattern. I switched to deploying based on container registry push events, which is relatively easy to set up in Kubernetes with Keel, Flux, etc. CI then only has to push the Docker image to the registry, and its job ends there; the rest is handled within Kubernetes itself, so there is no need for kubectl and its config in the CI jobs.
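For example, with Keel the switch can be as small as annotating the Deployment so that Keel updates it whenever a new image is pushed (a sketch; the deployment name, namespace, and the chosen policy/trigger are illustrative):

    # let Keel force-update this Deployment when a new image appears
    kubectl annotate deployment my-app --namespace default \
      keel.sh/policy=force \
      keel.sh/trigger=poll

With a poll trigger Keel watches the registry itself, so nothing in CI ever needs cluster credentials.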
You should look into RBAC (role-based access control), which lets you authenticate as a role without expiration, in contrast to the certificates, which currently expire as mentioned.
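A minimal sketch of that approach, using a Kubernetes service account whose token goes into the CI kubeconfig (all names are illustrative; note that on Kubernetes 1.24+ the token secret is no longer auto-created and has to be requested explicitly):

    # create a service account for CI and grant it edit rights in one namespace
    kubectl create serviceaccount ci-deployer --namespace default
    kubectl create rolebinding ci-deployer-edit \
      --clusterrole=edit \
      --serviceaccount=default:ci-deployer \
      --namespace default

    # read the non-expiring token from the auto-created secret
    SECRET=$(kubectl get serviceaccount ci-deployer -n default -o jsonpath='{.secrets[0].name}')
    TOKEN=$(kubectl get secret "$SECRET" -n default -o jsonpath='{.data.token}' | base64 --decode)

    # point a kubeconfig user/context at that token for the CI job
    kubectl config set-credentials ci-deployer --token="$TOKEN"
    kubectl config set-context ci --cluster=<cluster-name> --user=ci-deployer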