I have a GCP project. In it I run GKE with a TeamCity container; this TeamCity container is my build server and the place where my build steps/scripts execute.
One of the build steps pushes a Docker image to Google Container Registry. It fails with this error:
denied: Token exchange failed for project 'coopr-mod'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
I read the instructions linked in the error message, but I just can't figure out how to solve the problem in my case.
For completeness, here are the build steps that are executed:
Step 1:
# Create environment variable for correct distribution
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
# Add the Cloud SDK distribution URI as a package source
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Import the Google Cloud Platform public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Update the package list and install the Cloud SDK
sudo apt-get -y update && sudo apt-get -y install google-cloud-sdk
Step 2:
gcloud --quiet auth configure-docker
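For reference, gcloud auth configure-docker registers gcloud as a Docker credential helper. Assuming Docker 18.03 or newer on the agent, I can check what it wrote with:
cat ~/.docker/config.json
# should contain entries like: { "credHelpers": { "gcr.io": "gcloud", "eu.gcr.io": "gcloud", ... } }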
Step 3: docker build -t myimage:1 .
Step 4: docker tag myimage:1 eu.gcr.io/coopr-mod/myimage:1
Step 5: (The failing step) docker push eu.gcr.io/coopr-mod/myimage:1
This results in the same "Token exchange failed ... storage.buckets.create" error quoted above.
I have read about giving GKE read-write permission to Google Cloud Storage, but I can't find a guide that actually tells me how to do that.
There is decent documentation on how to push and pull images between GCR and GKE, and there is a similar answer for regular GCE instances.
Assuming your node pool is configured with instances that use the default GCE service account, this is a simple matter of giving the pool the read-write access scope when you create it.
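To check which scopes your existing nodes already have, something like this should work (the pool, cluster, and zone names are placeholders for your own):

gcloud container node-pools describe default-pool \
    --cluster=my-cluster --zone=europe-west1-b \
    --format="value(config.oauthScopes)"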
There are a few ways to do this; the most direct is to pass the scope flag when creating the node pool:
--scopes https://www.googleapis.com/auth/devstorage.read_write
(Alternatively, you can enable all scopes with https://www.googleapis.com/auth/cloud-platform, but that is exceptionally permissive. Many other scope choices are omitted here.)
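For instance, a minimal sketch of creating such a pool from the command line (all names here are placeholders):

gcloud container node-pools create push-enabled-pool \
    --cluster=my-cluster --zone=europe-west1-b \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write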
If, for whatever reason, you can't just tear down your node pool, the instructions on how to migrate workloads to a new machine type should work for you (in this case, the "new machine type" just has the new access scopes). The basic steps are: create a new node pool with the desired scopes, cordon the nodes in the old pool, drain them so the workloads reschedule onto the new pool, and then delete the old pool; see the sketch below.
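A rough sketch of that migration, assuming the old pool is named default-pool and reusing the placeholder names from above:

# Create the replacement pool first (see the create command above), then
# cordon and drain each old node so workloads reschedule onto the new pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
    kubectl cordon "$node"
    kubectl drain "$node" --ignore-daemonsets
done
# Once everything has moved, delete the old pool
gcloud container node-pools delete default-pool --cluster=my-cluster --zone=europe-west1-b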
That said, it might make sense to go a bit beyond this and use a dedicated service account (and key) for pushing images, if you don't want every pod on your cluster to have this sort of access. As a bonus, this approach does not require destroying and recreating the node pool.
This is a decent amount more complicated, but the steps would roughly be: create a dedicated service account, grant it write access to the registry's storage, create a JSON key for it, copy that key to the build agent, and then log Docker in with it (a fuller command sketch follows below):
cat keyfile.json | docker login -u _json_key --password-stdin https://eu.gcr.io
(or whatever the correct GCR repository hostname is for you)
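A hedged sketch of those steps, assuming the project is coopr-mod and picking gcr-pusher as a placeholder account name:

# 1. Create the dedicated service account
gcloud iam service-accounts create gcr-pusher --display-name="GCR image pusher"
# 2. Grant it storage admin on the project; the first push to eu.gcr.io also has
#    to create the backing bucket, which is why storage.buckets.create is required
gcloud projects add-iam-policy-binding coopr-mod \
    --member="serviceAccount:gcr-pusher@coopr-mod.iam.gserviceaccount.com" \
    --role="roles/storage.admin"
# 3. Create a JSON key and copy it to the build agent
gcloud iam service-accounts keys create keyfile.json \
    --iam-account=gcr-pusher@coopr-mod.iam.gserviceaccount.com
# 4. Then run the docker login command shown above on the build agent

Note that roles/storage.admin is broad; once the registry's bucket exists, you could scope the grant down to just that bucket instead.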