I'm trying to get Kubernetes to download images from a Google Container Registry from another project. According to the docs you should create an image pull secret using:
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
But I wonder what DOCKER_USER and DOCKER_PASSWORD I should use for authenticating with Google Container Registry. Looking at the GCR docs, the password is the access token that you can get by running:
$ gcloud auth print-access-token
This actually works... for a while. The problem seems to be that this access token expires after (what I believe to be) one hour. I need a password (or something) that doesn't expire when creating my image pull secret. Otherwise the Kubernetes cluster can't download the new images after an hour or so. What's the correct way to do this?
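For reference, here is roughly what I do today with the short-lived token (a sketch; the secret name gcr-access-token is just a placeholder, and oauth2accesstoken is what I understood to be the username for token-based logins from the GCR docs). It works until the token expires:
$ kubectl create secret docker-registry gcr-access-token \
    --docker-server=https://eu.gcr.io \
    --docker-username=oauth2accesstoken \
    --docker-password="$(gcloud auth print-access-token)" \
    --docker-email=your@email.se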
This is really tricky but after a lot of trial and error I think I've got it working.
Go to the Google Developer Console > Api Manager > Credentials and click "Create credentials" and create a "service account key"
Under "service account" select new and name the new key "gcr" (let the key type be json)
Create the key and store the file on disk (from here on we assume that it was stored under ~/secret.json)
Now login to GCR using Docker from command-line:
$ docker login -e your@email.se -u _json_key -p "$(cat ~/secret.json)" https://eu.gcr.io
This will generate an entry for "https://eu.gcr.io" in your ~/.docker/config.json file.
Copy the JSON structure under "https://eu.gcr.io" into a new file called "~/docker-config.json" and remove the newlines! For example:
{"https://eu.gcr.io": { "auth": "<key>","email": "your@email.se"}}
Base64 encode this file:
$ cat ~/docker-config.json | base64
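Note that GNU base64 wraps long output onto multiple lines by default; to keep the string on one line you can strip the newlines with tr, for example (a minimal sketch):
$ base64 < ~/docker-config.json | tr -d '\n'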
This will print a long base64 encoded string. Copy this string and paste it into an image pull secret definition (called ~/pullsecret.yaml):
apiVersion: v1
kind: Secret
metadata:
  name: mykey
data:
  .dockercfg: <paste base64 encoded string here>
type: kubernetes.io/dockercfg
Now create the secret:
$ kubectl create -f ~/pullsecret.yaml
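To confirm the secret was created with the expected type and data (a quick sanity check, not part of the original steps):
$ kubectl get secret mykey -o yaml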
Now you can use this pull secret from a pod, for example:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - image: "janedoe/awesomeapp:v1"
      name: foo
  imagePullSecrets:
    - name: mykey
or add it to a service account.
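Adding it to a service account could look roughly like this (a sketch that patches the default service account in the pod's namespace; adjust the names to your setup):
$ kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "mykey"}]}'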
It is much easier with kubectl
kubectl create secret docker-registry mydockercfg \
--docker-server "https://eu.gcr.io" \
--docker-username _json_key \
--docker-email not@val.id \
--docker-password=$(cat your_service_account.json)
One important detail: after you download your_service_account.json from Google, you need to join all the lines of the JSON into one row. For this you could replace cat with paste:
--docker-password=$(paste -s your_service_account.json)
You can also grant the service account your cluster runs as access to the GCS bucket eu.artifacts.{project-id}.appspot.com. This answer has a few gsutil commands to make that happen.
This answer ensures that only one set of docker credentials gets included in your Kubernetes secret, and handles trimming newlines for you.
Follow the same first three steps from Johan's great answer:
Go to the Google Developer Console > Api Manager > Credentials and click "Create credentials" and create a "service account key"
Under "service account" select new and name the new key "gcr" (let the key type be json)
Create the key and store the file on disk (from here on we assume that it was stored under ~/secret.json)
Next, run these commands to generate and inject the required Docker credentials into your cluster:
# Flatten the service account key to a single line
export GCR_KEY_JSON=$(cat ~/secret.json | tr -d '\n')

# Temporarily move the existing Docker config aside so that only the GCR
# credentials end up in the generated config.json
mv ~/.docker/config.json ~/.docker/config-orig.json
cat >~/.docker/config.json <<EOL
{
  "auths": {
    "gcr.io": {}
  }
}
EOL

# Log in with the service account key; Docker stores the credentials in the
# minimal config.json created above
docker login --username _json_key --password "$GCR_KEY_JSON" https://gcr.io

# Capture the resulting config as a single line, then restore the original
# Docker config
export DOCKER_CONFIG_JSON_NO_NEWLINES=$(cat ~/.docker/config.json | tr -d '\n')
mv ~/.docker/config-orig.json ~/.docker/config.json
cat >secrets.yaml <<EOL
apiVersion: v1
kind: Secret
metadata:
  name: gcr-key
data:
  .dockerconfigjson: $(echo -n ${DOCKER_CONFIG_JSON_NO_NEWLINES} | base64 | tr -d '\n')
type: kubernetes.io/dockerconfigjson
EOL
kubectl create -f secrets.yaml
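If you want to double-check what ended up in the secret, you can decode it again (a quick verification, not part of the original steps):
kubectl get secret gcr-key -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode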
When you specify Pods that pull images from GCR, include the gcr-key secret name in your spec section:
spec:
  imagePullSecrets:
    - name: gcr-key
  containers:
    - image: ...
Following the officially documented approach, you can run:
$ docker login -e 1234@5678.com -u _json_key -p "$JSON_KEY" https://gcr.io
Note: The e-mail is not used, so you can put whatever you want in it.
Change gcr.io to whatever domain is shown in your Google Container Registry (e.g. eu.gcr.io).
To get that $JSON_KEY, create a service account with "Docker Registry (read-only)" access, download its key as keyfile.json, and join the lines into one:
JSON_KEY=$(cat keyfile.json | tr '\n' ' ')
Once logged in you can just run docker pull. You can also copy the updated ~/.dockercfg to preserve the settings.
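If you prefer to turn that Docker login directly into a Kubernetes pull secret, you could create it from the generated config file (a sketch; the secret name gcr-config is a placeholder, and newer Docker versions write ~/.docker/config.json rather than ~/.dockercfg):
$ kubectl create secret generic gcr-config \
    --from-file=.dockerconfigjson="$HOME/.docker/config.json" \
    --type=kubernetes.io/dockerconfigjson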
No image pull secret is needed; it can be done with an IAM configuration.
I tried the other answers but I couldn't get the image pull secret approach working. However, I found that this can be done by granting access to the Compute Engine default service account in the project where the Kubernetes cluster runs. This service account was created automatically by GCP.
As described here: https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry
You need to execute the following command to grant access to the Cloud Storage bucket serving the Container Registry:
gsutil iam ch serviceAccount:[EMAIL-ADDRESS]:objectViewer gs://[BUCKET_NAME]
BUCKET_NAME:
artifacts.[PROJECT-ID].appspot.com for images pushed to gcr.io/[PROJECT-ID], or
[REGION].artifacts.[PROJECT-ID].appspot.com, where [REGION] is:
us for registry us.gcr.io
eu for registry eu.gcr.io
asia for registry asia.gcr.io
EMAIL-ADDRESS:
The email address of the service account called **Compute Engine default service account** in the GCP project where the Kubernetes cluster runs.
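As a concrete (hypothetical) example, for images under eu.gcr.io in a project called my-project, the commands would look roughly like this; the PROJECT_NUMBER-compute@developer.gserviceaccount.com address is the usual format of the Compute Engine default service account:
# Find the email of the Compute Engine default service account
gcloud iam service-accounts list

# Grant it read access to the registry bucket (placeholder values)
gsutil iam ch serviceAccount:123456789012-compute@developer.gserviceaccount.com:objectViewer gs://eu.artifacts.my-project.appspot.com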