GCloud: Failed to pull image (400) - Permission "artifactregistry.repositories.downloadArtifacts" denied

7/8/2021

My pod can't be created because of the following problem:

Failed to pull image "europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://europe-west3-docker.pkg.dev/v2/<PROJECT_ID>/<REPO_NAME>/my-app/manifests/1.0.0: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<PROJECT_ID>/locations/europe-west3/repositories/<REPO_NAME>" (or it may not exist)

I've never experienced anything like it. Maybe someone can help me out.

Here is what I did:

  1. I set up a standard Kubernetes cluster on Google Cloud in the zone europe-west3-a
  2. I started to follow the steps described here https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
  3. I built the Docker image and pushed it to the Artifact Registry repository
  4. I can confirm the repo and the image are present, both in the Google Console and by pulling the image with docker (roughly the checks sketched after this list)
  5. Now I want to deploy my app, here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
  6. The pod fails to create due to the error mentioned above.
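
For reference, these are roughly the checks behind steps 4 and 6 (a sketch, using the same placeholders as above):

# List the images in the repository
gcloud artifacts docker images list europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>

# Pull the image directly with docker
docker pull europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0

# Inspect the failing pod, which shows the error above
kubectl describe pod -l app=my-app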

What am I missing?

-- berserk23
docker
gcloud
google-kubernetes-engine
kubernetes

2 Answers

7/8/2021

I think the tutorial is in error.

I was able to get this working by:

  • Creating a Service Account and key
  • Assigning the account Artifact Registry permissions
  • Creating a Kubernetes secret representing the Service Account
  • Using imagePullSecrets
PROJECT=[[YOUR-PROJECT]]
REPO=[[YOUR-REPO]]
LOCATION=[[YOUR-LOCATION]]
NAMESPACE=[[YOUR-NAMESPACE]]

# Service Account and Kubernetes Secret name
ACCOUNT="artifact-registry" # Or ...

# Email address of the Service Account
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com

# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--display-name="Read Artifact Registry" \
--description="Used by GKE to read Artifact Registry repos" \
--project=${PROJECT}

# Create Service Account key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}

# Grant the Service Account the Artifact Registry Reader role
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/artifactregistry.reader

# Create a Kubernetes Secret representing the Service Account
kubectl create secret docker-registry ${ACCOUNT} \
--docker-server=https://${LOCATION}-docker.pkg.dev \
--docker-username=_json_key \
--docker-password="$(cat ${PWD}/${ACCOUNT}.json)" \
--docker-email=${EMAIL} \
--namespace=${NAMESPACE}

Then:

IMAGE="${LOCATION}-docker.pkg.dev/${PROJECT}/${REPO}/my-app:1.0.0"

echo "
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: ${ACCOUNT}
      containers:
      - name: my-app
        image: ${IMAGE}
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
" | kubectl apply --filename=- --namespace=${NAMESPACE}

NOTE There are other ways to achieve this.

You could use the cluster's default (Compute Engine) Service Account instead of a special-purpose Service Account as done here, but the default Service Account is used more broadly, and granting it greater powers may be overly permissive.
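
If you go that route, something along these lines should work (a sketch; [[YOUR-CLUSTER]] and [[YOUR-ZONE]] are placeholders, and the role binding mirrors the one above):

# Find the service account the node pool runs as.
# NOTE: this prints "default" (not an email address) when the cluster uses
# the Compute Engine default service account; see the other answer for its address.
NODE_SA=$(gcloud container clusters describe [[YOUR-CLUSTER]] \
--zone=[[YOUR-ZONE]] \
--format='value(nodeConfig.serviceAccount)')

# Grant it read access to Artifact Registry
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${NODE_SA} \
--role=roles/artifactregistry.reader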

You could add the imagePullSecrets to the GKE namespace's default service account. This would give any deployment in that namespace the ability to pull from the repository and that may also be too broad.
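
If you prefer that, a patch along these lines would do it (a sketch, reusing the ${ACCOUNT} secret and ${NAMESPACE} from above):

# Attach the pull secret to the namespace's default Kubernetes service account
# so every pod in that namespace can pull from the repository
kubectl patch serviceaccount default \
--namespace=${NAMESPACE} \
--patch='{"imagePullSecrets": [{"name": "'${ACCOUNT}'"}]}'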

I think there's a GKE-specific way to grant a cluster service account GCP (!) roles.

-- DazWilkin
Source: StackOverflow

10/23/2021

I encountered the same problem, and was able to get it working by executing:

gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/artifactregistry.reader

with ${PROJECT} set to the project ID and ${EMAIL} set to the default Compute Engine service account, e.g. something like 123456789012-compute@developer.gserviceaccount.com.
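
If you need to look that address up, it follows the standard naming of the Compute Engine default service account (a sketch; assumes that default account still exists in the project):

# The default Compute Engine service account is <project-number>-compute@developer.gserviceaccount.com
PROJECT_NUMBER=$(gcloud projects describe ${PROJECT} --format='value(projectNumber)')
EMAIL="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"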

I suspect I may have removed some "excess permissions" too eagerly in the past.

-- JW.
Source: StackOverflow