I am running into problems pulling my containers from gcr.io:
$ kubectl get po
NAME                              READY   STATUS             RESTARTS   AGE
api-deployment-74d8cf8768-x8bsk   0/2     ImagePullBackOff   4          2m43s
I create these deployments with the following YAML file (deployment.yml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
        - name: api
          image: eu.gcr.io/api:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5060
Based on GKE - ErrImagePull pulling from Google Container Registry, I am guessing this is mostly a permission issue.
If I do
kubectl describe pod api-deployment-74d8cf8768-x8bsk
I get
rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/<project-dev>/api, repository does not exist or may require 'docker login': denied: Permission denied for "latest" from request "/v2/<project-dev>/api/manifests/latest"
However, it is not clear to me how to set the appropriate service account using Terraform.
My set-up is as follows. I created a Terraform administration project in GCP (terraform-admin) with a service account
tf-admin@terraform-admin.iam.gserviceaccount.com
This project also holds the remote Terraform state, etc. The service account has numerous roles, such as:
Compute Network Admin
Kubernetes Engine Cluster Admin
...
Then I create my actual development project, project-dev (using the credentials of that service account). In project-dev, tf-admin@terraform-admin.iam.gserviceaccount.com is also an IAM member with roles such as:
Owner
Compute Network Admin
Kubernetes Engine Cluster Admin
However, it is not a service account in project-dev. The only service account I see there is
<project-dev-ID>-compute@developer.gserviceaccount.com
which is a "Compute Engine default service account" that probably does not have the appropriate permissions. On project-dev I also have the container registry that contains my private containers.
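For reference, the IAM membership described above corresponds roughly to the following Terraform (just a sketch of the existing state, with the role list abbreviated; the resource names are made up for illustration):

# tf-admin is an IAM member (not a service account) of project-dev
resource "google_project_iam_member" "tf_admin_owner" {
  project = "<project-dev>"
  role    = "roles/owner"
  member  = "serviceAccount:tf-admin@terraform-admin.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "tf_admin_gke_admin" {
  project = "<project-dev>"
  role    = "roles/container.clusterAdmin"
  member  = "serviceAccount:tf-admin@terraform-admin.iam.gserviceaccount.com"
}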
As said, I create my GKE cluster using Terraform. Below is my (abbreviated) Terraform configuration.
resource "google_container_cluster" "primary" {
name = "gke-cluster"
location = "${var.region}-b"
node_locations = [
"${var.region}-c",
"${var.region}-d",
]
node_version = var.node_version
initial_node_count = 3
network = var.vpc_name
subnetwork = var.subnet_name
addons_config {
horizontal_pod_autoscaling {
disabled = false
}
}
master_auth {
username = 'user'
password = 'password'
}
node_config {
# I HAVE TRIED ADDING THIS, BUT IT RESULT IN AN ERROR
# Error: googleapi: Error 400: The user does not have access to service account
# service_account = "tf-admin@terraform-admin.iam.gserviceaccount.com"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
labels = {
env = var.gke_label["${terraform.workspace}"]
}
disk_size_gb = 10
machine_type = var.gke_node_machine_type
tags = ["gke-node"]
}
}
Now, should I try (and if so, how) to add my tf-admin service account as a service account in project-dev, or should I add a specific service account (again, how?) to project-dev for Kubernetes?
You can use the default compute service account <projectID>-compute@developer.gserviceaccount.com, which does have all the required permissions to access GCR in the same project. Just make sure you use the default scopes for the cluster, or that the storage read scope needed for GCR is enabled.
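For example, leaving service_account unset in node_config means the nodes run as the default compute service account, and the storage read scope is what allows pulls from eu.gcr.io (a sketch of just the relevant part; the rest of the node_config stays as in the question):

node_config {
  # no service_account set -> nodes use
  # <project-dev-ID>-compute@developer.gserviceaccount.com
  oauth_scopes = [
    "https://www.googleapis.com/auth/devstorage.read_only",
    "https://www.googleapis.com/auth/logging.write",
    "https://www.googleapis.com/auth/monitoring",
  ]
}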
Alternatively, you can use Terraform to create a service account with sufficient permissions (such as the storage viewer role) and then assign that service account to the node pool. In this case, you'll want to set the oauth_scopes to cloud_platform to ensure the scopes don't interfere with the IAM permissions.
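A minimal sketch of that approach (the names gke_node_sa and gke-node-sa are just examples, not anything from your setup):

# Dedicated service account for the GKE nodes in project-dev
resource "google_service_account" "gke_node_sa" {
  project      = "<project-dev>"
  account_id   = "gke-node-sa"
  display_name = "GKE node service account"
}

# GCR images are stored in a Cloud Storage bucket, so storage object viewer
# is enough to pull them.
resource "google_project_iam_member" "gke_node_sa_storage" {
  project = "<project-dev>"
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${google_service_account.gke_node_sa.email}"
}

Then, in the cluster definition:

node_config {
  service_account = google_service_account.gke_node_sa.email
  oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
  # ... the rest of your node_config
}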
You can view the default GKE scopes here