Access denied when pulling a private registry image using Helm with the GitLab Runner Helm chart and a CI job

2/1/2019

I have a Kubernetes cluster with 1 master and 2 workers. Each node has its own IP address. Let's call them:

  • master-0
  • worker-0
  • worker-1

The pod network policy and communication between all my nodes are set up correctly; everything works perfectly. I only describe this infrastructure to be more specific about my case.

Using Helm, I have created a chart which deploys a basic nginx. It's a Docker image that I build and push to my private GitLab registry.

With GitLab CI, I have created a job which uses two functions:

# Init helm client on k8s cluster for using helm with gitlab runner
function init_helm() {
  docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  mkdir -p /etc/deploy
  # Restore the kubeconfig stored as a base64-encoded CI variable
  echo "${kube_config}" | base64 -d > "${KUBECONFIG}"
  kubectl config use-context "${K8S_CURRENT_CONTEXT}"
  helm init --client-only
  helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
  helm repo update
}

# Deploy latest tagged image on k8s cluster
function deploy_k8s_cluster() {
  echo "Create and apply secret for docker gitlab runner access to gitlab private registry ..."
  # Recreate the registry secret idempotently: render it with --dry-run, then force-replace it
  kubectl create secret -n "$KUBERNETES_NAMESPACE_OVERWRITE" \
    docker-registry gitlab-registry \
    --docker-server="https://registry.gitlab.com/v2/" \
    --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
    --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \
    --docker-email="$GITLAB_USER_EMAIL" \
    -o yaml --dry-run | kubectl replace -n "$KUBERNETES_NAMESPACE_OVERWRITE" --force -f -
  echo "Build helm dependencies in $CHART_TEMPLATE"
  cd "$CHART_TEMPLATE/"
  helm dep build
  # Install the release on first deploy, upgrade it afterwards
  export DEPLOYS="$(helm ls | grep "$PROJECT_NAME" | wc -l)"
  if [[ ${DEPLOYS} -eq 0 ]]; then
    echo "Creating the new chart ..."
    helm install --name "${PROJECT_NAME}" --namespace="${KUBERNETES_NAMESPACE_OVERWRITE}" . -f values.yaml
  else
    echo "Updating the chart ..."
    helm upgrade "${PROJECT_NAME}" --namespace="${KUBERNETES_NAMESPACE_OVERWRITE}" . -f values.yaml
  fi
}

The first function allows the GitLab Runner to log in with Docker and to initialize Helm and kubectl. The second deploys my image on the cluster.
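For context, here is a minimal sketch of the kind of .gitlab-ci.yml job that wires these functions together; the stage name, deploy image, and script path are assumptions, not the exact file:

# Hypothetical .gitlab-ci.yml deploy job (sketch). The image, stage
# and script path are assumptions; any image bundling docker, kubectl
# and helm would do.
deploy:
  stage: deploy
  image: devth/helm:latest
  variables:
    KUBECONFIG: /etc/deploy/config
  script:
    - source ./ci/functions.sh   # defines init_helm and deploy_k8s_cluster
    - init_helm
    - deploy_k8s_cluster
  only:
    - master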

The whole process works well, i.e. my jobs pass on GitLab CI and no error occurs, except for the deployment of the pod.

Indeed, I get this error:

Failed to pull image "registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.gitlab.com/v2/path/to/repo/project/image/manifests/image:TAG_NUMBER: denied: access forbidden
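For the record, the same credentials can be tested by hand outside of Kubernetes; a sketch of such a check (the image path and tag are the placeholders from the error, not real values):

# Manual check (sketch): reuse the CI credentials and try pulling the
# same image outside of Kubernetes.
docker login -u "${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
  -p "${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" registry.gitlab.com
docker pull registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER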

To be more specific, I am using the gitlab-runner Helm chart, and this is the config of the chart:

## GitLab Runner Image
##
## By default it's using gitlab/gitlab-runner:alpine-v{VERSION}
## where {VERSION} is taken from the appVersion field in Chart.yaml
##
## ref: https://hub.docker.com/r/gitlab/gitlab-runner/tags/
##
# image: gitlab/gitlab-runner:alpine-v11.6.0

## Specify an imagePullPolicy
## 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent

## The GitLab Server URL (with protocol) that you want to register the runner against
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register
##
gitlabUrl: https://gitlab.com/

## The Registration Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance.
## ref: https://docs.gitlab.com/ce/ci/runners/README.html#creating-and-registering-a-runner
##
runnerRegistrationToken: "<token>"

## The Runner Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance. It is the token of an already registered runner.
## ref: (we don't yet have docs for that, but we want to use existing token)
##
# runnerToken: ""
#
## Unregister all runners before termination
##
## Updating the runner's chart version or configuration will cause the runner container
## to be terminated and created again. This may cause your GitLab instance to reference
## non-existent runners. Unregistering the runner before termination mitigates this issue.
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-unregister
##
unregisterRunners: true

## Set the certsSecretName in order to pass custom certificates for GitLab Runner to use
## Provide resource name for a Kubernetes Secret Object in the same namespace,
## this is used to populate the /etc/gitlab-runner/certs directory
## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates
##
# certsSecretName:

## Configure the maximum number of concurrent jobs
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
concurrent: 10

## Defines, in seconds, how often to check GitLab for new builds
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
checkInterval: 30

## Configure GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
# logLevel:

## For RBAC support:
rbac:
  create: true

  ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs
  ## cluster-wide or only within the namespace
  clusterWideAccess: true

  ## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)
  ##
  serviceAccountName: default

## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true

## Configuration for the pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: ubuntu:16.04

  ## Specify one or more imagePullSecrets
  ##
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imagePullSecrets: ["namespace-1", "namespace-2", "default"]

  ## Specify the image pull policy: never, if-not-present, always. The cluster default will be used if not set.
  ##
  # imagePullPolicy: ""

  ## Specify whether the runner should be locked to a specific project: true, false. Defaults to true.
  ##
  # locked: true

  ## Specify the tags associated with the runner. Comma-separated list of tags.
  ##
  ## ref: https://docs.gitlab.com/ce/ci/runners/#using-tags
  ##
  tags: "my-tag-1, my-tag-2"

  ## Run all containers with the privileged flag enabled
  ## This will allow the docker:dind image to run if you need to run Docker
  ## commands. Please read the docs before turning this on:
  ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind
  ##
  privileged: true

  ## The name of the secret containing runner-token and runner-registration-token
  # secret: gitlab-runner

  ## Namespace to run Kubernetes jobs in (defaults to the same namespace of this release)
  ##
  # namespace:

  # Regular expression to validate the contents of the namespace overwrite environment variable (documented below).
  # When empty, it disables the namespace overwrite feature
  namespace_overwrite_allowed: overrided-namespace-*

  ## Distributed runners caching
  ## ref: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md#distributed-runners-caching
  ##
  ## If you want to use s3 based distributed caching:
  ## First of all you need to uncomment General settings and S3 settings sections.
  ##
  ## Create a secret 's3access' containing 'accesskey' & 'secretkey'
  ## ref: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
  ##
  ## $ kubectl create secret generic s3access \
  ##   --from-literal=accesskey="YourAccessKey" \
  ##   --from-literal=secretkey="YourSecretKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## If you want to use gcs based distributed caching:
  ## First of all you need to uncomment General settings and GCS settings sections.
  ##
  ## Access using credentials file:
  ## Create a secret 'google-application-credentials' containing your application credentials file.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
  ## You could configure
  ## $ kubectl create secret generic google-application-credentials \
  ##   --from-file=gcs-application-credentials-file=./path-to-your-google-application-credentials-file.json
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## Access using access-id and private-key:
  ## Create a secret 'gcsaccess' containing 'gcs-access-id' & 'gcs-private-key'.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
  ## You could configure
  ## $ kubectl create secret generic gcsaccess \
  ##   --from-literal=gcs-access-id="YourAccessID" \
  ##   --from-literal=gcs-private-key="YourPrivateKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  cache: {}
    ## General settings
    # cacheType: s3
    # cachePath: "cache"
    # cacheShared: true

    ## S3 settings
    # s3ServerAddress: s3.amazonaws.com
    # s3BucketName:
    # s3BucketLocation:
    # s3CacheInsecure: false
    # secretName: s3access

    ## GCS settings
    # gcsBucketName:
    ## Use this line for access using access-id and private-key
    # secretName: gcsaccess
    ## Use this line for access using google-application-credentials file
    # secretName: google-application-credentials

  ## Build Container specific configuration
  ##
  builds:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi

  ## Service Container specific configuration
  ##
  services:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi

  ## Helper Container specific configuration
  ##
  helpers:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
    image: gitlab/gitlab-runner-helper:x86_64-latest

  ## Service Account to be used for runners
  ##
  # serviceAccountName:

  ## If GitLab is not reachable through $CI_SERVER_URL
  ##
  # cloneUrl:

  ## Specify node labels for CI job pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  nodeSelector: {}
    # gitlab: true

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # limits:
  #   memory: 256Mi
  #   cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m

## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
  # Example: The gitlab runner manager should not run on spot instances so you can assign
  # them to the regular worker nodes only.
  # node-role.kubernetes.io/worker: "true"

## List of node taints to tolerate (requires Kubernetes >= 1.6)
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  # Example: Regular worker nodes may have a taint, thus you need to tolerate the taint
  # when you assign the gitlab runner manager with nodeSelector or affinity to the nodes.
  # - key: "node-role.kubernetes.io/worker"
  #   operator: "Exists"

## Configure environment variables that will be present when the registration command runs
## This provides further control over the registration process and the config.toml file
## ref: `gitlab-runner register --help`
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
##
envVars:
  - name: RUNNER_EXECUTOR
    value: kubernetes

As you can see, I create a secret in my CI job, and no error occurs there either. In my chart, I reference this same secret (by its name) in the values.yaml file, which allows deployment.yaml to use it.
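For reference, the wiring between that secret and the chart looks roughly like this; the key names below are illustrative, not copied from my actual files:

# values.yaml (sketch; key name is an assumption)
image:
  pullSecret: gitlab-registry

# templates/deployment.yaml (sketch)
spec:
  template:
    spec:
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}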

So I do not understand where I am going wrong. Why do I get this error?

-- french_dev
gitlab-ci
gitlab-ci-runner
kubernetes
kubernetes-helm

1 Answer

2/13/2019

Extending my last comment: I suppose the TAG_NUMBER variable is defined somewhere in your GitLab CI job. However, you are not being authorized with the values assigned to the --docker-username and --docker-password flags. Have you checked the credentials used for the connection to the Docker registry? Alternatively, it might be an option to manage the secret within the GitLab Runner Helm chart template.
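As a sketch of that check (secret and namespace names taken from the question), you could decode the secret the job created and retry the login with the decoded values:

# Inspect the registry secret created by the CI job
kubectl get secret gitlab-registry -n "$KUBERNETES_NAMESPACE_OVERWRITE" \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

# Then retry the login manually with the username/password it contains
docker login registry.gitlab.com -u "<username-from-secret>" -p "<password-from-secret>"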

-- mk_sta
Source: StackOverflow