Kubernetes: Pull images from internal registry with on-premise deployment

10/6/2018

tl;dr: How do you reference an image in a Kubernetes Pod when the image comes from a private Docker registry hosted on the same k8s cluster, without a separate DNS entry for the registry?

In an on-premise Kubernetes deployment, I have set up a private Docker registry using the stable/docker-registry Helm chart with a self-signed certificate. Because this is on-premise, I can't create a DNS record to give the registry its own URL. I want to use these manifests as templates, so I don't want to hardcode any environment-specific config.

The docker registry service is of type ClusterIP and looks like this:

apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
    - port: 443
      protocol: TCP
      name: registry
      targetPort: 5000
  selector:
    app: docker-registry

If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a Pod spec?

I have tried:

containers:
- name: my-image
  image: docker-registry.devops.svc.cluster.local/my-image:latest
  imagePullPolicy: IfNotPresent

But I received an error about the node host not being able to resolve docker-registry.devops.svc.cluster.local. I suspect the Docker daemon on the k8s node can't resolve that name because it is an internal Kubernetes DNS record.

Warning  Failed     20s (x2 over 34s)  kubelet, ciabdev01-node3  
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3": 
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning  Failed     20s (x2 over 34s)  kubelet, node3  Error: ErrImagePull

So, how would I reference an image on an internally hosted docker registry in this on-premise scenario?

Is my only option to use a Service of type NodePort, reference one of the nodes' hostnames in the Pod spec, and then configure each node's Docker daemon to ignore the self-signed certificate?

-- mpalumbo7
docker
kubernetes

1 Answer

10/8/2018

Docker uses the DNS settings configured on the node, so by default it cannot resolve DNS names that exist only inside the Kubernetes cluster.

You can try to use one of the following solutions:

  1. Use the IP address from the ClusterIP field of the "docker-registry" Service as the registry host. This address is stable until you delete and recreate the Service. You can also add this IP address to /etc/hosts on each node.

    For example, you can add the line 10.11.12.13 my-docker-registry to the /etc/hosts file (the IP address comes first). You can then use 10.11.12.13 or my-docker-registry as the registry host in the image field of a Pod spec; since the Service above listens on port 443, no explicit port is needed.

  2. Expose the "docker-registry" Service outside the cluster using type: NodePort. Then use localhost:<exposed_port> or <one_of_nodes_hostnames>:<exposed_port> as the registry host in the image field of a Pod spec.
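For option 1, a minimal sketch of the node-side setup might look like this. The ClusterIP 10.11.12.13 and the hostname my-docker-registry are assumed example values; look up the real ClusterIP with kubectl first, and run the hosts-file step on every node so each node's Docker daemon can resolve the name:

```
# Find the registry Service's ClusterIP (namespace assumed to be "devops")
kubectl -n devops get svc docker-registry

# On every node: map an assumed example name to that ClusterIP
echo "10.11.12.13 my-docker-registry" | sudo tee -a /etc/hosts
```

A Pod could then use image: my-docker-registry/my-image:latest. Note this only solves name resolution; the daemon still needs to trust the registry's self-signed certificate.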
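The NodePort variant from option 2 might look like the following, based on the Service from the question. The nodePort value 30500 is an assumed example; it must fall in the cluster's NodePort range (30000-32767 by default), or it can be omitted to let Kubernetes pick one:

```
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 5000
      nodePort: 30500   # assumed example; default allowed range is 30000-32767
      protocol: TCP
      name: registry
  selector:
    app: docker-registry
```

A Pod could then reference image: localhost:30500/my-image:latest or <node-hostname>:30500/my-image:latest. Each node's Docker daemon still needs to trust the self-signed certificate, for example by placing it under /etc/docker/certs.d/ for that registry host.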

-- Artem Golenyaev
Source: StackOverflow