Kubernetes minikube - can pull from docker registry manually, but rolling deployments won't pull

10/7/2017

I have a Kubernetes minikube cluster running a deployment/service.

When I try to update the image to a new version (from my registry on a separate machine) as follows:

kubectl set image deployment/flask-deployment-yaml flask-api-endpoint=192.168.1.201:5000/test_flask:2

it fails with the error:

Failed to pull image "192.168.1.201:5000/test_flask:2": rpc error: code = 2 desc = Error: image test_flask:2 not found

If I log on to my minikube server and manually pull the docker image as follows:

$ docker pull 192.168.1.201:5000/test_flask:2
2: Pulling from test_flask
280aca6ddce2: Already exists
3c0df3e97827: Already exists
669c8479e3f7: Pull complete
83323a067779: Pull complete
Digest: sha256:0f9650465284215d48ad0efe06dc888c50928b923ecc982a1b3d6fa38d
Status: Downloaded newer image for 192.168.1.201:5000/test_flask:2

It works, and then my deployment update suddenly succeeds, presumably because the image now exists locally.

I'm not sure why the deployment update doesn't just work straight away...

More deployment details:

Name:                   flask-deployment-yaml
Namespace:              default
CreationTimestamp:      Sat, 07 Oct 2017 15:57:24 +0100
Labels:                 app=front-end
Annotations:            deployment.kubernetes.io/revision=2
Selector:               app=front-end
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:       app=front-end
  Containers:
   flask-api-endpoint:
    Image:              192.168.1.201:5000/test_flask:2
    Port:               5000/TCP
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet:  flask-deployment-yaml-1174202895 (4/4 replicas created)
-- Xerphiel
kubernetes
minikube

1 Answer

10/8/2017

You should either delete your minikube cluster and start it again with the --insecure-registry flag, so Docker on the node is allowed to pull from your insecure registry, or use a registry that is reachable through localhost and port-forward into the minikube cluster, since Docker won't refuse to pull from localhost. More details here:

- https://github.com/kubernetes/minikube/blob/master/docs/insecure_registry.md
- https://github.com/kubernetes/minikube/issues/604
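
For the first option, a minimal sketch, assuming 192.168.1.201:5000 is the registry address from your question (note that --insecure-registry only takes effect when the cluster is created, not on an existing cluster):

# Recreate the cluster so Docker on the minikube node trusts the plain-HTTP registry.
minikube delete
minikube start --insecure-registry=192.168.1.201:5000

# The original rolling update should then be able to pull the image.
kubectl set image deployment/flask-deployment-yaml flask-api-endpoint=192.168.1.201:5000/test_flask:2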

A more detailed illustration of the problem and how to fix it is here: https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615
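
For the localhost route, here is a rough sketch of the workflow; the registry addon name, pod label, and ports are assumptions about that setup, not something verified against your cluster:

# Run a registry inside the minikube cluster (assumption: the "registry" addon is available in your minikube version).
minikube addons enable registry

# From the development machine, forward local port 5000 to the in-cluster registry pod
# (assumption: the pod carries the label kubernetes.io/minikube-addons=registry).
kubectl port-forward --namespace kube-system \
  "$(kubectl get pods --namespace kube-system -l kubernetes.io/minikube-addons=registry -o jsonpath='{.items[0].metadata.name}')" \
  5000:5000 &

# Push the image through the forwarded port, then reference it as localhost:5000 in the
# deployment; Docker does not require TLS for registries reached via localhost.
docker tag test_flask:2 localhost:5000/test_flask:2
docker push localhost:5000/test_flask:2
kubectl set image deployment/flask-deployment-yaml \
  flask-api-endpoint=localhost:5000/test_flask:2

Note that for the kubelet to pull localhost:5000/... the registry must also be reachable at localhost:5000 on the node itself; the blog post above covers wiring that up with a small per-node proxy.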

-- vascop
Source: StackOverflow