Docker push intermittent failure to private docker registry on kubernetes (docker-desktop)

2/16/2019

I'm running a kubernetes cluster on docker-desktop (mac).

It has a local docker registry inside it.

I'm able to query the registry no problem via the API calls to get the list of tags.
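For reference, the queries that work are plain Registry v2 API calls along these lines (repository name taken from my logs below):

curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/aetasa/cta-user-create-app/tags/list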

I was able to push an image before, but it took multiple attempts.

I can't push new changes now. Layers appear to push successfully, but the push is never acknowledged and the client keeps retrying.

The registry is exposed at localhost:5000, and I am port forwarding correctly per the instructions at https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615/
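For context, the forwarding is essentially this (pod name and namespace are placeholders):

kubectl port-forward <registry-pod> 5000:5000 --namespace <registry-namespace>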

I'm not using SSL certs, as this is for development on my local machine.
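(Docker treats localhost:5000 as an insecure registry by default; only a non-localhost registry name would need an insecure-registries entry in daemon.json, e.g. with an illustrative hostname:

{
  "insecure-registries": ["myhost.local:5000"]
}

)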

(The port forwarding is proven to work; otherwise the API calls would fail.)

e086a4af6e6b: Retrying in 1 second 
35c20f26d188: Layer already exists 
c3fe59dd9556: Pushing [========================>                          ]  169.3MB/351.5MB
6ed1a81ba5b6: Layer already exists 
a3483ce177ce: Retrying in 16 seconds 
ce6c8756685b: Layer already exists 
30339f20ced0: Retrying in 1 second 
0eb22bfb707d: Pushing [==================================================>]  45.18MB
a2ae92ffcd29: Waiting 
received unexpected HTTP status: 502 Bad Gateway

Workaround (this will suffice, but it's not ideal, as I have to build each container image locally):

apiVersion: v1
kind: Pod
metadata:
  name: producer
  namespace: aetasa
spec:
  containers:
  - name: kafkaproducer
    image: localhost:5000/aetasa/cta-user-create-app
    imagePullPolicy: Never # use the locally built image from the docker daemon
    ports:
        - containerPort: 5005
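With imagePullPolicy: Never, docker-desktop's Kubernetes uses the image straight from the local Docker daemon (they share an image store), so the flow is just build then apply; the manifest filename here is illustrative:

docker build -t localhost:5000/aetasa/cta-user-create-app .
kubectl apply -f producer-pod.yaml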

kubectl logs for the registry:

10.1.0.1 - - [20/Feb/2019:19:18:03 +0000] "POST /v2/aetasa/cta-user-create-app/blobs/uploads/ HTTP/1.1" 202 0 "-" "docker/18.09.2 go/go1.10.6 git-commit/6247962 kernel/4.9.125-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.2 \x5C(darwin\x5C))" "-"
2019/02/20 19:18:03 [warn] 12#12: *293 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000011, client: 10.1.0.1, server: localhost, request: "PATCH /v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D HTTP/1.1", host: "localhost:5000"
2019/02/20 19:18:03 [error] 12#12: *293 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.0.1, server: localhost, request: "PATCH /v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D HTTP/1.1", upstream: "http://10.104.68.90:5000/v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D", host: "localhost:5000"
-- Yoker
docker
docker-desktop
kubernetes
private
registry

3 Answers

2/19/2019

It may be a disk space issue. If you store docker images inside the Docker VM you can fill up the disk space quite fast.

By default, the docker-desktop VM disk is limited to 64 GB. You can increase it up to 112 GB on the "Disk" tab in Docker Preferences.
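To check whether the disk is actually the problem before resizing, something like:

docker system df        # show space used by images, containers and volumes
docker system prune -a  # reclaim space by removing unused images/containers (destructive)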

-- VAS
Source: StackOverflow

2/20/2019

I have encountered this issue quite a few times and unfortunately couldn't find a permanent fix.

Most likely the image has become corrupted in the registry. As a workaround, I suggest you delete the image from the registry and do a fresh push (see the sketch below). That should work, and subsequent pushes should work too.
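A sketch of that workaround against the Registry v2 API, assuming deletion is enabled on the registry (REGISTRY_STORAGE_DELETE_ENABLED=true); the tag and pod name are placeholders:

# find the manifest digest for the tag (returned in the Docker-Content-Digest header)
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://localhost:5000/v2/aetasa/cta-user-create-app/manifests/<tag>

# delete by digest, then garbage-collect the orphaned blobs inside the registry pod
curl -X DELETE http://localhost:5000/v2/aetasa/cta-user-create-app/manifests/sha256:<digest>
kubectl exec <registry-pod> -- registry garbage-collect /etc/docker/registry/config.yml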

This issue is probably related to missing layers of the image. Sometimes we delete an image using the --force option; in that case it is possible that some of the shared layers get deleted, which affects other images that share them.

-- P Ekambaram
Source: StackOverflow

2/25/2019

Try configuring --max-concurrent-uploads=1 for your Docker daemon. You are pushing quite large layers (~350 MB), so you are probably hitting some limit (request size, timeout) somewhere. A single concurrent upload may help you, but it is only a workaround. The real solution is configuration (buffer sizes, timeouts, ...) of the registry, and eventually of the reverse proxy in front of the registry.
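A sketch of both sides, with illustrative values. The upload limit goes in the Docker daemon config (daemon.json):

{
  "max-concurrent-uploads": 1
}

For an nginx reverse proxy in front of the registry, the usual suspects for large blob uploads are the body size cap, request buffering and timeouts (the registry logs in the question show request bodies being buffered to temp files):

# inside the server/location block that proxies the registry
client_max_body_size 0;        # do not cap blob upload size
proxy_request_buffering off;   # stream large PATCH bodies instead of buffering to disk
proxy_read_timeout 900;
proxy_send_timeout 900;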

-- Jan Garaj
Source: StackOverflow