Speed of "gcloud docker push"

5/4/2016

I'm new to Google Container Registry and the Docker ecosystem in general. I'm pushing an existing image to gcr.io, and I'd expect the task to complete in close to 0 seconds, since all the bits are already on gcr.io. The context is running dev code in the cloud, on lots of cores at the same time, as opposed to the 4 cores my Mac laptop has. I'm running a no-op push to isolate the bottlenecks; the real usage pushes about 6 MB of new data. It is slow: 14 seconds to perform a no-op. Is there a way to cut this no-op down to less than a second?

$ time gcloud docker push gcr.io/ai2-general/euclid:latest
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
The push refers to a repository [gcr.io/ai2-general/euclid]
3a67b2b013f5: Layer already exists 
b7c8985fbf02: Layer already exists 
fef418d1a9e8: Layer already exists 
c58360ce048c: Layer already exists 
0030e912789f: Layer already exists 
5f70bf18a086: Layer already exists 
0ece0aa9121d: Layer already exists 
ef63204109e7: Layer already exists 
694ead1cbb4d: Layer already exists 
591569fa6c34: Layer already exists 
998608e2fcd4: Layer already exists 
c12ecfd4861d: Layer already exists 
latest: digest: sha256:04a831f4bf3e3033c40eaf424e447dd173e233329440a3c9796bf1515225546a size: 10321

real    0m14.742s
user    0m0.622s
sys 0m0.181s

14 seconds is a long time. Using plain docker push is faster, but it still wastes 5 precious seconds.

$ time docker push gcr.io/ai2-general/euclid:latest
The push refers to a repository [gcr.io/ai2-general/euclid]
3a67b2b013f5: Layer already exists 
b7c8985fbf02: Layer already exists 
fef418d1a9e8: Layer already exists 
c58360ce048c: Layer already exists 
0030e912789f: Layer already exists 
5f70bf18a086: Layer already exists 
0ece0aa9121d: Layer already exists 
ef63204109e7: Layer already exists 
694ead1cbb4d: Layer already exists 
591569fa6c34: Layer already exists 
998608e2fcd4: Layer already exists 
c12ecfd4861d: Layer already exists 
latest: digest: sha256:04a831f4bf3e3033c40eaf424e447dd173e233329440a3c9796bf1515225546a size: 10321

real    0m5.014s
user    0m0.030s
sys 0m0.011s

I suspect the difference is caused by the 7 login attempts, which take a while to process; after that, what's left looks like the usual docker push overhead.
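
One possible workaround (a sketch, assuming your SDK version supports the --authorize-only / -a flag on gcloud docker): refresh the registry credentials once, then use plain docker push for subsequent pushes, so the repeated logins don't run on every push.

$ gcloud docker --authorize-only                        # refresh gcr.io credentials once, without running docker
$ time docker push gcr.io/ai2-general/euclid:latest     # later pushes skip the gcloud login step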

For reference:

$ gcloud --version
Google Cloud SDK 107.0.0

bq 2.0.24
bq-nix 2.0.24
core 2016.04.21
core-nix 2016.03.28
gcloud 
gsutil 4.19
gsutil-nix 4.18
kubectl 
kubectl-darwin-x86_64 1.2.2
-- Cristian Petrescu-Prahova
google-kubernetes-engine

2 Answers

10/3/2016

Docker for Mac? Try restarting the daemon.

I find I have to restart Docker (1.12) about once a day, or things begin to slow down. I believe the Docker team is aware of the problem and is tracking the issue.

https://forums.docker.com/t/slow-upload-push-to-hub-docker/12072/14
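
If you want to script the restart rather than use the menu bar icon, something like this works on Docker for Mac (a sketch; it assumes the app bundle is named "Docker"):

$ osascript -e 'quit app "Docker"'   # ask the Docker for Mac app to quit cleanly
$ open -a Docker                     # relaunch it; wait for the whale icon to settle before pushing again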

-- Byron Dover
Source: StackOverflow

5/18/2016

I think this is related to the .docker/config.json file, which gets created (or overwritten) every time we run the 'gcloud docker' command.
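
A quick way to check this (not part of the original answer, just a sketch using macOS's BSD stat): compare the file's modification time before and after a push.

$ stat -f '%Sm' ~/.docker/config.json                    # mtime before
$ gcloud docker push gcr.io/ai2-general/euclid:latest
$ stat -f '%Sm' ~/.docker/config.json                    # mtime after; a newer time means the file was rewritten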

-- Dinis Cruz
Source: StackOverflow