How big can a GKE container image get before it's a problem?

7/26/2019

This question is admittedly somewhat vague. If you have suggestions for how to word it better, by all means give me feedback...

I want to understand how big a GKE container image can get before there may be problems, either serious or minor. For example, I've built a Docker image (not yet deployed) that is 683 MB.
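For context, here's how I'm checking the size (the image name is just a placeholder for mine):

    # List the local image with the size Docker reports
    docker images my-vision-service:latest

    # Or print the exact size in bytes from the image metadata
    docker image inspect --format '{{.Size}}' my-vision-service:latest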

(As an aside, the reason it's so big is that I'm running a computer vision library licensed from a company with certain attributes: (1) uses native libraries that are not compatible with Alpine; (2) uses Java; (3) uses Node.js to run a required licensing daemon in same container; (4) has some very large machine learning model files.)

Although the service will have auto-scaling enabled, I expect the auto-scaling to be fairly light. It might add a new pod occasionally, but not major spikes up and down.
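For what it's worth, the autoscaling I have in mind is something like this (the deployment name is hypothetical):

    # Light autoscaling: scale between 1 and 3 replicas based on CPU usage
    kubectl autoscale deployment vision-service --min=1 --max=3 --cpu-percent=80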

-- jacob
docker
google-kubernetes-engine
kubernetes

2 Answers

7/27/2019

The size of the container will determine how many resources to assign to it, and thus how much CPU, memory, and disk space your nodes must have. I have seen containers require over 2 GB of memory and still work fine within the cluster.
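For example, you can set requests and limits so the scheduler accounts for a heavyweight container (the deployment name and values here are hypothetical):

    # Reserve memory and CPU for a large container so it lands on a node that can hold it
    kubectl set resources deployment vision-service \
        --requests=cpu=500m,memory=2Gi --limits=memory=3Gi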

There probably is an upper limit, but the containers would have to be enormous. Your container size should not pose any issues aside from possibly slower container startup, since the image has to be pulled onto each node that runs it.
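If you want to see how long the pull actually takes, the pod's events report it (the pod name is a placeholder):

    # The Events section at the bottom shows 'Successfully pulled image ... in <duration>'
    kubectl describe pod vision-service-abc123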

-- Patrick W
Source: StackOverflow

9/12/2019

In practice, you're going to have issues pushing an image to GCR before you have issues running it on GKE, but there isn't a hard limit beyond the storage capacity of your nodes. You can get away with O(GB) images pretty easily.
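To see where you stand, the push itself is the slow part for large layers (project and image names are placeholders):

    # Tag and push the image to GCR; big layers make this the bottleneck
    docker tag my-vision-service:latest gcr.io/my-project/my-vision-service:latest
    docker push gcr.io/my-project/my-vision-service:latest

    # Verify what actually landed in the registry
    gcloud container images list-tags gcr.io/my-project/my-vision-service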

-- jsand
Source: StackOverflow