I have a Kubernetes cluster hosted on Google Cloud on which I'm running four small services. Some of my pods have just crashed and can't be recreated because no IP addresses are available on the network. Why would this be?
Looking at my Google quotas, I have enough IP addresses available. This happened once before, and the only way I could resolve it was by destroying the cluster and recreating it. It's strange because the services run fine for a while, then this issue crops up seemingly at random.
Here is the error:
Error syncing pod, skipping: failed to "SetupNetwork" for "myapp" with SetupNetworkError: "Failed to setup network for pod \"myapp(8ba3a1aa-8ed4-11e6-9d08-42010af0015a)\" using network plugins \"kubenet\": Error adding container to network: no IP addresses available in network: kubenet; Skipping pod"
The failed pod has been restarted 70 times. Could it be that IP addresses are not being released back into the pool? I'm not a network guy, so forgive my ignorance ;)
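For reference, this is roughly how I'm checking the restart count and the error (a sketch; the namespace is an assumption, and myapp is the pod named in the error above):

```sh
# List pods with their restart counts (the default namespace is assumed here).
kubectl get pods -n default

# Show recent events for the failing pod, which is where the
# SetupNetworkError above appears.
kubectl describe pod myapp -n default
```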
This error has to do with the private IP addresses that are managed by kubenet. This sounds like it may be due to Kubernetes Issue #34278.
You can check whether this is the problem by looking in /var/lib/cni/networks/kubenet/ on the affected node to see if it is full of IPs that aren't actually being used.
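As a rough sketch of what that check might look like on the affected node (this assumes kubenet's host-local IPAM state directory; `<node-name>`, `<pod-ip>`, and the use of docker are placeholders/assumptions for your setup):

```sh
# Each file in this directory is named after a reserved pod IP and holds the
# ID of the container that owns the reservation.
ls /var/lib/cni/networks/kubenet/

# Roughly count the reservations...
ls /var/lib/cni/networks/kubenet/ | wc -l

# ...and compare with the number of pods actually scheduled on this node
# (run from wherever you have kubectl access).
kubectl get pods --all-namespaces -o wide | grep <node-name> | wc -l

# Inspect a single reservation to see which container holds it; if that
# container no longer exists (e.g. it isn't listed by `docker ps -a`),
# the reservation is stale.
cat /var/lib/cni/networks/kubenet/<pod-ip>
```

If the reservation count is far higher than the number of pods actually on the node, you're most likely hitting the leak described in that issue, and manually removing the stale files should release the addresses without recreating the whole cluster.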