Kubernetes cluster is broken: FailedSync and SandboxChanged

9/28/2017

I have a Kubernetes 1.7.5 cluster that has somehow gotten into a semi-broken state. Scheduling a new deployment on this cluster partially fails: one of the two pods starts normally, but the second one never starts. The events are:

default   2017-09-28 03:57:02 -0400 EDT   2017-09-28 03:57:02 -0400 EDT   1         hello-4059723819-8s35v   Pod       spec.containers{hello}   Normal    Pulled    kubelet, k8s-agentpool1-18117938-2   Successfully pulled image "myregistry.azurecr.io/mybiz/hello"
default   2017-09-28 03:57:02 -0400 EDT   2017-09-28 03:57:02 -0400 EDT   1         hello-4059723819-8s35v   Pod       spec.containers{hello}   Normal    Created   kubelet, k8s-agentpool1-18117938-2   Created container
default   2017-09-28 03:57:03 -0400 EDT   2017-09-28 03:57:03 -0400 EDT   1         hello-4059723819-8s35v   Pod       spec.containers{hello}   Normal    Started   kubelet, k8s-agentpool1-18117938-2   Started container
default   2017-09-28 03:57:13 -0400 EDT   2017-09-28 03:57:01 -0400 EDT   2         hello-4059723819-tj043   Pod                 Warning   FailedSync   kubelet, k8s-agentpool1-18117938-3   Error syncing pod
default   2017-09-28 03:57:13 -0400 EDT   2017-09-28 03:57:02 -0400 EDT   2         hello-4059723819-tj043   Pod                 Normal    SandboxChanged   kubelet, k8s-agentpool1-18117938-3   Pod sandbox changed, it will be killed and re-created.
default   2017-09-28 03:57:24 -0400 EDT   2017-09-28 03:57:01 -0400 EDT   3         hello-4059723819-tj043   Pod                 Warning   FailedSync   kubelet, k8s-agentpool1-18117938-3   Error syncing pod
default   2017-09-28 03:57:25 -0400 EDT   2017-09-28 03:57:02 -0400 EDT   3         hello-4059723819-tj043   Pod                 Normal    SandboxChanged   kubelet, k8s-agentpool1-18117938-3   Pod sandbox changed, it will be killed and re-created.
[...]

The last two events just keep repeating.
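For reference, the listing above is kubectl get events output; describing the stuck pod should show the same FailedSync/SandboxChanged pair in its Events section. A minimal way to keep watching it (the pod name is taken from the events above):

kubectl get events --namespace default --watch
kubectl describe pod hello-4059723819-tj043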

The dashboard of the failed pod shows:

[Screenshot: dashboard of the failed pod]

Eventually the dashboard shows the error:

Error: failed to start container "hello": Error response from daemon: {"message":"cannot join network of a non running container: 7e95918c6b546714ae20f12349efcc6b4b5b9c1e84b5505cf907807efd57525c"}
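The container ID in that message most likely refers to the pod's sandbox ("pause") container, whose network namespace the hello container is supposed to join. A quick sketch for checking it directly on the node, assuming the Docker runtime used by these Azure agents:

# On node k8s-agentpool1-18117938-3:
docker ps -a | grep 7e95918c6b54   # the sandbox (pause) container from the error above
docker inspect --format '{{.State.Status}}' 7e95918c6b546714ae20f12349efcc6b4b5b9c1e84b5505cf907807efd57525c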

This cluster is running on Azure with the Azure CNI networking plugin. Everything was working fine until some time after I enabled --runtime-config=batch/v2alpha1=true on the API server in order to use the CronJob functionality. Now, even after removing that API version from the runtime config and rebooting the master, the problem still occurs.
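For context, this is roughly what enabling the alpha CronJob API on 1.7 amounts to; the manifest below is only an illustrative sketch (the name hello-cron and the schedule are placeholders, not my actual workload):

# kube-apiserver flag that was enabled:
--runtime-config=batch/v2alpha1=true

# Minimal CronJob against the alpha API (placeholder name and schedule):
cat <<EOF | kubectl create -f -
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: myregistry.azurecr.io/mybiz/hello
          restartPolicy: OnFailure
EOF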

The kubelet log on the node shows that an IP address cannot be allocated:

E0928 20:54:01.733682    1750 pod_workers.go:182] Error syncing pod 65127a94-a425-11e7-8d64-000d3af4357e ("hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)"), skipping: failed to "CreatePodSandbox" for "hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"hello-4059723819-xx16n_default\" network: Failed to allocate address: Failed to delegate: Failed to allocate address: No available addresses"
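That "No available addresses" error comes from the Azure CNI IPAM plugin on the node. To inspect its state directly, something like the following should work; the file and log paths are my assumption of the Azure CNI defaults, not something verified on this exact build:

# On the affected node (paths assumed):
sudo cat /var/run/azure-vnet-ipam.json          # IPAM allocation state
sudo tail -n 100 /var/log/azure-vnet-ipam.log   # IPAM plugin log
sudo tail -n 100 /var/log/azure-vnet.log        # CNI plugin log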
-- Raman
kubernetes

1 Answer

9/28/2017

This is a bug in Azure CNI: it does not always correctly recycle IP addresses from terminated pods. See this issue: https://github.com/Azure/azure-container-networking/issues/76.

The reason this occurred after enabling CronJob functionality is that cron job containers are (usually) short-lived and are allocated an IP every time they run. If those IPs are not recycled and made reusable by the underlying networking system (in this case Azure CNI), the node's pool of available addresses is quickly exhausted.
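One way to confirm the leak is to compare how many addresses the IPAM store thinks are in use with how many pods are actually scheduled on the node. This is only a sketch: the state-file path and the "InUse" field name are assumptions about the Azure CNI IPAM format, so adjust to whatever the file actually contains:

# On the affected node (path and field name assumed):
sudo grep -c '"InUse": *true' /var/run/azure-vnet-ipam.json
# Pods currently scheduled on that node, for comparison:
kubectl get pods --all-namespaces -o wide | grep k8s-agentpool1-18117938-3 | wc -l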

-- Raman
Source: StackOverflow