Creating a batch of Pods in the same cluster, some Pods take over 40s to start

4/15/2020

When creating a batch of Pods in the same cluster, some Pods take more than 40s to start. Does anyone know why some Pods in the same batch take only 2s to create, while others require 60s?

Does anyone know exactly what happens during these two gaps in the kubelet log below? They account for most of the startup time.

I0412 18:31:21.700978    3651 kuberuntime_manager.go:599] SyncPod received new pod "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)", will create a sandbox for it
I0412 18:31:21.700988    3651 kuberuntime_manager.go:608] Stopping PodSandbox for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)", will start new one
I0412 18:31:21.701011    3651 kuberuntime_manager.go:660] Creating sandbox for pod "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:21.702856    3651 docker_service.go:453] Setting cgroup parent to: "/kubepods/burstable/podc04c217e-7ca8-11ea-8ed5-fa2020001233"

I0412 18:31:29.991713    3651 generic.go:153] GenericPLEG: c04c217e-7ca8-11ea-8ed5-fa2020001233/7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f: non-existent -> exited
I0412 18:31:29.992736    3651 kuberuntime_manager.go:854] getSandboxIDByPodUID got sandbox IDs ["7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f"] for pod "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902790    3651 generic.go:386] PLEG: Write status for moons-85276-384931/moonshub: &container.PodStatus{ID:"c04c217e-7ca8-11ea-8ed5-fa2020001233", Name:"moons-85276-384931", Namespace:"moonshub", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*v1alpha2.PodSandboxStatus{(*v1alpha2.PodSandboxStatus)(0xc002ac0be0)}} (err: <nil>)
I0412 18:31:31.902870    3651 kubelet.go:1981] SyncLoop (PLEG): "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)", event: &pleg.PodLifecycleEvent{ID:"c04c217e-7ca8-11ea-8ed5-fa2020001233", Type:"ContainerDied", Data:"7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f"}
I0412 18:31:31.902900    3651 kubelet_pods.go:1317] Generating status for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902908    3651 kubelet_pods.go:1379] start convertStatusToAPIStatus status for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902939    3651 kubelet_pods.go:1411] end convertStatusToAPIStatus status for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902949    3651 kubelet_pods.go:1334] start getPhase for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902961    3651 kubelet_pods.go:1336] stop getPhase for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902968    3651 kubelet_pods.go:1346] start UpdatePodStatus for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"
I0412 18:31:31.902976    3651 kubelet_pods.go:1348] stop UpdatePodStatus for "moons-85276-384931_moonshub(c04c217e-7ca8-11ea-8ed5-fa2020001233)"

I0412 18:31:46.170010    3651 factory.go:116] Using factory "docker" for container "/kubepods/burstable/podc04c217e-7ca8-11ea-8ed5-fa2020001233/7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f"
I0412 18:31:46.171207    3651 manager.go:1011] Added container: "/kubepods/burstable/podc04c217e-7ca8-11ea-8ed5-fa2020001233/7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f" (aliases: [k8s_POD_moons-85276-384931_moonshub_c04c217e-7ca8-11ea-8ed5-fa2020001233_0 7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f], namespace: "docker")
I0412 18:31:46.171349    3651 handler.go:325] Added event &{/kubepods/burstable/podc04c217e-7ca8-11ea-8ed5-fa2020001233/7c3b1d99a34155e82f5e55b2bf8ec84080c2ea4c4af983cdb85e6feef2135d9f 2020-04-12 10:31:21.704726664 +0000 UTC containerCreation {<nil>}}
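The gaps can be located by diffing consecutive klog timestamps. A minimal sketch below does this for a few of the lines above (the year and the trimmed messages are assumptions; klog lines carry only month, day, and time):

```python
import re
from datetime import datetime

# Abbreviated kubelet log lines in klog format ("Lmmdd hh:mm:ss.uuuuuu ..."),
# taken from the output above with messages shortened for readability.
log_lines = """\
I0412 18:31:21.700978 SyncPod received new pod
I0412 18:31:21.702856 Setting cgroup parent
I0412 18:31:29.991713 GenericPLEG: non-existent -> exited
I0412 18:31:31.902790 PLEG: Write status
I0412 18:31:46.170010 Using factory "docker"
""".splitlines()

def parse_ts(line):
    """Parse a klog timestamp; the year (2020) is assumed since klog omits it."""
    m = re.match(r"I(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)", line)
    return datetime.strptime(f"2020-{m.group(1)}-{m.group(2)} {m.group(3)}",
                             "%Y-%m-%d %H:%M:%S.%f")

# Compute the delay between each pair of consecutive log lines and
# report any gap larger than 2 seconds.
timestamps = [parse_ts(line) for line in log_lines]
gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
for line, gap in zip(log_lines[1:], gaps):
    if gap > 2:
        print(f"{gap:6.2f}s gap before: {line}")
```

For the excerpt above this flags roughly an 8s pause before the first PLEG event and a further 14s pause before cAdvisor picks up the container, which matches the two periods asked about.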
-- moons
kubernetes
kubernetes-pod

1 Answer

4/16/2020

This also depends on the OS and container runtime you are running. Please also post cluster information such as:

  • Kubernetes Version
  • Cluster Provider
  • Container Runtime
  • Operating System
  • Pod and Service IP CIDRs
  • QPS on the API server, etc.

This might help in narrowing down some likely causes initially.

-- gkarthiks
Source: StackOverflow