Is it normal for the docker daemon to kill/restart containers in a short time span?

8/16/2018

We started to monitor docker events in our k8s cluster and noticed a lot of Kill/Die/Stop/Destroy events for various containers in a short time period.
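
For reference, the events we are watching come from a filter roughly like the one below (the time window and the set of filtered event types here are only an illustration, not the exact command we run):

docker events --since 1h --filter 'event=kill' --filter 'event=die' --filter 'event=stop' --filter 'event=destroy'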

Is that normal? (I assume it's not)

Apparently it is not a capacity problem:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 16 Aug 2018 11:19:30 -0300   Tue, 14 Aug 2018 14:02:37 -0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 16 Aug 2018 11:19:30 -0300   Tue, 14 Aug 2018 14:02:37 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 16 Aug 2018 11:19:30 -0300   Tue, 14 Aug 2018 14:02:37 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 16 Aug 2018 11:19:30 -0300   Fri, 11 May 2018 16:37:48 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 16 Aug 2018 11:19:30 -0300   Tue, 14 Aug 2018 14:02:37 -0300   KubeletReady                 kubelet is posting ready status
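
(The conditions above are presumably from the node description; the same output can be pulled for any node, with <node-name> as a placeholder:)

kubectl describe node <node-name>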

All Pods show status "Running".
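
(Status was checked with the usual pod listing, which also shows restart counts; the exact command is only an assumption about how it was checked:)

kubectl get pods --all-namespaces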

Any tips on how to debug this further?

-- Heron Rossi
docker
kubernetes

1 Answer

8/16/2018

You can inspect the docker container status with the following command on the node hosts where the pods are running.

docker inspect <container id>
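
If you only care about why a container exited, the State block of the inspect output is the interesting part; for example (the format string below is just one possible selection of fields):

docker inspect --format '{{.State.Status}} {{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' <container id>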

More options are described here

Event logs and journal logs are also helpful for debugging.

kubectl get events

journalctl --no-pager 
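
If kubelet and docker run as systemd units on the node (an assumption; adjust the unit names to your setup), the journal can be narrowed down and the cluster events sorted by time:

kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
journalctl -u kubelet --no-pager
journalctl -u docker --no-pager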
-- Daein Park
Source: StackOverflow