I created a Minikube instance on my local machine on top of VirtualBox. I have a couple of replication controllers, with resource limits specified for both of them. First I created one instance each of replication controllers A and B. Then I increased the replica count of rc B to 6. While it is scaling up, I can see that the pod created by rc A is killed by Kubernetes. The only log I can see in kubectl logs is
/opt/app-server/bin/app-server.sh: line 159: 28 Killed $JAVA_HOME/bin/java -server $JVM_OPTS $XDEBUG $SERVER_OPTS -Djava.endorsed.dirs=$SERVER_ENDORSED -classpath $SERVER_CLASSPATH org.adroitlogic.appserver.AppServer
I have no clue what caused the eviction of this pod. I'm guessing it is related to the resource allocation of the pods and the limited resources on the VM, but I need to confirm that. Where can I find logs about the reason this pod was evicted? I searched in journalctl -u localkube
Jan 09 11:00:55 minikube localkube[3421]: I0109 11:00:55.136114 3421 docker_manager.go:2524] checking backoff for container "ipsweb" in pod "ipsweb-m3234"
Jan 09 11:00:55 minikube localkube[3421]: I0109 11:00:55.136525 3421 docker_manager.go:2538] Back-off 5m0s restarting failed container=ipsweb pod=ipsweb-m3234_default(e6961157-d650-11e6-8bee-080027bc9720)
Jan 09 11:00:55 minikube localkube[3421]: E0109 11:00:55.136571 3421 pod_workers.go:184] Error syncing pod e6961157-d650-11e6-8bee-080027bc9720, skipping: failed to "StartContainer" for "ipsweb" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=ipsweb pod=ipsweb-m3234_default(e6961157-d650-11e6-8bee-080027bc9720)"
kubectl describe pod gives the following events:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 1h 1 {kubelet minikube} spec.containers{ipsweb} Normal Started Started container with docker id 2ca5ccaa11a1
1h 1h 1 {kubelet minikube} spec.containers{ipsweb} Normal Created Created container with docker id 2ca5ccaa11a1; Security:[seccomp=unconfined]
1h 1h 1 {kubelet minikube} spec.containers{ipsweb} Normal Started Started container with docker id 86d4bdfa014e
1h 1h 1 {kubelet minikube} spec.containers{ipsweb} Normal Created Created container with docker id 86d4bdfa014e; Security:[seccomp=unconfined]
50m 50m 1 {kubelet minikube} spec.containers{ipsweb} Normal Started Started container with docker id a570e4f59e96
50m 50m 1 {kubelet minikube} spec.containers{ipsweb} Normal Created Created container with docker id a570e4f59e96; Security:[seccomp=unconfined]
1h 49m 2 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ipsweb" with CrashLoopBackOff: "Back-off 10s restarting failed container=ipsweb pod=ipsweb-m3234_default(e6961157-d650-11e6-8bee-080027bc9720)"
49m 49m 1 {kubelet minikube} spec.containers{ipsweb} Normal Started Started container with docker id b91cc20a8bb3
49m 49m 1 {kubelet minikube} spec.containers{ipsweb} Normal Created Created container with docker id b91cc20a8bb3; Security:[seccomp=unconfined]
1h 48m 4 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ipsweb" with CrashLoopBackOff: "Back-off 20s restarting failed container=ipsweb pod=ipsweb-m3234_default(e6961157-d650-11e6-8bee-080027bc9720)"
48m 48m 1 {kubelet minikube} spec.containers{ipsweb} Normal Started Started container with docker id cf24faa31718
48m 48m 1 {kubelet minikube} spec.containers{ipsweb} Normal Created Created container with docker id cf24faa31718; Security:[seccomp=unconfined]
1h 46m 7 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ipsweb" with CrashLoopBackOff: "Back-off 40s restarting failed container=ipsweb pod=ipsweb-m3234_default(e6961157-d650-11e6-8bee-080027bc9720)"
But none of the above gives me a hint about the cause or why this is happening. Any suggestions?
To get the logs of each pod in a simple way, run the following command:
$ minikube dashboard
The Kubernetes dashboard will then open in your browser. Go to Pods; under the Containers tab there is a View logs option, where you can see what is actually going on inside the container and why it is failing.
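If you prefer to stay on the command line, kubectl can show the same container logs; the pod name below is the failing pod from the question, so substitute your own:
$ kubectl logs ipsweb-m3234
$ kubectl logs ipsweb-m3234 --previous
The --previous flag shows output from the last terminated container instance, which is usually where the crash reason appears for a pod stuck in CrashLoopBackOff.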
I don't know whether this is the correct way to do it, but here is what I do when k8s cannot give me enough information.
The easiest thing to do is to look at the Docker logs of that pod. I can see that a certain pod is failing again and again, so I would go to the minikube machine (minikube ssh) and run
docker ps -a | grep 'some_identifiable_name'
This is where I get the container ID. Once I have that, I get the logs of that container:
docker logs <id_found_above>
You need to do this quickly, because k8s might garbage-collect the dead container before you can perform all of the above steps.
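Putting those steps together as a rough sketch (this assumes a Docker runtime inside minikube; ipsweb is the container name from the question, so adjust it to yours):
$ minikube ssh
$ docker ps -a | grep ipsweb
$ docker logs <container_id_found_above>
$ docker inspect <container_id_found_above> | grep -i oomkilled
If docker inspect reports "OOMKilled": true, the container was killed for exceeding its memory limit, which would be consistent with the Killed message shown in the question.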