How to debug minikube errors?

1/18/2018

I'm trying to run a pod with a Cassandra database; below is its Deployment manifest:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra
  namespace: test
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - env:
        - name: MAX_HEAP_SIZE
          value: 1024M
        - name: HEAP_NEWSIZE
          value: 1024M
        image: cassandra:3.10
        name: cassandra
        ports:
        - containerPort: 9042
          protocol: TCP

The pod gets created and then goes into CrashLoopBackOff. When I run kubectl describe on it, here's what I see:
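
The exact command, for reference:

kubectl -n test describe pod cassandra-6b5f5c46cf-zpwlx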

Name:           cassandra-6b5f5c46cf-zpwlx
Namespace:      test
Node:           minikube/192.168.99.102
Start Time:     Thu, 18 Jan 2018 15:26:05 +0200
Labels:         app=cassandra
                pod-template-hash=2619170279
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"test","name":"cassandra-6b5f5c46cf","uid":"22f28f45-fc53-11e7-ae64-08002798f...
Status:         Running
IP:             172.17.0.7
Controlled By:  ReplicaSet/cassandra-6b5f5c46cf
Containers:
  cassandra:
    Container ID:   docker://b3477788391622145350e870c00e19561ee662946aa5a307cc8bea28fc874544
    Image:          cassandra:3.10
    Image ID:       docker-pullable://cassandra@sha256:af21476b230507c6869d758e4dec134886210bd89d56deade90bc835a1c0af37
    Port:           9042/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 18 Jan 2018 15:26:26 +0200
      Finished:     Thu, 18 Jan 2018 15:26:28 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 18 Jan 2018 15:26:11 +0200
      Finished:     Thu, 18 Jan 2018 15:26:14 +0200
    Ready:          False
    Restart Count:  2
    Environment:
      MAX_HEAP_SIZE:  1024M
      HEAP_NEWSIZE:   1024M
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-77lfg (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-77lfg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-77lfg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Normal   Scheduled              28s               default-scheduler  Successfully assigned cassandra-6b5f5c46cf-zpwlx to minikube
  Normal   SuccessfulMountVolume  28s               kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-77lfg"
  Normal   Pulled                 7s (x3 over 27s)  kubelet, minikube  Container image "cassandra:3.10" already present on machine
  Normal   Created                7s (x3 over 27s)  kubelet, minikube  Created container
  Normal   Started                6s (x3 over 27s)  kubelet, minikube  Started container
  Warning  BackOff                4s (x2 over 18s)  kubelet, minikube  Back-off restarting failed container
  Warning  FailedSync             4s (x2 over 18s)  kubelet, minikube  Error syncing pod

The error reporting is completely useless: it's just generic messages that reveal nothing about the actual problem.

There's one suspicious paragraph in the pod's description: Volumes. I didn't ask to mount any volumes into this container. However, after some web searching, I think that whatever is mounted there is just a technical detail of how Kubernetes works and has no real significance here.
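
As far as I can tell, that volume is the default service account token that Kubernetes mounts into every container; the secret backing it can be inspected with:

kubectl -n test get secret default-token-77lfg

which reports its type as kubernetes.io/service-account-token.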

Whatever the case: how can I get more information from minikube about what it was trying to do, and what failed?

-- wvxvw
debugging
kubernetes
minikube

1 Answer

1/18/2018

Your pod is in the CrashLoopBackOff state. This means that the container inside your pod terminates, Kubernetes tries to run it again, and it terminates again, giving you a crash loop.
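
You can watch the loop happening as the restart counter climbs:

kubectl -n test get pod cassandra-6b5f5c46cf-zpwlx --watch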

I suggest you take a look at the container's output by running:

kubectl -n test logs -f cassandra-6b5f5c46cf-zpwlx

That should be Cassandra's output, and it should explain why Cassandra is not running.
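
If the container has already been restarted, -f may not attach; you can fetch the output of the previous, crashed instance instead:

kubectl -n test logs --previous cassandra-6b5f5c46cf-zpwlx

As for minikube itself: minikube logs dumps the node-level logs (kubelet, container runtime), which can help when the pod's own logs are empty. Also note the Exit Code: 137 in your describe output; that is 128 + 9 (SIGKILL), often the OOM killer, so the memory available to the minikube VM is worth checking as well.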

-- whites11
Source: StackOverflow