Single-Instance stateful application - Container CrashLoopBackOff

9/3/2018

I am trying to follow the Kubernetes tutorial for a single-instance stateful application: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
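For reference, the apply steps from that tutorial look like this (manifest URLs as published in the tutorial at the time; they may have moved since):

kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml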

The problem is that after I apply all the YAML listed there, my pod ends up unavailable, as shown below:

kubectl get deployments

NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql                       1         1         1            0           1h

kubectl get pods
NAME                                        READY     STATUS             RESTARTS   AGE
mysql-fb75876c6-tpdzc                       0/1       CrashLoopBackOff   17         1h

kubectl describe deployment mysql
Name:               mysql
Namespace:          default
CreationTimestamp:  Mon, 03 Sep 2018 10:50:22 +0000
Labels:             <none>
Annotations:        deployment.kubernetes.io/revision=1
Selector:           app=mysql
Replicas:           1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:  app=mysql
  Containers:
   mysql:
    Image:      mysql:5.6
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
  Volumes:
   mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   mysql-fb75876c6 (1/1 replicas created)
Events:          <none>


kubectl describe pods mysql-fb75876c6-tpdzc

Name:               mysql-fb75876c6-tpdzc
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               wombat-dev-kubeadm-worker-1/142.93.56.123
Start Time:         Mon, 03 Sep 2018 10:50:22 +0000
Labels:             app=mysql
                    pod-template-hash=963143272
Annotations:        <none>
Status:             Running
IP:                 192.168.1.14
Controlled By:      ReplicaSet/mysql-fb75876c6
Containers:
  mysql:
    Container ID:   docker://08d630190a83fb5097bf8a98f7bb5f474751e021aec68b1be958c675d3f26f27
    Image:          mysql:5.6
    Image ID:       docker-pullable://mysql@sha256:2e48836690b8416e4890c369aa174fc1f73c125363d94d99cfd08115f4513ec9
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 03 Sep 2018 12:04:24 +0000
      Finished:     Mon, 03 Sep 2018 12:04:29 +0000
    Ready:          False
    Restart Count:  19
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6t8pg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-6t8pg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6t8pg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                From                                  Message
  ----     ------   ----               ----                                  -------
  Warning  BackOff  1m (x334 over 1h)  kubelet, wombat-dev-kubeadm-worker-1  Back-off restarting failed container

My question is: what should I do? Running kubectl logs mysql-fb75876c6-tpdzc returns no output at all.

Any help?

This is the kubeadm version:

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
-- Nicola Atorino
kubernetes
persistent-volumes

3 Answers

9/3/2018

Exit code 137 can indicate a memory issue. Try increasing the amount of RAM you have dedicated to the machine.

Minikube defaults to just 1 GB of RAM, so if you want more, try something like: minikube start --memory 4096
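Note that --memory only takes effect when the VM is created, so for an existing minikube cluster you would recreate it, for example (this throws away the cluster's state):

minikube delete
minikube start --memory 4096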

-- Charlino
Source: StackOverflow

9/3/2018

Use kubectl logs -p to view the logs of the previous, crashed container instance.
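For the pod in the question, that would be:

kubectl logs -p mysql-fb75876c6-tpdzc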

-- Nima Hashemi
Source: StackOverflow

9/3/2018

The container is exiting with Exit Code 137, which means a SIGKILL (the signal sent by kill -9 <process>) was delivered to the process running in the container. Usually that means the OOM killer came in to kill it because it was using more memory than was available. Do you have enough memory available on the machine?
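A quick way to check (a sketch, assuming shell access to the worker node named in the pod description):

# On the node: check free memory and look for OOM killer activity
free -m
dmesg | grep -i -e oom -e "killed process"

# From a machine with kubectl access: check the node's allocatable memory
kubectl describe node wombat-dev-kubeadm-worker-1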

-- mprenditore
Source: StackOverflow