How to view logs for a failed Kubernetes container

1/17/2019

My Kubernetes Job is created successfully from the YAML file below, but one of its containers shows an error. It is a multi-container pod with memory limits defined in the YAML; here it is for reference:

apiVersion: batch/v1
kind: Job
metadata:
  name: command-demo
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      volumes:
        - name: docker-sock
          emptyDir: {}
      restartPolicy: Never
      containers:
        - name: command-demo-container
          image: tarunkumard/fromscratch6.0
          volumeMounts:
            - mountPath: /opt/gatling-fundamentals/build/reports/gatling/
              name: docker-sock
          imagePullPolicy: Never
          resources:
            requests:
              memory: "950Mi"
            limits:
              memory: "1Gi"
        - name: ubuntu
          image: ubuntu:16.04
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 10; done;" ]
          volumeMounts:
            - mountPath: /docker-sock
              name: docker-sock
          imagePullPolicy: Never
          env:
            - name: JVM_OPTS
              value: "-Xms950M -Xmx1G"

I tried the following commands:

 vagrant@ubuntu-xenial:~/pods$ kubectl create -f gat.yaml
 job.batch/command-demo created
 vagrant@ubuntu-xenial:~/pods$ kubectl get pods
 NAME                 READY   STATUS   RESTARTS   AGE
 command-demo-bptqj   1/2     Error    0          2m33s 

Here is the output of kubectl describe pod:

vagrant@ubuntu-xenial:~/pods$ kubectl describe pods command-demo-bptqj
Name:           command-demo-bptqj
Namespace:      default
Node:           ip-172-31-8-145/172.31.8.145
Start Time:     Thu, 17 Jan 2019 02:03:28 +0000
Labels:         controller-uid=152e2655-19fc-11e9-b787-02d8b37d95a0
                job-name=command-demo
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: memory request for container ubuntu; memory limit for container ubuntu
Status:         Running
IP:             10.1.40.91
Controlled By:  Job/command-demo
Containers:
  command-demo-container:
    Container ID:   docker://108004b18788b8410a9ecd0ebb06242463b5e12b193ed3f9d54fe99d1fd1f6b1
    Image:          tarunkumard/fromscratch6.0
    Image ID:       docker-pullable://tarunkumard/fromscratch6.0@sha256:94cc06dde5e242c23e03742365d48008a9a31ffd9d79593838ebb4d651d932c9
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 17 Jan 2019 02:03:30 +0000
      Finished:     Thu, 17 Jan 2019 02:03:30 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  1Gi
    Requests:
      memory:     950Mi
    Environment:  <none>
    Mounts:
      /opt/gatling-fundamentals/build/reports/gatling/ from docker-sock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w6jt6 (ro)
  ubuntu:
    Container ID:  docker://fdedb595698ae6697ee3ac9bbf01d25e073bc3d6342a0d14c54a427264f1175d
    Image:         ubuntu:16.04
    Image ID:      docker-pullable://ubuntu@sha256:e547ecaba7d078800c358082088e6cc710c3affd1b975601792ec701c80cdd39
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      while true; do sleep 10; done;
    State:          Running
      Started:      Thu, 17 Jan 2019 02:03:30 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1Gi
    Requests:
      memory:  1Gi
    Environment:
      JVM_OPTS:  -Xms950M -Xmx1G
    Mounts:
      /docker-sock from docker-sock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w6jt6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  docker-sock:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-w6jt6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w6jt6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                      Message
  ----    ------     ----   ----                      -------
  Normal  Scheduled  5m32s  default-scheduler         Successfully assigned default/command-demo-bptqj to ip-172-31-8-145
  Normal  Pulled     5m31s  kubelet, ip-172-31-8-145  Container image "tarunkumard/fromscratch6.0" already present on machine
  Normal  Created    5m30s  kubelet, ip-172-31-8-145  Created container
  Normal  Started    5m30s  kubelet, ip-172-31-8-145  Started container
  Normal  Pulled     5m30s  kubelet, ip-172-31-8-145  Container image "ubuntu:16.04" already present on machine
  Normal  Created    5m30s  kubelet, ip-172-31-8-145  Created container
  Normal  Started    5m30s  kubelet, ip-172-31-8-145  Started container

How do I find out what exactly is wrong with my container, and how do I see its logs?

-- Margaret real
kubectl
kubernetes

3 Answers

1/17/2019

To see the previous container logs of a pod, use the command:

kubectl logs podname -c container-name -p
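
Applied to the pod from the question (pod and container names taken from the kubectl describe output above), that would look roughly like:

kubectl logs command-demo-bptqj -c command-demo-container -p

Note that -p (--previous) reads the logs of a previously restarted instance of the container; since this container has a restart count of 0, dropping -p and reading the logs of the single terminated run may be what is needed here:

kubectl logs command-demo-bptqj -c command-demo-container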

-- Harish Anchu
Source: StackOverflow

1/17/2019

Return snapshot logs from a pod with multiple containers:

$ kubectl logs podname --all-containers=true

Return snapshot logs from the indicated container of a pod:

$ kubectl logs podname -c container-name
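
For the pod in the question, that might look like this (pod name taken from the kubectl get pods output above):

$ kubectl logs command-demo-bptqj --all-containers=true
$ kubectl logs command-demo-bptqj -c command-demo-container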
-- baozhenli
Source: StackOverflow

1/17/2019

Hey Margaret, as the others described, you can use kubectl.

As an additional idea, you can SSH into the worker node and run docker inspect on the container to see some additional information about it.

If all of that does not give you what you need, you can run kubectl exec -it {pod_name}, which gives you an interactive terminal inside the container, where you can check /var/log/ or other related OS logs.
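
For example (assuming the Docker container runtime, with the container ID and names taken from the describe output above; the failed container has already terminated, so an interactive shell is only possible in the still-running ubuntu container):

# on the worker node: inspect the failed container by its ID
docker inspect 108004b18788

# interactive shell inside the running ubuntu container; -c is needed because the pod has two containers
kubectl exec -it command-demo-bptqj -c ubuntu -- /bin/bash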

-- David Webster
Source: StackOverflow