Kubernetes: sharing a volume between containers in one pod

10/23/2019

I have a question about sharing a volume between containers in one pod.

Here is my YAML, pod-volume.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina.out*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}

Create the pod:

kubectl create -f pod-volume.yaml

Watch the pod status:

watch kubectl get pod -n default

Finally, I got this:

NAME         READY   STATUS             RESTARTS   AGE
redis-php    2/2     Running            0          15h
volume-pod   1/2     CrashLoopBackOff   5          6m49s

Then I checked the logs of the busybox container:

kubectl logs pod/volume-pod -c busybox

tail: can't open '/logs/catalina.out*.log': No such file or directory
tail: no files

I don't know where it went wrong. Is this caused by the order in which the containers start in the pod? Please help me, thanks.

-- Daiql
docker
kubernetes

1 Answer

10/23/2019

For this case:

The Catalina log file is named catalina.$(date '+%Y-%m-%d').log, and the pattern /logs/catalina.out*.log matches no file in the shared volume. Because the glob matches nothing, the shell passes it to tail literally, tail cannot open it, and the container exits, which is why you see CrashLoopBackOff. So avoid relying on the * wildcard in this command.
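You can double-check which log files actually exist in the shared volume by listing them from the tomcat container, which keeps running while busybox crash-loops (the exact file names depend on the tomcat image version):

kubectl exec volume-pod -c tomcat -- ls /usr/local/tomcat/logs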

So please try:

command: ["sh", "-c", "tail -f /logs/catalina.$(date '+%Y-%m-%d').log"]

-- Thanh Nguyen Van
Source: StackOverflow