I am new to Kubernetes and decided to use it for a POC on a small project I am currently working on.
I have a containerized bash script that is executed with an argument.
The Kubernetes manifest looks as follows:
---
apiVersion: v1
kind: Pod
metadata:
  name: device-pod
  labels:
    name: device-pod
spec:
  containers:
  - image: azurecr.io/device:1.02
    name: device-0
    args: ["0"]
  containers:
  - image: azurecr.io/device:1.02
    name: device-1
    args: ["1"]
  containers:
  - image: azurecr.io/device:1.02
    name: device-2
    args: ["2"]
  containers:
  - image: azurecr.io/device:1.02
    name: device-3
    args: ["3"]
As you can see from the manifest above, I am creating a pod named "device-pod" which is supposed to host and run 4 containers named device-n (where n is 0..3).
The pod deploys fine, but afterwards I only see container "device-3" running; I can't find any of the other containers in the pod. I would have expected all 4 containers to be running.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
18m 18m 1 default-scheduler Normal Scheduled Successfully assigned device-pod to k8s-agent-abe168bc-3
18m 18m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Created Created container with id 770ce7568a7dfe73bacdcd5232e8961fd3098486c82cce56465c04c1c4434659
18m 18m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Started Started container with id 770ce7568a7dfe73bacdcd5232e8961fd3098486c82cce56465c04c1c4434659
13m 13m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Started Started container with id 17c1ae7caa8f017a0ca81925962ecf229ff42a498af7de0dfe93a11fdaa9f43e
13m 13m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Created Created container with id 17c1ae7caa8f017a0ca81925962ecf229ff42a498af7de0dfe93a11fdaa9f43e
9m 9m 1 kubelet, k8s-agent-abe168bc-3 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "device-3" with CrashLoopBackOff: "Back-off 10s restarting failed container=device-3 pod=device-pod_default(922476fb-a4fb-11e7-8ca8-000d3a25fb55)"
9m 9m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Created Created container with id 5f4db92d4318537eb541dbf11b5b4e4cb7eaa93fcc26061c2e7b970505f27d5e
9m 9m 1 kubelet, k8s-agent-abe168bc-3 spec.containers{device-3} Normal Started Started container with id 5f4db92d4318537eb541dbf11b5b4e4cb7eaa93fcc26061c2e7b970505f27d5e
I don't see any events for containers device-0, device-1, and device-2.
What am I doing wrong here? Any ideas would be appreciated.
The problem likely arises from the fact that your spec defines the "containers" key multiple times, instead of declaring a single "containers" list holding multiple container entries. "containers" is indeed intended to be an array/list, like so:
---
apiVersion: v1
kind: Pod
metadata:
  name: device-pod
  labels:
    name: device-pod
spec:
  containers:
  - image: azurecr.io/device:1.02
    name: device-0
    args: ["0"]
  - image: azurecr.io/device:1.02
    name: device-1
    args: ["1"]
  - image: azurecr.io/device:1.02
    name: device-2
    args: ["2"]
  - image: azurecr.io/device:1.02
    name: device-3
    args: ["3"]
As for the behavior you observed: in a YAML mapping a key may only appear once, and most parsers silently let a repeated key overwrite the previous value. Your four "containers" declarations were parsed one after another, each replacing the last, so only the final declaration (device-3) survived into the effective pod spec. That is why only device-3 was ever created and why there are no events for the other containers.
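If you want to see this key-overwriting behavior in isolation, here is a minimal sketch using Python's PyYAML library (an illustration only; PyYAML, like many YAML parsers, silently keeps the last occurrence of a duplicated mapping key):

import yaml  # PyYAML: pip install pyyaml

snippet = """
spec:
  containers:
  - name: device-0
  containers:
  - name: device-3
"""

# The repeated "containers" key overwrites the earlier value,
# so only the last list survives parsing.
print(yaml.safe_load(snippet)["spec"]["containers"])
# prints: [{'name': 'device-3'}]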