Using Kubernetes 1.21 and Istio 1.11.3, I have an Istio-enabled namespace into which I create the following Pod via kubectl create -f mypod.yaml, where mypod.yaml is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: Never
The pod ends up in a NotReady state:
NAME           READY   STATUS     RESTARTS   AGE
command-demo   1/2     NotReady   0          41m
Why is this happening?
My assumption is that this could be related to the following containerStatuses:
containerStatuses:
- containerID: containerd://121a8cb31bed60048116d45d0c500d5e3e11e41d790cdc4b506d37a9cb036e4f
  image: docker.io/library/debian:latest
  imageID: docker.io/library/debian@sha256:2906804d2a64e8a13a434a1a127fe3f6a28bf7cf3696be4223b06276f32f1f2d
  lastState: {}
  name: command-demo-container
  ready: false
  restartCount: 0
  started: false
  state:
    terminated:
      containerID: containerd://121a8cb31bed60048116d45d0c500d5e3e11e41d790cdc4b506d37a9cb036e4f
      exitCode: 0
      finishedAt: "2022-01-02T17:23:04Z"
      reason: Completed
      startedAt: "2022-01-02T17:23:04Z"
- containerID: containerd://5d0defc42b32c62c421b8328b8d177c12a042ba98c4e666b96e999be569478bd
  image: docker.io/istio/proxyv2:1.11.3
  imageID: docker.io/istio/proxyv2@sha256:28513eb3706315b26610a53e0d66b29b09a334e3164393b9a0591f34fe47a6fd
  lastState: {}
  name: istio-proxy
  ready: true
  restartCount: 0
  started: true
  state:
    running:
      startedAt: "2022-01-02T17:23:04Z"
hostIP: 10.8.1.9
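(For completeness, the snippet above is the containerStatuses / hostIP portion of the full status dump, obtained with the standard query:

kubectl get pod command-demo -o yaml

and trimmed to the relevant fields.)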
The (main) container exits, but the Istio sidecar is still reported as running. Is this behaviour normal / expected? Is there any way to achieve a graceful pod shutdown in such Istio-enabled namespaces / use cases?
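For context, the only workaround I'm aware of is to have the main container tell the sidecar to exit once the workload finishes, by POSTing to pilot-agent's /quitquitquit endpoint on its admin port 15020 (my assumption is that this endpoint is available by default in 1.11; note also that the stock debian image ships without curl, so it would need to be installed or a curl-equipped image used). A rough sketch of the container spec:

spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh", "-c"]
    args:
    # run the actual workload, then ask the istio-proxy sidecar to shut down
    - >
      printenv HOSTNAME KUBERNETES_PORT;
      curl -fsS -X POST http://127.0.0.1:15020/quitquitquit

The alternative I've seen is skipping injection for such run-to-completion pods entirely via the sidecar.istio.io/inject: "false" pod annotation, at the cost of losing mesh features. Is either of these the recommended approach?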
P.S. Why is containerStatuses[0].started = false, given that the same container is reported as state.terminated?
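(A quick way to pull just those two fields, using standard kubectl JSONPath:

kubectl get pod command-demo -o jsonpath='{.status.containerStatuses[0].started}{"\n"}{.status.containerStatuses[0].state}'

which, per the status quoted above, prints false followed by the terminated block.)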