I have just started working with Kubernetes.
This is my pod spec file:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-cfg",
    "labels": {
      "app": "swelite"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "config-agent",
        "image": "img1",
        "command": [
          "/etc/init.d/docker-init"
        ],
        "imagePullPolicy": "Never"
      },
      {
        "name": "other-proc",
        "image": "img2",
        "command": [
          "/etc/init.d/docker-init"
        ],
        "imagePullPolicy": "Never"
      }
    ]
  }
}
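One detail worth noting about the spec above: a Pod's `restartPolicy` defaults to `Always` when omitted, so the kubelet restarts a container even when it exits cleanly with code 0. If these containers are meant to run to completion rather than stay up as services, an explicit policy would avoid the restart loop. A sketch of the relevant fragment (same spec, `restartPolicy` added):

```json
"spec": {
  "restartPolicy": "OnFailure",
  "containers": [ ... ]
}
```

With `OnFailure`, a container that terminates with exit code 0 is left alone instead of being restarted.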
I have created two Docker containers from img1 and img2, and they run fine on their own. When I try to create a pod with containers using these two images, the pod keeps crashing.
The container descriptions in the pod show:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
I suspect some issue with the pod spec file, because each image works when I bring up a single Docker container manually.
I don't get any useful information from the logs either; the log looks fine.
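When a container is in CrashLoopBackOff, `kubectl logs` shows the current (possibly just-restarted) instance; the `--previous` flag shows output from the last terminated run, which is often where the useful information is. A sketch using the pod and container names from the spec above:

```shell
# Logs from the last terminated instance of each container.
# -c selects the container within the pod; --previous selects
# the prior (terminated) instance rather than the current one.
kubectl logs test-cfg -c config-agent --previous
kubectl logs test-cfg -c other-proc --previous
```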
kubectl describe output
[root@node1 abs]# kubectl describe pod test-cfg
Name: test-cfg
Namespace: default
Priority: 0
Node: node1/10.0.0.30
Start Time: Wed, 06 May 2020 12:04:12 +0000
Labels: app=swelite
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cni0",
"ips": [
"10.233.90.8"
],
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cni0",
"ips": [
"10.233.90.8"
],
"default": true,
"dns": {}
}]
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"swelite"},"name":"test-cfg","namespace":"default"},"spec":{"...
Status: Running
IP: 10.233.90.8
IPs:
IP: 10.233.90.8
Containers:
config-agent:
Container ID: docker://dbd3ddf4c9f65fb0c97c30af3ab8e85da660e242b774be99f89e22b530a174e9
Image: img1
Image ID: docker://sha256:5eaaa7ee097877cb8b1628ed79f281006f01f39d8a30605d52dc40be5ae2da9f
Port: <none>
Host Port: <none>
Command:
/etc/init.d/docker-init
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 06 May 2020 12:10:47 +0000
Finished: Wed, 06 May 2020 12:11:30 +0000
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-648ql (ro)
other-proc:
Container ID: docker://a6d4b63920b75e39fdbca37558cbbab23057d16fdb781144e51898f46b173aae
Image: img2
Image ID: docker://sha256:a33a6a51fb7a4398881c9ed0d201f03c490bb29581f7cb1c857edbd6cb7a5d48
Port: <none>
Host Port: <none>
Command:
/etc/init.d/docker-init
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 06 May 2020 12:10:48 +0000
Finished: Wed, 06 May 2020 12:11:31 +0000
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-648ql (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-648ql:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-648ql
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-cfg to node1
Normal Pulled 6m19s (x3 over 8m) kubelet, node1 Container image "img1" already present on machine
Normal Created 6m19s (x3 over 8m) kubelet, node1 Created container config-agent
Normal Started 6m18s (x3 over 8m) kubelet, node1 Started container config-agent
Normal Pulled 6m18s (x3 over 8m) kubelet, node1 Container image "img2" already present on machine
Normal Created 6m18s (x3 over 7m59s) kubelet, node1 Created container other-proc
Normal Started 6m17s (x3 over 7m59s) kubelet, node1 Started container other-proc
Warning BackOff 5m34s (x3 over 6m31s) kubelet, node1 Back-off restarting failed container
Warning BackOff 3m (x10 over 6m31s) kubelet, node1 Back-off restarting failed container
I think this may be occurring because the main process in my docker-init script doesn't start. What I am trying to understand is what difference affects the pod's containers but not a standalone Docker container. Something to do with resource utilization?
Update: after adding a long sleep to my docker-init script, the script no longer exits and my containers stop restarting.
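That matches the `Exit Code: 0` in the describe output: a container lives only as long as its PID 1, so if docker-init launches the real service in the background and then returns, the container terminates "successfully" and the kubelet restarts it. Instead of a long sleep, the usual fixes are to `exec` the service in the foreground or to `wait` on the background children. A minimal sketch of the latter (the `sleep` stands in for whatever service docker-init actually starts):

```shell
#!/bin/sh
# Sketch: keep PID 1 alive for as long as its workload runs.
# "sleep 2" stands in for the real service started by docker-init.
sleep 2 &            # background child, like a daemonized service
# Without the next line the script would fall off the end, PID 1 would
# exit 0, and Kubernetes would restart the container (CrashLoopBackOff).
wait                 # block until all background children have exited
echo "children finished; now the script may exit"
```

Alternatively, if docker-init starts exactly one service, replacing the background launch with `exec /path/to/service` makes the service itself PID 1, which also gives it proper signal handling on pod shutdown.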